From mboxrd@z Thu Jan 1 00:00:00 1970
From: Eli Zaretskii
Newsgroups: gmane.emacs.devel
Subject: Re: Some experience with the igc branch
Date: Sat, 28 Dec 2024 20:02:33 +0200
Message-ID: <865xn3sn2e.fsf@gnu.org>
References: <87o713wwsi.fsf@telefonica.net> <86ldw40xbo.fsf@gnu.org>
 <87cyhf8xw5.fsf@protonmail.com> <86y103zm4m.fsf@gnu.org>
 <8734ia79jq.fsf@protonmail.com> <86pllexwru.fsf@gnu.org>
 <87ldw15h6s.fsf@protonmail.com> <86a5chw20u.fsf@gnu.org>
 <871pxt5b8u.fsf@protonmail.com>
To: Pip Cet
Cc: gerd.moellmann@gmail.com, ofv@wanadoo.es, emacs-devel@gnu.org,
 eller.helmut@gmail.com, acorallo@gnu.org
In-Reply-To: <871pxt5b8u.fsf@protonmail.com> (message from Pip Cet on Fri,
 27 Dec 2024 16:42:48 +0000)

> Date: Fri, 27 Dec 2024 16:42:48 +0000
> From: Pip Cet
> Cc: gerd.moellmann@gmail.com, ofv@wanadoo.es, emacs-devel@gnu.org,
>  eller.helmut@gmail.com, acorallo@gnu.org
>
> "Eli Zaretskii" writes:
>
> >> All redirected MPS calls wait synchronously for the allocation
> >> thread to respond.
> >>
> >> This includes the MPS SIGSEGV handler, which calls into MPS, so it
> >> must be directed to another thread.
> >
> > MPS SIGSEGV handler is invoked when the Lisp machine touches objects
> > which were relocated by MPS, right?
> > What exactly does the allocation thread do when that happens?
>
> Attempt to trigger another fault at the same address, which calls into
> MPS, which eventually does whatever is necessary to advance to a state
> where there is no longer a memory barrier.  Of course we could call the
> MPS signal handler directly from the allocation thread rather than
> triggering another fault.  (MPS allows for the possibility that the
> memory barrier is no longer in place by the time the arena lock has
> been acquired, and it has to, for multi-threaded operation.)
>
> What precisely MPS does is an implementation detail, and may be
> complicated (the instruction emulation code which causes so much
> trouble for weak objects, for example).
>
> I also think it's an implementation detail what MPS uses memory
> barriers for: I don't think the current code uses superfluous memory
> barriers to gather statistics, for example, but we certainly cannot
> assume that will never happen.

I think you lost me.  Let me try to explain what I was asking about.

MPS SIGSEGV is triggered when the main thread touches memory that was
relocated by MPS.  With the current way we interface with MPS, where
the main thread calls MPS directly, MPS sets up SIGSEGV to invoke its
(MPS's) own handler, which then handles the memory access.  By
contrast, under your proposal, MPS should be called from a separate
thread.  However, the way we currently process signals, the signals
are delivered to the main thread.  So we should install our own
SIGSEGV handler, which will run in the context of the main thread, and
should somehow redirect the handling of this SIGSEGV to the MPS
thread, right?  So now the main thread calls pthread_kill to deliver
the SIGSEGV to the MPS thread, but what will the MPS thread do with
that?  How will it know which MPS function to call?

> >> 3. we don't allocate memory
> >
> > Why can't GC happen when we don't allocate memory?
> >
> >> 4. we don't trigger memory barriers
> >
> > Same question here.
>
> I meant all four conditions are necessary, not that any one of them
> would be sufficient.
>
> GC can happen if another thread triggers a memory barrier OR another
> thread allocates OR we hit a memory barrier OR we allocate.  The
> question is whether it is ever useful to assume that GC can happen
> ONLY in these four cases.

GC can also happen when Emacs is idle, and at that time there are no
allocations.