From: Pip Cet via "Emacs development discussions."
Newsgroups: gmane.emacs.devel
Subject: Re: Some experience with the igc branch
Date: Sat, 28 Dec 2024 21:05:43 +0000
Message-ID: <871pxrecy7.fsf@protonmail.com>
References: <87o713wwsi.fsf@telefonica.net> <87cyhf8xw5.fsf@protonmail.com>
 <86y103zm4m.fsf@gnu.org> <8734ia79jq.fsf@protonmail.com>
 <86pllexwru.fsf@gnu.org> <87ldw15h6s.fsf@protonmail.com>
 <86a5chw20u.fsf@gnu.org> <871pxt5b8u.fsf@protonmail.com>
 <865xn3sn2e.fsf@gnu.org>
In-Reply-To: <865xn3sn2e.fsf@gnu.org>
Reply-To: Pip Cet
To: Eli Zaretskii
Cc: gerd.moellmann@gmail.com, ofv@wanadoo.es, emacs-devel@gnu.org,
 eller.helmut@gmail.com, acorallo@gnu.org

"Eli Zaretskii" writes:

>> Date: Fri, 27 Dec 2024 16:42:48 +0000
>> From: Pip Cet
>> Cc: gerd.moellmann@gmail.com, ofv@wanadoo.es, emacs-devel@gnu.org,
>>  eller.helmut@gmail.com, acorallo@gnu.org
>>
>> "Eli Zaretskii" writes:
>>
>> >> All redirected MPS calls wait synchronously for the allocation
>> >> thread to respond.
>> >>
>> >> This includes the MPS SIGSEGV handler, which calls into MPS, so it
>> >> must be directed to another thread.
>> >
>> > MPS SIGSEGV handler is invoked when the Lisp machine touches objects
>> > which were relocated by MPS, right?
>> > What exactly does the allocation thread do when that happens?
>>
>> Attempt to trigger another fault at the same address, which calls into
>> MPS, which eventually does whatever is necessary to advance to a state
>> where there is no longer a memory barrier. Of course we could call the
>> MPS signal handler directly from the allocation thread rather than
>> triggering another fault. (MPS allows for the possibility that the
>> memory barrier is no longer in place by the time the arena lock has
>> been acquired, and it has to, for multi-threaded operation.)
>>
>> What precisely MPS does is an implementation detail, and may be
>> complicated (the instruction emulation code which causes so much
>> trouble for weak objects, for example).
>>
>> I also think it's an implementation detail what MPS uses memory
>> barriers for: I don't think the current code uses superfluous memory
>> barriers to gather statistics, for example, but we certainly cannot
>> assume that will never happen.
>
> I think you lost me. Let me try to explain what I was asking about.
>
> MPS SIGSEGV is triggered when the main thread touches memory that was
> relocated by MPS.

(Well, the memory hasn't been relocated; it just contains invalid
pointers.)

> With the current way we interface with MPS, where the main thread
> calls MPS directly, MPS sets up SIGSEGV to invoke its (MPS's) own
> handler, which then handles the memory access.

It removes the barrier and returns, making the main thread try again.

> By contrast, under your proposal, MPS should be called from a separate
> thread. However, the way we currently process signals, the signals
> are delivered to the main thread. So we should install our own
> SIGSEGV handler which will run in the context of the main thread, and
> should somehow redirect the handling of this SIGSEGV to the MPS
> thread, right?

Correct.

> So now the main thread calls pthread_kill to deliver the SIGSEGV to
> the MPS thread,

No, that wouldn't work. We need the signal handler to have access to
the siginfo_t data, and pthread_kill provides no way to include that
information. Instead, we call the SIGSEGV handler directly on the
other thread, passing in the same siginfo structure. (My original code
simply accessed a byte at the fault address; however, reading the byte
isn't sufficient, and writing it risks exposing inadmissible
intermediate states to other threads, so now we call the sa_sigaction
function directly.)

> but what will the MPS thread do with that?

Call the MPS SIGSEGV handler, which knows what to do based (currently)
only on the address.

> how will it know which MPS function to call?

The MPS SIGSEGV handler is obtained by calling sigaction.
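
To make the hand-off concrete, here is roughly what I mean, as a
heavily simplified sketch. The names below are invented for
illustration and this is not the actual igc code; error handling,
nested or concurrent faults, and the fact that sem_wait isn't formally
async-signal-safe inside a handler are all ignored.

#include <pthread.h>
#include <semaphore.h>
#include <signal.h>
#include <stddef.h>
#include <string.h>

static struct sigaction mps_segv_action;  /* the handler MPS installed */
static sem_t fault_pending, fault_handled;
static siginfo_t pending_info;            /* copied by the faulting thread */
static void *pending_context;

/* Runs on the allocation/MPS thread.  */
static void *
mps_handler_thread (void *arg)
{
  (void) arg;
  for (;;)
    {
      sem_wait (&fault_pending);
      /* Call MPS's sa_sigaction directly, with the original siginfo;
         it removes the barrier (or finds it already gone).  */
      mps_segv_action.sa_sigaction (SIGSEGV, &pending_info,
                                    pending_context);
      sem_post (&fault_handled);
    }
}

/* Installed process-wide; runs on whichever thread faulted,
   normally the main thread.  */
static void
forwarding_segv_handler (int sig, siginfo_t *info, void *ctx)
{
  (void) sig;
  pending_info = *info;
  pending_context = ctx;
  sem_post (&fault_pending);
  /* Wait synchronously until the MPS thread is done, then return so
     the faulting access is retried.  */
  sem_wait (&fault_handled);
}

static void
install_forwarding_handler (void)
{
  pthread_t tid;
  struct sigaction sa;

  sem_init (&fault_pending, 0, 0);
  sem_init (&fault_handled, 0, 0);
  pthread_create (&tid, NULL, mps_handler_thread, NULL);

  /* Remember the handler MPS installed, then replace it with ours.  */
  sigaction (SIGSEGV, NULL, &mps_segv_action);
  memset (&sa, 0, sizeof sa);
  sa.sa_sigaction = forwarding_segv_handler;
  sa.sa_flags = SA_SIGINFO;
  sigemptyset (&sa.sa_mask);
  sigaction (SIGSEGV, &sa, NULL);
}

The point is simply that the faulting thread stays blocked in its
handler until the MPS thread has made the access legal again, so
returning from the handler retries the instruction.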

>> >> 3. we don't allocate memory
>> >
>> > Why can't GC happen when we don't allocate memory?
>> >
>> >> 4. we don't trigger memory barriers
>> >
>> > Same question here.
>>
>> I meant all four conditions are necessary, not that any one of them
>> would be sufficient.
>>
>> GC can happen if another thread triggers a memory barrier OR another
>> thread allocates OR we hit a memory barrier OR we allocate. The
>> question is whether it is ever useful to assume that GC can happen
>> ONLY in these four cases.
>
> GC can also happen when Emacs is idle, and at that time there's no
> allocations.

If you want to spell it out, sure, but there's no way for the main
thread to become idle without potentially allocating memory, so this
fifth condition is redundant.

Pip