From: Pip Cet via "Emacs development discussions."
Newsgroups: gmane.emacs.devel
Subject: Re: Some experience with the igc branch
Date: Sat, 28 Dec 2024 14:04:31 +0000
Message-ID: <87h66nnbuy.fsf@protonmail.com>
References: <87o713wwsi.fsf@telefonica.net> <867c7lw081.fsf@gnu.org> <86cyhcum5d.fsf@gnu.org>
Reply-To: Pip Cet
To: Gerd Möllmann
Cc: Eli Zaretskii, stefankangas@gmail.com, ofv@wanadoo.es, emacs-devel@gnu.org, eller.helmut@gmail.com, acorallo@gnu.org

Gerd Möllmann writes:

> Eli Zaretskii writes:
>
>>> From: Gerd Möllmann
>>> Cc: stefankangas@gmail.com, pipcet@protonmail.com, ofv@wanadoo.es,
>>>  emacs-devel@gnu.org, eller.helmut@gmail.com, acorallo@gnu.org
>>> Date: Fri, 27 Dec 2024 19:21:30 +0100
>>>
>>> Eli Zaretskii writes:
>>>
>>> > - Concurrent. The GC runs in its own thread. There are no explicit
>>> >   calls to start GC, and Emacs doesn't have to wait for the GC to
>>> >   complete.
>>> >
>>> > Pip says this is not true? I also thought MPS GC runs concurrently in
>>> > its own thread.
>>>
>>> What Pip said was very easy to misunderstand, to say the least :-). No,
>>> MPS is concurrent, period. There are situations in which MPS can, in
>>> addition, use the main thread. And it's still concurrent, period.
>>
>> How can you see which thread runs MPS? Where should I put a
>> breakpoint to see that (IOW, what are the entry points into MPS GC
>> code)?

I'd suggest ArenaEnter or MessagePost.

>> If I run Emacs with a breakpoint in process_one_message (after
>> enabling garbage-collection-messages), all I ever see is GC triggered
>> by igc_on_idle, which AFAIU is only one of the ways GC can be
>> triggered. Where are the entry points for the other GC triggers? I'm
>> asking because I'd like to run Emacs like that and see which thread(s)
>> run GC.
>
> I wonder if your interpretation is right here. AFAIR,
> process_one_message is always called from igc_on_idle.

There's a second call path when we create finalizable objects
(maybe_process_messages).

> IOW, we handle messages from MPS only when Emacs thinks it's idle, and
> that is always in the main thread.

My understanding is, also, that process_one_message doesn't trigger GC;
it handles messages produced by GCs triggered in other places (there's
a sketch of that kind of message loop in the P.S. below).

> The messages are produced and put into the MPS message queue in the MPS
> thread, usually. Or maybe, I don't know that for a fact, also in the
> main thread, when allocation points run out of memory, or when we do an
> mps_arena_step. The arena step thing is only done if igc-step-interval
> is non-zero, which is not the default. (I'm personally using 0.05 = 50
> ms, BTW.)

It's usually the main thread here.

> How to get hold of the MPS thread I don't know. I just see one thread
> more when using igc than with the old GC. Maybe one could set a

My understanding is that's the exception handling thread, which only
ever runs when another thread hits a memory barrier and is suspended
waiting for its resolution; as with my patch, this is about separate
stacks (and signal handling contexts), not about parallelism.

So it seems we're miscommunicating about these "MPS threads". What are
they? Where are they created? What do they do? If we can't answer that,
it'll be harder to decide what to do about signal handlers calling into
MPS.

Pip
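
P.S.: Since we keep talking past each other about what
process_one_message does, here is a minimal sketch of the generic
client-side MPS message loop, just so we have something concrete to
point at. This is the documented mps_message_get/discard pattern, not
the actual code in igc.c; the function name, the fprintf reporting,
and the assumption that it runs on the main thread are mine.

    #include <stdio.h>
    #include "mps.h"

    /* Drain pending GC messages.  The collections these messages
       describe have already happened, possibly on another thread;
       this loop only reports on them, it does not trigger GC.
       Assumes GC messages were enabled earlier with
       mps_message_type_enable (arena, mps_message_type_gc ()).  */
    static void
    drain_gc_messages (mps_arena_t arena)
    {
      mps_message_t msg;

      while (mps_message_get (&msg, arena, mps_message_type_gc ()))
        {
          fprintf (stderr, "GC done, live size %zu\n",
                   mps_message_gc_live_size (arena, msg));
          mps_message_discard (arena, msg);
        }
    }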
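
P.P.S.: "When allocation points run out of memory" above refers to the
standard MPS reserve/commit protocol. Again, this is the documented
generic pattern rather than igc.c's actual allocator; obj_size and the
memset initialization are placeholders.

    #include <string.h>
    #include "mps.h"

    /* Allocate from an allocation point.  When the AP's buffer is
       exhausted, mps_reserve refills it, and that is one of the
       places where MPS can do GC work on the calling (usually the
       main) thread.  */
    static void *
    alloc_from_ap (mps_ap_t ap, size_t obj_size)
    {
      mps_addr_t p;

      do
        {
          mps_res_t res = mps_reserve (&p, ap, obj_size);
          if (res != MPS_RES_OK)
            return NULL;        /* out of memory */
          /* The object must be initialized to something scannable
             before commit.  */
          memset (p, 0, obj_size);
        }
      while (!mps_commit (ap, p, obj_size));
      return p;
    }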
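
P.P.P.S.: For reference, the mps_arena_step call Gerd mentions is
roughly what a non-zero igc-step-interval enables on idle. The wrapper
function and the 0.05/1.0 arguments are only illustrative (0.05
mirrors the "50 ms" setting quoted above); the real call site is in
igc.c's idle handling.

    #include <stdbool.h>
    #include "mps.h"

    /* Ask MPS to spend up to 0.05 s (50 ms) of the current (main)
       thread's time on incremental GC work now.  The exact meaning of
       the multiplier argument is described in the MPS manual; 1.0 is
       a conservative choice.  A nonzero return means MPS found work
       to do.  */
    static bool
    step_gc_on_idle (mps_arena_t arena)
    {
      return mps_arena_step (arena, 0.05, 1.0) != 0;
    }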