From mboxrd@z Thu Jan 1 00:00:00 1970
From: Eli Zaretskii
Newsgroups: gmane.emacs.devel
Subject: Re: Some experience with the igc branch
Date: Sat, 28 Dec 2024 15:13:25 +0200
Message-ID: <86jzbkrlvu.fsf@gnu.org>
References: <87o713wwsi.fsf@telefonica.net> <867c7lw081.fsf@gnu.org> <87seq93uo7.fsf@protonmail.com> <865xn4w963.fsf@gnu.org> <87wmfkm1fg.fsf@protonmail.com>
To: Pip Cet
Cc: gerd.moellmann@gmail.com, stefankangas@gmail.com, ofv@wanadoo.es, emacs-devel@gnu.org, eller.helmut@gmail.com, acorallo@gnu.org
In-Reply-To: <87wmfkm1fg.fsf@protonmail.com> (message from Pip Cet on Sat, 28 Dec 2024 12:35:09 +0000)

> Date: Sat, 28 Dec 2024 12:35:09 +0000
> From: Pip Cet
> Cc: gerd.moellmann@gmail.com, stefankangas@gmail.com, ofv@wanadoo.es, emacs-devel@gnu.org, eller.helmut@gmail.com, acorallo@gnu.org
>
> "Eli Zaretskii" writes:
>
> >> I'm a bit confused.  Right now, on scratch/igc, on GNU/Linux, for Emacs
> >> in batch mode, it isn't technically true.  This causes the signal
> >> handler issue, which we're trying to solve.
> >
> > The signal handler issue is because igc can happen on our main thread
>
> Yes.
>
> > as well.
>
> It always happens on the main thread if that's all we've got.  Even
> macOS's emulated SIGSEGV suspends the main thread while the message is
> being handled, then resumes it afterwards.
>
> > IOW, there are two possible triggers for igc,
>
> Three, if you count the idle work.
>
> > and one of them is concurrent.
>
> I'd prefer to avoid that word.  There are facts we need to establish,
> and "concurrent" isn't well-defined enough to be helpful here.
>
> On single-threaded batch-mode GNU/Linux Emacs, on scratch/igc, no second
> thread is created for MPS.
> My understanding is that this is a useful,
> common, and intentional scenario for running MPS, not a bug or an
> accident.
>
> Of course we're free to change that, and run MPS from another thread,
> but that's not a no-brainer.
>
> >> > I also thought MPS GC runs concurrently in its own thread.
> >>
> >> That's what you should think: GC can strike at any time.
> >
> > The same is true with the old GC.
>
> The old GC emphatically could not "strike at any time".  There is plenty
> of code that assumes it doesn't strike.  Some of it might even be
> correct.
>
> >> If your code assumes it can't, it's broken.
> >
> > I disagree.  Sometimes you need to do stuff that cannot allow GC, and
> > that's okay if we have means to prevent GC when we need that.
>
> So now you're saying it's okay for code to assume GC can't strike, after
> agreeing a few lines up that it's not okay for code to do so.  Which one
> is it?
>
> >> As far as everybody but igc.c is concerned, it's safer to assume that GC
> >> runs on a separate thread.
> >
> > We are not talking about assumptions here, we are talking about facts.
>
> The fact is we don't even know whether GC is usually on a "separate"
> thread on macOS and Windows.  On GNU/Linux, assuming it is leads to
> bugs.
>
> > If igc is concurrent, it means it runs on a separate thread.  If it
> > doesn't run on a separate thread, it's not concurrent.
>
> Those two statements are equivalent.  They're not sufficient to define
> "concurrent"-as-Eli-understands-it, just for establishing a necessary
> condition for that.
>
> If we agree on that condition, then no, MPS is not always
> concurrent-as-Eli-understands-it.
>
> > We need to establish which is the truth, so we understand what we are
> > dealing with.
>
> Why?  Whatever the truth is, we can safely assume it's an implementation
> detail and not rely on it.
>
> We don't need to agree on a definition of concurrency.
>
> We don't need to agree on what's likely, just that single-thread
> operation of MPS and parallel MPS threads are both possible and not
> bugs.

You respond as if what I wrote were intended to attack or offend you.
It wasn't; all I want is to establish the truth.  Please read what I
write with that in mind, and drop the attitude, because it doesn't
help.