From mboxrd@z Thu Jan 1 00:00:00 1970
From: Pip Cet via "Emacs development discussions."
Newsgroups: gmane.emacs.devel
Subject: Re: igc, macOS avoiding signals
Date: Thu, 02 Jan 2025 17:56:47 +0000
Message-ID: <87ldvtksm6.fsf@protonmail.com>
References: <799DDBC5-2C14-4476-B1E0-7BA2FE9E7901@toadstyle.org>
 <87bjwrgbky.fsf@gmail.com> <865xmznb5c.fsf@gnu.org>
 <86wmfdk2lx.fsf@gnu.org> <87seq1l7ik.fsf@protonmail.com>
 <86ldvtjk72.fsf@gnu.org>
Reply-To: Pip Cet
To: Eli Zaretskii
Cc: stefankangas@gmail.com, gerd.moellmann@gmail.com,
 eller.helmut@gmail.com, emacs-devel@gnu.org
In-Reply-To: <86ldvtjk72.fsf@gnu.org>

"Eli Zaretskii" writes:

>> Date: Thu, 02 Jan 2025 12:34:58 +0000
>> From: Pip Cet
>> Cc: Stefan Kangas, gerd.moellmann@gmail.com, eller.helmut@gmail.com,
>>  emacs-devel@gnu.org
>>
>> > We don't know the answer, AFAIU.
>> > We could tell them that if they
>> > prefer a patch, we can send one.
>>
>> I see no requirement to modify MPS, so far.
>
> There's no requirement, but it would be silly, IMO, to try to solve
> these issues without ever talking to the MPS developers.  We have
> nothing to lose, but it's possible that they will point out other

(Or we'll point them to a discussion which accuses MPS of being
"unreasonable" and pretty much demands MPS is fundamentally changed to
match our alleged requirements, and they'll get the wrong idea and
we'll lose whatever goodwill we might have had.)

> solutions.  Why give up that up front?  It makes no sense to me.

I never said you shouldn't talk to the MPS developers!  I'm not going
to unless I can present them with a patch that I could at least
imagine I'd consider if positions were reversed (unless I were asked
to do so, which seems unlikely :-) ).

I think it would be good to treat their time as valuable enough not to
simply point them to a very long thread, but that's just my opinion.

>> >> If we do decide to contact them, I'm afraid that I don't have
>> >> sufficient context to accurately describe the proposed callback.
>> >> I would need to ask someone to summarize the idea in sufficient
>> >> detail so that we can start a conversation.
>> >
>> > The problem is that evidently (at least on Posix platforms), if a
>> > program that uses MPS runs application code from a SIGPROF or a
>> > SIGALRM or a SIGCHLD signal handler can trigger a recursive access
>> > to the MPS arena,
>>
>> Let's be specific here: it's about accessing MPS memory, not about
>> allocating memory.
>
> I agree.  But then I didn't say anything about allocations.

(... that's what "being unspecific" means, isn't it?)

>> > which causes a fatal signal if that happens while MPS
>> > holds the arena lock.
>>
>> I don't know what "fatal signal" is supposed to mean in this case, to
>> the MPS folks.
>
> It's an accepted terminology, but we could explain if needed.
> My text was for Stefan (who does know what "fatal signal" means), not
> for quoting it verbatim in a message to MPS.

As I explained, it's also not the only problem.  A "fatal signal" is
the best of three possible undesirable outcomes.

>> > So we want to ask for a callback when MPS is
>> > about to lock the arena, and another callback immediately after it
>> > releases the lock.
>>
>> That's the first time I hear about the "about to lock the arena"
>> callback.  Wouldn't hurt, of course, but it's also a new idea.
>
> It was always the idea.

I'm sorry, but I really don't see how you can make that statement.
Your most recent proposal said nothing about that part of your idea,
see:

https://mail.gnu.org/archive/html/emacs-devel/2024-12/msg01537.html

If you expected me to be smart enough to read that email and conclude
that you want another callback which would block signals, and you
meant "unblock the signal" when you wrote "run the handler's body",
I'm not.

>> "Immediately", of course, may be misleading: if another thread is
>> waiting for the lock, the lock will not be available until that
>> thread is done with it.
>
> Which other thread?  There can be only one Lisp thread running at any
> given time.

I thought we agreed we should trigger GC from a separate POSIX thread
for at least some builds (to debug things).  Anyway, I don't see why
we should present a solution for a locking problem that assumes things
are single-threaded anyway.

>> We could block the appropriate signals before (in some cases, quite a
>> while before) we take the arena lock and unblock them after we
>> release it.  That's not obviously the best solution, but it's the
>> only one this change would enable, AFAICS.
>
> The problem is, we don't know when the arena will be locked, thus the
> request for a callback.

The problem is that these precise callbacks would enable ONLY this
solution, while a different callback mechanism might enable others.
I also think it would be better to simply set a custom lock/unlock
function which is run INSTEAD of the lockix.c code.  That would give
us options to fine-tune the behavior in case of lock contention: it's
perfectly okay to run signal handlers "while" trying to grab the
lock, which potentially takes a while.  They might finish, or they
might need the arena lock before we do.  Most importantly, that would
give us a timeout mechanism for interrupting lengthy scans for user
interaction.  I think this is highly relevant; either we need to
split long vectors, or we need a way to interrupt scanning, and this
is precisely the point where we could do so.  If all we have is a
pre-lock callback, we can't interrupt scanning, and then we're forced
to split long vectors.

The main problem, to me, is that the single-lock design of MPS might
change to a multi-lock design, and what do we do then?  Do we need
per-thread per-lock storage, or does the per-thread signal mask
suffice?

>> > Alternatively, if MPS already has a solution for such applications
>> > that use signals, we'd like to hear what they suggest.
>>
>> That's a useful question to ask, of course.  My understanding is
>> that MPS is configured mostly at build time, and your idea would
>> amount to creating a replacement for lockix.c and lockw3.c which
>> allows blocking signals.
>
> Once again, let's hear what the MPS developers have to say about
> that.

As I said, whoever wants to hit "send" can do so.  Not me, for now.

>> > As background, you can point them to this discussion:
>> >
>> > https://lists.gnu.org/archive/html/emacs-devel/2024-06/msg00568.html
>>
>> I think the polite thing to do would be to agree on a short but
>> accurate summary of what it is we want, explaining why it would be
>> helpful (it may simplify things but isn't required for correctness).
>
> That discussion has several backtraces which might be useful for them
> to better understand the issue.
It also contains the "unreasonable" thing, IIRC, and it's quite long.
But, again, my consent is certainly not required :-)

All that said, while I don't think it's the best idea, if you insist
on this MPS change and do the signal-blocking thing, that would, at
least, finally settle the question.  Strictly speaking, we can merge
without even splitting long vectors (some people will decide to give
it a try, create a billion-entry vector, observe the totally unusable
Emacs session that leaves them with, and decide MPS isn't ready).

Pip