From: Eli Zaretskii
Newsgroups: gmane.emacs.devel
Subject: Re: wait_reading_process_ouput hangs in certain cases (w/ patches)
Date: Sat, 18 Nov 2017 16:51:38 +0200
Message-ID: <83ine7fxmt.fsf@gnu.org>
Reply-To: Eli Zaretskii
To: Matthias Dahl
Cc: eggert@cs.ucla.edu, emacs-devel@gnu.org

> Cc: emacs-devel@gnu.org
> From: Matthias Dahl
> Date: Sat, 18 Nov 2017 15:24:26 +0100
>
> On 16/11/17 17:46, Paul Eggert wrote:
>
> > Sure, but how do we know that the data read while running timers and
> > filters was being read on behalf of our caller? Perhaps a timer or
> > filter fired off some Elisp function that decided to read data for
> > its own purposes, unrelated to our caller. We wouldn't want to count
> > the data read by that function as being data of interest to our
> > caller.
>
> I had considered that when I debugged the bug, but think about it for
> a moment. If you treat the process as a shared resource, it is your
> sole responsibility to take care of proper management and
> synchronization of the process as well.
>
> If a wait_... is in progress for process A, which is the response to
> some interaction A* (w/ filter F1), and the timers processed during
> our wait end up with another interaction B* (w/ filter F2) to process
> A, that will cause havoc either way: they will probably read the data
> that was destined for filter F1, or things get messed up even more
> horribly.

I think the normal situation is the one where each process has only one
filter, and therefore even if the output of the process was read by
some unrelated call to wait_reading_process_output, that output was
still processed by the correct filter. IOW, there should be no problem
with the actual processing of the process output; the problem is with
the caller of accept-process-output etc., which must receive an
indication that some output was received and processed. And that is
what the proposed change is trying to solve: to prevent that indication
from being lost due to recursive calls to wait_reading_process_output.

> We could, by the way, avoid this whole problem and dilemma if we
> shift the processing of timers to _AFTER_ we are finished with
> everything. But this brings in new problems: if we have to wait too
> long for the data to become available, timers would get delayed quite
> a bit. And they would only fire once, no matter how much time has
> passed. So this is not ideal either.

No, this would introduce much worse problems: timers are expected to
run close to their scheduled time, and delaying them until the wait is
over would break the many features that rely on that.
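
To make the shape of the problem concrete, here is a minimal Emacs Lisp
sketch of such a nested wait: an outer accept-process-output for one
process, and a timer that fires during that wait and does its own,
unrelated accept-process-output (a recursive call into
wait_reading_process_output on the C level). The process names, shell
commands, and timings are invented for illustration, and a POSIX "sh"
is assumed; this is not the reproducer from the thread, only the
pattern whose lost indication the proposed change addresses:

  ;; Sketch only; names, commands, and timings are made up.
  (let* ((proc-a (make-process
                  :name "proc-a"
                  :command '("sh" "-c" "sleep 1; echo done-a")
                  :filter (lambda (_proc output)      ; F1
                            (message "F1 got: %s" output))))
         ;; Fires while we are inside the outer accept-process-output
         ;; below and performs its own, unrelated wait.
         (timer (run-at-time
                 0.2 nil
                 (lambda ()
                   (let ((proc-b (make-process
                                  :name "proc-b"
                                  :command '("sh" "-c" "echo done-b")
                                  :filter (lambda (_proc output) ; F2
                                            (message "F2 got: %s" output)))))
                     (accept-process-output proc-b 2))))))
    ;; If proc-a's output is read and run through F1 while the nested
    ;; wait above is in progress, this outer call may keep waiting or
    ;; return nil even though output from proc-a was received and
    ;; handled -- the lost indication discussed above.
    (prog1 (accept-process-output proc-a 5)
      (cancel-timer timer)))

With the proposed change, the outer accept-process-output should still
report that output from proc-a was handled, even if it was actually
read and run through F1 inside the nested wait.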