From: Stefan Monnier
Newsgroups: gmane.emacs.devel
Subject: Re: continuation passing in Emacs vs. JUST-THIS-ONE
Date: Mon, 10 Apr 2023 22:53:31 -0400
To: Tomas Hlavaty
Cc: Jim Porter, Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
References: <87leizif4r.fsf@logand.com>
In-Reply-To: <87leizif4r.fsf@logand.com> (Tomas Hlavaty's message of "Mon, 10 Apr 2023 23:47:16 +0200")

>> IOW, your `await` is completely different from Javascript's `await`.
> It depends what do you mean exactly and why do you bring javascript as
> relevant here.

Because that's the kind of model `futur.el` is trying to implement
(where `futur-let*` corresponds loosely to `await`, just without the
auto-CPS-conversion).

> Also Emacs does not have such sophisticated event loop like javascript.

Not sure what you mean by that.

>> And the use `await` above means that your Emacs will block while waiting
>> for one result. `futur-let*` instead lets you compose async operations
>> without blocking Emacs, and thus works more like Javascript's `await`.
> Blocking the current thread for one result is fine, because all the
> futures already run in other threads in "background" so there is nothing
> else to do.

You can't know that. There can be other async processes whose filters
should be run, timers to be executed, other threads to run, ...

> If you mean that you want to use the editor at the same time, just run
> the example in another thread.

The idea is to use `futur.el` *instead of* threads.

> But then you have to look for the result in the *Message* buffer.
> If I actually want to get the same behaviour as C-x C-e
> (eval-last-sexp) then I want await to block Emacs; and this is what
> await at top-level does.

Indeed, there are various cases where you do want to wait (which is why
I provide `futur-wait`). But its use should be fairly limited (to the
"top level").

> No, the iter case does map directly to futures:
>
> (await
>  (async-iter
>   (let ((a (async-iter
>             (message "a1")
>             (await-iter (sleep-iter3 3))
>             (message "a2")
>             1))
>         (b (async-iter
>             (message "b1")
>             (let ((c (async-iter
>                       (message "c1")
>                       (await-iter (sleep-iter3 3))
>                       (message "c2")
>                       2)))
>               (message "b2")
>               (+ 3 (await-iter c))))))
>     (+ (await-iter a) (await-iter b)))))

I must say I don't understand this example: in which sense is it using
"iter"? I don't see any `iter-yield`.

> The difference with for example javascript is that I drive the polling
> loop explicitly here, while javascript queues the continuations in the
> event loop implicitly.

`futur.el` also "queues the continuations in the event loop".

>>> Calling await immediately after async is useless (simply use blocking
>>> call). The point of future is to make the distance between those calls
>>> as big as possible so that the sum of times in the sequential case is
>>> replaced with max of times in the parallel case.
>> You're looking for parallelism. I'm not.
> What do you mean exactly?

That `futur.el` is not primarily concerned with allowing you to run
several subprocesses to exploit your multiple cores. It's instead
primarily concerned with making it easier to write asynchronous code.
One of the intended use cases would be for completion tables to return
futures (which, in many cases, will have already been computed
synchronously, but not always).

> I am asking because:
>
> https://wiki.haskell.org/Parallelism_vs._Concurrency
>
> Warning: Not all programmers agree on the meaning of the terms
> 'parallelism' and 'concurrency'. They may define them in different
> ways or do not distinguish them at all.
Yet I have never heard of anyone disagreeing with the definitions given
at the beginning of that very same page. More specifically, those who
may disagree are those who didn't know there was a distinction :-)

> But it seems that you insist on composing promises sequentially:

No, I'm merely making it easy to do that.

> Also futur.el does seems to run callbacks synchronously:

I don't think so: it runs them via `funcall-later`.

> In this javascript example, a and b appear to run in parallel (shall I
> say concurrently?):
>
> function sleep(sec) {
>   return new Promise(resolve => {
>     setTimeout(() => {resolve(sec);}, sec * 1000);
>   });
> }
> async function test() {
>   const a = sleep(9);
>   const b = sleep(8);
>   const z = await a + await b;
>   console.log(z);
> }
> test();
>
> Here the console log will show 17 after 9sec.
> It will not show 17 after 17sec.
>
> Can futur.el do that?

Of course. You could do something like

    (futur-let*
        ((a (futur-let* ((_ <- (futur-process-make :command '("sleep" "9"))))
              9))
         (b (futur-let* ((_ <- (futur-process-make :command '("sleep" "8"))))
              8))
         (a-val <- a)
         (b-val <- b))
      (message "Result = %s" (+ a-val b-val)))

> Sure, if the consumer does not really need the value of the result of
> the asynchronous computation, just plug in a callback that does
> something later.

How do you plug in a callback in code A which waits for code B to
finish, when code A doesn't know whether code B does its computation
synchronously or not, and, if B does it asynchronously, whether it's
done via timers, via some kind of hooks, via a subprocess which will
end when the computation is done, via a subprocess which will be kept
around for other purposes after the computation is done, etc.?
That's what `futur.el` is about: abstracting away those differences
behind the uniform API of a "future".

> In your example, you immediately return a lie and then
> fix it later asynchronously from a callback.

Yes. That's not due to `futur.el`, tho: it's due to the conflicting
requirements of jit-lock and the need to make a costly computation in
a subprocess in order to know what needs to be highlighted and how.

> Maybe it is confusing because you describe what the producer does, but
> not what the consumer does. And in your example, it does not matter
> what value the consumer receives because the callback will be able to
> fix it later. In your example, there is no consumer that needs the
> value of the future.

Yes, there is a consumer which will "backpatch" the highlighting.
But since it's done behind the back of jit-lock, we need to write it
by hand.

>> When writing the code by hand, for the cases targeted by my library, you
>> *have* to use process sentinels. `futur.el` just provides a fairly thin
>> layer on top. Lisp can't just "figure those out" for you.
>
> async-process uses process sentinel but this is just an implementation
> detail specific to asynchronous processes. It does not have to leak out
> of the future/async/await "abstraction".

Indeed, the users of the future won't know whether it's waiting for
some process to complete or for something else. They'll just call
`futur-let*` or `futur-wait` or somesuch.

> futur.el is completely broken,

Indeed, it's work in progress, not at all usable as of now.

> I think that your confusion is caused by the decision that
> futur-process-make yields exit code. That is wrong, exit code is
> logically not the resolved value (promise resolution), it indicates
> failure (promise rejection).

Not necessarily, it all depends on what the process is doing.
Similarly, the "intended return value" of a process will depend on what
the process does. In some cases it will be the stdout, but I see no
reason to restrict my fundamental function to such a choice. It's easy
to build on top of `futur-process-make` a higher-level function which
returns the stdout as the result of the future.


        Stefan
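
P.S. For illustration, such a stdout-returning wrapper might look
roughly like the sketch below. It is only a sketch: `futur.el` is work
in progress, so the `:buffer` keyword and the exact value the process
future resolves to are assumptions, and `my-futur-shell-output` is a
hypothetical name, not part of the library.

    ;; Rough sketch only: `futur.el' is work in progress, so the
    ;; `:buffer' keyword and the value the process future resolves to
    ;; are assumptions, and `my-futur-shell-output' is a hypothetical
    ;; helper rather than part of the library.
    (defun my-futur-shell-output (command)
      "Return a future whose value is COMMAND's stdout as a string."
      (let ((buf (generate-new-buffer " *futur-output*")))
        (futur-let* ((_ <- (futur-process-make :command command
                                               :buffer buf)))
          ;; By now the process has finished: grab its output, clean up.
          (unwind-protect
              (with-current-buffer buf (buffer-string))
            (kill-buffer buf)))))

    ;; At top level one could still block explicitly, e.g.:
    ;;   (futur-wait (my-futur-shell-output '("date")))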