* continuation passing in Emacs vs. JUST-THIS-ONE
@ 2023-03-11 12:53 Thomas Koch
2023-03-12 1:45 ` Jim Porter
` (4 more replies)
0 siblings, 5 replies; 53+ messages in thread
From: Thomas Koch @ 2023-03-11 12:53 UTC (permalink / raw)
To: emacs-devel@gnu.org
TL;DR: (Why) is there no standard way for continuation passing style[1] ("event driven") programming in Emacs?
During the investigation of an Emacs freeze[1] while starting eglot over Tramp, we made a couple of observations. It was suggested that I share these observations with you. I don't know elisp or Emacs internals, though, so apologies for any errors.
[1] https://debbugs.gnu.org/61350
Bug #61350 happens because Tramp calls accept-process-output with JUST-THIS-ONE set to t. Tramp has done this since Bug #12145[2]. However, it seems that this bug should rather have been fixed in the function `find-dired` instead. (See the following separate email on this.)
[2] https://debbugs.gnu.org/12145
The JUST-THIS-ONE argument was introduced with this entry in etc/NEWS.22, emphasis mine:
"""
*** Function 'accept-process-output' has a new optional fourth arg
JUST-THIS-ONE. If non-nil, only output from the specified process
is handled, suspending output from other processes. If value is an
integer, also inhibit running timers. THIS FEATURE IS GENERALLY NOT
RECOMMENDED, but may be necessary for specific applications, such as
speech synthesis.
"""
The argument was discussed here:
https://lists.gnu.org/archive/html/emacs-devel/2004-08/msg00141.html
and introduced in this commit:
https://git.savannah.gnu.org/cgit/emacs.git/commit/?id=107ed38d4bdec03002b2a23619e205722cd5b8d1
I don't even think that the original motivation for introducing JUST-THIS-ONE was valid. Unfortunately there was not much discussion about it. It was argued that it would be hard to make a process filter function reentrant, and I think that this was an invalid root-cause analysis to start with.
First, the Emacs manual says[3]: "Note that if any of those functions are called by the filter, the filter may be called recursively." So one should make the filter reentrant, if I understand correctly.
[3] https://www.gnu.org/software/emacs/manual/html_node/elisp/Filter-Functions.html
Second, the manual further says: "Quitting is normally inhibited within a filter function". This indicates to me that a filter function should be (mostly) "side effect free" besides putting its input somewhere (e.g. in a buffer or message queue) and triggering an event when there is enough input for further processing. This also reduces the risk that the function could be called recursively in a damaging way.
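For illustration, a minimal sketch of such a filter (the names `my-lsp-queue', `my-enqueue-only-filter' and `my-lsp-process-queue' are hypothetical, purely for illustration): it only accumulates output and schedules the real work outside the filter:

(defvar my-lsp-queue nil
  "Chunks of process output waiting to be processed.")

(defun my-enqueue-only-filter (_proc string)
  ;; Just store the chunk; never parse, block or wait here.
  (push string my-lsp-queue)
  ;; Trigger the actual processing outside the filter.
  (run-at-time 0 nil #'my-lsp-process-queue))

(defun my-lsp-process-queue ()
  (when my-lsp-queue
    (let ((chunks (nreverse my-lsp-queue)))
      (setq my-lsp-queue nil)
      ;; Parse/handle the accumulated output here, where quitting and
      ;; reentrancy are easier to reason about.
      (message "received %d chunk(s)" (length chunks)))))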
It seems to me that there is not yet a standard way in Emacs for continuations (or event-driven programming), although the Emacs Wiki refers to the emacs-deferred library: https://www.emacswiki.org/emacs/ConcurrentEmacs
Because there is no such library in Emacs, people either write their own code for continuations (eglot?) or do too much work in a process filter function (speechd-el in 2004 which led to JUST-THIS-ONE).
While I don't know elisp, I unfortunately had to do JavaScript. Like Emacs, JS is single-threaded. While I share the sentiment about JS, there are still things to learn from it, e.g. event driven programming.
See also:
- 2011 emacs-dev discussion: https://lists.gnu.org/archive/html/emacs-devel/2011-05/msg00575.html
- 2016 Blogpost https://jyp.github.io/posts/elisp-cps.html
- https://stable.melpa.org/#/deferred
- https://www.gnu.org/software/emacs/manual///html_node/elisp/Transaction-Queues.html
- Maybe: https://elpa.gnu.org/packages/fsm.html
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-11 12:53 continuation passing in Emacs vs. JUST-THIS-ONE Thomas Koch
@ 2023-03-12 1:45 ` Jim Porter
2023-03-12 6:33 ` tomas
2023-03-14 6:39 ` Karthik Chikmagalur
2023-03-14 3:58 ` Richard Stallman
` (3 subsequent siblings)
4 siblings, 2 replies; 53+ messages in thread
From: Jim Porter @ 2023-03-12 1:45 UTC (permalink / raw)
To: Thomas Koch, emacs-devel@gnu.org
On 3/11/2023 4:53 AM, Thomas Koch wrote:
> TL;DR: (Why) is there no standard way for continuation passing style[1] ("event driven") programming in Emacs?
There is (sort of): generator.el. There's also 'eshell-do-eval', but
that's not as useful for general purposes, and I hope to replace it with
either generator.el or real threads at some point in the future.
It would probably be reasonable to add a more asynchronous-oriented way
of working with generator.el's CPS machinery, but I think the bigger
problem is just the time to fix existing code that's not doing the right
thing. Even without an easy-to-use asynchronous programming interface,
you could probably get pretty far in terms of real-world improvements
just by using timers. I imagine timers would be a lot clumsier than a
"proper" async API, but they'd likely work well enough for a first pass.
In any case, working on this would likely be a big help for Emacs. One
of the more-common things I see people wish for in Emacs is "threading".
I think this is probably a mistaken wish (Emacs generally doesn't use
enough CPU to saturate a core), but what they really want is for fewer
operations that block for a long time. If it were easier to divide up
long-running tasks into small chunks, that would go a long way towards
solving these sorts of issues.
(In theory, you could even get real multithreading this way, if you
could divide up your task in a way that Emacs could be sure some chunk
can be offloaded onto another thread.)
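As a rough sketch of the timer approach mentioned above (the helper name `my-process-in-chunks' is made up for illustration): handle a long list a few items at a time and let a timer reschedule the rest, so Emacs stays responsive in between:

(defun my-process-in-chunks (items handler &optional chunk-size)
  "Call HANDLER on ITEMS, CHUNK-SIZE elements per timer tick."
  (let ((chunk-size (or chunk-size 50)))
    (dotimes (_ (min chunk-size (length items)))
      (funcall handler (pop items)))
    ;; Reschedule the remainder instead of blocking until it's all done.
    (when items
      (run-with-idle-timer 0.1 nil
                           #'my-process-in-chunks items handler chunk-size))))

;; Example: handle 10000 items without one long blocking stretch.
;; (my-process-in-chunks (number-sequence 1 10000)
;;                       (lambda (n) (ignore n)))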
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-12 1:45 ` Jim Porter
@ 2023-03-12 6:33 ` tomas
2023-03-14 6:39 ` Karthik Chikmagalur
1 sibling, 0 replies; 53+ messages in thread
From: tomas @ 2023-03-12 6:33 UTC (permalink / raw)
To: emacs-devel
On Sat, Mar 11, 2023 at 05:45:15PM -0800, Jim Porter wrote:
[...]
> In any case, working on this would likely be a big help for Emacs. One of
> the more-common things I see people wish for in Emacs is "threading". I
> think this is probably a mistaken wish (Emacs generally doesn't use enough
> CPU to saturate a core), but what they really want is for fewer operations
> that block for a long time. If it were easier to divide up long-running
> tasks into small chunks, that would go a long way towards solving these
> sorts of issues.
Oooh. You made my day :)
> (In theory, you could even get real multithreading this way, if you could
> divide up your task in a way that Emacs could be sure some chunk can be
> offloaded onto another thread.)
Exactly: those are two different building blocks, and most useful
when available separately.
Cheers
--
t
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-11 12:53 continuation passing in Emacs vs. JUST-THIS-ONE Thomas Koch
2023-03-12 1:45 ` Jim Porter
@ 2023-03-14 3:58 ` Richard Stallman
2023-03-14 6:28 ` Jim Porter
2023-03-16 21:35 ` miha
` (2 subsequent siblings)
4 siblings, 1 reply; 53+ messages in thread
From: Richard Stallman @ 2023-03-14 3:58 UTC (permalink / raw)
To: Thomas Koch; +Cc: emacs-devel
[[[ To any NSA and FBI agents reading my email: please consider ]]]
[[[ whether defending the US Constitution against all enemies, ]]]
[[[ foreign or domestic, requires you to follow Snowden's example. ]]]
> TL;DR: (Why) is there no standard way for continuation passing
> style[1] ("event driven") programming in Emacs?
I implemented Emacs Lisp using simple, natural C data structures
including the C call stack. This does not lend itself to implementing
continuations.
To change that would be enormous trouble, and I expect it would cause
a big slowdown too. In my opinion, continuation-passing style is not
worth that downside.
--
Dr Richard Stallman (https://stallman.org)
Chief GNUisance of the GNU Project (https://gnu.org)
Founder, Free Software Foundation (https://fsf.org)
Internet Hall-of-Famer (https://internethalloffame.org)
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-14 3:58 ` Richard Stallman
@ 2023-03-14 6:28 ` Jim Porter
0 siblings, 0 replies; 53+ messages in thread
From: Jim Porter @ 2023-03-14 6:28 UTC (permalink / raw)
To: rms, Thomas Koch; +Cc: emacs-devel
On 3/13/2023 8:58 PM, Richard Stallman wrote:
> [[[ To any NSA and FBI agents reading my email: please consider ]]]
> [[[ whether defending the US Constitution against all enemies, ]]]
> [[[ foreign or domestic, requires you to follow Snowden's example. ]]]
>
> > TL;DR: (Why) is there no standard way for continuation passing
> > style[1] ("event driven") programming in Emacs?
>
> I implemented Emacs Lisp using simple, natural C data structures
> including the C call stack. This does not lend itself to implementing
> continuations.
>
> To change that would be enormous trouble, and I expect it would cause
> a big slowdown too. In my opinion, continuation-passing style is not
> worth that downside.
There's already some support in Emacs for coroutines: generator.el
provides, well... generators, which should allow for most (all?) of what
you can normally do with coroutines, albeit with syntax that might not
be as fluent as we might like. This is implemented entirely in Lisp, so
I wouldn't be surprised if the performance suffers, but for certain
kinds of tasks where Emacs isn't CPU-bound, even that could be a
significant improvement for overall responsiveness.
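For readers unfamiliar with generator.el, a minimal sketch of its API (my own illustration, not code from this thread):

(require 'generator)

;; A generator that lazily yields three values.
(iter-defun my-numbers ()
  (iter-yield 1)
  (iter-yield 2)
  (iter-yield 3))

;; Drive it step by step; each `iter-next' resumes the continuation.
(let ((it (my-numbers)))
  (list (iter-next it) (iter-next it) (iter-next it)))
;; => (1 2 3); a further `iter-next' would signal `iter-end-of-sequence'.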
For this thread in particular, I believe it's inspired by some issues
with Tramp, where (if I understand correctly) process filters from
relatively-long network operations are causing hangs (and also the
dreaded "forbidden reentrant call to Tramp" error). In these cases, I
think it's at least reasonably likely that the operations in question
are network/IO-bound, so slicing them up into continuations might be
good enough, even if those continuations have a performance penalty in
terms of CPU use.
Of course, without at least a simple proof of concept, it's hard to say
what the pros and cons look like. I'm hoping to test something like this
out in Eshell by using/adapting generator.el, since Eshell already
effectively contains its own CPS transformer called 'eshell-do-eval'.
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-12 1:45 ` Jim Porter
2023-03-12 6:33 ` tomas
@ 2023-03-14 6:39 ` Karthik Chikmagalur
2023-03-14 18:58 ` Jim Porter
1 sibling, 1 reply; 53+ messages in thread
From: Karthik Chikmagalur @ 2023-03-14 6:39 UTC (permalink / raw)
To: Jim Porter, Thomas Koch, emacs-devel@gnu.org
One of the issues I've faced with using generator.el and its offshoots,
such as the emacs-aio library (written by Chris Wellons), is that I can't
run `debug-on-entry' on or edebug an `iter-defun'. This makes it
difficult for me to write iterator-based logic for anything non-trivial.
Is there some way to step through calls to iterators?
> It would probably be reasonable to add a more asynchronous-oriented way
> of working with generator.el's CPS machinery, but I think the bigger
> problem is just the time to fix existing code that's not doing the right
> thing.
What needs to be fixed in generator.el for this?
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-14 6:39 ` Karthik Chikmagalur
@ 2023-03-14 18:58 ` Jim Porter
2023-03-15 17:48 ` Stefan Monnier
0 siblings, 1 reply; 53+ messages in thread
From: Jim Porter @ 2023-03-14 18:58 UTC (permalink / raw)
To: Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org; +Cc: monnier
On 3/13/2023 11:39 PM, Karthik Chikmagalur wrote:
>> It would probably be reasonable to add a more asynchronous-oriented way
>> of working with generator.el's CPS machinery, but I think the bigger
>> problem is just the time to fix existing code that's not doing the right
>> thing.
>
> What needs to be fixed in generator.el for this?
I was thinking something like the emacs-aio library you mentioned,
actually. It's less that generator.el is broken (I don't think it is, at
least), and more that it's not the interface I'd use for writing
asynchronous code.
That said, something that looks like emacs-aio might not be the best
answer either; it will probably take some experimentation to see what
would be most usable (and what would have acceptable performance). I
seem to recall that Stefan Monnier (CCed) mentioned having some WIP code
to make generator.el easier to use for asynchronous code...
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-14 18:58 ` Jim Porter
@ 2023-03-15 17:48 ` Stefan Monnier
2023-03-17 0:17 ` Tomas Hlavaty
0 siblings, 1 reply; 53+ messages in thread
From: Stefan Monnier @ 2023-03-15 17:48 UTC (permalink / raw)
To: Jim Porter; +Cc: Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
> That said, something that looks like emacs-aio might not be the best answer
> either; it will probably take some experimentation to see what would be most
> usable (and what would have acceptable performance). I seem to recall that
> Stefan Monnier (CCed) mentioned having some WIP code to make generator.el
> easier to use for asynchronous code...
I think my WiP thingy is very similar to emacs-aio.
I haven't had time to work on it and I'd welcome help with it (attached).
Stefan
[-- Attachment #2: futur.el --]
[-- Type: application/emacs-lisp, Size: 11104 bytes --]
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-11 12:53 continuation passing in Emacs vs. JUST-THIS-ONE Thomas Koch
2023-03-12 1:45 ` Jim Porter
2023-03-14 3:58 ` Richard Stallman
@ 2023-03-16 21:35 ` miha
2023-03-16 22:14 ` Jim Porter
2023-03-25 21:05 ` Tomas Hlavaty
2023-03-26 23:50 ` Tomas Hlavaty
4 siblings, 1 reply; 53+ messages in thread
From: miha @ 2023-03-16 21:35 UTC (permalink / raw)
To: Thomas Koch, emacs-devel@gnu.org
There's also the issue that using continuation passing (async-io)
doesn't auto-magically solve the re-entrancy issues.
Consider the following hypothetical command, written using JS-style
async/await operators:
(async-defun insert-some-parent-dirs ()
  (interactive)
  (insert (await (locate-dominating-file default-directory "go.mod")))
  (insert "\n")
  (insert (await (locate-dominating-file default-directory "go.work"))))
If the user executed such a command multiple times in quick succession,
the executions could happen in parallel and would trample over each
other.
For each use of "await", the programmer has to think about the
possibility of other code running "in-between". This style of
programming may be harder in Elisp which has a lot of global state in
form of buffer contents, markers and overlays.
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-16 21:35 ` miha
@ 2023-03-16 22:14 ` Jim Porter
0 siblings, 0 replies; 53+ messages in thread
From: Jim Porter @ 2023-03-16 22:14 UTC (permalink / raw)
To: miha, Thomas Koch, emacs-devel@gnu.org
On 3/16/2023 2:35 PM, miha@kamnitnik.top wrote:
> There's also the issue that using continuation passing (async-io)
> doesn't auto-magically solve the re-entrancy issues.
>
> Consider the following hypothetical command, written using JS-style
> async/await operators:
>
> (async-defun insert-some-parent-dirs ()
>   (interactive)
>   (insert (await (locate-dominating-file default-directory "go.mod")))
>   (insert "\n")
>   (insert (await (locate-dominating-file default-directory "go.work"))))
>
> If the user executed such a command multiple times in quick succession,
> the executions could happen in parallel and would trample over each
> other.
>
> For each use of "await", the programmer has to think about the
> possibility of other code running "in-between". This style of
> programming may be harder in Elisp which has a lot of global state in
> form of buffer contents, markers and overlays.
Yeah, this isn't easy to fix on its own. The best I can think of (and
this would take quite a bit of experimentation) would be some way of
declaring async functions as non-reentrant for certain contexts. So your
example would be non-reentrant for a given buffer. Some other functions
might be non-reentrant globally, or for a particular argument to the
function.
With a declaration like that, we could hopefully go a fair way towards
solving the problem by serializing any async calls that are non-reentrant.
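A very rough sketch of that idea (purely illustrative; all names here are made up): treat "non-reentrant for a given buffer" as "serialized per buffer", queueing each async command and only starting the next one when the previous one reports completion:

;; Per-buffer FIFO of pending jobs.  Each job is a function that receives
;; a DONE callback and must call it exactly once when all of its awaited
;; work has finished (with the same buffer current, in this sketch).
(defvar-local my-async-job-queue nil)
(defvar-local my-async-job-running nil)

(defun my-run-serialized (job)
  "Queue JOB so that at most one such job runs in this buffer at a time."
  (setq my-async-job-queue (append my-async-job-queue (list job)))
  (unless my-async-job-running
    (my--async-run-next)))

(defun my--async-run-next ()
  (let ((job (pop my-async-job-queue)))
    (if (null job)
        (setq my-async-job-running nil)
      (setq my-async-job-running t)
      (funcall job #'my--async-run-next))))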
I hope to test out Stefan's futur.el (and maybe vanilla generator.el) as
a new iterative evaluation backend for Eshell in the coming months;
hopefully that will help produce some more concrete information about
what the pitfalls are.
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-15 17:48 ` Stefan Monnier
@ 2023-03-17 0:17 ` Tomas Hlavaty
2023-03-17 3:08 ` Stefan Monnier
0 siblings, 1 reply; 53+ messages in thread
From: Tomas Hlavaty @ 2023-03-17 0:17 UTC (permalink / raw)
To: Stefan Monnier, Jim Porter
Cc: Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
Hi Stefan,
On Wed 15 Mar 2023 at 13:48, Stefan Monnier <monnier@iro.umontreal.ca> wrote:
>> I seem to recall that Stefan Monnier (CCed) mentioned having some WIP
>> code to make generator.el easier to use for asynchronous code...
> I think my WiP thingy is very similar to emacs-aio. I haven't had
> time to work on it and I'd welcome help with it (attached).
Interesting.
From futur.el:
> ;; (futur-let*
> ;; (exitcode <- (futur-process-make :command cmd :buffer t))
> ;; (out (buffer-string)) ;; Get the process's output.
> ;; (cmd2 (build-second-arg-list exitcode out))
> ;; (otherexit <- (futur-process-make :command cmd :buffer t)))
> ;; (futur-pure (buffer-string)))
Seems like beautiful lisp code has no futur. :-)
There is something very ugly about this code.
It looks like assembly, 1 dimensional vertical code.
It is hard to see the structure of the code and what it actually does.
I do not think it is practical to write non-trivial code in this style.
Nice lisp code is usually 2 dimensional,
with indentation and top-left to bottom-right direction.
It is usually much clearer to see what is an argument to what
based on the position in the syntax tree.
Is it possible to make the syntax more structured (lispy)?
Meaning tree-like, not list-like?
Something in the spirit of:
(futur-progn
  (futur-process-make
   :command (futur-let ((exitcode (futur-process-make
                                   :command (build-arg-list)
                                   :buffer t)))
              (build-second-arg-list exitcode (buffer-string)))
   :buffer t)
  (buffer-string))
or would it need some fancy syntax rewriting like other async/cps
syntax rewriting libraries?
Second question: I see that futur-wait blocks the whole Emacs due to
the while loop. How can one use futur without blocking Emacs?
I usually prefer pull-based code as it does not steal control from me.
Let's say I want to do something nontrivial but not block Emacs. I would
split the computation into chunks, identify state explicitly and move it
to the heap, and suspend the computation without needing to reserve a
stack for it. I.e. manually write a kind of stream that yields items or
nil as EOF (without syntax rewriting à la generator.el). I need EAGAIN
for stuff happening asynchronously. Stuff that blocks simply needs to be
such a small chunk that it does not negatively affect Emacs usability.
Unfortunately futur.el does not have an executable example, so I'll
invent one.
Example (requires lexical binding): traverse the filesystem, find *.el
files and do something for each one (here I just count the length of
the absolute path, for simplicity). And the whole thing should not
block Emacs.
(defun stream-pull-in-background (stream &optional secs repeat)
  (let (timer)
    (setq timer (run-with-timer
                 (or secs 1)
                 (or repeat 1)
                 (lambda ()
                   ;;(message "@@@ polling!")
                   (unless (funcall stream)
                     (cancel-timer timer)))))))

(defun line-stream (buffer)
  ;; yield buffer lines, follow process output if any
  (let (start)
    (lambda ()
      (with-current-buffer buffer
        (save-excursion
          (unless start
            (setq start (point-min)))
          (goto-char start)
          (let ((end (line-beginning-position 2)))
            ;;(message "@@@ %s %s" start end)
            (if (< start end)
                (prog1 (buffer-substring-no-properties
                        start
                        (line-end-position 1))
                  (setq start end))
              (let ((process (get-buffer-process buffer)))
                (if (and process (process-live-p process))
                    'EAGAIN
                  (let ((end (point-max)))
                    (when (< start end)
                      (prog1 (buffer-substring-no-properties start end)
                        (setq start end)))))))))))))

(defun burst-stream (stream &optional secs)
  ;; pull available data during SECS time window
  ;; this is very crude "scheduler" but keeps emacs mostly useable
  (let ((secs (or secs 0.2)))
    (lambda ()
      (when secs
        (let ((z 'EAGAIN)
              (end (+ secs (float-time (current-time)))))
          ;;(message "@@@ burst %s %s:" (float-time (current-time)) end)
          (while (and (< (float-time (current-time)) end)
                      (setq z (funcall stream))
                      (not (eq 'EAGAIN z))))
          (unless z (setq secs nil))
          z)))))

(defun message2-stream (stream)
  (lambda ()
    (let ((x (funcall stream)))
      (when x
        (unless (eq 'EAGAIN x)
          (message "@@@ %s %s" (length x) x))
        x))))

(defun test-buffer (name)
  (let ((b (get-buffer-create name)))
    (with-current-buffer b
      (buffer-disable-undo)
      (erase-buffer))
    b))

(defun test3 (buffer-name command)
  (stream-pull-in-background
   (let ((b (test-buffer buffer-name)))
     (make-process :name buffer-name
                   :command command
                   :buffer b)
     (burst-stream (message2-stream (line-stream b))))))
;;(test3 "test3" '("cat" "/tmp/a.el"))
;;(test3 "test3" '("find" "/home/tomas/mr/" "-type" "f" "-name" "*.el"))
;;(list-processes)
;;(list-timers)
Last question: How would similar functionality be implemented using
futur?
Cheers
Tomas
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-17 0:17 ` Tomas Hlavaty
@ 2023-03-17 3:08 ` Stefan Monnier
2023-03-17 5:37 ` Jim Porter
2023-03-25 18:42 ` Tomas Hlavaty
0 siblings, 2 replies; 53+ messages in thread
From: Stefan Monnier @ 2023-03-17 3:08 UTC (permalink / raw)
To: Tomas Hlavaty
Cc: Jim Porter, Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
>> ;; (futur-let*
>> ;; (exitcode <- (futur-process-make :command cmd :buffer t))
>> ;; (out (buffer-string)) ;; Get the process's output.
>> ;; (cmd2 (build-second-arg-list exitcode out))
>> ;; (otherexit <- (futur-process-make :command cmd2 :buffer t)))
>> ;; (futur-pure (buffer-string)))
>
> Seems like beautiful lisp code has no futur. :-)
BTW the above code can't work right now. Part of the issue is the
management of `current-buffer`: should the composition of futures with
`futur-let*` save&restore `current-buffer` to mimic more closely the
behavior one would get with plain old sequential execution? If so,
should we do the same with `point`? What about other such state?
> There is something very ugly about this code.
> It looks like assembly, 1 dimensional vertical code.
> It is hard to see the structure of the code and what it actually does.
> I do not think it is practical to write non-trivial code in this style.
:-)
> Nice lisp code is usually 2 dimensional,
> with indentation and top-left to bottom-right direction.
> It is usually much clearer to see what is an argument to what
> based on the position in the syntax tree.
>
> Is it possible to make the syntax more structured (lispy)?
> Meaning tree-like, not list-like?
> Something in the spirit of:
>
> (futur-progn
>   (futur-process-make
>    :command (futur-let ((exitcode (futur-process-make
>                                    :command (build-arg-list)
>                                    :buffer t)))
>               (build-second-arg-list exitcode (buffer-string)))
>    :buffer t)
>   (buffer-string))
The `futur-progn` is just:
(defmacro futur-progn (form &rest forms)
  (if (null forms) form
    `(futur-let* ((_ ,form)) (futur-progn ,@forms))))
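For illustration, expanding the macro by hand: (futur-progn a b c) unfolds into nested futur-let* forms, so each form's future is bound (and ignored) before the next one starts:

(futur-progn a b c)
;; == (futur-let* ((_ a)) (futur-progn b c))
;; == (futur-let* ((_ a)) (futur-let* ((_ b)) c))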
As for passing the result of `futur-let` to `:command` it just requires
writing `futur-process-make` in a way that is tolerant of this
`:command` arg being a future rather than a string, which should be
fairly easy (it's basically always easy when done within a function
which itself returns a future).
> or would it need some fancy syntax rewriting like other async/cps
> syntax rewriting libraries?
I don't think so, no. But you would need fancy rewriting if you wanted
to allow
(concat foo (futur-let* (...) ...))
But as you point out at the beginning, as a general rule, if you want to
avoid rewritings in the style of `generator.el`, then the code will tend
to feel less like a tree and more "linear/imperative/sequential",
because you fundamentally have to compose your operations "manually"
with a monadic "bind" operation that forces you to *name* the
intermediate value.
> Second question: I see that futur-wait blocks the whole Emacs due to
> the while loop. How can one use futur without blocking Emacs?
Don't use `futur-wait` and instead use `futur-let*`.
IOW: instead of waiting, return immediately a future.
> Last question: How would similar functionality be implemented
> using futur?
Good question.
To a large extent I guess it could be implemented in basically the same
way: you'd use futures only for the timer part of the code, and leave
the process's output to fill the buffer just like you do.
I think the difference would be very small and cosmetic like replacing
(defun stream-pull-in-background (stream &optional secs repeat)
  (let (timer)
    (setq timer (run-with-timer
                 (or secs 1)
                 (or repeat 1)
                 (lambda ()
                   ;;(message "@@@ polling!")
                   (unless (funcall stream)
                     (cancel-timer timer)))))))
with something like:
(defun stream-pull-in-background (stream &optional secs repeat)
  (futur-run-with-timer
   (or secs 1)
   (lambda ()
     ;;(message "@@@ polling!")
     (when (and (funcall stream) repeat)
       (stream-pull-in-background stream secs repeat)))))
The only benefit I could see is that it returns a future, i.e. a kind of
standardized representation of that async computation so the caller can
use things like `futur-wait` or `futur-let*` without having to care
about whether the function is using timers or something else.
And, there's also the benefit of standardized error-signaling.
Stefan
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-17 3:08 ` Stefan Monnier
@ 2023-03-17 5:37 ` Jim Porter
2023-03-25 18:42 ` Tomas Hlavaty
1 sibling, 0 replies; 53+ messages in thread
From: Jim Porter @ 2023-03-17 5:37 UTC (permalink / raw)
To: Stefan Monnier, Tomas Hlavaty
Cc: Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
On 3/16/2023 8:08 PM, Stefan Monnier wrote:
> BTW the above code can't work right now. Part of the issue is the
> management of `current-buffer`: should the composition of futures with
> `futur-let*` save&restore `current-buffer` to mimic more closely the
> behavior one would get with plain old sequential execution? If so,
> should we do the same with `point`? What about other such state?
How about doing what threads do?
> Each thread also has its own current buffer and its own match data.
If nothing else, consistency makes this easier to remember. (And if more
stuff should be saved and restored, it would probably be good to add
those to threads too.)
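As a tiny sketch of what that could look like at the Lisp level (the name `my-with-origin-buffer' is hypothetical): capture the buffer that is current when a continuation is created and reselect it when the continuation runs, much like a thread keeps its own current buffer:

(defun my-with-origin-buffer (k)
  "Wrap continuation K so it runs with the original buffer current."
  (let ((origin (current-buffer)))
    (lambda (&rest args)
      (if (buffer-live-p origin)
          (with-current-buffer origin
            (apply k args))
        (apply k args)))))

;; Usage sketch: make a sentinel run in the buffer that started the
;; process rather than whatever buffer happens to be current.
;; (make-process :name "ls" :command '("ls")
;;               :sentinel (my-with-origin-buffer
;;                          (lambda (_proc event) (insert event))))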
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-17 3:08 ` Stefan Monnier
2023-03-17 5:37 ` Jim Porter
@ 2023-03-25 18:42 ` Tomas Hlavaty
2023-03-26 19:35 ` Tomas Hlavaty
2023-03-29 18:47 ` Stefan Monnier
1 sibling, 2 replies; 53+ messages in thread
From: Tomas Hlavaty @ 2023-03-25 18:42 UTC (permalink / raw)
To: Stefan Monnier
Cc: Jim Porter, Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
On Thu 16 Mar 2023 at 23:08, Stefan Monnier <monnier@iro.umontreal.ca> wrote:
>>> ;; (futur-let*
>>> ;; (exitcode <- (futur-process-make :command cmd :buffer t))
>>> ;; (out (buffer-string)) ;; Get the process's output.
>>> ;; (cmd2 (build-second-arg-list exitcode out))
>>> ;; (otherexit <- (futur-process-make :command cmd2 :buffer t)))
>>> ;; (futur-pure (buffer-string)))
>>
>> Seems like beautiful lisp code has no futur. :-)
>
> BTW the above code can't work right now.
That is a shame.
> Part of the issue is the
> management of `current-buffer`: should the composition of futures with
> `futur-let*` save&restore `current-buffer` to mimic more closely the
> behavior one would get with plain old sequential execution? If so,
> should we do the same with `point`? What about other such state?
I do not think there is a good implicit solution.
Either it would save too much state or too little,
or save it the wrong way.
It should be written out explicitly (like proc-writer below, for
example).
> The `futur-progn` is just:
>
> (defmacro futur-progn (form &rest forms)
>   (if (null forms) form
>     `(futur-let* ((_ ,form)) (futur-progn ,@forms))))
Nice, this is much better.
> As for passing the result of `futur-let` to `:command` it just requires
> writing `futur-process-make` in a way that is tolerant of this
> `:command` arg being a future rather than a string, which should be
> fairly easy (it's basically always easy when done within a function
> which itself returns a future).
Sounds good.
>> or would it need some fancy syntax rewriting like other async/cps
>> syntax rewriting libraries?
>
> I don't think so, no. But you would need fancy rewriting if you wanted
> to allow
>
> (concat foo (futur-let* (...) ...))
>
> But as you point out at the beginning, as a general rule, if you want to
> avoid rewritings in the style of `generator.el`, then the code will tend
> to feel less like a tree and more "linear/imperative/sequential",
> because you fundamentally have to compose your operations "manually"
> with a monadic "bind" operation that forces you to *name* the
> intermediate value.
That's what I suspected.
Being forced to name the values leads to very bad code.
>> Second question: I see that futur-wait blocks the whole Emacs due to
>> the while loop. How can one use futur without blocking Emacs?
>
> Don't use `futur-wait` and instead use `futur-let*`.
> IOW: instead of waiting, return immediately a future.
Understand, thanks for clarification.
>> Last question: How would similar functionality be implemented
>> using futur?
>
> Good question.
> To a large extent I guess it could be implemented in basically the same
> way: you'd use futures only for the timer part of the code, and leave
> the process's output to fill the buffer just like you do.
>
> I think the difference would be very small and cosmetic like replacing
>
> (defun stream-pull-in-background (stream &optional secs repeat)
>   (let (timer)
>     (setq timer (run-with-timer
>                  (or secs 1)
>                  (or repeat 1)
>                  (lambda ()
>                    ;;(message "@@@ polling!")
>                    (unless (funcall stream)
>                      (cancel-timer timer)))))))
>
> with something like:
>
> (defun stream-pull-in-background (stream &optional secs repeat)
>   (futur-run-with-timer
>    (or secs 1)
>    (lambda ()
>      ;;(message "@@@ polling!")
>      (when (and (funcall stream) repeat)
>        (stream-pull-in-background stream secs repeat)))))
>
> The only benefit I could see is that it returns a future, i.e. a kind of
> standardized representation of that async computation so the caller can
> use things like `futur-wait` or `futur-let*` without having to care
> about whether the function is using timers or something else.
> And, there's also the benefit of standardized error-signaling.
Given that there is no working example and state management is not
really thought through, it is hard to imagine what you mean exactly.
I do not want to block Emacs.
futur-wait blocks Emacs.
I also do not understand why you would use a timer and poll.
The functionality is an edge-triggered push model where this does not
make sense.
Here is the edge-triggered push-model example, written in the inverse
style of the level-triggered pull-model example using streams I sent
earlier:
(defun message-writer ()
  (lambda (string)
    (when string
      (insert (format "%d %s\n" (length string) string)))))

(defun proc-writer (buffer writer)
  (lambda (string)
    (when string
      (with-current-buffer buffer
        (let ((proc (get-buffer-process buffer)))
          (if proc
              (let* ((mark (process-mark proc))
                     (moving (= (point) mark)))
                (save-excursion
                  (goto-char mark)
                  (funcall writer string)
                  (set-marker mark (point)))
                (when moving
                  (goto-char mark)))
            (save-excursion
              (goto-char (point-max))
              (funcall writer string))))))))

(defun line-writer (writer)
  (let (line)
    (lambda (string)
      (if string
          (let ((x (split-string (concat (or line "") string) "\n")))
            (while (cdr x)
              (funcall writer (pop x)))
            (setq line (car x)))
        (when (and line (not (equal "" line)))
          (funcall writer line)
          (setq line nil))))))

(defun writer-filter (writer)
  (lambda (_proc string)
    (funcall writer string)))

(defun writer-sentinel (writer)
  (lambda (proc _event)
    (unless (process-live-p proc)
      (funcall writer nil))))

(defun writer-process (buffer-name command writer)
  (let* ((b (test-buffer buffer-name))
         (w (line-writer (proc-writer b writer))))
    (make-process :name buffer-name
                  :command command
                  :buffer b
                  :sentinel (writer-sentinel w)
                  :filter (writer-filter w))))

(defun test4 (buffer-name command)
  (writer-process buffer-name command (message-writer)))
;;(test4 "test4" '("cat" "/tmp/a.el"))
;;(test4 "test4" '("find" "/home/tomas/mr/" "-type" "f" "-name" "*.el"))
In the edge-triggered push style, it is important not to miss any
events, which is what writer-filter and writer-sentinel do (and one can
also see that in your comment "FIXME: If the process's sentinel signals
an error, it won't run us"; note that in the pull model, this issue does
not exist and the code is much simpler). line-writer splits input into
lines. proc-writer manages the external state explicitly, and
message-writer is the actual functionality I want to achieve in the
example (output lines and their length). writer-process is a generic
driver to run a command in the background and do something for each
line of output, push style.
The advantage of this push model is that it does not require an
"infinite" buffer, and the computation happens as soon as possible
without polling. The disadvantages are numerous. For example, the pace
is dictated by the outside process, which can overwhelm Emacs and make
it unusable. Another serious problem is that doing too much in the
filter function can make Emacs unusable (iirc that was one of the
issues the original poster in this thread complained about). An even
more serious problem is that C-g in a filter function does not work and
leads to an abort (maybe that is the reason C-g is not very reliable
and I need to use more than one Emacs process).
In futur.el, you do not use a filter function, but it seems that
futur.el combines the worst features of both pull and push models, e.g.
event detection, an infinite buffer, polling and even blocking Emacs.
Why do you recommend to poll with futur.el?
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-11 12:53 continuation passing in Emacs vs. JUST-THIS-ONE Thomas Koch
` (2 preceding siblings ...)
2023-03-16 21:35 ` miha
@ 2023-03-25 21:05 ` Tomas Hlavaty
2023-03-26 23:50 ` Tomas Hlavaty
4 siblings, 0 replies; 53+ messages in thread
From: Tomas Hlavaty @ 2023-03-25 21:05 UTC (permalink / raw)
To: Thomas Koch, emacs-devel@gnu.org
On Sat 11 Mar 2023 at 14:53, Thomas Koch <thomas@koch.ro> wrote:
> TL;DR: (Why) is there no standard way for continuation passing
> style[1] ("event driven") programming in Emacs?
Asynchronous processes take callbacks as arguments.
> [1] https://debbugs.gnu.org/61350
> [2] https://debbugs.gnu.org/12145
The problem seems to be that an event was missed and Emacs gets stuck
waiting in a loop. It is essential in an event-driven, edge-triggered
push model not to miss events.
Moreover, it is a bad idea to loop or wait in a filter callback.
> Because there is no such library in Emacs, people either write their
> own code for continuations (eglot?) or do too much work in a process
> filter function (speechd-el in 2004 which led to JUST-THIS-ONE).
Yeah, it looks like people tend to do too much stuff in filter
callbacks. It is better to keep those simple and do any complex work
outside the callback. The code could be written not to do too much work
in a process filter function even without such a library.
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-25 18:42 ` Tomas Hlavaty
@ 2023-03-26 19:35 ` Tomas Hlavaty
2023-03-28 7:23 ` Tomas Hlavaty
2023-03-29 19:00 ` Stefan Monnier
2023-03-29 18:47 ` Stefan Monnier
1 sibling, 2 replies; 53+ messages in thread
From: Tomas Hlavaty @ 2023-03-26 19:35 UTC (permalink / raw)
To: Stefan Monnier
Cc: Jim Porter, Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
On Sat 25 Mar 2023 at 19:42, Tomas Hlavaty <tom@logand.com> wrote:
>> I don't think so, no. But you would need fancy rewriting if you wanted
>> to allow
>>
>> (concat foo (futur-let* (...) ...))
or one could do it explicitly:
(concat foo (future-wait (futur-let* (...) ...)))
> Why do you recommend to poll with futur.el?
I see now that it is future-wait which requires it.
I think I managed to derive nicer async and await than futur.el:
Here a future is just a thunk which returns the resolved value, returns
EAGAIN if unresolved, or signals an error.
(defun await (future)
  (let (z)
    (while (eq 'EAGAIN (setq z (funcall future)))
      ;; TODO poke sit-for/io on yield? how?
      (sit-for 0.2))
    z))

(defmacro async (&rest body)
  (declare (indent 0))
  (let ((z (gensym))
        (e (gensym)))
    `(let (,e (,z 'EAGAIN))
       (cl-flet ((yield (x) (setq ,z x))
                 ;; TODO catch and re-throw instead of fail? how?
                 (fail (string &rest args) (setq ,e (cons string args))))
         ;; TODO add abort? how? is it a good idea?
         ,@body)
       (lambda () (if ,e (apply #'error ,e) ,z)))))
(await (async (yield (+ 1 41))))
(await (async (fail "hi %d" 42)))
That's it.
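For reference, evaluating the two forms above with these definitions: the first returns 42 immediately (yield resolves the future before await ever polls), and the second signals the error "hi 42" when awaited.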
Now it would be good to run something in the background. I got the
following examples to work.
Assuming an alet (async let) macro, which binds VAR to the value of the
async process's output when the process finishes:
(alet p1 '("which" "emacs")
  (when p1
    (alet p2 `("readlink" "-f" ,p1)
      (when p2
        (message "@@@ %s" p2)))))
I can await async process:
(await
 (async
   (alet p1 '("which" "emacs")
     (when p1
       (alet p2 `("readlink" "-f" ,p1)
         (when p2
           (yield p2)))))))
or even await async process inside async process:
(await
 (async
   (alet p `("readlink" "-f" ,(await
                               (async
                                 (alet p '("which" "emacs")
                                   (when p
                                     (yield p))))))
     (when p
       (yield p)))))
This shows off async & await working with an async process. await is
annoying and not needed in this example, as shown above, but in some
cases it is necessary.
What does alet look like?
In the previous examples I processed the output of an async process per
line, but the futur.el example takes the whole output. The only thing I
need to change is the output chunking, from line-writer to
buffer-writer, plus a few convenience functions and macros:
(defmacro consume (var val &rest body)
  ;; set up async process and return immediately
  ;; body called repeatedly in background per process output event
  (declare (indent 2))
  `(funcall ,val (lambda (,var) ,@body)))

;; wrap in nicer syntax, async let, in background
(defmacro alet (var command &rest body)
  (declare (indent 2))
  (let ((cmd (gensym)))
    `(let ((,cmd ,command))
       (consume ,var (let ((b (test-buffer (format "*alet%s" ,cmd))))
                       (writer-process6
                        b
                        ,cmd
                        (lambda (writer) (buffer-writer b writer))))
         ,@body))))

;; customizeable chunking
(defun writer-process6 (buffer command chunk)
  (lambda (writer)
    (let ((w (funcall chunk writer)))
      (make-process :name (buffer-name buffer)
                    :command command
                    :buffer buffer
                    :sentinel (writer-sentinel w)
                    :filter (writer-filter w)))))

;; taken from (info "Process Filter Functions")
;; quite useful, why is this not part of emacs code?
(defun ordinary-insertion-filter (proc string)
  (when (buffer-live-p (process-buffer proc))
    (with-current-buffer (process-buffer proc)
      (let ((moving (= (point) (process-mark proc))))
        (save-excursion
          ;; Insert the text, advancing the process marker.
          (goto-char (process-mark proc))
          (insert string)
          (set-marker (process-mark proc) (point)))
        (if moving (goto-char (process-mark proc)))))))

;; like line-writer but output the whole thing
(defun buffer-writer (buffer writer)
  (let (done)
    (lambda (string)
      (unless done
        (if string
            (ordinary-insertion-filter (get-buffer-process buffer) string)
          (let ((z (with-current-buffer buffer (buffer-string))))
            (kill-buffer buffer)
            (funcall writer z))
          (funcall writer nil)
          (setq done t))))))
I think this provides nicer interface for async code than futur.el and
even comes with a working example.
Is there anything else async & await should handle but does not?
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-11 12:53 continuation passing in Emacs vs. JUST-THIS-ONE Thomas Koch
` (3 preceding siblings ...)
2023-03-25 21:05 ` Tomas Hlavaty
@ 2023-03-26 23:50 ` Tomas Hlavaty
4 siblings, 0 replies; 53+ messages in thread
From: Tomas Hlavaty @ 2023-03-26 23:50 UTC (permalink / raw)
To: Thomas Koch, emacs-devel@gnu.org
On Sat 11 Mar 2023 at 14:53, Thomas Koch <thomas@koch.ro> wrote:
> While I don't know elisp, I unfortunately had to do JavaScript. Like
> Emacs, JS is single-threaded. While I share the sentiment about JS,
> there are still things to learn from it, e.g. event driven
> programming.
I guess you mean async & await as opposed to callback hell.
I think that the essence of async & await is to teleport a value from
one place to another. It has nothing to do with asynchronicity. It
just happens that this is useful with asynchronous code where it is
convenient to teleport a value from under one stack (or thread) of
execution to under the current stack (or thread) of execution.
(await                <- 2) to the current place
 (async
   ...
   (yield 42)))       <- 1) teleport 42
=> 42
async is just syntactic sugar to lexically provide the necessary
facilities to make this teleportation work. Thanks to lexical binding
and Lisp macros, this is easy work for the Lisp compiler:
(defun await (future)
  (let (z)
    (while (eq 'EAGAIN (setq z (funcall future)))
      (sit-for 0.2))
    z))

(defmacro async (&rest body)
  (declare (indent 0))
  (let ((z (gensym))
        (e (gensym)))
    `(let (,e (,z 'EAGAIN))
       (cl-flet ((yield (x) (setq ,z x))
                 (fail (string &rest args) (setq ,e (cons string args))))
         ,@body)
       (lambda () (if ,e (apply #'error ,e) ,z)))))
Doing it this way brings great flexibility in what the three dots in the
sketch above can be: synchronous code in the current thread of
execution, an asynchronous process with its filter or sentinel callback,
another thread, or maybe a timer loop like in JavaScript.
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-26 19:35 ` Tomas Hlavaty
@ 2023-03-28 7:23 ` Tomas Hlavaty
2023-03-29 19:00 ` Stefan Monnier
1 sibling, 0 replies; 53+ messages in thread
From: Tomas Hlavaty @ 2023-03-28 7:23 UTC (permalink / raw)
To: Stefan Monnier
Cc: Jim Porter, Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
On Sun 26 Mar 2023 at 21:35, Tomas Hlavaty <tom@logand.com> wrote:
> On Sat 25 Mar 2023 at 19:42, Tomas Hlavaty <tom@logand.com> wrote:
>>> I don't think so, no. But you would need fancy rewriting if you wanted
>>> to allow
>>>
>>> (concat foo (futur-let* (...) ...))
>
> or one could do it explicitly:
>
> (concat foo (future-wait (futur-let* (...) ...)))
Looking at other languages, they do it explicitly. The reason is, that
one might want to save the future, do something else and await the
future later at some point. Not await it immediately:
(let ((future (futur-let* (...) ...)))
  ...
  (concat foo (future-wait future)))
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-25 18:42 ` Tomas Hlavaty
2023-03-26 19:35 ` Tomas Hlavaty
@ 2023-03-29 18:47 ` Stefan Monnier
2023-04-17 3:46 ` Lynn Winebarger
1 sibling, 1 reply; 53+ messages in thread
From: Stefan Monnier @ 2023-03-29 18:47 UTC (permalink / raw)
To: Tomas Hlavaty
Cc: Jim Porter, Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
>> Part of the issue is the management of `current-buffer`: should the
>> composition of futures with `futur-let*` save&restore
>> `current-buffer` to mimic more closely the behavior one would get
>> with plain old sequential execution? If so, should we do the same
>> with `point`? What about other such state?
>
> I do not think there is a good implicit solution.
> Either it would save too much state or too little,
> or save it the wrong way.
Currently it doesn't save anything, which is "ideal" in terms of
efficiency, but sometimes leads to code that's more verbose than
I'd like.
Someone suggested to save as much as threads do, but that's not
practical.
>> But as you point out at the beginning, as a general rule, if you want to
>> avoid rewritings in the style of `generator.el`, then the code will tend
>> to feel less like a tree and more "linear/imperative/sequential",
>> because you fundamentally have to compose your operations "manually"
>> with a monadic "bind" operation that forces you to *name* the
>> intermediate value.
>
> That's what I suspected.
> Being forced to name the values leads to very bad code.
That's not my experience. It's sometimes a bit more verbose than
strictly necessary, but it's quite rare for it to make the code
less readable.
> I do not want to block Emacs.
> futur-wait blocks Emacs.
`futur-wait` should be avoided as much as possible. But occasionally
the context (i.e. the caller) wants an actual answer so you don't get
to choose. E.g. when implementing `url-retrieve` you have to wait, by
definition of what `url-retrieve` does.
`futur-wait` is provided for those use-cases. Most such uses reflect
a problem/limitation elsewhere.
AFAIK `futur-let*` corresponds more or less to Javascript's `await`, but
I don't think Javascript provides an equivalent to `futur-wait`.
Maybe I should use another name than `futur-wait`, like
`futur-block-everything-annoyingly-until-we-get-the-result` to avoid
the confusion?
> I also do not understand why you would use a timer and poll.
> The functionality is an edge-triggered push model where this does not
> make sense.
I just showed how to "translate" your code into one that uses
`futur.el`. `futur.el` doesn't magically change the algorithm.
> (defun writer-process (buffer-name command writer)
>   (let* ((b (test-buffer buffer-name))
>          (w (line-writer (proc-writer b writer))))
>     (make-process :name buffer-name
>                   :command command
>                   :buffer b
>                   :sentinel (writer-sentinel w)
>                   :filter (writer-filter w))))
I haven't yet thought about how we could/should make `futur.el` useful
for process filters (contrary to the use of process sentinels where the
integration is more natural).
> Why do you recommend to poll with futur.el?
I don't.
Stefan
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-26 19:35 ` Tomas Hlavaty
2023-03-28 7:23 ` Tomas Hlavaty
@ 2023-03-29 19:00 ` Stefan Monnier
2023-04-03 0:39 ` Tomas Hlavaty
1 sibling, 1 reply; 53+ messages in thread
From: Stefan Monnier @ 2023-03-29 19:00 UTC (permalink / raw)
To: Tomas Hlavaty
Cc: Jim Porter, Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
> (defun await (future)
>   (let (z)
>     (while (eq 'EAGAIN (setq z (funcall future)))
>       ;; TODO poke sit-for/io on yield? how?
>       (sit-for 0.2))
>     z))
This blocks, so it's the equivalent of `futur-wait`.
I.e. it's the thing we'd ideally never use.
> Assuming alet (async let) macro, which binds var when async process
> finishes with the value of its output:
I.e. what I called `future-let*`.
> (alet p1 '("which" "emacs")
>   (when p1
>     (alet p2 `("readlink" "-f" ,p1)
>       (when p2
>         (message "@@@ %s" p2)))))
Your syntax is more concise because it presumes all your async objects
run commands via `make-process`, but other than that it seems to be
doing basically the same as my code, yes.
> or even await async process inside async process:
>
> (await
>  (async
>    (alet p `("readlink" "-f" ,(await
>                                (async
>                                  (alet p '("which" "emacs")
>                                    (when p
>                                      (yield p))))))
>      (when p
>        (yield p)))))
You use `await` which will block Emacs :-(
> I think this provides nicer interface for async code than futur.el and
> even comes with a working example.
I think you just reinvented the same thing, yes :-)
>> (concat foo (future-wait (futur-let* (...) ...)))
>
> Looking at other languages, they do it explicitly. The reason is, that
> one might want to save the future, do something else and await the
> future later at some point. Not await it immediately:
>
> (let ((future (futur-let* (...) ...)))
>   ...
>   (concat foo (future-wait future)))
I suspect that a better option would be instead of:
(let ((future (futur-let* (BINDS...) BODY)))
  ...
  (concat foo (future-wait future)))
to use
(futur-let* (BINDS...
             (s BODY))
  (concat foo s))
The difference is that it doesn't return a string but a `futur`, so if
you want the string you need to use `future-let*` or `futur-wait`.
The advantage is that you still have the choice to use `future-let*`
rather than `futur-wait`.
Stefan
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-29 19:00 ` Stefan Monnier
@ 2023-04-03 0:39 ` Tomas Hlavaty
2023-04-03 1:44 ` Emanuel Berg
2023-04-03 2:09 ` Stefan Monnier
0 siblings, 2 replies; 53+ messages in thread
From: Tomas Hlavaty @ 2023-04-03 0:39 UTC (permalink / raw)
To: Stefan Monnier
Cc: Jim Porter, Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
Hi Stefan,
thank you for your time, your discussion helps me to clear my thinking.
This works with asynchronous processes, threads and iter (cps
rewriting):
(defun await (future)
  (let (z)
    (while (eq 'EAGAIN (setq z (funcall future)))
      (accept-process-output)
      (sit-for 0.2))
    z))

(defun await-in-background (future &optional callback secs repeat)
  (let ((z (funcall future)))
    (if (eq 'EAGAIN z)
        (let (timer)
          (setq timer (run-with-timer
                       (or secs 0.2)
                       (or repeat 0.2)
                       (lambda ()
                         (let ((z (funcall future)))
                           (unless (eq 'EAGAIN z)
                             (cancel-timer timer)
                             (when callback
                               (funcall callback z))))))))
      (when callback
        (funcall callback z)))
    z))

(defmacro await-iter (future)
  (let ((f (gensym)))
    `(let ((,f ,future))
       (let (z)
         (while (eq 'EAGAIN (setq z (funcall ,f)))
           (iter-yield 'EAGAIN))
         z))))

(defmacro async (&rest body)
  (declare (indent 0))
  (let ((z (gensym))
        (e (gensym)))
    `(let (,e (,z 'EAGAIN))
       (cl-flet ((yield (x) (setq ,z x))
                 (fail (string &rest args) (setq ,e (cons string args))))
         ,@body)
       (lambda () (if ,e (signal 'error ,e) ,z)))))

(defmacro async-thread (&rest body)
  (declare (indent 0))
  (let ((z (gensym))
        (e (gensym)))
    `(let (,e (,z 'EAGAIN))
       (cl-flet ((yield (x) (setq ,z x))
                 (fail (string &rest args) (setq ,e (cons string args))))
         (let ((thread (make-thread (lambda () (yield (progn ,@body))))))
           (lambda ()
             (thread-join thread)
             (if ,e (signal 'error ,e) ,z)))))))

(defun buffer-string2 (buffer)
  (with-current-buffer buffer
    (buffer-string)))

(defun async-process (command)
  (async
    (let* ((n (format "*%s" command))
           (b (generate-new-buffer n t))
           (e (generate-new-buffer (format "%s-stderr" n) t)))
      (condition-case c
          (make-process
           :name n
           :command command
           :buffer b
           :stderr e
           :sentinel (lambda (proc _event)
                       (unless (process-live-p proc)
                         (let ((x (process-exit-status proc)))
                           (if (and (eq 'exit (process-status proc))
                                    (zerop x))
                               (yield (buffer-string2 b))
                             (fail 'async-process
                                   :command command
                                   :code x
                                   :stderr (buffer-string2 e)
                                   :stdout (buffer-string2 b))))
                         (kill-buffer b)
                         (kill-buffer e))))
        (error
         (kill-buffer b)
         (kill-buffer e)
         (signal (car c) (cdr c)))))))

(defmacro async-iter (&rest body)
  (declare (indent 0))
  `(let ((i (iter-make (iter-yield (progn ,@body))))
         (z 'EAGAIN))
     (setq z (iter-next i))
     (lambda ()
       (when (eq 'EAGAIN z)
         (setq z (iter-next i)))
       (unless (eq 'EAGAIN z)
         (iter-close i))
       z)))
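For concreteness, a minimal usage sketch of the producers and consumers above (my own illustration, assuming an `echo' executable in PATH; the comments note the expected results):

;; Future computed in a background thread; `await' blocks the caller
;; (via `thread-join' inside the producer's thunk) until it is done.
(await (async-thread (sleep-for 1) 42))      ;=> 42 after about a second

;; Future backed by an asynchronous process.
(await (async-process '("echo" "hello")))    ;=> "hello\n"

;; Resolve a future without blocking, via a timer plus a callback.
(await-in-background (async-process '("echo" "hello"))
                     (lambda (out) (message "got %S" out)))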
There are two sides using the future: the producer and the consumer.
The producer runs asynchronously and is delimited by an async macro.
The consumer synchronously waits for the value of the future using an
await function or macro.
Consumer options:
- await is a general await function which polls for the future in the
foreground. It does not expect anything from the future and/or
producer.
- await-in-background is a background version of await; it probably
makes sense only as a top-level await together with a callback.
- await-iter is used inside async-iter producers. Plain await above is
needed at the top level.
Producer options:
- async is a general async macro which sets the value or error of the
future.
- async-thread runs the producer in a new thread and has a more
efficient await.
- async-process runs a command in an asynchronous process and sets the
future value to the output of the process if successful.
- async-iter CPS-rewrites the producer so that it can be executed
iteratively without the need for threads or processes.
It seems to me that the fundamental difference compared to futur.el is
that futur.el tries to manually wire up the links between producers and
consumers and wraps it all together in the complex macro futur-let*
together with future constructors. I leave this to the Lisp compiler
and only have to implement how consumers pull the data (await*) and how
producers are constructed and push the data (async*).
Here is a non-trivial promise pipelining example from capnproto:
The client wants to compute:
((5 * 2) + ((7 - 3) * 10)) / (6 - 4)
The client decomposes the problem into the API functions + - * /, names
the intermediate results and sends everything to the server in one request.
Here the client sends the following to the server:
A = (* 5 2)
B = (- 7 3)
C = (- 6 4)
D = (* B 10)
E = (+ A D)
F = (/ E C)
F = ?
In lisp:
((A * 5 2)
(B - 7 3)
(C - 6 4)
(D * B 10)
(E + A D)
(F / E C)
F)
The server runs the batched request: it substitutes the named
intermediate results with the computed values and sends the result to
the client.
Response:
(<value of F>)
In lisp:
(defun promise-pipelining-client (expr)
  (let (z zk)
    (cl-labels ((rec (x)
                  (if (atom x)
                      x
                    (cl-destructuring-bind (op l r) x
                      (cl-ecase op
                        ((+ - * /)
                         (let ((ll (rec l))
                               (rr (rec r))
                               (k (gensym)))
                           (setq zk k)
                           (push (list k op ll rr) z)
                           k)))))))
      (rec expr))
    (nreverse (cons zk z))))
(promise-pipelining-client '(/ (+ (* 5 2) (* (- 7 3) 10)) (- 6 4)))
=> ((g3488 * 5 2) (g3489 - 7 3) (g3490 * g3489 10) (g3491 + g3488 g3490)
(g3492 - 6 4) (g3493 / g3491 g3492) g3493)
(defun promise-pipelining-server0 (req)
  (funcall
   (byte-compile-sexp
    `(lambda ()
       (let* (,@(cl-loop
                 for x in req
                 unless (atom x)
                 collect (cl-destructuring-bind (k op l r) x
                           `(,k (,op ,l ,r)))))
         ,(car (last req)))))))
(promise-pipelining-server0
 (promise-pipelining-client '(/ (+ (* 5 2) (* (- 7 3) 10)) (- 6 4))))
=> 25
Let's say now that it takes 5 seconds for the server to compute the
number 5, etc. I can use async/await on the server to run the slow
computations in parallel.
Here using threads:
(defun slowly-thread (sec) ;; slowly computes sec in async thread
  (async-thread (or (sleep-for sec) sec)))

(defun promise-pipelining-server3 (req)
  (funcall
   (byte-compile-sexp
    (let (f v z)
      (dolist (x req)
        (if (atom x)
            (push x z)
          (cl-destructuring-bind (k op l r) x
            (let ((ll (if (symbolp l)
                          l
                        (let ((lk (gensym)))
                          (push `(,lk (slowly-thread ,l)) f)
                          `(await ,lk))))
                  (rr (if (symbolp r)
                          r
                        (let ((rk (gensym)))
                          (push `(,rk (slowly-thread ,r)) f)
                          `(await ,rk)))))
              (push `(,k (,op ,ll ,rr)) v)))))
      `(lambda ()
         (let ,(nreverse f)
           (let* ,(nreverse v)
             (list ,@(nreverse z)))))))))
(promise-pipelining-server3
 (promise-pipelining-client '(/ (+ (* 5 2) (* (- 7 3) 10)) (- 6 4))))
=> (25)
Here using processes:
(defun async-emacs (expr)
  (async-process
   `("emacs" "-Q" "--batch" "--eval" ,(cl-prin1-to-string `(print ,expr)))))

(defun await-emacs (future)
  (car (read-from-string (await future))))

(defun slowly-emacs (sec) ;; slowly computes sec in async sub-emacs
  (async-emacs `(or (sleep-for ,sec) ,sec)))

(defun promise-pipelining-server4 (req)
  (funcall
   (byte-compile-sexp
    (let (f v z)
      (dolist (x req)
        (if (atom x)
            (push x z)
          (cl-destructuring-bind (k op l r) x
            (let ((ll (if (symbolp l)
                          l
                        (let ((lk (gensym)))
                          (push `(,lk (slowly-emacs ,l)) f)
                          `(await-emacs ,lk))))
                  (rr (if (symbolp r)
                          r
                        (let ((rk (gensym)))
                          (push `(,rk (slowly-emacs ,r)) f)
                          `(await-emacs ,rk)))))
              (push `(,k (,op ,ll ,rr)) v)))))
      `(lambda ()
         (let ,(nreverse f)
           (let* ,(nreverse v)
             (list ,@(nreverse z)))))))))
(promise-pipelining-server4
 (promise-pipelining-client '(/ (+ (* 5 2) (* (- 7 3) 10)) (- 6 4))))
=> (25)
Expanded server code for that req looks something like this:
(let ((g710 (slowly-emacs 5))
      (g711 (slowly-emacs 2))
      (g712 (slowly-emacs 7))
      (g713 (slowly-emacs 3))
      (g714 (slowly-emacs 10))
      (g715 (slowly-emacs 6))
      (g716 (slowly-emacs 4)))
  (let* ((g704 (* (await-emacs g710) (await-emacs g711)))
         (g705 (- (await-emacs g712) (await-emacs g713)))
         (g706 (* g705 (await-emacs g714)))
         (g707 (+ g704 g706))
         (g708 (- (await-emacs g715) (await-emacs g716)))
         (g709 (/ g707 g708)))
    (list g709)))
First it starts all asynchronous processes, then awaits them as needed
and finally returns the result.
It uses (length '(5 2 7 3 10 6 4)) = 7 Emacs sub-processes, where
slowly-emacs sleeps in the background, so it takes
(max 5 2 7 3 10 6 4) = 10 seconds in parallel
instead of (+ 5 2 7 3 10 6 4) = 37 seconds sequentially.
On Wed 29 Mar 2023 at 15:00, Stefan Monnier <monnier@iro.umontreal.ca> wrote:
>> (defun await (future)
>> (let (z)
>> (while (eq 'EAGAIN (setq z (funcall future)))
>> ;; TODO poke sit-for/io on yield? how?
>> (sit-for 0.2))
>> z))
>
> This blocks, so it's the equivalent of `futur-wait`.
> I.e. it's the thing we'd ideally never use.
I think that futur-wait (or its wrapper future-get), a.k.a. await, is
essential, but what futur.el provides is not sufficient. There need to
be different kinds of await depending on the use-case (process, thread,
iter). await is necessary for waiting at top-level in any case. For
top-level waiting in the background, use await-in-background instead.
>> Assuming alet (async let) macro, which binds var when async process
>> finishes with the value of its output:
>
> I.e. what I called `future-let*`.
>
>> (alet p1 '("which" "emacs")
>> (when p1
>> (alet p2 `("readlink" "-f" ,p1)
>> (when p2
>> (message "@@@ %s" p2)))))
alet was just a macro to turn a body into a callback which is then
plugged into a process sentinel. It has nothing to do with futures,
async or await. This example also shows that futures are not necessary
in this case (the futur.el example) and actually make things more
complicated.
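For reference, a minimal sketch of such an alet (my reconstruction, not
the original macro): run COMMAND asynchronously and, from the process
sentinel, evaluate BODY with VAR bound to the trimmed process output.
The real alet presumably also binds VAR to nil when the command fails,
which this sketch does not bother with:

(require 'subr-x) ; for string-trim

(defmacro alet (var command &rest body)
  (declare (indent 2))
  (let ((buf (gensym "alet-buf")))
    `(let ((,buf (generate-new-buffer " *alet*")))
       (make-process
        :name "alet" :command ,command :buffer ,buf
        :sentinel (lambda (proc _event)
                    (unless (process-live-p proc)
                      (let ((,var (with-current-buffer ,buf
                                    (string-trim (buffer-string)))))
                        (kill-buffer ,buf)
                        ,@body)))))))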
> Your syntax is more concise because it presumes all your async objects
> run commands via `make-process`, but other than that it seems to be
> doing basically the same as my code, yes.
>
>> or even await async process inside async process:
>>
>> (await
>> (async
>> (alet p `("readlink" "-f" ,(await
>> (async
>> (alet p '("which" "emacs")
>> (when p
>> (yield p))))))
>> (when p
>> (yield p)))))
>
> You use `await` which will block Emacs :-(
I think that the promise pipelining example above shows better what
futures are about.
Calling await immediately after async is useless (simply use a blocking
call). The point of a future is to make the distance between those two
calls as big as possible, so that the sum of times in the sequential
case is replaced by the max of times in the parallel case.
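Concretely, with the async-emacs/await-emacs helpers from above (times
are approximate):

;; useless: await right after async degenerates into blocking calls,
;; roughly 5 + 2 = 7 seconds in total
(+ (await-emacs (async-emacs '(or (sleep-for 5) 5)))
   (await-emacs (async-emacs '(or (sleep-for 2) 2))))

;; useful: start both futures first, await as late as possible,
;; roughly (max 5 2) = 5 seconds in total
(let ((a (async-emacs '(or (sleep-for 5) 5)))
      (b (async-emacs '(or (sleep-for 2) 2))))
  (+ (await-emacs a) (await-emacs b)))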
>> I think this provides nicer interface for async code than futur.el and
>> even comes with a working example.
>
> I think you just reinvented the same thing, yes :-)
I think it is quite different. What is the point of futur-deliver,
futur-fail, futur-pure, futur--bind, futur--join, futur-let* and
futur-multi-bind when the lisp can figure those out automatically? Some
are cosmetic, but all the manual wiring is fundamentally unnecessary.
It seems like a lot of superficial code could easily be eliminated and
the result would be better, because the lisp primitives like let or
let* are already brilliant :-)
>>> (concat foo (future-wait (futur-let* (...) ...)))
>>
>> Looking at other languages, they do it explicitly. The reason is, that
>> one might want to save the future, do something else and await the
>> future later at some point. Not await it immediately:
>>
>> (let ((future (futur-let* (...) ...)))
>> ...
>> (concat foo (future-wait future)))
>
> I suspect that a better option would be instead of:
>
> (let ((future (futur-let* (BINDS...) BODY)))
> ...
> (concat foo (future-wait future)))
>
> to use
>
> (futur-let* (BINDS...
> (s BODY))
> (concat foo s))
>
> The difference is that it doesn't return a string but a `futur`, so if
> you want the string you need to use `future-let*` or `futur-wait`.
> The advantage is that you still have the choice to use `future-let*`
> rather than `futur-wait`.
Sorry, I should not have used the terminology from futur.el, which is
confusing.
I meant:
(let ((future (async BODY)))
  ...do as much as possible in parallel...
  (concat foo (await future)))
The point is that waiting for the future has to be explicit, otherwise
there is no way to distinguish between passing the future around and
waiting for the future.
I do not see where future-let* would do anything useful. A future is a
first-class value, after all, so I can pass it around as such, e.g. to
mapcar:
(defun acurl (url)
  (async-process `("curl" ,url)))
(seq-reduce ;; compute total length, parallel (faster)
 #'+
 (mapcar (lambda (x) (length (await x)))
         (mapcar 'acurl '("https://dipat.eu"
                          "https://logand.com"
                          "https://osmq.eu")))
 0)
(seq-reduce ;; compute total length, sequential (slower)
 #'+
 (mapcar (lambda (x) (length (await (acurl x))))
         '("https://dipat.eu"
           "https://logand.com"
           "https://osmq.eu"))
 0)
On Wed 29 Mar 2023 at 14:47, Stefan Monnier <monnier@iro.umontreal.ca> wrote:
>>> Part of the issue is the management of `current-buffer`: should the
>>> composition of futures with `futur-let*` save&restore
>>> `current-buffer` to mimic more closely the behavior one would get
>>> with plain old sequential execution? If so, should we do the same
>>> with `point`? What about other such state?
>>
>> I do not think there is a good implicit solution.
>> Either it would save too much state or too little,
>> or save it the wrong way.
>
> Currently it doesn't save anything, which is "ideal" in terms of
> efficiency, but sometimes leads to code that's more verbose than
> I'd like.
>
> Someone suggested to save as much as threads do, but that's not
> practical.
A future is a single value, not a stream of values.
I think that one needs to decide what the use-case actually is,
i.e. what the value of the future is supposed to be.
For example, if I am talking about a future for an asynchronous process
and I am interested in its output, then there is no state to worry
about, simply return the buffer-string of the process buffer when the
process finishes.
Other use-cases would do something different, but once the future is
computed, it does not change so there is no state to maintain between
changes of the future.
>>> But as you point out at the beginning, as a general rule, if you want to
>>> avoid rewritings in the style of `generator.el`, then the code will tend
>>> to feel less like a tree and more "linear/imperative/sequential",
>>> because you fundamentally have to compose your operations "manually"
>>> with a monadic "bind" operation that forces you to *name* the
>>> intermediate value.
>>
>> That's what I suspected.
>> Being forced to name the values leads to very bad code.
>
> That's not my experience. It's sometimes a bit more verbose than
> strictly necessary, but it's quite rare for it to make the code
> less readable.
I think there is no need for anything monadic in connection with
futures. Plain lisp is pretty good for composing operations.
Also, CPS rewriting has nothing to do with futures. For example, there
is no need for it with threads or asynchronous processes. CPS rewriting
is only needed if one wants to fake running something in parallel
without threads or asynchronous processes.
It is also a pretty bad way of doing it. Any function boundary stops
the rewriting process, so I cannot yield from things like defun,
cl-flet or cl-labels, and any useful factoring goes out of the window.
I cannot even unify my async/await interface to yield = iter-yield and
fail = error, so the iter case looks quite different from the thread or
process cases.
For example, how would one implement sleep-iter as a function?
It works as a macro:
(defmacro sleep-iter (sec)
  `(let ((x ,sec))
     (let ((end (+ (float-time (current-time)) x)))
       (while (< (float-time (current-time)) end)
         (iter-yield 'EAGAIN)))))
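To illustrate the point about function boundaries: a plain defun cannot
work here, because iter-yield only has meaning inside a body that
generator.el has CPS-rewritten (iter-lambda, iter-defun, iter-make);
called from an ordinary function it just signals an error at run time.
The name below is a hypothetical counter-example of mine:

(defun sleep-iter-broken (sec) ; does NOT work
  (let ((end (+ (float-time (current-time)) sec)))
    (while (< (float-time (current-time)) end)
      (iter-yield 'EAGAIN)))) ; error: iter-yield used outside a generator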
I found only this solution:
(defun sleep-iter3 (sec)
  (async-iter
    (let ((end (+ (float-time (current-time)) sec)))
      (while (< (float-time (current-time)) end)
        (iter-yield 'EAGAIN)))
    (iter-yield sec)))
and that has to be called using await-iter like this:
(await-iter (sleep-iter3 3))
where the extra await-iter is annoying.
It seems to me that my async-iter and await-iter are needed to make it
possible to factor nontrivial code which wants to iter-yield.
One more thing: Is futur-abort a good idea?
Cheers
Tomas
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-03 0:39 ` Tomas Hlavaty
@ 2023-04-03 1:44 ` Emanuel Berg
2023-04-03 2:09 ` Stefan Monnier
1 sibling, 0 replies; 53+ messages in thread
From: Emanuel Berg @ 2023-04-03 1:44 UTC (permalink / raw)
To: emacs-devel
Tomas Hlavaty wrote:
> One more thing [...]
This is amazing! :O
--
underground experts united
https://dataswamp.org/~incal
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-03 0:39 ` Tomas Hlavaty
2023-04-03 1:44 ` Emanuel Berg
@ 2023-04-03 2:09 ` Stefan Monnier
2023-04-03 4:03 ` Po Lu
2023-04-10 21:47 ` Tomas Hlavaty
1 sibling, 2 replies; 53+ messages in thread
From: Stefan Monnier @ 2023-04-03 2:09 UTC (permalink / raw)
To: Tomas Hlavaty
Cc: Jim Porter, Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
> (defun await (future)
> (let (z)
> (while (eq 'EAGAIN (setq z (funcall future)))
> (accept-process-output)
> (sit-for 0.2))
> z))
So `await` blocks Emacs.
IOW, your `await` is completely different from Javascript's `await`.
> (defun promise-pipelining-server3 (req)
> (funcall
> (byte-compile-sexp
> (let (f v z)
> (dolist (x req)
> (if (atom x)
> (push x z)
> (cl-destructuring-bind (k op l r) x
> (let ((ll (if (symbolp l)
> l
> (let ((lk (gensym)))
> (push `(,lk (slowly-thread ,l)) f)
> `(await ,lk))))
> (rr (if (symbolp r)
> r
> (let ((rk (gensym)))
> (push `(,rk (slowly-thread ,r)) f)
> `(await ,rk)))))
> (push `(,k (,op ,ll ,rr)) v)))))
> `(lambda ()
> (let ,(nreverse f)
> (let* ,(nreverse v)
> (list ,@(nreverse z)))))))))
And the use `await` above means that your Emacs will block while waiting
for one result. `futur-let*` instead lets you compose async operations
without blocking Emacs, and thus works more like Javascript's `await`.
>> This blocks, so it's the equivalent of `futur-wait`.
>> I.e. it's the thing we'd ideally never use.
> I think that futur-wait (or wrapper future-get) aka await is essential
> but what futur.el provides is not sufficient. There need to be
> different await ways depending on use-case (process, thread, iter).
Not sure what you mean by that. `futur-wait` does work in different ways
depending on whether it's waiting for a process, a thread, etc: it's
a generic function.
The `iter` case (same for streams) is similar to process filters in that
it doesn't map directly to "futures". So we'll probably want to
supplement futures with "streams of futures" or something like that to
try and provide a convenient interface for generators, streams, process
filters and such.
> await is necessary for waiting at top-level in any case.
That's what `futur-wait` is for, indeed.
> For top-level waiting in background, use await-in-background instead.
`future-let*` seems to provide a better alternative that doesn't need to
use a busy-loop polling from a timer.
> Calling await immediately after async is useless (simply use blocking
> call). The point of future is to make the distance between those calls
> as big as possible so that the sum of times in the sequential case is
> replaced with max of times in the parallel case.
You're looking for parallelism. I'm not.
I'm trying to provide a more convenient interface for async programming,
e.g. when you need to consult a separate executable/server from within
`jit-lock`, so you need to immediately reply to `jit-lock` saying
"pretend it's already highlighted" spawn some async operation to query
the external tool for info, and run some ELisp when the info comes back
(which may require running some other external tool, after which you
need to run some more ELisp, ...).
> I think it is quite different. What is the point of futur-deliver,
> futur-fail, futur-pure, futur--bind, futur--join, futur-let*,
> futur-multi-bind when the lisp can figure those automatically?
When writing the code by hand, for the cases targeted by my library, you
*have* to use process sentinels. `futur.el` just provides a fairly thin
layer on top. Lisp can't just "figure those out" for you.
> Other use-cases would do something different, but once the future is
> computed, it does not change so there is no state to maintain between
> changes of the future.
I'm not talking about saving some state in the future's value.
I'm talking about saving some state in the "continuations/callbacks"
created by `futur-let*` so that when you're called back you don't need
to first manually re-establish your context.
> One more thing: Is futur-abort a good idea?
I don't know. I can see places where it might make sense to use it, but
I don't know yet whether those places will actually be able to use it,
whether it will work well, ...
Stefan
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-03 2:09 ` Stefan Monnier
@ 2023-04-03 4:03 ` Po Lu
2023-04-03 4:51 ` Jim Porter
2023-04-10 21:47 ` Tomas Hlavaty
1 sibling, 1 reply; 53+ messages in thread
From: Po Lu @ 2023-04-03 4:03 UTC (permalink / raw)
To: Stefan Monnier
Cc: Tomas Hlavaty, Jim Porter, Karthik Chikmagalur, Thomas Koch,
emacs-devel@gnu.org
I have not looked carefully at this thread, but I would hope that if
people are discussing a way to add multiprocessing to Emacs, we settle
on separate threads of execution executing in parallel, with all the
interlocking necessary to make that happen, like in most Unix thread
implementations.
Instead of adding exotic `async' functions which mess around with the
call stack of the type found in C#.
Thanks.
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-03 4:03 ` Po Lu
@ 2023-04-03 4:51 ` Jim Porter
0 siblings, 0 replies; 53+ messages in thread
From: Jim Porter @ 2023-04-03 4:51 UTC (permalink / raw)
To: Po Lu, Stefan Monnier
Cc: Tomas Hlavaty, Karthik Chikmagalur, Thomas Koch,
emacs-devel@gnu.org
On 4/2/2023 9:03 PM, Po Lu wrote:
> I have not looked carefully at this thread, but I would hope that if
> people are discussing a way to add multiprocessing to Emacs, we settle
> on separate threads of execution executing in parallel, with all the
> interlocking necessary to make that happen, like in most Unix thread
> implementations.
Indeed, one of my hopes is to have a way to package some task so that
(eventually) it can be run on another thread, and everything Just Works.
From one of my previous messages on the subject[1]:
On 3/11/2023 5:45 PM, Jim Porter wrote:
> If it were easier to divide up long-running tasks into small chunks,
> that would go a long way towards solving these sorts of issues.
>
> (In theory, you could even get real multithreading this way, if you
> could divide up your task in a way that Emacs could be sure some chunk
> can be offloaded onto another thread.)
[1] https://lists.gnu.org/archive/html/emacs-devel/2023-03/msg00472.html
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-03 2:09 ` Stefan Monnier
2023-04-03 4:03 ` Po Lu
@ 2023-04-10 21:47 ` Tomas Hlavaty
2023-04-11 2:53 ` Stefan Monnier
1 sibling, 1 reply; 53+ messages in thread
From: Tomas Hlavaty @ 2023-04-10 21:47 UTC (permalink / raw)
To: Stefan Monnier
Cc: Jim Porter, Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
On Sun 02 Apr 2023 at 22:09, Stefan Monnier <monnier@iro.umontreal.ca> wrote:
>> (defun await (future)
>> (let (z)
>> (while (eq 'EAGAIN (setq z (funcall future)))
>> (accept-process-output)
>> (sit-for 0.2))
>> z))
>
> So `await` blocks Emacs.
Not if I run the top-level await in a separate thread, e.g.
(make-thread (lambda () (await ...)))
Or, in the case of iter, await-in-background should work without any
thread.
It blocks Emacs because I want it to block Emacs in these examples: I
want to press M-e (bound to ee-eval-sexp-eol for me) and observe what
it does, i.e. I want to see it working even though my Emacs UI seems
blocked. That is useful for testing purposes.
> IOW, your `await` is completely different from Javascript's `await`.
It depends on what you mean exactly and why you consider JavaScript
relevant here.
It is different only as an implementation detail.
The user interface is the same, and better than futur.el's, I think.
Of course, it is implemented in Emacs Lisp, so it will not be
identical: JavaScript does not have asynchronous processes or threads.
You probably want await-iter and async-iter, which use CPS like other
languages. Also, Emacs does not have as sophisticated an event loop as
JavaScript. But from the user's point of view it does the same thing.
I would say that futur.el is nothing like what one sees in JavaScript
or other languages. Even the user interface is completely different.
>> (defun promise-pipelining-server3 (req)
>> (funcall
>> (byte-compile-sexp
>> (let (f v z)
>> (dolist (x req)
>> (if (atom x)
>> (push x z)
>> (cl-destructuring-bind (k op l r) x
>> (let ((ll (if (symbolp l)
>> l
>> (let ((lk (gensym)))
>> (push `(,lk (slowly-thread ,l)) f)
>> `(await ,lk))))
>> (rr (if (symbolp r)
>> r
>> (let ((rk (gensym)))
>> (push `(,rk (slowly-thread ,r)) f)
>> `(await ,rk)))))
>> (push `(,k (,op ,ll ,rr)) v)))))
>> `(lambda ()
>> (let ,(nreverse f)
>> (let* ,(nreverse v)
>> (list ,@(nreverse z)))))))))
>
> And the use `await` above means that your Emacs will block while waiting
> for one result. `futur-let*` instead lets you compose async operations
> without blocking Emacs, and thus works more like Javascript's `await`.
Blocking the current thread for one result is fine, because all the
futures already run in other threads in the "background", so there is
nothing else to do. This is like thread-join, which futur.el also uses.
If you mean that you want to use the editor at the same time, just run
the example in another thread. But then you have to look for the result
in the *Messages* buffer. If I actually want to get the same behaviour
as C-x C-e (eval-last-sexp), then I want await to block Emacs; and this
is what await at top-level does.
Using Emacs subprocesses instead of threads in
promise-pipelining-server4 also shows nicely that the example spawns 7
Emacs sub-processes that compute something in the background, and await
then collects the results at the right time, as needed.
>>> This blocks, so it's the equivalent of `futur-wait`.
>>> I.e. it's the thing we'd ideally never use.
>> I think that futur-wait (or wrapper future-get) aka await is essential
>> but what futur.el provides is not sufficient. There need to be
>> different await ways depending on use-case (process, thread, iter).
>
> Not sure what you mean by that. `futur-wait` does work in different ways
> depending on whether it's waiting for a process, a thread, etc: it's
> a generic function.
Sure. But there are 3 cases and the 2 cases in futur.el do not work
with iter (i.e. without asynchronous processes or threads).
> The `iter` case (same for streams) is similar to process filters in that
> it doesn't map directly to "futures". So we'll probably want to
> supplement futures with "streams of futures" or something like that to
> try and provide a convenient interface for generators, streams, process
> filters and such.
No, the iter case does map directly to futures:
(await
 (async-iter
   (let ((a (async-iter
              (message "a1")
              (await-iter (sleep-iter3 3))
              (message "a2")
              1))
         (b (async-iter
              (message "b1")
              (let ((c (async-iter
                         (message "c1")
                         (await-iter (sleep-iter3 3))
                         (message "c2")
                         2)))
                (message "b2")
                (+ 3 (await-iter c))))))
     (+ (await-iter a) (await-iter b)))))
;; a1
;; b1
;; c1 <- a, b, c started in background
;; b2
;; @@@ await: EAGAIN [15 times] <- 15 * 0.2sec = 3sec
;; a2 <- had to wait for the first sleep
;; c2 <- second sleep already computed in bg
;; @@@ await: 6
;; 6 (#o6, #x6, ?\C-f)
The difference with for example javascript is that I drive the polling
loop explicitly here, while javascript queues the continuations in the
event loop implicitly.
>> await is necessary for waiting at top-level in any case.
>
> That's what `futur-wait` is for, indeed.
>
>> For top-level waiting in background, use await-in-background instead.
>
> `future-let*` seems to provide a better alternative that doesn't need to
> use a busy-loop polling from a timer.
The polling loop is needed for some use-cases (asynchronous processes
and iter). Not for threads.
In the case of async-thread, await collapses into thread-join and the
polling loop "disappears" because async-thread future never returns
EAGAIN.
So I do not need an extra implementation for threads, because the
existing case for asynchronous processes works without any change also
with threads, just even more efficiently.
>> Calling await immediately after async is useless (simply use blocking
>> call). The point of future is to make the distance between those calls
>> as big as possible so that the sum of times in the sequential case is
>> replaced with max of times in the parallel case.
>
> You're looking for parallelism. I'm not.
What do you mean exactly?
I am asking because:
https://wiki.haskell.org/Parallelism_vs._Concurrency
Warning: Not all programmers agree on the meaning of the terms
'parallelism' and 'concurrency'. They may define them in different
ways or do not distinguish them at all.
:-)
But it seems that you insist on composing promises sequentially:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_promises
However, before you compose promises sequentially, consider if it's
really necessary — it's always better to run promises concurrently so
that they don't unnecessarily block each other unless one promise's
execution depends on another's result.
Also, futur.el does seem to run callbacks synchronously:
The above design is strongly discouraged because it leads to the
so-called "state of Zalgo". In the context of designing asynchronous
APIs, this means a callback is called synchronously in some cases but
asynchronously in other cases, creating ambiguity for the caller. For
further background, see the article Designing APIs for Asynchrony,
where the term was first formally presented. This API design makes
side effects hard to analyze:
If you look at the JavaScript event loop and how promises are
scheduled, it seems rather complicated.
If you use threads or asynchronous processes, there is no reason to
limit yourself so that they do not run in parallel, unless the
topological ordering of the computation says so.
If you use iter, then obviously it will not run in parallel. However,
it can be arranged so that it appears to. Like in JavaScript, for
example.
In this JavaScript example, a and b appear to run in parallel (shall I
say concurrently?):
function sleep(sec) {
  return new Promise(resolve => {
    setTimeout(() => { resolve(sec); }, sec * 1000);
  });
}
async function test() {
  const a = sleep(9);
  const b = sleep(8);
  const z = await a + await b;
  console.log(z);
}
test();
Here the console log will show 17 after 9sec.
It will not show 17 after 17sec.
Can futur.el do that?
Are you saying no?
> I'm trying to provide a more convenient interface for async
> programming,
So far I am not convinced that futur.el provides a good interface for
async programming. A flat, unstructured, assembler-like list of
instructions is not a good way to write code. async/await in other
languages shows a nicer, structured way of doing it.
I thought the async-emacs example was pretty cool:
(let ((a (async-emacs '(or (sleep-for 5) 5)))
      (b (async-emacs '(or (sleep-for 2) 2))))
  (+ 1 (await-emacs a) (await-emacs b)))
and I can make it not block emacs easily like this:
(make-thread
 (lambda ()
   (print
    (let ((a (async-emacs '(or (sleep-for 5) 5)))
          (b (async-emacs '(or (sleep-for 2) 2))))
      (+ 1 (await-emacs a) (await-emacs b))))))
Maybe, if make-thread were not desirable, the result could be output
via an asynchronous cat process (something like the trick ielm uses),
but that seems an unnecessary complication.
> e.g. when you need to consult a separate executable/server from within
> `jit-lock`, so you need to immediately reply to `jit-lock` saying
> "pretend it's already highlighted" spawn some async operation to query
> the external tool for info, and run some ELisp when the info comes back
> (which may require running some other external tool, after which you
> need to run some more ELisp, ...).
Sure, if the consumer does not really need the value of the result of
the asynchronous computation, just plug in a callback that does
something later. In your example, you immediately return a lie and then
fix it later asynchronously from a callback.
But this breaks down when the consumer has already run away with the
lie and the callback has no way of fixing it. So this is not what
future, async and await are about. Those are about the consumer waiting
for the truth.
It is not about putting stuff to do into a callback but more about
taking a value out of a callback.
For putting stuff into a callback, the simple macros consume and alet
do that; it is trivial with macros.
Maybe it is confusing because you describe what the producer does, but
not what the consumer does. And in your example, it does not matter
what value the consumer receives because the callback will be able to
fix it later. In your example, there is no consumer that needs the
value of the future.
Like in your example, my async* functions return a value (a future)
immediately. But it is important that await itself will eventually
return the true value which the consumer will use for further
computation.
>> I think it is quite different. What is the point of futur-deliver,
>> futur-fail, futur-pure, futur--bind, futur--join, futur-let*,
>> futur-multi-bind when the lisp can figure those automatically?
>
> When writing the code by hand, for the cases targeted by my library, you
> *have* to use process sentinels. `futur.el` just provides a fairly thin
> layer on top. Lisp can't just "figure those out" for you.
async-process uses a process sentinel, but this is just an
implementation detail specific to asynchronous processes. It does not
have to leak out of the future/async/await "abstraction".
I am talking about code which takes several futures as input and
computes a result. There is no need for future-let* etc. because
everything "just works" with normal lisp code. Here is a working
example again:
(seq-reduce ;; compute total length, parallel (faster)
 #'+
 (mapcar (lambda (x) (length (await x)))
         (mapcar 'acurl '("https://dipat.eu"
                          "https://logand.com"
                          "https://osmq.eu")))
 0)
Can futur.el do that?
>> Other use-cases would do something different, but once the future is
>> computed, it does not change so there is no state to maintain between
>> changes of the future.
>
> I'm not talking about saving some state in the future's value.
> I'm talking about saving some state in the "continuations/callbacks"
> created by `futur-let*` so that when you're called back you don't need
> to first manually re-establish your context.
If your example in futur.el actually worked, it would be easier to see
what you mean.
How would the examples I provided look with futur.el?
I was not able to figure that out.
futur.el is completely broken, e.g. futur-new has let instead of let*
and throws an error.
A future represents a value, not a stream of values. For example,
async-process uses a callback, but it does not need to re-establish any
context because the single value the future resolves to "happens" only
once.
In the case of a stream of values, proc-writer is the thing that
"re-establishes" the context (because the producer and consumer do
different things to the shared buffer at the same time). But that is
not about async/await.
I think that your confusion is caused by the decision that
futur-process-make yields the exit code. That is wrong: the exit code
is logically not the resolved value (promise resolution), it indicates
failure (promise rejection). The exit code should just be part of an
error, when something goes wrong. Then your example would look like
this:
(futur-let*
    ((cmd (build-arg-list))
     (out <- (futur-process-make :command cmd :buffer t))
     (cmd2 (build-second-arg-list out))
     (out2 <- (futur-process-make :command cmd :buffer t)))
  (futur-pure out2))
or even better:
(futur-let*
    ((out <- (futur-process-make :command (build-arg-list)))
     (out2 <- (futur-process-make :command (build-second-arg-list out))))
  (futur-pure out2))
Now it looks almost like my alet example:
(alet out (build-arg-list)
  (when out
    (alet out2 (build-second-arg-list out)
      (when out2
        (print out2)))))
which looks more structured and not so flat, and the implementation is
much simpler.
I think it would be interesting to see a version of await-iter which
would queue continuations in an implicit event loop, like JavaScript
does, instead of the explicit polling loop I used for simplicity. I
have not figured that one out yet.
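One possible direction, sketched here purely as speculation on my side
(this is not code from this thread, and the my-iter-loop-* names are
made up): keep a queue of pending futures and step them all from a
single repeating timer, delivering each value to a callback once its
future stops returning 'EAGAIN. It is essentially await-in-background
generalized to many futures:

(defvar my-iter-loop--queue nil)
(defvar my-iter-loop--timer nil)

(defun my-iter-loop-schedule (future callback)
  "Poll FUTURE from a timer; call CALLBACK with its value when ready."
  (push (cons future callback) my-iter-loop--queue)
  (unless my-iter-loop--timer
    (setq my-iter-loop--timer
          (run-with-timer 0 0.2 #'my-iter-loop--step))))

(defun my-iter-loop--step ()
  (let (pending)
    (dolist (entry my-iter-loop--queue)
      (let ((z (funcall (car entry))))
        (if (eq 'EAGAIN z)
            (push entry pending)
          (funcall (cdr entry) z))))
    (setq my-iter-loop--queue (nreverse pending))
    (unless my-iter-loop--queue
      (cancel-timer my-iter-loop--timer)
      (setq my-iter-loop--timer nil))))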
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-10 21:47 ` Tomas Hlavaty
@ 2023-04-11 2:53 ` Stefan Monnier
2023-04-11 19:59 ` Tomas Hlavaty
0 siblings, 1 reply; 53+ messages in thread
From: Stefan Monnier @ 2023-04-11 2:53 UTC (permalink / raw)
To: Tomas Hlavaty
Cc: Jim Porter, Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
>> IOW, your `await` is completely different from Javascript's `await`.
> It depends what do you mean exactly and why do you bring javascript as
> relevant here.
Because that's the kind of model `futur.el` is trying to implement
(where `futur-let*` corresponds loosely to `await`, just without the
auto-CPS-conversion).
> Also Emacs does not have such sophisticated event loop like javascript.
Not sure what you mean by that.
>> And the use `await` above means that your Emacs will block while waiting
>> for one result. `futur-let*` instead lets you compose async operations
>> without blocking Emacs, and thus works more like Javascript's `await`.
> Blocking the current thread for one result is fine, because all the
> futures already run in other threads in "background" so there is nothing
> else to do.
You can't know that. There can be other async processes whose
filters should be run, timers to be executed, other threads to run, ...
> If you mean that you want to use the editor at the same time, just run
> the example in another thread.
The idea is to use `futur.el` *instead of* threads.
> But then you have to look for the result in the *Message* buffer.
> If I actually want to get the same behaviour as C-x C-e
> (eval-last-sexp) then I want await to block Emacs; and this is what
> await at top-level does.
Indeed, there are various cases where you do want to wait (which is why
I provide `futur-wait`). But its use should be fairly limited (to the
"top level").
> No, the iter case does map directly to futures:
>
> (await
> (async-iter
> (let ((a (async-iter
> (message "a1")
> (await-iter (sleep-iter3 3))
> (message "a2")
> 1))
> (b (async-iter
> (message "b1")
> (let ((c (async-iter
> (message "c1")
> (await-iter (sleep-iter3 3))
> (message "c2")
> 2)))
> (message "b2")
> (+ 3 (await-iter c))))))
> (+ (await-iter a) (await-iter b)))))
I must say I don't understand this example: in which sense is it using
"iter"? I don't see any `iter-yield`.
> The difference with for example javascript is that I drive the polling
> loop explicitly here, while javascript queues the continuations in the
> event loop implicitly.
`futur.el` also "queues the continuations in the event loop".
>>> Calling await immediately after async is useless (simply use blocking
>>> call). The point of future is to make the distance between those calls
>>> as big as possible so that the sum of times in the sequential case is
>>> replaced with max of times in the parallel case.
>> You're looking for parallelism. I'm not.
> What do you mean exactly?
That `futur.el` is not primarily concerned with allowing you to run
several subprocesses to exploit your multiple cores. It's instead
primarily concerned with making it easier to write asynchronous code.
One of the intended use case would be for completion tables to return
futures (which, in many cases, will have already been computed
synchronously, but not always).
> I am asking because:
>
> https://wiki.haskell.org/Parallelism_vs._Concurrency
>
> Warning: Not all programmers agree on the meaning of the terms
> 'parallelism' and 'concurrency'. They may define them in different
> ways or do not distinguish them at all.
Yet I have never heard of anyone disagree with the definitions given at
the beginning of that very same page. More specifically those who may
disagree are those who didn't know there was a distinction :-)
> But it seems that you insist on composing promises sequentially:
No, I'm merely making it easy to do that.
> Also futur.el does seems to run callbacks synchronously:
I don't think so: it runs them via `funcall-later`.
> In this javascript example, a and b appear to run in parallel (shall I
> say concurrently?):
>
> function sleep(sec) {
> return new Promise(resolve => {
> setTimeout(() => {resolve(sec);}, sec * 1000);
> });
> }
> async function test() {
> const a = sleep(9);
> const b = sleep(8);
> const z = await a + await b;
> console.log(z);
> }
> test();
>
> Here the console log will show 17 after 9sec.
> It will not show 17 after 17sec.
>
> Can futur.el do that?
Of course. You could do something like
(futur-let*
    ((a (futur-let* ((_ <- (futur-process-make
                            :command '("sleep" "9"))))
          9))
     (b (futur-let* ((_ <- (futur-process-make
                            :command '("sleep" "8"))))
          8))
     (a-val <- a)
     (b-val <- b))
  (message "Result = %s" (+ a-val b-val)))
> Sure, if the consumer does not really need the value of the result of
> the asynchronous computation, just plug in a callback that does
> something later.
How do you plug in a callback in code A which waits for code B to finish
when code A doesn't know if code B is doing its computation
synchronously or not, and if B does it asynchronously, A doesn't know if
it's done via timers, via some kind of hooks, via a subprocess which
will end when the computation is done, via a subprocess which will be
kept around for other purposes after the computation is done, etc... ?
That's what `futur.el` is about: abstracting away those differences
between the uniform API of a "future".
> In your example, you immediately return a lie and then
> fix it later asynchronously from a callback.
Yes. That's not due to `futur.el`, tho: it's due to the conflicting
requirements of jit-lock and the need to make a costly computation in
a subprocess in order to know what needs to be highlighted and how.
> Maybe it is confusing because you describe what the producer does, but
> not what the consumer does. And in your example, it does not matter
> what value the consumer receives because the callback will be able to
> fix it later. In your example, there is no consumer that needs the
> value of the future.
Yes, there is a consumer which will "backpatch" the highlighting.
But since it's done behind the back of jit-lock, we need to write it
by hand.
>> When writing the code by hand, for the cases targeted by my library, you
>> *have* to use process sentinels. `futur.el` just provides a fairly thin
>> layer on top. Lisp can't just "figure those out" for you.
>
> async-process uses process sentinel but this is just an implementation
> detail specific to asynchronous processes. It does not have to leak out
> of the future/async/await "abstraction".
Indeed, the users of the future won't know whether it's waiting for some
process to complete or for something else. They'll just call
`futur-let*` or `futur-wait` or somesuch.
> futur.el is completely broken,
Indeed, it's work in progress, not at all usable as of now.
> I think that your confusion is caused by the decision that
> futur-process-make yields exit code. That is wrong, exit code is
> logically not the resolved value (promise resolution), it indicates
> failure (promise rejection).
Not necessarily, it all depends on what the process is doing.
Similarly the "intended return value" of a process will depend on what
the process does. In some cases it will be the stdout, but I see no
reason to restrict my fundamental function to such a choice. It's easy
to build on top of `futur-process-make` a higher-level function which
returns the stdout as the result of the future.
Stefan
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-11 2:53 ` Stefan Monnier
@ 2023-04-11 19:59 ` Tomas Hlavaty
2023-04-11 20:22 ` Stefan Monnier
0 siblings, 1 reply; 53+ messages in thread
From: Tomas Hlavaty @ 2023-04-11 19:59 UTC (permalink / raw)
To: Stefan Monnier; +Cc: Jim Porter, Karthik Chikmagalur, emacs-devel@gnu.org
On Mon 10 Apr 2023 at 22:53, Stefan Monnier <monnier@iro.umontreal.ca> wrote:
>>> And the use `await` above means that your Emacs will block while waiting
>>> for one result. `futur-let*` instead lets you compose async operations
>>> without blocking Emacs, and thus works more like Javascript's `await`.
>> Blocking the current thread for one result is fine, because all the
>> futures already run in other threads in "background" so there is nothing
>> else to do.
>
> You can't know that. There can be other async processes whose
> filters should be run, timers to be executed, other threads to run,
> ...
I do know that, because in this case I have been talking about the
specific case where the future runs in a thread, i.e. my async-thread
and await-thread code. A similar use-case looks to be partially
sketched in futur-wait, unfortunately without working code to try.
>> If you mean that you want to use the editor at the same time, just run
>> the example in another thread.
>
> The idea is to use `futur.el` *instead of* threads.
What do you mean?
I can see thread-join and thread-signal in futur.el.
It is useful to acknowledge that there are 3 different use-cases:
a) asynchronous processes
b) threads
c) iter
My impression was that futur.el was trying to address a) and b), but
now you say it addresses a) only. That is rather limited.
>> No, the iter case does map directly to futures:
>>
>> (await
>> (async-iter
>> (let ((a (async-iter
>> (message "a1")
>> (await-iter (sleep-iter3 3))
>> (message "a2")
>> 1))
>> (b (async-iter
>> (message "b1")
>> (let ((c (async-iter
>> (message "c1")
>> (await-iter (sleep-iter3 3))
>> (message "c2")
>> 2)))
>> (message "b2")
>> (+ 3 (await-iter c))))))
>> (+ (await-iter a) (await-iter b)))))
>
> I must say I don't understand this example: in which sense is it using
> "iter"? I don't see any `iter-yield`.
The await-iter and async-iter macros use iter under the hood.
One could use iter-yield explicitly too, but that example
does not need to.
I have already sent the definitions of those macros earlier, but here
is the whole thing, self-contained:
;;; -*- lexical-binding: t -*-
(require 'generator)
(defmacro async-iter (&rest body)
  (declare (indent 0))
  `(let ((i (iter-make (iter-yield (progn ,@body))))
         (z 'EAGAIN))
     (setq z (iter-next i))
     (lambda ()
       (when (eq 'EAGAIN z)
         (setq z (iter-next i)))
       (unless (eq 'EAGAIN z)
         (iter-close i))
       z)))
(defmacro await-iter (future)
  (let ((f (gensym)))
    `(let ((,f ,future))
       (let (z)
         (while (eq 'EAGAIN (setq z (funcall ,f)))
           (iter-yield 'EAGAIN))
         z))))
(defun sleep-iter3 (sec)
  (async-iter
    (let ((end (+ (float-time (current-time)) sec)))
      (while (< (float-time (current-time)) end)
        (iter-yield 'EAGAIN)))
    (iter-yield sec)))
(defun await-in-background (future &optional callback secs repeat)
  (let ((z (funcall future)))
    (if (eq 'EAGAIN z)
        (let (timer)
          (setq timer (run-with-timer
                       (or secs 0.2)
                       (or repeat 0.2)
                       (lambda ()
                         (let ((z (funcall future)))
                           (unless (eq 'EAGAIN z)
                             (cancel-timer timer)
                             (when callback
                               (funcall callback z))))))))
      (when callback
        (funcall callback z)))
    z))
(await-in-background
 (async-iter
   (let ((a (async-iter
              (message "a1")
              (await-iter (sleep-iter3 3))
              (message "a2")
              1))
         (b (async-iter
              (message "b1")
              (let ((c (async-iter
                         (message "c1")
                         (await-iter (sleep-iter3 3))
                         (message "c2")
                         2)))
                (message "b2")
                (+ 3 (await-iter c))))))
     (message "Result = %s" (+ (await-iter a) (await-iter b))))))
> `futur.el` also "queues the continuations in the event loop".
I get:
futur.el:97:8: Warning: the function ‘funcall-later’ is not known to be
defined.
>>>> Calling await immediately after async is useless (simply use blocking
>>>> call). The point of future is to make the distance between those calls
>>>> as big as possible so that the sum of times in the sequential case is
>>>> replaced with max of times in the parallel case.
>>> You're looking for parallelism. I'm not.
>> What do you mean exactly?
>
> That `futur.el` is not primarily concerned with allowing you to run
> several subprocesses to exploit your multiple cores. It's instead
> primarily concerned with making it easier to write asynchronous code.
>
> One of the intended use case would be for completion tables to return
> futures (which, in many cases, will have already been computed
> synchronously, but not always).
>
>> I am asking because:
>>
>> https://wiki.haskell.org/Parallelism_vs._Concurrency
>>
>> Warning: Not all programmers agree on the meaning of the terms
>> 'parallelism' and 'concurrency'. They may define them in different
>> ways or do not distinguish them at all.
>
> Yet I have never heard of anyone disagree with the definitions given at
> the beginning of that very same page. More specifically those who may
> disagree are those who didn't know there was a distinction :-)
strawman
I was asking in order to understand why you dismissed my examples by
saying:
You're looking for parallelism. I'm not.
It looks to me that the reason is that futur.el cannot do those things
demonstrated in my examples.
>> But it seems that you insist on composing promises sequentially:
>
> No, I'm merely making it easy to do that.
>
>> Also futur.el does seems to run callbacks synchronously:
>
> I don't think so: it runs them via `funcall-later`.
>
>> In this javascript example, a and b appear to run in parallel (shall I
>> say concurrently?):
>>
>> function sleep(sec) {
>> return new Promise(resolve => {
>> setTimeout(() => {resolve(sec);}, sec * 1000);
>> });
>> }
>> async function test() {
>> const a = sleep(9);
>> const b = sleep(8);
>> const z = await a + await b;
>> console.log(z);
>> }
>> test();
>>
>> Here the console log will show 17 after 9sec.
>> It will not show 17 after 17sec.
>>
>> Can futur.el do that?
>
> Of course. You could do something like
>
> (futur-let*
> ((a (futur-let* ((_ <- (futur-process-make
> :command '("sleep" "9"))))
> 9))
> (b (futur-let* ((_ <- (futur-process-make
> :command '("sleep" "8"))))
> 8))
> (a-val <- a)
> (b-val <- b))
> (message "Result = %s" (+ a-val b-val))))
So will futur.el take 9sec or 17sec?
I cannot test this because it does not work:
I get:
Debugger entered--Lisp error: (wrong-type-argument stringp nil)
#<subr make-process>(:sentinel #f(compiled-function (proc state) #<bytecode -0x17e00415238e9184>) :command ("sleep" "9"))
make-process--with-editor-process-filter(#<subr make-process> :sentinel #f(compiled-function (proc state) #<bytecode -0x17e00415238e9184>) :command ("sleep" "9"))
apply(make-process--with-editor-process-filter #<subr make-process> (:sentinel #f(compiled-function (proc state) #<bytecode -0x17e00415238e9184>) :command ("sleep" "9")))
make-process(:sentinel #f(compiled-function (proc state) #<bytecode -0x17e00415238e9184>) :command ("sleep" "9"))
apply(make-process :sentinel #f(compiled-function (proc state) #<bytecode -0x17e00415238e9184>) (:command ("sleep" "9")))
#f(compiled-function (f) #<bytecode 0x5ccc7625eec9b18>)(#s(futur :clients nil :value nil))
futur-new(#f(compiled-function (f) #<bytecode 0x5ccc7625eec9b18>))
futur-process-make(:command ("sleep" "9"))
(futur-bind (futur-process-make :command '("sleep" "9")) #'(lambda (_) 9))
(let ((a (futur-bind (futur-process-make :command '("sleep" "9")) #'(lambda (_) 9)))) (let ((b (futur-bind (futur-process-make :command '("sleep" "8")) #'(lambda (_) 8)))) (futur-bind a #'(lambda (a-val) (futur-bind b #'(lambda ... ...))))))
(progn (let ((a (futur-bind (futur-process-make :command '("sleep" "9")) #'(lambda (_) 9)))) (let ((b (futur-bind (futur-process-make :command '...) #'(lambda ... 8)))) (futur-bind a #'(lambda (a-val) (futur-bind b #'...))))))
(setq elisp--eval-defun-result (progn (let ((a (futur-bind (futur-process-make :command '...) #'(lambda ... 9)))) (let ((b (futur-bind (futur-process-make :command ...) #'...))) (futur-bind a #'(lambda (a-val) (futur-bind b ...)))))))
elisp--eval-defun()
eval-defun(nil)
funcall-interactively(eval-defun nil)
command-execute(eval-defun)
>> I think that your confusion is caused by the decision that
>> futur-process-make yields exit code. That is wrong, exit code is
>> logically not the resolved value (promise resolution), it indicates
>> failure (promise rejection).
>
> Not necessarily, it all depends on what the process is doing.
>
> Similarly the "intended return value" of a process will depend on what
> the process does. In some cases it will be the stdout, but I see no
> reason to restrict my fundamental function to such a choice.
This overgeneralized thinking is beyond usefulness and harmfully leads
to the problem of how to maintain state. It is better to have the
process future return the output on success and an error on failure.
The error can contain the exit code, which covers the specialized
use-case you were after.
> It's easy to build on top of `futur-process-make` a higher-level
> function which returns the stdout as the result of the future.
It might be easy, but it is unnecessary. That overgeneralized thinking
also led you astray:
From: Stefan Monnier <monnier@iro.umontreal.ca>
Date: Thu, 16 Mar 2023 23:08:25 -0400
id:jwvpm98nlqz.fsf-monnier+emacs@gnu.org
BTW the above code can't work right now. Part of the issue is the
management of `current-buffer`: should the composition of futures
with `futur-let*` save&restore `current-buffer` to mimic more closely
the behavior one would get with plain old sequential execution? If
so, should we do the same with `point`? What about other such state?
It is better not to open this can of worms.
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-11 19:59 ` Tomas Hlavaty
@ 2023-04-11 20:22 ` Stefan Monnier
2023-04-11 23:07 ` Tomas Hlavaty
0 siblings, 1 reply; 53+ messages in thread
From: Stefan Monnier @ 2023-04-11 20:22 UTC (permalink / raw)
To: Tomas Hlavaty; +Cc: Jim Porter, Karthik Chikmagalur, emacs-devel@gnu.org
>> The idea is to use `futur.el` *instead of* threads.
> What do you mean?
> I can see thread-join and thread-signal in futur.el.
It's only used to work around the current lack of an async version of
process-send-string.
> It is useful to acknowledge, that there are 3 different use-cases:
> a) asynchronous processes
> b) threads
> c) iter
I can't see how `iter` would be a use case for futures.
> My impression was that futur.el was trying to address a) and b) but now
> you say it does address a) only. That is rather limited.
Looking at existing code, iter and threads are virtually never used, so
from where I stand it seems to cover the 99% cases.
>>> No, the iter case does map directly to futures:
>>>
>>> (await
>>> (async-iter
>>> (let ((a (async-iter
>>> (message "a1")
>>> (await-iter (sleep-iter3 3))
>>> (message "a2")
>>> 1))
>>> (b (async-iter
>>> (message "b1")
>>> (let ((c (async-iter
>>> (message "c1")
>>> (await-iter (sleep-iter3 3))
>>> (message "c2")
>>> 2)))
>>> (message "b2")
>>> (+ 3 (await-iter c))))))
>>> (+ (await-iter a) (await-iter b)))))
>>
>> I must say I don't understand this example: in which sense is it using
>> "iter"? I don't see any `iter-yield`.
>
> await-iter and async-iter macros are using iter under the hood.
The point of `iter` is to provide something that will iterate through
a sequence of things. Here I don't see any form of iteration. You seem
to use your `iter`s just as (expensive) thunks (futures).
Maybe what you mean by "iter" is the use of CPS-translation
(implemented by `generator.el`)?
>> `futur.el` also "queues the continuations in the event loop".
>
> I get:
>
> futur.el:97:8: Warning: the function ‘funcall-later’ is not known to be
> defined.
Yup, it's defined in C currently. You can use
(unless (fboundp 'funcall-later)
  (defun funcall-later (function &rest args)
    ;; FIXME: Not sure if `run-with-timer' preserves ordering between
    ;; different calls with the same target time.
    (apply #'run-with-timer 0 nil function args)))
>> Of course. You could do something like
>>
>> (futur-let*
>> ((a (futur-let* ((_ <- (futur-process-make
>> :command '("sleep" "9"))))
>> 9))
>> (b (futur-let* ((_ <- (futur-process-make
>> :command '("sleep" "8"))))
>> 8))
>> (a-val <- a)
>> (b-val <- b))
>> (message "Result = %s" (+ a-val b-val))))
>
> So will futur.el take 9sec or 17sec?
9 secs, of course: the above creates 2 futures and emits the message
when they're both done. Since those futures are executed in
subprocesses, they execute concurrently.
>> Similarly the "intended return value" of a process will depend on what
>> the process does. In some cases it will be the stdout, but I see no
>> reason to restrict my fundamental function to such a choice.
> This overgeneralized thinking is beyond usefulness and harmfully leads
> to the problem of how to maintain state.
While I do like to over-generalize, in this case, there is no
generalization involved. The code is the simple result of a thin
wrapper around the existing `make-process` to make it obey the
`futur.el` API. So if it's overgeneralized, it's not my fault, it's
`make-process`s :-)
Stefan
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-11 20:22 ` Stefan Monnier
@ 2023-04-11 23:07 ` Tomas Hlavaty
2023-04-12 6:13 ` Eli Zaretskii
0 siblings, 1 reply; 53+ messages in thread
From: Tomas Hlavaty @ 2023-04-11 23:07 UTC (permalink / raw)
To: Stefan Monnier; +Cc: Jim Porter, Karthik Chikmagalur, emacs-devel@gnu.org
On Tue 11 Apr 2023 at 16:22, Stefan Monnier <monnier@iro.umontreal.ca> wrote:
>> It is useful to acknowledge, that there are 3 different use-cases:
>> a) asynchronous processes
>> b) threads
>> c) iter
>
> I can't see how `iter` would be a use case for futures.
It makes it easy to implement async/await without the need for
asynchronous processes or threads.
>> My impression was that futur.el was trying to address a) and b) but
>> now you say it does address a) only. That is rather limited.
>
> Looking at existing code, iter and threads are virtually never used,
> so from where I stand it seems to cover the 99% cases.
Strange that futur.el is "primarily concerned with making it easier to
write asynchronous code" but limits itself to asynchronous processes
only.
I am also "concerned with making it easier to write asynchronous code",
but I explore the various options available in Emacs so that the result
does not end up with some pathological flaw due to the specifics of
asynchronous processes.
I do not know how usable threads in Emacs are at the moment,
but they are already there and the examples I tried worked well.
If they are not usable now, I hope they will be usable in the future.
(Pun intended :-)
And I do not think I am the only one:
From: Po Lu <luangruo@yahoo.com>
Date: Mon, 03 Apr 2023 12:03:17 +0800
id:87mt3pr4sa.fsf@yahoo.com
I have not looked carefully at this thread, but I would hope that if
people are discussing a way to add multiprocessing to Emacs, we
settle on separate threads of execution executing in parallel, with
all the interlocking necessary to make that happen, like in most Unix
thread implementations.
It would be a shame not to consider them.
Especially when that use-case is easy to implement and
works as demonstrated with the promise-pipelining-server3 example.
>>>> No, the iter case does map directly to futures:
>>>>
>>>> (await
>>>> (async-iter
>>>> (let ((a (async-iter
>>>> (message "a1")
>>>> (await-iter (sleep-iter3 3))
>>>> (message "a2")
>>>> 1))
>>>> (b (async-iter
>>>> (message "b1")
>>>> (let ((c (async-iter
>>>> (message "c1")
>>>> (await-iter (sleep-iter3 3))
>>>> (message "c2")
>>>> 2)))
>>>> (message "b2")
>>>> (+ 3 (await-iter c))))))
>>>> (+ (await-iter a) (await-iter b)))))
>>>
>>> I must say I don't understand this example: in which sense is it using
>>> "iter"? I don't see any `iter-yield`.
>>
>> await-iter and async-iter macros are using iter under the hood.
>
> The point of `iter` is to provide something that will iterate through
> a sequence of things. Here I don't see any form of iteration.
I sent the whole self-contained example. The iteration happens in the
top-level loop (see await-in-background).
> You seem to use your `iter`s just as (expensive) thunks (futures).
What do you mean?
Are thunks expensive?
More expensive than cl-struct and CLOS in futur.el?
Surely it's the other way round.
In the use-case of iter (no asynchronous processes or threads), iter is
used to do cps rewriting needed by async-iter.
> Maybe what you mean by "iter" is the use of CPS-translation
> (implemented by `generator.el`)?
Yes, iter provides the sequence of steps needed to compute the whole
async-iter expression. See
(iter-make (iter-yield (progn ,@body)))
in async-iter macro.
>>> Of course. You could do something like
>>>
>>> (futur-let*
>>> ((a (futur-let* ((_ <- (futur-process-make
>>> :command '("sleep" "9"))))
>>> 9))
>>> (b (futur-let* ((_ <- (futur-process-make
>>> :command '("sleep" "8"))))
>>> 8))
>>> (a-val <- a)
>>> (b-val <- b))
>>> (message "Result = %s" (+ a-val b-val))))
>>
>> So will futur.el take 9sec or 17sec?
>
> 9 secs, of course: the above creates 2 futures and emits the message
> when they're both done. Since those futures are executed in
> subprocesses, they execute concurrently.
Good. So I don't understand your remark about parallelism. Maybe I
should believe you that it would take 9sec, but I would rather verify
that by myself because being "executed in subprocesses" does not
necessarily mean "they execute concurrently".
>>> Similarly the "intended return value" of a process will depend on what
>>> the process does. In some cases it will be the stdout, but I see no
>>> reason to restrict my fundamental function to such a choice.
>> This overgeneralized thinking is beyond usefulness and harmfully leads
>> to the problem of how to maintain state.
>
> While I do like to over-generalize, in this case, there is no
> generalization involved. The code is the simple result of a thin
> wrapper around the existing `make-process` to make it obey the
> `futur.el` API. So if it's overgeneralized, it's not my fault, it's
> `make-process`s :-)
Not really: make-process is general because it covers many use-cases.
Futures are quite specific. Anything beyond the specifics causes issues.
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-11 23:07 ` Tomas Hlavaty
@ 2023-04-12 6:13 ` Eli Zaretskii
2023-04-17 20:51 ` Tomas Hlavaty
0 siblings, 1 reply; 53+ messages in thread
From: Eli Zaretskii @ 2023-04-12 6:13 UTC (permalink / raw)
To: Tomas Hlavaty; +Cc: monnier, jporterbugs, karthikchikmagalur, emacs-devel
> From: Tomas Hlavaty <tom@logand.com>
> Cc: Jim Porter <jporterbugs@gmail.com>,
> Karthik Chikmagalur <karthikchikmagalur@gmail.com>,
> "emacs-devel@gnu.org" <emacs-devel@gnu.org>
> Date: Wed, 12 Apr 2023 01:07:32 +0200
>
> Strange that futur.el is "primarily concerned with making it easier to
> write asynchronous code" but limits itself to asynchronous processes
> only.
Async subprocesses are currently the only feature in Emacs that
provides an opportunity for writing asynchronous code.
> I do not know how useable threads in Emacs are at the moment,
> but they are already there and the examples I tried worked well.
If you think Lisp threads in Emacs allow asynchronous processing, you
are mistaken: they don't. Only one such thread can be running at any
given time. Whereas with async subprocesses, several such
subprocesses could be running at the same time each one doing its own
job (provided that your CPU has more than a single execution unit).
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-03-29 18:47 ` Stefan Monnier
@ 2023-04-17 3:46 ` Lynn Winebarger
2023-04-17 19:50 ` Stefan Monnier
2023-04-17 21:00 ` Tomas Hlavaty
0 siblings, 2 replies; 53+ messages in thread
From: Lynn Winebarger @ 2023-04-17 3:46 UTC (permalink / raw)
To: Stefan Monnier
Cc: Tomas Hlavaty, Jim Porter, Karthik Chikmagalur, Thomas Koch,
emacs-devel@gnu.org
On Wed, Mar 29, 2023 at 2:48 PM Stefan Monnier <monnier@iro.umontreal.ca> wrote:
>
> >> Part of the issue is the management of `current-buffer`: should the
> >> composition of futures with `futur-let*` save&restore
> >> `current-buffer` to mimic more closely the behavior one would get
> >> with plain old sequential execution? If so, should we do the same
> >> with `point`? What about other such state?
> >
> > I do not think there is a good implicit solution.
> > Either it would save too much state or too little,
> > or save it the wrong way.
>
> Currently it doesn't save anything, which is "ideal" in terms of
> efficiency, but sometimes leads to code that's more verbose than
> I'd like.
>
> Someone suggested to save as much as threads do, but that's not
> practical.
>
This whole thread seems to echo the difference between "stackless" and
"stackful" coroutines discussed in
https://nullprogram.com/blog/2019/03/10/ by the author of emacs-aio,
with generator-style rewriting corresponding to stackless and threads
to "stackful". So when you say "save as much as threads do", I'm not
clear if you're talking about rewriting code to essentially create a
heap allocated version of the same information that a thread has in
the form of its stack, or something more limited like some particular
set of special bindings.
It seems to me what one would really like is for primitives that might
block to just return a future that's treated like any other value,
except that "futurep" would return true and primitive operations would
implicitly wait on the futures in their arguments. But making
something like that work would require extensive reengineering of
emacs internals.
Looking at src/thread.c, it appears Emacs threads are just a thin layer
over system threads with a global lock. An alternative would be to
use a basic user-space cooperative threading implementation running on
top of the system threads, each of which would simply run a trampoline
to whatever user-space thread was assigned to it. The user-space
threads would not be locked to any particular system thread, but would
go back to the queue of some Emacs-owned scheduler after yielding
control. Then, if a primitive is about to block, it could switch to a
fresh user-space thread running on the same system thread, put a future
that references this new thread in the continuation for the original
thread (which is placed back in Emacs's scheduler queue), then give up
the GIL before making the blocking call on the system thread. The Emacs
scheduler would then choose some available system thread in its pool
and dispatch the next user-space continuation to it, eventually
redispatching the original user-space thread. The whole sequence
would play out again if a primitive operation blocked trying to read
the value of the first future.
I think that would provide the asynchronous but not concurrent
semantics you're talking about. But it would be a lot of work.
Lynn
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-17 3:46 ` Lynn Winebarger
@ 2023-04-17 19:50 ` Stefan Monnier
2023-04-18 2:56 ` Lynn Winebarger
2023-04-18 6:19 ` Jim Porter
2023-04-17 21:00 ` Tomas Hlavaty
1 sibling, 2 replies; 53+ messages in thread
From: Stefan Monnier @ 2023-04-17 19:50 UTC (permalink / raw)
To: Lynn Winebarger
Cc: Tomas Hlavaty, Jim Porter, Karthik Chikmagalur, Thomas Koch,
emacs-devel@gnu.org
> This whole thread seems to echo the difference between "stackless" and
> "stackful" coroutines discussed in
> https://nullprogram.com/blog/2019/03/10/ by the author of emacs-aio,
> with generator-style rewriting corresponding to stackless and threads
> to "stackful". So when you say "save as much as threads do", I'm not
> clear if you're talking about rewriting code to essentially create a
> heap allocated version of the same information that a thread has in
> the form of its stack, or something more limited like some particular
> set of special bindings.
Indeed to "save as much as threads do" we'd have to essentially create
a heap allocated version of the same info.
[ I don't think that's what we want. ]
> It seems to me what one would really like is for primitives that might
> block to just return a future that's treated like any other value,
> except that "futurep" would return true and primitive operations would
> implicitly wait on the futures in their arguments.
I think experience shows that doing that implicitly everywhere is not
a good idea, because it makes it all too easy to accidentally block
waiting for a future.
Instead, you want to replace this "implicit" by a mechanism that is "as
lightweight as possible" (so it's "almost implicit") and that makes it
easy for the programmer to control whether the code should rather block
for the future's result (e.g. `futur-wait`) or "delay itself" until
after the future's completion (e.g. `futur-let*`).
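Schematically, using only the names that have appeared in this thread
(futur.el is still in flux, so the exact signatures below are assumptions):

  ;; Blocking: stop the caller until the subprocess exits.
  (futur-wait (futur-process-make :command '("sleep" "1")))

  ;; Non-blocking: return immediately; the body runs as a continuation
  ;; once the future completes.
  (futur-let* ((_ <- (futur-process-make :command '("sleep" "1"))))
    (message "subprocess finished"))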
> I think that would provide the asynchronous but not concurrent
> semantics you're talking about.
FWIW, I'm in favor of both more concurrency and more parallelism.
My earlier remark was simply pointing out that the design of `futur.el`
is not trying to make Emacs faster.
Stefan
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-12 6:13 ` Eli Zaretskii
@ 2023-04-17 20:51 ` Tomas Hlavaty
2023-04-18 2:25 ` Eli Zaretskii
0 siblings, 1 reply; 53+ messages in thread
From: Tomas Hlavaty @ 2023-04-17 20:51 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: monnier, jporterbugs, karthikchikmagalur, emacs-devel
On Wed 12 Apr 2023 at 09:13, Eli Zaretskii <eliz@gnu.org> wrote:
> Async subprocesses are currently the only feature in Emacs that
> provides an opportunity for writing asynchronous code.
Do you not consider, for example, using implementations of async/await
using promises and CPS rewriting "writing asynchronous code"?
Do you not consider, for example, doing the same using callbacks as
"writing asynchronous code"?
>> I do not know how useable threads in Emacs are at the moment,
>> but they are already there and the examples I tried worked well.
>
> If you think Lisp threads in Emacs allow asynchronous processing, you
> are mistaken: they don't. Only one such thread can be running at any
> given time.
The examples I wrote worked fine with threads. The examples did not
require parallelism. I do not think that what you suggest disqualifies
threads for "writing asynchronous code".
It would be great to have a better thread implementation, but that does
not seem to have anything to do with "writing asynchronous code".
Here is what I understand by synchronous code:
(plus 1 2)
returns 3 immediately
Here is what I understand by asynchronous code:
(plus 1 2)
returns something immediately
and then some time later 3 appears in the *Messages* buffer, for
example
How that is achieved is an implementation (possibly leaky) detail.
It does not require anything running in parallel.
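For example, here is a minimal sketch of such an asynchronous "plus",
using nothing but a timer on the single Emacs thread (the name async-plus
is made up here):

  (defun async-plus (a b)
    "Return immediately; show A + B in *Messages* a bit later."
    (run-with-timer 0.5 nil
                    (lambda () (message "%s" (+ a b)))))

  (async-plus 1 2)  ; returns a timer object; "3" shows up ~0.5s later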
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-17 3:46 ` Lynn Winebarger
2023-04-17 19:50 ` Stefan Monnier
@ 2023-04-17 21:00 ` Tomas Hlavaty
1 sibling, 0 replies; 53+ messages in thread
From: Tomas Hlavaty @ 2023-04-17 21:00 UTC (permalink / raw)
To: Lynn Winebarger, Stefan Monnier
Cc: Jim Porter, Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
On Sun 16 Apr 2023 at 23:46, Lynn Winebarger <owinebar@gmail.com> wrote:
> It seems to me what one would really like is for primitives that might
> block to just return a future that's treated like any other value,
> except that "futurep" would return true and primitive operations would
> implicitly wait on the futures in their arguments.
Which programming language uses implicit await?
Sounds like a bad idea.
How would the language distinguish if the future should be awaited or
simply passed around? E.g. should (push future list) await the future
and push the value or push the future without awaiting?
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-17 20:51 ` Tomas Hlavaty
@ 2023-04-18 2:25 ` Eli Zaretskii
2023-04-18 5:01 ` Tomas Hlavaty
2023-04-18 10:35 ` Konstantin Kharlamov
0 siblings, 2 replies; 53+ messages in thread
From: Eli Zaretskii @ 2023-04-18 2:25 UTC (permalink / raw)
To: Tomas Hlavaty; +Cc: monnier, jporterbugs, karthikchikmagalur, emacs-devel
> From: Tomas Hlavaty <tom@logand.com>
> Cc: monnier@iro.umontreal.ca, jporterbugs@gmail.com,
> karthikchikmagalur@gmail.com, emacs-devel@gnu.org
> Date: Mon, 17 Apr 2023 22:51:22 +0200
>
> On Wed 12 Apr 2023 at 09:13, Eli Zaretskii <eliz@gnu.org> wrote:
> > Async subprocesses are currently the only feature in Emacs that
> > provides an opportunity for writing asynchronous code.
>
> Do you not consider, for example, using implementations of async/await
> using promises and CPS rewriting "writing asynchronous code"?
>
> Do you not consider, for example, doing the same using callbacks as
> "writing asynchronous code"?
Not necessarily.
> >> I do not know how useable threads in Emacs are at the moment,
> >> but they are already there and the examples I tried worked well.
> >
> > If you think Lisp threads in Emacs allow asynchronous processing, you
> > are mistaken: they don't. Only one such thread can be running at any
> > given time.
>
> The examples I wrote worked fine with threads. The examples did not
> require parallelism. I do not think that what you suggest disqualifies
> threads for "writing asynchronous code".
>
> It would be great to have better thread implementation, but that does
> not seem to have anything to do with "writing asynchronous code".
>
> Here is what I understand by synchronous code:
>
> (plus 1 2)
> returns 3 immediately
>
> Here is what I understand by asynchronous code:
>
> (plus 1 2)
> returns something immediately
> and then some time later 3 appears in the *Messages* buffer, for
> example
>
> How that is achieved is an implementation (possibly leaky) detail.
In my book, asynchronous means parallel processing, not just delayed
results.
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-17 19:50 ` Stefan Monnier
@ 2023-04-18 2:56 ` Lynn Winebarger
2023-04-18 3:48 ` Stefan Monnier
2023-04-18 6:19 ` Jim Porter
1 sibling, 1 reply; 53+ messages in thread
From: Lynn Winebarger @ 2023-04-18 2:56 UTC (permalink / raw)
To: Stefan Monnier
Cc: Tomas Hlavaty, Jim Porter, Karthik Chikmagalur, Thomas Koch,
emacs-devel@gnu.org
On Mon, Apr 17, 2023 at 3:50 PM Stefan Monnier <monnier@iro.umontreal.ca> wrote:
> > This whole thread seems to echo the difference between "stackless" and
> > "stackful" coroutines discussed in
> > https://nullprogram.com/blog/2019/03/10/ by the author of emacs-aio,
> > with generator-style rewriting corresponding to stackless and threads
> > to "stackful". So when you say "save as much as threads do", I'm not
> > clear if you're talking about rewriting code to essentially create a
> > heap allocated version of the same information that a thread has in
> > the form of its stack, or something more limited like some particular
> > set of special bindings.
>
> Indeed to "save as much as threads do" we'd have to essentially create
> a heap allocated version of the same info.
> [ I don't think that's what we want. ]
It sounds like you would end up with a user-implemented call/cc or
"spaghetti stack" construct, so I would agree.
> > It seems to me what one would really like is for primitives that might
> > block to just return a future that's treated like any other value,
> > except that "futurep" would return true and primitive operations would
> > implicitly wait on the futures in their arguments.
>
> I think experience shows that doing that implicitly everywhere is not
> a good idea, because it makes it all too easy to accidentally block
> waiting for a future.
I wrote that incorrectly - I meant that primitive operations would add
a continuation to the future and return a future for their result.
Basically, a computation would never block, it would just build
continuation trees (in the form of futures) and return to the
top-level. Although that assumes the system would be able to allocate
those futures without blocking for GC work.
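To make that concrete, here is a toy sketch (purely hypothetical, nothing
to do with futur.el's actual code) of futures that never block and only
accumulate continuations:

  (require 'cl-lib)

  ;; A toy future: a value, a done flag, and the continuations to run
  ;; once the value is available.
  (cl-defstruct (toy-future (:constructor toy-future-create))
    value done continuations)

  (defun toy-future-settle (fut value)
    "Mark FUT as done with VALUE and run its queued continuations."
    (setf (toy-future-value fut) value
          (toy-future-done fut) t)
    (dolist (k (nreverse (toy-future-continuations fut)))
      (funcall k value))
    (setf (toy-future-continuations fut) nil))

  (defun toy-future-then (fut fn)
    "Return a new future for the result of FN on FUT's value, without blocking."
    (let ((out (toy-future-create)))
      (if (toy-future-done fut)
          (toy-future-settle out (funcall fn (toy-future-value fut)))
        (push (lambda (v) (toy-future-settle out (funcall fn v)))
              (toy-future-continuations fut)))
      out))

Routing every primitive result through something like toy-future-then is
exactly what builds that fine-grained graph of continuations.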
> Instead, you want to replace this "implicit" by a mechanism that is "as
> lightweight as possible" (so it's "almost implicit") and that makes it
> easy for the programmer to control whether the code should rather block
> for the future's result (e.g. `futur-wait`) or "delay itself" until
> after the future's completion (e.g. `futur-let*`).
At some point in this thread you stated you weren't sure what the
right semantics are in terms of the information to save, etc. I posed
this implicit semantics as a way to think about what "the right thing"
would be. Would all operations preserve the same (lisp) machine
state, or would it differ depending on the nature of the operator? [
is the kind of question it might be useful to work out in this thought
experiment ]
The way you've defined future-let, the variable being bound is a
future because you are constructing it as one, but it is still a
normal variable.
What if, instead, we define a "futur-abstraction" (lambda/futur (v)
body ...) in which v is treated as a future by default, and a
future-conditional form (if-available v ready-expr not-ready-expr)
with the obvious meaning. If v appears as the argument to a
lambda/future function object it will be passed as is. Otherwise, the
reference to v would be rewritten as (futur-wait v). Some syntactic
sugar (futur-escape v) => (if-available v v) could be used to pass the
future to arbitrary functions. Then futur-let and futur-let* could be
defined with the standard expansion with lambda replaced by
lambda/futur.
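A naive sketch of that rewriting (everything here is hypothetical, it
reuses the made-up names above, and it deliberately ignores quoting,
shadowing, macros, and the "callee is itself a lambda/futur" case):

  (defun my-futur--rewrite (form vars)
    "Rewrite references to VARS in FORM as (futur-wait VAR)."
    (cond
     ((memq form vars) `(futur-wait ,form))
     ((and (consp form) (eq (car form) 'futur-escape))
      (cadr form))                      ; (futur-escape v) passes v as is
     ((consp form)
      (cons (my-futur--rewrite (car form) vars)
            (my-futur--rewrite (cdr form) vars)))
     (t form)))

  (defmacro lambda/futur (args &rest body)
    "Like `lambda', but treat each symbol in ARGS as a future in BODY."
    `(lambda ,args
       ,@(mapcar (lambda (f) (my-futur--rewrite f args)) body)))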
Otherwise, I'm not sure what the syntax really buys you.
> > I think that would provide the asynchronous but not concurrent
> > semantics you're talking about.
>
> FWIW, I'm in favor of both more concurrency and more parallelism.
> My earlier remark was simply pointing out that the design of `futur.el`
> is not trying to make Emacs faster.
It would be easier if elisp threads were orthogonal to system threads,
so that any elisp thread could be run on any available system thread.
Multiprocessing could be done by creating multiple lisp VMs in a
process (i.e. lisp VM orthogonal to a physical core), each with their
own heap and globals in addition to some shared heap with well-defined
synchronization. The "global interpreter lock" would become a "lisp
machine lock", with (non-preemptive, one-shot continuation type) elisp
threads being local to the machine. That seems to me the simplest way
to coherently extend the lisp semantics to multi-processing. The
display would presumably have to exist in the shared space for
anything interesting to happen in terms of editing, but buffers could
be local to a particular lisp machine.
I thought I saw segmented stack allocation implemented in master last
year (by Mattias Engdegård?), but it doesn't appear to be there any
longer. If that infrastructure were there, then it would seem user
space cooperative threading via one-shot continuations (+ trampolining
by kernel threads + user-space scheduling of user-space threads) would
be viable.
Lynn
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-18 2:56 ` Lynn Winebarger
@ 2023-04-18 3:48 ` Stefan Monnier
2023-04-22 2:48 ` Lynn Winebarger
0 siblings, 1 reply; 53+ messages in thread
From: Stefan Monnier @ 2023-04-18 3:48 UTC (permalink / raw)
To: Lynn Winebarger
Cc: Tomas Hlavaty, Jim Porter, Karthik Chikmagalur, Thomas Koch,
emacs-devel@gnu.org
> I wrote that incorrectly - I meant that primitive operations would add
> a continuation to the future and return a future for their result.
> Basically, a computation would never block, it would just build
> continuation trees (in the form of futures) and return to the
> top-level. Although that assumes the system would be able to allocate
> those futures without blocking for GC work.
I think this would end up being extremely inefficient, since for every
tiny operation you'd end up creating a future, linked to the
previous computation (basically a sort of fine-grained dataflow graph).
I'm sure in theory it can be made tolerably efficient, but it'd need
a very different implementation strategy than what we have now.
Furthermore I expect it would lead to surprising semantics when used
with side-effecting operations.
IOW you can probably create a usable system that uses this approach, but
with a different language and a different implementation :-)
`futur.el` instead forces the programmer to be somewhat explicit about
the concurrency points, so they have *some* control over efficiency and
interaction with side-effects.
> At some point in this thread you stated you weren't sure what the
> right semantics are in terms of the information to save, etc. I posed
> this implicit semantics as a way to think about what "the right thing"
> would be. Would all operations preserve the same (lisp) machine
> state, or would it differ depending on the nature of the operator? [
> is the kind of question it might be useful to work out in this thought
> experiment ]
I can't imagine it working sanely if the kind of state that's saved
depends on the operation: the saved-state is basically private to the
continuation, so it might make sense to do it differently depending on
the continuation (tho even that would introduce a lot of complexity),
but not depending on the operation.
The coders will need to know what is saved and what isn't, so the
more complex this rule is, the harder it is to learn to use this
tool correctly.
> The way you've defined future-let, the variable being bound is a
> future because you are constructing it as one, but it is still a
> normal variable.
>
> What if, instead, we define a "futur-abstraction" (lambda/futur (v)
> body ...) in which v is treated as a future by default, and a
> future-conditional form (if-available v ready-expr not-ready-expr)
> with the obvious meaning. If v appears as the argument to a
> lambda/future function object it will be passed as is. Otherwise, the
> reference to v would be rewritten as (futur-wait v). Some syntactic
> sugar (futur-escape v) => (if-available v v) could be used to pass the
> future to arbitrary functions.
Seems complex, and I'm not sure it would buy you anything in practice.
> Otherwise, I'm not sure what the syntax really buys you.
Not very much.
To some extent it simply helps reduce the indentation due to nesting.
> It would be easier if elisp threads were orthogonal to system threads,
> so that any elisp thread could be run on any available system thread.
Currently, only one thread can run ELisp at a time. Whether that's
implemented using several system threads or not is largely an
internal detail.
> Multiprocessing could be done by creating multiple lisp VMs in a
> process (i.e. lisp VM orthogonal to a physical core),
Yes, "could".
Other than approaches like `async.el` we don't really know how to
implement that, sadly.
[ Many years ago I proposed to rely on `fork`, but it so happens that
it's not really an option for the w32 builds :-( ]
> each with their own heap and globals in addition to some shared heap
> with well-defined synchronization. The "global interpreter lock"
> would become a "lisp machine lock", with (non-preemptive, one-shot
> continuation type) elisp threads being local to the machine.
> That seems to me the simplest way to coherently extend the lisp
> semantics to multi-processing. The display would presumably have to
> exist in the shared space for anything interesting to happen in terms
> of editing, but buffers could be local to a particular lisp machine.
It's not too hard to come up with a design that makes sense, indeed, the
problem is to actually do the work of bringing the current code to
that design.
Stefan
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-18 2:25 ` Eli Zaretskii
@ 2023-04-18 5:01 ` Tomas Hlavaty
2023-04-18 10:35 ` Konstantin Kharlamov
1 sibling, 0 replies; 53+ messages in thread
From: Tomas Hlavaty @ 2023-04-18 5:01 UTC (permalink / raw)
To: Eli Zaretskii; +Cc: monnier, jporterbugs, karthikchikmagalur, emacs-devel
On Tue 18 Apr 2023 at 05:25, Eli Zaretskii <eliz@gnu.org> wrote:
>> On Wed 12 Apr 2023 at 09:13, Eli Zaretskii <eliz@gnu.org> wrote:
>> > Async subprocesses are currently the only feature in Emacs that
>> > provides an opportunity for writing asynchronous code.
>>
>> Do you not consider, for example, using implementations of async/await
>> using promises and CPS rewriting "writing asynchronous code"?
>>
>> Do you not consider, for example, doing the same using callbacks as
>> "writing asynchronous code"?
>
> Not necessarily.
Fascinating.
>> >> I do not know how useable threads in Emacs are at the moment,
>> >> but they are already there and the examples I tried worked well.
>> >
>> > If you think Lisp threads in Emacs allow asynchronous processing, you
>> > are mistaken: they don't. Only one such thread can be running at any
>> > given time.
>>
>> The examples I wrote worked fine with threads. The examples did not
>> require parallelism. I do not think that what you suggest disqualifies
>> threads for "writing asynchronous code".
>>
>> It would be great to have better thread implementation, but that does
>> not seem to have anything to do with "writing asynchronous code".
>>
>> Here is what I understand by synchronous code:
>>
>> (plus 1 2)
>> returns 3 immediately
>>
>> Here is what I understand by asynchronous code:
>>
>> (plus 1 2)
>> returns something immediately
>> and then some time later 3 appears in the *Messages* buffer, for
>> example
>>
>> How that is achieved is an implementation (possibly leaky) detail.
>
> In my book, asynchronous means parallel processing, not just delayed
> results.
Interesting, this is the first time I have encountered such a definition
of asynchronous.
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-17 19:50 ` Stefan Monnier
2023-04-18 2:56 ` Lynn Winebarger
@ 2023-04-18 6:19 ` Jim Porter
2023-04-18 9:52 ` Po Lu
1 sibling, 1 reply; 53+ messages in thread
From: Jim Porter @ 2023-04-18 6:19 UTC (permalink / raw)
To: Stefan Monnier, Lynn Winebarger
Cc: Tomas Hlavaty, Karthik Chikmagalur, Thomas Koch,
emacs-devel@gnu.org
On 4/17/2023 12:50 PM, Stefan Monnier wrote:
>> This whole thread seems to echo the difference between "stackless" and
>> "stackful" coroutines discussed in
>> https://nullprogram.com/blog/2019/03/10/ by the author of emacs-aio,
>> with generator-style rewriting corresponding to stackless and threads
>> to "stackful". So when you say "save as much as threads do", I'm not
>> clear if you're talking about rewriting code to essentially create a
>> heap allocated version of the same information that a thread has in
>> the form of its stack, or something more limited like some particular
>> set of special bindings.
>
> Indeed to "save as much as threads do" we'd have to essentially create
> a heap allocated version of the same info.
> [ I don't think that's what we want. ]
I think this subthread is about two different aspects, which is probably
due in part to me not distinguishing the two enough initially; one of
the reasons I'd find it useful to "save as much as threads do" is so
that there could be a path towards packaging tasks up to run on another
thread, and for them to eventually have enough context that we could run
multiple packaged tasks *concurrently*. That's separate from a
more-general asynchronous programming library (though they would likely
interact with one another).
Javascript might be a useful analogue here, since it too was originally
single-threaded with an event loop, and more concurrency features were
added in later. Similarly to "modern" JS, we could have async/await
constructs that (primarily) work on the main thread, plus something
similar to web workers, which operate as mostly-independent threads that
you can communicate with via messages.
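As a very rough sketch of the "worker you talk to via messages" half,
built only from the thread primitives Emacs already has (so cooperative
rather than parallel; all the toy-worker names are made up):

  (defvar toy-worker--queue nil)
  (defvar toy-worker--mutex (make-mutex "toy-worker"))
  (defvar toy-worker--cond (make-condition-variable toy-worker--mutex))

  (defun toy-worker-send (msg)
    "Queue MSG for the worker thread."
    (with-mutex toy-worker--mutex
      (setq toy-worker--queue (nconc toy-worker--queue (list msg)))
      (condition-notify toy-worker--cond)))

  ;; The worker loops forever, waiting for messages and handling them.
  (make-thread
   (lambda ()
     (while t
       (let (msg)
         (with-mutex toy-worker--mutex
           (while (null toy-worker--queue)
             (condition-wait toy-worker--cond))
           (setq msg (pop toy-worker--queue)))
         (message "worker got: %S" msg))))
   "toy-worker")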
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-18 6:19 ` Jim Porter
@ 2023-04-18 9:52 ` Po Lu
2023-04-18 12:38 ` Lynn Winebarger
2023-04-18 13:14 ` Stefan Monnier
0 siblings, 2 replies; 53+ messages in thread
From: Po Lu @ 2023-04-18 9:52 UTC (permalink / raw)
To: Jim Porter
Cc: Stefan Monnier, Lynn Winebarger, Tomas Hlavaty,
Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
Jim Porter <jporterbugs@gmail.com> writes:
> On 4/17/2023 12:50 PM, Stefan Monnier wrote:
>>> This whole thread seems to echo the difference between "stackless"
>>> and
>>> "stackful" coroutines discussed in
>>> https://nullprogram.com/blog/2019/03/10/ by the author of
>>> emacs-aio,
>>> with generator-style rewriting corresponding to stackless and
>>> threads
>>> to "stackful". So when you say "save as much as threads do", I'm
>>> not
>>> clear if you're talking about rewriting code to essentially create
>>> a
>>> heap allocated version of the same information that a thread has in
>>> the form of its stack, or something more limited like some
>>> particular
>>> set of special bindings.
>> Indeed to "save as much as threads do" we'd have to essentially
>> create
>> a heap allocated version of the same info.
>> [ I don't think that's what we want. ]
>
> I think this subthread is about two different aspects, which is
> probably due in part to me not distinguishing the two enough
> initially; one of the reasons I'd find it useful to "save as much as
> threads do" is so that there could be a path towards packaging tasks
> up to run on another thread, and for them to eventually have enough
> context that we could run multiple packaged tasks
> *concurrently*. That's separate from a more-general asynchronous
> programming library (though they would likely interact with one
> another).
>
> Javascript might be a useful analogue here, since it too was
> originally single-threaded with an event loop, and more concurrency
> features were added in later. Similarly to "modern" JS, we could have
> async/await constructs that (primarily) work on the main thread, plus
> something similar to web workers, which operate as mostly-independent
> threads that you can communicate with via messages.
Btw, even though I don't know exactly what this is about, ISTM that
whatever you're doing could use a name that isn't missing a letter, like
`future'. Why `futur'?
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-18 2:25 ` Eli Zaretskii
2023-04-18 5:01 ` Tomas Hlavaty
@ 2023-04-18 10:35 ` Konstantin Kharlamov
2023-04-18 15:31 ` [External] : " Drew Adams
1 sibling, 1 reply; 53+ messages in thread
From: Konstantin Kharlamov @ 2023-04-18 10:35 UTC (permalink / raw)
To: Eli Zaretskii, Tomas Hlavaty
Cc: monnier, jporterbugs, karthikchikmagalur, emacs-devel
On Tue, 2023-04-18 at 05:25 +0300, Eli Zaretskii wrote:
> > From: Tomas Hlavaty <tom@logand.com>
> > Cc: monnier@iro.umontreal.ca, jporterbugs@gmail.com,
> > karthikchikmagalur@gmail.com, emacs-devel@gnu.org
> > Date: Mon, 17 Apr 2023 22:51:22 +0200
> >
> > On Wed 12 Apr 2023 at 09:13, Eli Zaretskii <eliz@gnu.org> wrote:
> > > Async subprocesses are currently the only feature in Emacs that
> > > provides an opportunity for writing asynchronous code.
> >
> > Do you not consider, for example, using implementations of async/await
> > using promises and CPS rewriting "writing asynchronous code"?
> >
> > Do you not consider, for example, doing the same using callbacks as
> > "writing asynchronous code"?
>
> Not necessarily.
>
> > > > I do not know how useable threads in Emacs are at the moment,
> > > > but they are already there and the examples I tried worked well.
> > >
> > > If you think Lisp threads in Emacs allow asynchronous processing, you
> > > are mistaken: they don't. Only one such thread can be running at any
> > > given time.
> >
> > The examples I wrote worked fine with threads. The examples did not
> > require parallelism. I do not think that what you suggest disqualifies
> > threads for "writing asynchronous code".
> >
> > It would be great to have better thread implementation, but that does
> > not seem to have anything to do with "writing asynchronous code".
> >
> > Here is what I understand by synchronous code:
> >
> > (plus 1 2)
> > returns 3 immediately
> >
> > Here is what I understand by asynchronous code:
> >
> > (plus 1 2)
> > returns something immediately
> > and then some time later 3 appears in the *Messages* buffer, for
> > example
> >
> > How that is achieved is an implementation (possibly leaky) detail.
>
> In my book, asynchronous means parallel processing, not just delayed
> results.
The widely used definition is different though. A good summary is in this
StackOverflow answer¹:
> When you run something asynchronously it means it is non-blocking, you execute it
> without waiting for it to complete and carry on with other things. Parallelism
> means to run multiple things at the same time, in parallel. Parallelism works well
> when you can separate tasks into independent pieces of work.
If you want some Wikipedia links, they might sound a bit more confusing, but here's
what it says²:
> Asynchrony, in computer programming, refers to the occurrence of events independent
> of the main program flow and ways to deal with such events. These may be "outside"
> events such as the arrival of signals, or actions instigated by a program that take
> place concurrently with program execution, without the program blocking to wait for
> results.
And then "concurrency" article says³:
> The concept of concurrent computing is frequently confused with the related but
> distinct concept of parallel computing,[3][4] although both can be described as
> "multiple processes executing during the same period of time". In parallel
> computing, execution occurs at the same physical instant: for example, on separate
> processors of a multi-processor machine, with the goal of speeding up
> computations—parallel computing is impossible on a (one-core) single processor, as
> only one computation can occur at any instant (during any single clock cycle).[a]
> By contrast, concurrent computing consists of process lifetimes overlapping, but
> execution need not happen at the same instant.
1: https://stackoverflow.com/a/6133756/2388257
2: https://en.wikipedia.org/wiki/Asynchrony_(computer_programming)
3: https://en.wikipedia.org/wiki/Concurrent_computing
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-18 9:52 ` Po Lu
@ 2023-04-18 12:38 ` Lynn Winebarger
2023-04-18 13:14 ` Stefan Monnier
1 sibling, 0 replies; 53+ messages in thread
From: Lynn Winebarger @ 2023-04-18 12:38 UTC (permalink / raw)
To: Po Lu
Cc: Jim Porter, Stefan Monnier, Tomas Hlavaty, Karthik Chikmagalur,
Thomas Koch, emacs-devel
On Tue, Apr 18, 2023, 5:54 AM Po Lu <luangruo@yahoo.com> wrote:
> Btw, even though I don't know exactly what this is about, ISTM that
> whatever you're doing could use a name that isn't missing a letter, like
> `future'. Why `futur'?
>
That is Stefan M's flourish. I'm deferring to it as we're discussing his
WIP library.
Lynn
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-18 9:52 ` Po Lu
2023-04-18 12:38 ` Lynn Winebarger
@ 2023-04-18 13:14 ` Stefan Monnier
2023-04-19 0:28 ` Basil L. Contovounesios
2023-04-19 1:11 ` Po Lu
1 sibling, 2 replies; 53+ messages in thread
From: Stefan Monnier @ 2023-04-18 13:14 UTC (permalink / raw)
To: Po Lu
Cc: Jim Porter, Lynn Winebarger, Tomas Hlavaty, Karthik Chikmagalur,
Thomas Koch, emacs-devel@gnu.org
> Btw, even though I don't know exactly what this is about, ISTM that
> whatever you're doing could use a name that isn't missing a letter, like
> `future'. Why `futur'?
It's one letter shorter, and it's my mother's tongue translation of
English's "future" :-)
Stefan
^ permalink raw reply [flat|nested] 53+ messages in thread
* RE: [External] : Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-18 10:35 ` Konstantin Kharlamov
@ 2023-04-18 15:31 ` Drew Adams
0 siblings, 0 replies; 53+ messages in thread
From: Drew Adams @ 2023-04-18 15:31 UTC (permalink / raw)
To: Konstantin Kharlamov, Eli Zaretskii, Tomas Hlavaty
Cc: monnier@iro.umontreal.ca, jporterbugs@gmail.com,
karthikchikmagalur@gmail.com, emacs-devel@gnu.org
https://lamport.azurewebsites.net/pubs/sometime.pdf
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-18 13:14 ` Stefan Monnier
@ 2023-04-19 0:28 ` Basil L. Contovounesios
2023-04-19 2:59 ` Stefan Monnier
2023-04-19 1:11 ` Po Lu
1 sibling, 1 reply; 53+ messages in thread
From: Basil L. Contovounesios @ 2023-04-19 0:28 UTC (permalink / raw)
To: Stefan Monnier
Cc: Po Lu, Jim Porter, Lynn Winebarger, Tomas Hlavaty,
Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
Stefan Monnier [2023-04-18 09:14 -0400] wrote:
>> Btw, even though I don't know exactly what this is about, ISTM that
>> whatever you're doing could use a name that isn't missing a letter, like
>> `future'. Why `futur'?
>
> It's one letter shorter, and it's my mother's tongue translation of
> English's "future" :-)
Clearly this is all too complicated: it calls for futur-simp.el ;).
--
Basil
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-18 13:14 ` Stefan Monnier
2023-04-19 0:28 ` Basil L. Contovounesios
@ 2023-04-19 1:11 ` Po Lu
1 sibling, 0 replies; 53+ messages in thread
From: Po Lu @ 2023-04-19 1:11 UTC (permalink / raw)
To: Stefan Monnier
Cc: Jim Porter, Lynn Winebarger, Tomas Hlavaty, Karthik Chikmagalur,
Thomas Koch, emacs-devel@gnu.org
Stefan Monnier <monnier@iro.umontreal.ca> writes:
> It's one letter shorter, and it's my mother's tongue translation of
> English's "future" :-)
I think it is better to use actual English words, seeing as Emacs is
written in English.
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-19 0:28 ` Basil L. Contovounesios
@ 2023-04-19 2:59 ` Stefan Monnier
2023-04-19 13:25 ` [External] : " Drew Adams
0 siblings, 1 reply; 53+ messages in thread
From: Stefan Monnier @ 2023-04-19 2:59 UTC (permalink / raw)
To: Basil L. Contovounesios
Cc: Po Lu, Jim Porter, Lynn Winebarger, Tomas Hlavaty,
Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
>> It's one letter shorter, and it's my mother's tongue translation of
>> English's "future" :-)
> Clearly this is all too complicated: it calls for futur-simp.el ;).
There's always `zukunft.el` to avoid confusion,
Stefan
^ permalink raw reply [flat|nested] 53+ messages in thread
* RE: [External] : Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-19 2:59 ` Stefan Monnier
@ 2023-04-19 13:25 ` Drew Adams
2023-04-19 13:34 ` Robert Pluim
0 siblings, 1 reply; 53+ messages in thread
From: Drew Adams @ 2023-04-19 13:25 UTC (permalink / raw)
To: Stefan Monnier, Basil L. Contovounesios
Cc: Po Lu, Jim Porter, Lynn Winebarger, Tomas Hlavaty,
Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
> There's always `zukunft.el` to avoid confusion,
tfnukuz.el is maybe less confusing.
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [External] : Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-19 13:25 ` [External] : " Drew Adams
@ 2023-04-19 13:34 ` Robert Pluim
2023-04-19 14:19 ` Stefan Monnier
0 siblings, 1 reply; 53+ messages in thread
From: Robert Pluim @ 2023-04-19 13:34 UTC (permalink / raw)
To: Drew Adams
Cc: Stefan Monnier, Basil L. Contovounesios, Po Lu, Jim Porter,
Lynn Winebarger, Tomas Hlavaty, Karthik Chikmagalur, Thomas Koch,
emacs-devel@gnu.org
>>>>> On Wed, 19 Apr 2023 13:25:37 +0000, Drew Adams <drew.adams@oracle.com> said:
>> There's always `zukunft.el` to avoid confusion,
Drew> tfnukuz.el is maybe less confusing.
Or we could use a cousin of German and have "tsmokeot", which has a
certain "je ne sais quoi" to it 😺
Robert
--
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [External] : Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-19 13:34 ` Robert Pluim
@ 2023-04-19 14:19 ` Stefan Monnier
2023-04-21 1:33 ` Richard Stallman
0 siblings, 1 reply; 53+ messages in thread
From: Stefan Monnier @ 2023-04-19 14:19 UTC (permalink / raw)
To: Robert Pluim
Cc: Drew Adams, Basil L. Contovounesios, Po Lu, Jim Porter,
Lynn Winebarger, Tomas Hlavaty, Karthik Chikmagalur, Thomas Koch,
emacs-devel@gnu.org
> Or we could use a cousin of German and have "tsmokeot", which has a
> certain "je ne sais quoi" to it 😺
Sold!
Stefan
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [External] : Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-19 14:19 ` Stefan Monnier
@ 2023-04-21 1:33 ` Richard Stallman
0 siblings, 0 replies; 53+ messages in thread
From: Richard Stallman @ 2023-04-21 1:33 UTC (permalink / raw)
To: Stefan Monnier
Cc: rpluim, drew.adams, contovob, luangruo, jporterbugs, owinebar,
tom, karthikchikmagalur, thomas, emacs-devel
[[[ To any NSA and FBI agents reading my email: please consider ]]]
[[[ whether defending the US Constitution against all enemies, ]]]
[[[ foreign or domestic, requires you to follow Snowden's example. ]]]
> > Or we could use a cousin of German and have "tsmokeot", which has a
> > certain "je ne sais quoi" to it 😺
> Sold!
We could make it more Lispy by calling it G5289.
It would be even more unhelpful to users.
--
Dr Richard Stallman (https://stallman.org)
Chief GNUisance of the GNU Project (https://gnu.org)
Founder, Free Software Foundation (https://fsf.org)
Internet Hall-of-Famer (https://internethalloffame.org)
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: continuation passing in Emacs vs. JUST-THIS-ONE
2023-04-18 3:48 ` Stefan Monnier
@ 2023-04-22 2:48 ` Lynn Winebarger
0 siblings, 0 replies; 53+ messages in thread
From: Lynn Winebarger @ 2023-04-22 2:48 UTC (permalink / raw)
To: Stefan Monnier
Cc: Tomas Hlavaty, Jim Porter, Karthik Chikmagalur, Thomas Koch,
emacs-devel
On Mon, Apr 17, 2023, 11:48 PM Stefan Monnier <monnier@iro.umontreal.ca>
wrote:
> > I wrote that incorrectly - I meant that primitive operations would add
> > a continuation to the future and return a future for their result.
> > Basically, a computation would never block, it would just build
> > continuation trees (in the form of futures) and return to the
> > top-level. Although that assumes the system would be able to allocate
> > those futures without blocking for GC work.
>
> I think this would end up being extremely inefficient, since for every
> tiny operation you'd end up creating a future, linked to the
> previous computation (basically a sort of fine-grained dataflow graph).
> I'm sure in theory it can be made tolerably efficient, but it'd need
> a very different implementation strategy than what we have now.
> Furthermore I expect it would lead to surprising semantics when used
> with side-effecting operations.
>
Figuring those surprises out might be useful in determining what should be
saved.
> IOW you can probably create a usable system that uses this approach, but
> with a different language and a different implementation :-)
>
I'm not advocating for it. It would be like RABBIT with inverted
continuation construction, but less sophisticated - dynamically
constructing the closures from Plotkin's CPS transform without the
optimizations of RABBIT or subsequent compilers. And that proviso about
the GC not blocking seems like a significant challenge.
Instead, I view it as an exercise in trying to understand what a syntax for
working with futures as a semantics is supposed to represent - how to
condense the shape of the computation into the shape of the expression.
> `futur.el` instead forces the programmer to be somewhat explicit about
> the concurrency points, so they have *some* control over efficiency and
> interaction with side-effects.
>
> > At some point in this thread you stated you weren't sure what the
> > right semantics are in terms of the information to save, etc. I posed
> > this implicit semantics as a way to think about what "the right thing"
> > would be. Would all operations preserve the same (lisp) machine
> > state, or would it differ depending on the nature of the operator? [
> > is the kind of question it might be useful to work out in this thought
> > experiment ]
>
> I can't imagine it working sanely if the kind of state that's saved
> depends on the operation: the saved-state is basically private to the
> continuation, so it might make sense to do it differently depending on
> the continuation (tho even that would introduce a lot of complexity),
> but not depending on the operation.
>
Am I right in thinking that the only real question is around the buffer
state and buffer-local variables? Global variables are something users can
easily define locks for, and dynamically bound variables are already
thread-local, but buffer state is particularly subject to I/O events and
buffer-local variables don't really have strong thread semantics - in one
context a variable might be global and in another buffer-local, and the
programmer would have to figure out whether to use a buffer-local lock or a
global lock on the fly. Or maybe it would be fair to say that we would
expect most interesting asynchronous code to involve work on buffers, so
that use-case is worth special consideration?
> The coders will need to know what is saved and what isn't, so the
> more complex this rule is, the harder it is to learn to use this
> tool correctly.
>
I feel like I have read you refer to using purely functional data
structures for concurrency in emacs (or elsewhere), but I don't have any
concrete reference. So, I don't think my suggestion that buffers might be
extended or replaced with a functional data structure + merging of
asynchronous changes per
https://lists.gnu.org/archive/html/emacs-devel/2023-04/msg00587.html is
novel to you. For all I know, it reflects something you wrote in the past
as munged through my memory.
In any case, synchronizing through immutable data structures and merging is
probably a much easier and more performant path to concurrent buffers than
going through the existing code trying to impose fine-grained
synchronization. I don't know if some kind of buffer-rope based on the
existing buffer code is a feasible path, or if there would need to be a
wholesale reimplementation, but that kind of buffer would be good not only
for asynchronous/concurrent editing with non-preemptive threading, but also
for sharing between multiple in-process lisp machines with shared memory,
or even for multi-process "collaborative" editing.
If an emacs developer just wanted to get something going to see how it
might work, though, maybe there's a kluge involving overlays with embedded
temporary buffers, where the main buffer could be made read-only when it
was being accessed asynchronously, and "copy-on-write" used when an error
is thrown when one of the asynchronous functions attempts to write to the
buffer. Then the async function would merge its changes on yield or return
or something - this is one place you could provide explicit control over
synchronization.
> > The way you've defined future-let, the variable being bound is a
> > future because you are constructing it as one, but it is still a
> > normal variable.
> >
> > What if, instead, we define a "futur-abstraction" (lambda/futur (v)
> > body ...) in which v is treated as a future by default, and a
> > future-conditional form (if-available v ready-expr not-ready-expr)
> > with the obvious meaning. If v appears as the argument to a
> > lambda/future function object it will be passed as is. Otherwise, the
> > reference to v would be rewritten as (futur-wait v). Some syntactic
> > sugar (futur-escape v) => (if-available v v) could be used to pass the
> > future to arbitrary functions.
>
> Seems complex, and I'm not sure it would buy you anything in practice.
>
It might not be much - I'm thinking it is one way to find a middle-ground
between full-blown CPS-conversion and "stackless" coroutines. Plus, a
future is essentially an operator on continuations, so it's not too wacky
to define a class of operators on futures.
Given the correspondence of futures to variables bound by continuations,
maybe Felleisen's representation of "holes" in evaluation contexts would be
visually helpful. Square brackets aren't convenient, but angular or curly
brackets could be used to connote when a variable is being referenced as a
future rather than the value returned by the future. I would consider that
to be a concise form of explicitness.
So, maybe "future-bind" could be replaced by "{}<-", and the "<-" in
"future-let*" by {}<-. Somehow I suspect ({x} <- ...) being used to bind
"x" would be over the lispy line, but that could be interesting.
Either way, in the body, "{x}" would refer to the future and "x" to the
value yielded by the future. Or maybe it should be the other way around.
Either way, the point of the syntax is to visually represent the flow of
the future to multiple (syntactic) contexts. I'm not sure how else that
inversion of control can be concisely represented when it *doesn't* happen
in a linear fashion.
> > It would be easier if elisp threads were orthogonal to system threads,
> > so that any elisp thread could be run on any available system thread.
>
> Currently, only one thread can run ELisp at a time. Whether that's
> implemented using several system threads or not is largely an
> internal detail.
>
> > Multiprocessing could be done by creating multiple lisp VMs in a
> > process (i.e. lisp VM orthogonal to a physical core),
>
> Yes, "could".
>
I would go so far as to say it's the safest approach, especially if
preserving the semantics of existing programs is a goal. I don't think
attempting to make the lisp multiprocessing semantics slavishly replicate
the host multiprocessing semantics is a good idea at all. At least with
portable dumps/loads, there is a path to creating independent VMs (where
the symbol table is local to the VM, and probably "headless" to start) in
the same process space.
> It's not too hard to come up with a design that makes sense, indeed, the
> problem is to actually do the work of bringing the current code to
> that design.
>
True enough. There are shorter paths than (whether naive or
sophisticated) attempts to impose fine-grained locking on the emacs
run-time to replicate the native facilities for parallelism. I am under
the impression that the latter is the prevailing view of what it would mean
to bring efficient parallelism to emacs, which is unfortunate if the
impression is correct.
Lynn
^ permalink raw reply [flat|nested] 53+ messages in thread
Thread overview: 53+ messages
2023-03-11 12:53 continuation passing in Emacs vs. JUST-THIS-ONE Thomas Koch
2023-03-12 1:45 ` Jim Porter
2023-03-12 6:33 ` tomas
2023-03-14 6:39 ` Karthik Chikmagalur
2023-03-14 18:58 ` Jim Porter
2023-03-15 17:48 ` Stefan Monnier
2023-03-17 0:17 ` Tomas Hlavaty
2023-03-17 3:08 ` Stefan Monnier
2023-03-17 5:37 ` Jim Porter
2023-03-25 18:42 ` Tomas Hlavaty
2023-03-26 19:35 ` Tomas Hlavaty
2023-03-28 7:23 ` Tomas Hlavaty
2023-03-29 19:00 ` Stefan Monnier
2023-04-03 0:39 ` Tomas Hlavaty
2023-04-03 1:44 ` Emanuel Berg
2023-04-03 2:09 ` Stefan Monnier
2023-04-03 4:03 ` Po Lu
2023-04-03 4:51 ` Jim Porter
2023-04-10 21:47 ` Tomas Hlavaty
2023-04-11 2:53 ` Stefan Monnier
2023-04-11 19:59 ` Tomas Hlavaty
2023-04-11 20:22 ` Stefan Monnier
2023-04-11 23:07 ` Tomas Hlavaty
2023-04-12 6:13 ` Eli Zaretskii
2023-04-17 20:51 ` Tomas Hlavaty
2023-04-18 2:25 ` Eli Zaretskii
2023-04-18 5:01 ` Tomas Hlavaty
2023-04-18 10:35 ` Konstantin Kharlamov
2023-04-18 15:31 ` [External] : " Drew Adams
2023-03-29 18:47 ` Stefan Monnier
2023-04-17 3:46 ` Lynn Winebarger
2023-04-17 19:50 ` Stefan Monnier
2023-04-18 2:56 ` Lynn Winebarger
2023-04-18 3:48 ` Stefan Monnier
2023-04-22 2:48 ` Lynn Winebarger
2023-04-18 6:19 ` Jim Porter
2023-04-18 9:52 ` Po Lu
2023-04-18 12:38 ` Lynn Winebarger
2023-04-18 13:14 ` Stefan Monnier
2023-04-19 0:28 ` Basil L. Contovounesios
2023-04-19 2:59 ` Stefan Monnier
2023-04-19 13:25 ` [External] : " Drew Adams
2023-04-19 13:34 ` Robert Pluim
2023-04-19 14:19 ` Stefan Monnier
2023-04-21 1:33 ` Richard Stallman
2023-04-19 1:11 ` Po Lu
2023-04-17 21:00 ` Tomas Hlavaty
2023-03-14 3:58 ` Richard Stallman
2023-03-14 6:28 ` Jim Porter
2023-03-16 21:35 ` miha
2023-03-16 22:14 ` Jim Porter
2023-03-25 21:05 ` Tomas Hlavaty
2023-03-26 23:50 ` Tomas Hlavaty