From: Lynn Winebarger <owinebar@gmail.com>
To: Stefan Monnier <monnier@iro.umontreal.ca>
Cc: Tomas Hlavaty <tom@logand.com>,
	Jim Porter <jporterbugs@gmail.com>,
	 Karthik Chikmagalur <karthikchikmagalur@gmail.com>,
	Thomas Koch <thomas@koch.ro>,
	"emacs-devel@gnu.org" <emacs-devel@gnu.org>
Subject: Re: continuation passing in Emacs vs. JUST-THIS-ONE
Date: Mon, 17 Apr 2023 22:56:13 -0400	[thread overview]
Message-ID: <CAM=F=bBOaOVamSkFb5JSdpDUZnUkVVVTHQ=bd_HRt5GioRTVpQ@mail.gmail.com> (raw)
In-Reply-To: <jwvcz42guyv.fsf-monnier+emacs@gnu.org>

On Mon, Apr 17, 2023 at 3:50 PM Stefan Monnier <monnier@iro.umontreal.ca> wrote:
> > This whole thread seems to echo the difference between "stackless" and
> > "stackful" coroutines discussed in
> > https://nullprogram.com/blog/2019/03/10/ by the author of emacs-aio,
> > with generator-style rewriting corresponding to stackless and threads
> > to "stackful".  So when you say "save as much as threads do", I'm not
> > clear if you're talking about rewriting code to essentially create a
> > heap allocated version of the same information that a thread has in
> > the form of its stack, or something more limited like some particular
> > set of special bindings.
>
> Indeed to "save as much as threads do" we'd have to essentially create
> a heap allocated version of the same info.
> [ I don't think that's what we want.  ]

It sounds like you would end up with a user-implemented call/cc or
"spaghetti stack" construct, so I would agree.

> > It seems to me what one would really like is for primitives that might
> > block to just return a future that's treated like any other value,
> > except that "futurep" would return true and primitive operations would
> > implicitly wait on the futures in their arguments.
>
> I think experience shows that doing that implicitly everywhere is not
> a good idea, because it makes it all too easy to accidentally block
> waiting for a future.

I wrote that incorrectly: I meant that primitive operations would add
a continuation to the future and return a future for their result.
Basically, a computation would never block; it would just build
continuation trees (in the form of futures) and return to the top
level.  That assumes, though, that the system could allocate those
futures without blocking for GC work.
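
To make that concrete, here is a rough sketch of the kind of primitive
I have in mind.  The names (make-futur, futur-attach, futur-resolve)
are made up for illustration and are not futur.el's actual
implementation; the point is only that "waiting" never blocks, it just
records another continuation on the future:

    (require 'cl-lib)

    ;; A future is a struct holding either a value or a list of pending
    ;; continuations.
    (cl-defstruct (futur (:constructor make-futur))
      (done nil) (value nil) (continuations nil))

    (defun futur-attach (f k)
      "Arrange to call K with F's value; return a future for K's result.
    If F is already resolved, K runs now; otherwise it is queued on F."
      (let ((result (make-futur)))
        (if (futur-done f)
            (futur-resolve result (funcall k (futur-value f)))
          (push (lambda (v) (futur-resolve result (funcall k v)))
                (futur-continuations f)))
        result))

    (defun futur-resolve (f v)
      "Record V as F's value and run the continuations queued on F."
      (setf (futur-value f) v
            (futur-done f) t)
      (dolist (k (nreverse (futur-continuations f)))
        (funcall k v))
      (setf (futur-continuations f) nil)
      f)

A primitive that might block would then return a fresh future
immediately and call futur-resolve from whatever event eventually
completes it (process output, a timer, etc.), so control always
returns to the top level.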

> Instead, you want to replace this "implicit" by a mechanism that is "as
> lightweight as possible" (so it's "almost implicit") and that makes it
> easy for the programmer to control whether the code should rather block
> for the future's result (e.g. `futur-wait`) or "delay itself" until
> after the future's completion (e.g. `future-let*`).

At some point in this thread you said you weren't sure what the right
semantics would be in terms of the information to save, etc.  I posed
these implicit semantics as a way to think about what "the right
thing" would be.  Would all operations preserve the same (lisp)
machine state, or would it differ depending on the nature of the
operator?  [ That is the kind of question it might be useful to work
out in this thought experiment. ]

The way you've defined futur-let, the variable being bound is a
future because you construct it as one, but it is otherwise still an
ordinary variable.

What if, instead, we defined a "futur abstraction" (lambda/futur (v)
body ...) in which v is treated as a future by default, and a
futur-conditional form (if-available v ready-expr not-ready-expr)
with the obvious meaning?  If v appeared as the argument to a
lambda/futur function object, it would be passed as is.  Otherwise, a
reference to v would be rewritten as (futur-wait v).  Some syntactic
sugar, (futur-escape v) => (if-available v v), could be used to pass
the future to arbitrary functions.  Then futur-let and futur-let*
could be defined with the standard expansion, with lambda replaced by
lambda/futur.
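
A minimal sketch of what I mean, assuming futur-wait is the blocking
wait from futur.el and futur-then is a (hypothetical) operation that
chains a function onto a future and invokes it with the still-pending
future itself.  It leaves out if-available and futur-escape, which
would also need access to the hidden binding:

    (require 'cl-lib)

    ;; LAMBDA/FUTUR receives the raw future under a hidden symbol and
    ;; uses CL-SYMBOL-MACROLET so that a bare reference to V in BODY
    ;; expands to (futur-wait <raw>).
    (defmacro lambda/futur (arglist &rest body)
      (declare (indent 1))
      (let* ((v (car arglist))
             (raw (gensym (symbol-name v))))
        `(lambda (,raw)
           (cl-symbol-macrolet ((,v (futur-wait ,raw)))
             ,@body))))

    ;; FUTUR-LET* as the standard LET* expansion, with LAMBDA replaced
    ;; by LAMBDA/FUTUR and function application by FUTUR-THEN.
    (defmacro futur-let* (bindings &rest body)
      (declare (indent 1))
      (if (null bindings)
          `(progn ,@body)
        (pcase-let ((`(,var ,expr) (car bindings)))
          `(futur-then ,expr
                       (lambda/futur (,var)
                         (futur-let* ,(cdr bindings) ,@body))))))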

Otherwise, I'm not sure what the syntax really buys you.

> > I think that would provide the asynchronous but not concurrent
> > semantics you're talking about.
>
> FWIW, I'm in favor of both more concurrency and more parallelism.
> My earlier remark was simply pointing out that the design of `futur.el`
> is not trying to make Emacs faster.

It would be easier if elisp threads were orthogonal to system
threads, so that any elisp thread could run on any available system
thread.  Multiprocessing could be done by creating multiple lisp VMs
in a process (i.e., lisp VMs not tied to particular physical cores),
each with its own heap and globals in addition to some shared heap
with well-defined synchronization.  The "global interpreter lock"
would become a "lisp machine lock", with (non-preemptive, one-shot
continuation style) elisp threads being local to the machine.  That
seems to me the simplest way to coherently extend the lisp semantics
to multiprocessing.  The display would presumably have to live in the
shared space for anything interesting to happen in terms of editing,
but buffers could be local to a particular lisp machine.
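
Purely as an illustration of the programming model (none of these
functions exist; make-lisp-machine, lisp-machine-eval, and shared-put
are made-up names), the surface might look something like:

    ;; Hypothetical only.  Each machine has its own heap, globals, and
    ;; cooperative elisp threads; data crosses between machines solely
    ;; through an explicitly shared, synchronized region.
    (let ((lm (make-lisp-machine)))
      ;; Evaluated under LM's own "lisp machine lock", concurrently
      ;; with the caller's machine.
      (lisp-machine-eval lm
                         '(with-temp-buffer
                            (insert-file-contents "some-big-file")
                            (shared-put 'line-count
                                        (count-lines (point-min)
                                                     (point-max))))))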

I thought I saw segmented stack allocation implemented on master last
year (by Mattias Engdegård?), but it doesn't appear to be there any
longer.  If that infrastructure were in place, user-space cooperative
threading via one-shot continuations (plus trampolining by kernel
threads and user-space scheduling of user-space threads) would seem
viable.
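
The scheduling shape I mean can already be mimicked with
generator.el's stackless rewriting (so it is not what segmented
stacks plus one-shot continuations would buy us, but it shows the
trampoline):

    (require 'cl-lib)
    (require 'generator)

    ;; Each "task" is an iterator that yields at its suspension points;
    ;; the scheduler trampolines over the runnable tasks round-robin.
    (iter-defun count-task (name n)
      (dotimes (i n)
        (message "%s: %d" name i)
        (iter-yield nil)))               ; cooperative yield point

    (defun run-round-robin (&rest tasks)
      "Trampoline TASKS (iterators) until all of them have finished."
      (while tasks
        (setq tasks
              (cl-loop for task in tasks
                       unless (condition-case nil
                                  (progn (iter-next task) nil)
                                (iter-end-of-sequence t))
                       collect task))))

    ;; (run-round-robin (count-task "a" 3) (count-task "b" 2))

With segmented stacks and one-shot continuations, the yield points
would not require rewriting the task into a generator.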

Lynn




Thread overview: 53+ messages
2023-03-11 12:53 continuation passing in Emacs vs. JUST-THIS-ONE Thomas Koch
2023-03-12  1:45 ` Jim Porter
2023-03-12  6:33   ` tomas
2023-03-14  6:39   ` Karthik Chikmagalur
2023-03-14 18:58     ` Jim Porter
2023-03-15 17:48       ` Stefan Monnier
2023-03-17  0:17         ` Tomas Hlavaty
2023-03-17  3:08           ` Stefan Monnier
2023-03-17  5:37             ` Jim Porter
2023-03-25 18:42             ` Tomas Hlavaty
2023-03-26 19:35               ` Tomas Hlavaty
2023-03-28  7:23                 ` Tomas Hlavaty
2023-03-29 19:00                 ` Stefan Monnier
2023-04-03  0:39                   ` Tomas Hlavaty
2023-04-03  1:44                     ` Emanuel Berg
2023-04-03  2:09                     ` Stefan Monnier
2023-04-03  4:03                       ` Po Lu
2023-04-03  4:51                         ` Jim Porter
2023-04-10 21:47                       ` Tomas Hlavaty
2023-04-11  2:53                         ` Stefan Monnier
2023-04-11 19:59                           ` Tomas Hlavaty
2023-04-11 20:22                             ` Stefan Monnier
2023-04-11 23:07                               ` Tomas Hlavaty
2023-04-12  6:13                                 ` Eli Zaretskii
2023-04-17 20:51                                   ` Tomas Hlavaty
2023-04-18  2:25                                     ` Eli Zaretskii
2023-04-18  5:01                                       ` Tomas Hlavaty
2023-04-18 10:35                                       ` Konstantin Kharlamov
2023-04-18 15:31                                         ` [External] : " Drew Adams
2023-03-29 18:47               ` Stefan Monnier
2023-04-17  3:46                 ` Lynn Winebarger
2023-04-17 19:50                   ` Stefan Monnier
2023-04-18  2:56                     ` Lynn Winebarger [this message]
2023-04-18  3:48                       ` Stefan Monnier
2023-04-22  2:48                         ` Lynn Winebarger
2023-04-18  6:19                     ` Jim Porter
2023-04-18  9:52                       ` Po Lu
2023-04-18 12:38                         ` Lynn Winebarger
2023-04-18 13:14                         ` Stefan Monnier
2023-04-19  0:28                           ` Basil L. Contovounesios
2023-04-19  2:59                             ` Stefan Monnier
2023-04-19 13:25                               ` [External] : " Drew Adams
2023-04-19 13:34                                 ` Robert Pluim
2023-04-19 14:19                                   ` Stefan Monnier
2023-04-21  1:33                                     ` Richard Stallman
2023-04-19  1:11                           ` Po Lu
2023-04-17 21:00                   ` Tomas Hlavaty
2023-03-14  3:58 ` Richard Stallman
2023-03-14  6:28   ` Jim Porter
2023-03-16 21:35 ` miha
2023-03-16 22:14   ` Jim Porter
2023-03-25 21:05 ` Tomas Hlavaty
2023-03-26 23:50 ` Tomas Hlavaty
