From: Lynn Winebarger
Newsgroups: gmane.emacs.devel
Subject: Re: continuation passing in Emacs vs. JUST-THIS-ONE
Date: Mon, 17 Apr 2023 22:56:13 -0400
To: Stefan Monnier
Cc: Tomas Hlavaty, Jim Porter, Karthik Chikmagalur, Thomas Koch, emacs-devel@gnu.org
On Mon, Apr 17, 2023 at 3:50 PM Stefan Monnier wrote:
> > This whole thread seems to echo the difference between "stackless" and
> > "stackful" coroutines discussed in
> > https://nullprogram.com/blog/2019/03/10/ by the author of emacs-aio,
> > with generator-style rewriting corresponding to stackless and threads
> > to "stackful".  So when you say "save as much as threads do", I'm not
> > clear if you're talking about rewriting code to essentially create a
> > heap allocated version of the same information that a thread has in
> > the form of its stack, or something more limited like some particular
> > set of special bindings.
>
> Indeed to "save as much as threads do" we'd have to essentially create
> a heap allocated version of the same info.
> [ I don't think that's what we want.  ]

It sounds like you would end up with a user-implemented call/cc or
"spaghetti stack" construct, so I would agree.

> > It seems to me what one would really like is for primitives that might
> > block to just return a future that's treated like any other value,
> > except that "futurep" would return true and primitive operations would
> > implicitly wait on the futures in their arguments.
> I think experience shows that doing that implicitly everywhere is not
> a good idea, because it makes it all too easy to accidentally block
> waiting for a future.

I wrote that incorrectly - I meant that primitive operations would add
a continuation to the future and return a future for their result.
Basically, a computation would never block; it would just build
continuation trees (in the form of futures) and return to the
top level.  Although that assumes the system would be able to allocate
those futures without blocking for GC work.

> Instead, you want to replace this "implicit" by a mechanism that is "as
> lightweight as possible" (so it's "almost implicit") and that makes it
> easy for the programmer to control whether the code should rather block
> for the future's result (e.g. `futur-wait`) or "delay itself" until
> after the future's completion (e.g. `future-let*`).

At some point in this thread you stated you weren't sure what the
right semantics are in terms of the information to save, etc.  I posed
these implicit semantics as a way to think about what "the right
thing" would be.  Would all operations preserve the same (lisp)
machine state, or would it differ depending on the nature of the
operator?  [ That is the kind of question it might be useful to work
out in this thought experiment.  ]

The way you've defined futur-let, the variable being bound is a future
because you are constructing it as one, but it is still a normal
variable.  What if, instead, we defined a "futur-abstraction"
(lambda/futur (v) body ...) in which v is treated as a future by
default, and a future-conditional form (if-available v ready-expr
not-ready-expr) with the obvious meaning?  If v appears as the
argument to a lambda/futur function object, it will be passed as is.
Otherwise, the reference to v would be rewritten as (futur-wait v).
Some syntactic sugar, (futur-escape v) => (if-available v v), could be
used to pass the future to arbitrary functions.
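As a rough sketch of the shape those two forms could take (all names
here are hypothetical, assuming `futur-done-p' and `futur-wait'
primitives with the obvious meanings; this is not part of futur.el),
simple variable references could be handled with symbol macros:

```elisp
;; -*- lexical-binding: t; -*-
(require 'cl-lib)

;; Assumed primitives, not real functions: `futur-wait' blocks for a
;; future's value; `futur-done-p' tests whether it has completed.

(defmacro if-available (v ready-expr not-ready-expr)
  "Evaluate READY-EXPR if future V is complete, else NOT-READY-EXPR.
Within READY-EXPR, references to V yield the settled value."
  (let ((f (gensym "futur-")))
    `(let ((,f ,v))
       (if (futur-done-p ,f)
           ;; Already done, so this `futur-wait' cannot block.
           (cl-symbol-macrolet ((,v (futur-wait ,f)))
             ,ready-expr)
         ,not-ready-expr))))

(defmacro lambda/futur (args &rest body)
  "Like `lambda', but each argument is a future, and plain references
to it in BODY implicitly expand to (futur-wait ARG)."
  (declare (indent 1))
  ;; Rebind each argument to a gensym so the symbol macro's expansion
  ;; doesn't re-expand itself.
  (let ((renames (mapcar (lambda (v) (cons v (gensym (symbol-name v))))
                         args)))
    `(lambda ,(mapcar #'cdr renames)
       (cl-symbol-macrolet
           ,(mapcar (lambda (r) `(,(car r) (futur-wait ,(cdr r))))
                    renames)
         ,@body))))
```

Of course, symbol macros only cover plain variable references; making
(futur-escape v) pass the raw future through, or special-casing v in
the argument position of another lambda/futur object, would need a
real code walker rather than this trick.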
Then futur-let and futur-let* could be defined with the standard
expansion, with lambda replaced by lambda/futur.  Otherwise, I'm not
sure what the syntax really buys you.

> > I think that would provide the asynchronous but not concurrent
> > semantics you're talking about.
>
> FWIW, I'm in favor of both more concurrency and more parallelism.
> My earlier remark was simply pointing out that the design of `futur.el`
> is not trying to make Emacs faster.

It would be easier if elisp threads were orthogonal to system threads,
so that any elisp thread could be run on any available system thread.
Multiprocessing could be done by creating multiple lisp VMs in a
process (i.e. a lisp VM orthogonal to a physical core), each with its
own heap and globals in addition to some shared heap with well-defined
synchronization.  The "global interpreter lock" would become a "lisp
machine lock", with (non-preemptive, one-shot continuation type) elisp
threads being local to the machine.  That seems to me the simplest way
to coherently extend the lisp semantics to multiprocessing.  The
display would presumably have to exist in the shared space for
anything interesting to happen in terms of editing, but buffers could
be local to a particular lisp machine.

I thought I saw segmented stack allocation implemented in master last
year (by Mattias Engdegård?), but it doesn't appear to be there any
longer.  If that infrastructure were there, then it would seem
user-space cooperative threading via one-shot continuations
(+ trampolining by kernel threads + user-space scheduling of
user-space threads) would be viable.

Lynn