From: Lynn Winebarger
Newsgroups: gmane.emacs.devel
Subject: Re: Concurrency via isolated process/thread
Date: Wed, 5 Jul 2023 09:02:40 -0400
To: Ihor Radchenko
Cc: Eli Zaretskii, emacs-devel
On Wed, Jul 5, 2023, 8:41 AM Ihor Radchenko <yantar92@posteo.net> wrote:

> Eli Zaretskii <eliz@gnu.org> writes:
>
> >> It may be dumb (I have no experience with processes in C), but I have
> >> something like the following in mind:
> >>
> >> 1. Main Emacs process has a normal Elisp thread that watches for async
> >>    Emacs process requests.
> >> 2. Once a request arrives, asking to get/modify main Emacs process data,
> >>    the request is fulfilled synchronously and signaled back by writing
> >>    to memory accessible by the async process.
> >
> > That solves part of the problem, maybe (assuming we'd want to allow
> > shared memory in Emacs).
>
> My idea is basically similar to the current schema of interacting
> between process input/output and Emacs. But using data stream rather
> than text stream.
>
> Shared memory is one way. Or it may be something like sockets.
> It's just that shared memory will be faster, AFAIU.
>
> > ... The other parts -- how to implement async
> > process requests so that they don't suffer from the same problem, and
> > how to reference objects outside of the shared memory -- are still
> > there.
>
> I imagine that there will be a special "remote Lisp object" type.
>
> 1. Imagine that child Emacs process asks for a value of variable `foo',
>    which is a list (1 2 3 4).
> 2. The child process requests parent Emacs to put the variable value
>    into shared memory.
> 3. The parent process creates a new variable storing a link to (1 2 3
>    4), to prevent (1 . (2 3 4)) cons cell from GC in the parent process
>    - `foo#'. Then, it informs the child process about this variable.
> 4. The child process creates a new remote Lisp object #<remote cons foo#>.
>
> 5. Now consider that child process tries (setcar #<remote cons foo#> value).
>    The `setcar' and other primitives will be modified to query parent
>    process to perform the actual modification to
>    (#<remote value> . (2 3 4))
>
> 6. Before exiting the child thread, or every time we need to copy remote
>    object, #<remote ...> will be replaced by an actual newly created
>    traditional object.
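If I understand the protocol, the child-side half would look something
like the sketch below. To be clear, every name in it is invented
(`remote-object', `remote--request', and so on), and the dispatch would
really have to live in the C implementations of `setcar' and friends to
be transparent to Lisp code; this just spells out the control flow of
steps 4-6:

(require 'cl-lib)

;; Hypothetical sketch only -- none of these names exist in Emacs.
(cl-defstruct remote-object
  ;; Name of the variable in the parent process that anchors the real
  ;; object against GC -- the `foo#' of step 3.
  anchor)

(defun remote--request (op anchor &rest args)
  "Send OP with ARGS to the parent for the object anchored at ANCHOR.
Placeholder: a real version would marshal the request over shared
memory or a socket and block until the parent replies."
  (error "remote--request not implemented: %S" (list op anchor args)))

(defun remote-setcar (cell value)
  "Step 5: a `setcar' that forwards mutation of a remote CELL to the parent."
  (if (remote-object-p cell)
      (remote--request 'setcar (remote-object-anchor cell) value)
    (setcar cell value)))

(defun remote-materialize (obj)
  "Step 6: replace remote OBJ with a freshly built local copy."
  (if (remote-object-p obj)
      (remote--request 'copy (remote-object-anchor obj))
    obj))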
The best idea I've had for a general solution would be to make
"concurrent" versions of the fundamental lisp objects that act like
immutable git repositories, with the traditional versions of the
objects acting as working copies that only record changes. Then each
checked-out copy could push changes back, and if the merge fails an
exception would be thrown in the thread of that working copy, which
the elisp code could decide how to handle. That would work for
inter-process shared memory or plain in-process memory between
threads. Then locks are only needed for updating the main reference
to the concurrent object.
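A rough sketch of the semantics I have in mind, with all names invented
and the "merge" reduced to a bare version check -- a real version would
replay the recorded changes to attempt an actual merge, and the
fundamental types would need core support to behave this way:

(require 'cl-lib)

;; Hypothetical sketch. A `cobj' plays the role of the immutable
;; "repository"; a `cobj-wc' is a working copy checked out from it.
(cl-defstruct cobj
  value     ; committed value, treated as immutable
  version   ; commit counter, bumped on every successful push
  lock)     ; the only lock in the scheme, guarding the push step

(cl-defstruct cobj-wc
  source    ; the cobj this was checked out from
  base      ; version of SOURCE at checkout time
  value)    ; local, freely mutable copy

(define-error 'cobj-merge-failed "Merge into concurrent object failed")

(defun cobj-make (value)
  (make-cobj :value value :version 0 :lock (make-mutex "cobj")))

(defun cobj-checkout (obj)
  "Check out a working copy of OBJ; mutations stay local until pushed."
  (make-cobj-wc :source obj
                :base (cobj-version obj)
                :value (copy-tree (cobj-value obj))))

(defun cobj-push (wc)
  "Push WC back to its source object, or signal `cobj-merge-failed'
here, in the pushing thread, if another thread committed first."
  (let ((obj (cobj-wc-source wc)))
    (with-mutex (cobj-lock obj)
      (if (= (cobj-version obj) (cobj-wc-base wc))
          (setf (cobj-value obj) (cobj-wc-value wc)
                (cobj-version obj) (1+ (cobj-version obj)))
        (signal 'cobj-merge-failed (list obj wc))))))

The caller would wrap `cobj-push' in a `condition-case' and decide
whether to re-checkout and retry or give up, and only the push step
ever takes the lock -- which is the point.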
Lynn