* Concurrency via isolated process/thread
@ 2023-07-04 16:58 Ihor Radchenko
  2023-07-04 17:12 ` Eli Zaretskii
  2023-07-05  0:33 ` Po Lu
  0 siblings, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-04 16:58 UTC (permalink / raw)
  To: emacs-devel


[ Discussion moved from
https://debbugs.gnu.org/cgi/bugreport.cgi?bug=64423 ]

> > I feel that I am repeating an already proposed idea, but what about
> > concurrent isolated process that interacts with the main process?
> 
> (This stuff should not be discussed on the bug tracker, but on
> emacs-devel.)

Ok.
 
> If you mean what the async package does, then yes, this is a workable
> idea.  But it doesn't need to change anything in Emacs, and it has
> some downsides, like the difficulties in sharing state with the other
> process.

I had something similar in mind, but slightly different.

The emacs-async package explicitly transfers a pre-defined set of variables
to the async Emacs process and cannot transfer variables whose values are
not printable (like markers or buffers).
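
For reference, the basic async.el pattern I mean looks roughly like this
(a minimal sketch; variable injection and error handling omitted):

(require 'async)
;; START-FUNC is printed, re-read and evaluated in a separate batch
;; Emacs, so only printable data can cross the process boundary.
(async-start
 ;; Runs in the child Emacs; must be self-contained.
 (lambda () (expt 2 100))
 ;; Runs in the parent with the printed-and-read-back result.
 (lambda (result)
   (message "child returned %S" result)))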

But may it be possible to

1. Limit the async process memory to a small lexical subset of symbols
   (within a function).

2. Every time the async process needs to read/write a symbol slot
   outside its lexical scope, query the main Emacs process.

More concretely, is it possible to copy internal Elisp object
representations between Emacs processes and arrange mutability to query
the right Emacs process that "owns" the object?

The inter-process communication does not have to be asynchronous, but
may work similarly to the existing thread implementation.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>




* Re: Concurrency via isolated process/thread
  2023-07-04 16:58 Concurrency via isolated process/thread Ihor Radchenko
@ 2023-07-04 17:12 ` Eli Zaretskii
  2023-07-04 17:29   ` Ihor Radchenko
  2023-07-05  0:33 ` Po Lu
  1 sibling, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-04 17:12 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Date: Tue, 04 Jul 2023 16:58:49 +0000
> 
> emacs-async package explicitly transfers pre-defined set of variables to
> the async Emacs process and cannot transfer non-printable variables
> (like markers or buffers).
> 
> But may it be possible to
> 
> 1. Limit the async process memory to a small lexical subset of symbols
>    (within a function).
> 
> 2. Every time the async process needs to read/write a symbol slot
>    outside its lexical scope, query the main Emacs process.

If it queries the main process, it will have to wait when the main
process is busy.  So this is not really asynchronous.

> More concretely, is it possible to copy internal Elisp object
> representations between Emacs processes and arrange mutability to query
> the right Emacs process that "owns" the object?

This is software: anything's possible ;-).  But Someone™ needs to
write the code, like marshalling and unmarshalling of such objects
between two processes.  (We do something like that when we write then
load the pdumper file.)  There's more than one way of skinning this
particular cat.

> The inter-process communication does not have to be asynchronous, but
> may work similar to the existing thread implementation.

I wouldn't recommend designing anything by the example of Lisp
threads.  'Nough said.




* Re: Concurrency via isolated process/thread
  2023-07-04 17:12 ` Eli Zaretskii
@ 2023-07-04 17:29   ` Ihor Radchenko
  2023-07-04 17:35     ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-04 17:29 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> 1. Limit the async process memory to a small lexical subset of symbols
>>    (within a function).
>> 
>> 2. Every time the async process needs to read/write a symbol slot
>>    outside its lexical scope, query the main Emacs process.
>
> If it queries the main process, it will have to wait when the main
> process is busy.  So this is not really asynchronous.

Sure. But not every asynchronous process would need to query the main
process frequently. As a dumb example:

(defun test/F (n)
  "Return the factorial of N."
  (declare (pure t))
  (let ((mult 1))
    (dotimes (i n)
      ;; i runs from 0 to n-1, so multiply by i+1.
      (setq mult (* mult (1+ i))))
    mult))

This function, if called asynchronously, will need to query input (N)
and later return the output. The main loop will not require any
interactions and will take most of the CPU time.

If the async process has a separate garbage collector, it will free the
main Emacs process from allocating all the memory for the `i' and `mult'
values at each loop iteration.
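
To make the intent concrete, a purely hypothetical sketch of how calling
it might look (`isolated-start' is an invented name; nothing like it
exists today):

;; Hypothetical API sketch.  The only IPC would be copying N in at the
;; start and copying the result out at the end.
(isolated-start
 #'test/F               ; runs in the isolated child, with its own GC
 1000                   ; input, copied to the child once
 (lambda (result)       ; called in the main Emacs with the result
   (message "1000! has %d digits"
            (length (number-to-string result)))))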

>> More concretely, is it possible to copy internal Elisp object
>> representations between Emacs processes and arrange mutability to query
>> the right Emacs process that "owns" the object?
>
> This is software: anything's possible ;-).  But Someone™ needs to
> write the code, like marshalling and unmarshalling of such objects
> between two processes.  (We do something like that when we write then
> load the pdumper file.)  There's more than one way of skinning this
> particular cat.

As the first step, I wanted to hear if there is any blocker that
prevents memcpy between processes without going through print/read.

>> The inter-process communication does not have to be asynchronous, but
>> may work similar to the existing thread implementation.
>
> I wouldn't recommend designing anything by the example of Lisp
> threads.  'Nough said.

IMHO, the main problem with threads is that they cannot be interrupted,
or that they fire too frequently.

In the proposed idea, this will not be such a big deal - inter-process
communication is a known bottleneck for any asynchronous code and should
be avoided anyway.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>




* Re: Concurrency via isolated process/thread
  2023-07-04 17:29   ` Ihor Radchenko
@ 2023-07-04 17:35     ` Eli Zaretskii
  2023-07-04 17:52       ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-04 17:35 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: emacs-devel@gnu.org
> Date: Tue, 04 Jul 2023 17:29:07 +0000
> 
> > This is software: anything's possible ;-).  But Someone™ needs to
> > write the code, like marshalling and unmarshalling of such objects
> > between two processes.  (We do something like that when we write then
> > load the pdumper file.)  There's more than one way of skinning this
> > particular cat.
> 
> As the first step, I wanted to hear if there is any blocker that
> prevents memcpy between processes without going through print/read.

I don't think you can design on this base.  Security and all that.
Also, complex structures include pointers and references, which you
cannot safely copy as-is anyway.

> >> The inter-process communication does not have to be asynchronous, but
> >> may work similar to the existing thread implementation.
> >
> > I wouldn't recommend designing anything by the example of Lisp
> > threads.  'Nough said.
> 
> IMHO, the main problem with threads is that they cannot be interrupted
> or fire too frequently.

I wish this were the only problem with threads.




* Re: Concurrency via isolated process/thread
  2023-07-04 17:35     ` Eli Zaretskii
@ 2023-07-04 17:52       ` Ihor Radchenko
  2023-07-04 18:24         ` Eli Zaretskii
  2023-07-05  0:34         ` Po Lu
  0 siblings, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-04 17:52 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> As the first step, I wanted to hear if there is any blocker that
>> prevents memcpy between processes without going through print/read.
>
> I don't think you can design on this base.  Security and all that.

But why is it a problem? Isn't it normal for a C++ thread to get a pointer
to the parent heap? For an Emacs async process, it can, for example, be a
pointer to the obarray.

> Also, complex structures include pointers and references, which you
> cannot safely copy as-is anyway.

Could you please elaborate?

>> IMHO, the main problem with threads is that they cannot be interrupted
>> or fire too frequently.
>
> I wish this were the only problem with threads.

Maybe. But I haven't seen other problems preventing threads from being used.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>




* Re: Concurrency via isolated process/thread
  2023-07-04 17:52       ` Ihor Radchenko
@ 2023-07-04 18:24         ` Eli Zaretskii
  2023-07-05 11:23           ` Ihor Radchenko
  2023-07-05  0:34         ` Po Lu
  1 sibling, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-04 18:24 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: emacs-devel@gnu.org
> Date: Tue, 04 Jul 2023 17:52:23 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> As the first step, I wanted to hear if there is any blocker that
> >> prevents memcpy between processes without going through print/read.
> >
> > I don't think you can design on this base.  Security and all that.
> 
> But why is it a problem?

I'm not an expert, but AFAIK reading from, and writing to, the memory
of another process is something allowed basically only for debuggers.

And how would you know the address in another process anyway, given
today's ASLR techniques?

> Isn't it normal for a C++ thread to get pointer to the parent heap?

That's inside the same process: a huge difference.

> > Also, complex structures include pointers and references, which you
> > cannot safely copy as-is anyway.
> 
> May you please elaborate?

For example, a list whose member is a string includes a pointer to
that string's data.

> >> IMHO, the main problem with threads is that they cannot be interrupted
> >> or fire too frequently.
> >
> > I wish this were the only problem with threads.
> 
> Maybe. But I haven't seen other problems preventing threads from being used.

I have, too many of them.  Some are semi-fixed, but I'm afraid the only
reason we don't hear about them is that threads are not used in serious,
production-quality programs.




* Re: Concurrency via isolated process/thread
  2023-07-04 16:58 Concurrency via isolated process/thread Ihor Radchenko
  2023-07-04 17:12 ` Eli Zaretskii
@ 2023-07-05  0:33 ` Po Lu
  2023-07-05  2:31   ` Eli Zaretskii
  1 sibling, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-05  0:33 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> More concretely, is it possible to copy internal Elisp object
> representations between Emacs processes and arrange mutability to query
> the right Emacs process that "owns" the object?

It is not.

But anyway I have a sinking suspicion that any solution that involves
special IPC implemented in C code will prove to be more trouble than
allowing multiple Lisp threads to run simultaneously and interlocking
Emacs itself.  It would require a lot of manpower, but it isn't
impossible: other large programs have been interlocked to run on SMPs,
most notably Unix.




* Re: Concurrency via isolated process/thread
  2023-07-04 17:52       ` Ihor Radchenko
  2023-07-04 18:24         ` Eli Zaretskii
@ 2023-07-05  0:34         ` Po Lu
  2023-07-05 11:26           ` Ihor Radchenko
  1 sibling, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-05  0:34 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> But why is it a problem? Isn't it normal for a C++ thread to get pointer
> to the parent heap? For Emacs async process, it can, for example, be a
> pointer to obarray.

How will you interlock access to obarray?  What if both processes call
gensym at the same time?




* Re: Concurrency via isolated process/thread
  2023-07-05  0:33 ` Po Lu
@ 2023-07-05  2:31   ` Eli Zaretskii
  2023-07-17 20:43     ` Hugo Thunnissen
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-05  2:31 UTC (permalink / raw)
  To: Po Lu; +Cc: yantar92, emacs-devel

> From: Po Lu <luangruo@yahoo.com>
> Cc: emacs-devel@gnu.org
> Date: Wed, 05 Jul 2023 08:33:33 +0800
> 
> But anyway I have a sinking suspicion that any solution that involves
> special IPC implemented in C code will prove to be more trouble than
> allowing multiple Lisp threads to run simultaneously and interlocking
> Emacs itself.

No, because we already handle sub-process output in a way that doesn't
require true concurrency.




* Re: Concurrency via isolated process/thread
  2023-07-04 18:24         ` Eli Zaretskii
@ 2023-07-05 11:23           ` Ihor Radchenko
  2023-07-05 11:49             ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-05 11:23 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> > I don't think you can design on this base.  Security and all that.
>> 
>> But why is it a problem?
>
> I'm not an expert, but AFAIK reading from, and writing to, the memory
> of another process is something allowed basically only for debuggers.
>
> And how would you know the address in another process anyway, given
> today's ASLR techniques?

I am looking at
https://stackoverflow.com/questions/5656530/how-to-use-shared-memory-with-linux-in-c

AFAIU, it is possible to create shared memory that is readable only by
child processes.

Then, exchanging data between the two Emacs processes may be done using
memcpy to/from shared memory.

It may be dumb (I have no experience with processes in C), but I have
something like the following in mind:

1. Main Emacs process has a normal Elisp thread that watches for async
   Emacs process requests.
2. Once a request arrives, asking to get/modify main Emacs process data,
   the request is fulfilled synchronously and signaled back by writing
   to memory accessible by the async process.
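
As a rough illustration of steps 1-2 (only a sketch: it uses an ordinary
process filter instead of shared memory, all names are made up, and it
assumes one complete request per output chunk):

(defun my-serve-child-request (proc output)
  "Serve a `(get SYMBOL)' or `(set SYMBOL VALUE)' request from the child."
  (pcase (read output)
    (`(get ,sym)
     (process-send-string proc (format "%S\n" (symbol-value sym))))
    (`(set ,sym ,val)
     (set sym val)
     (process-send-string proc "ok\n"))))

(make-process :name "async-child"
              :command '("emacs" "--batch" "-l" "child-worker.el")
              :filter #'my-serve-child-request)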

>> > Also, complex structures include pointers and references, which you
>> > cannot safely copy as-is anyway.
>> 
>> May you please elaborate?
>
> For example, a list whose member is a string includes a pointer to
> that string's data.

I imagine that instead of trying to copy Lisp objects recursively, there
will need to be a special "remote" Lisp object type. Getting or setting
its value will involve talking to the other Emacs process.

>> > I wish this were the only problem with threads.
>> 
>> Maybe. But I haven't seen other problems preventing threads from being used.
>
> I have, too many of them.  Some are semi-fixed, but I'm afraid we only
> don't hear about them because threads are not used in serious,
> production-quality programs.

You are talking about bugs? If nobody goes far enough to discover those,
they are probably not the real reason why people do not use Elisp threads.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>




* Re: Concurrency via isolated process/thread
  2023-07-05  0:34         ` Po Lu
@ 2023-07-05 11:26           ` Ihor Radchenko
  2023-07-05 12:11             ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-05 11:26 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

> Ihor Radchenko <yantar92@posteo.net> writes:
>
>> But why is it a problem? Isn't it normal for a C++ thread to get pointer
>> to the parent heap? For Emacs async process, it can, for example, be a
>> pointer to obarray.
>
> How will you interlock access to obarray?  What if both processes call
> gensym at the same time?

I imagine that only a single, main Emacs process will truly own the
obarray.

The child async Emacs process will (1) have its own partial obarray for
lexical bindings; and (2) query the parent process whenever it needs to
read or write the shared obarray.  The query will be processed by the
parent process synchronously, so the situation you described will be
impossible.
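
A child-side sketch of that fallback (`remote-symbol-value' stands in
for the nonexistent query to the parent):

(defvar child-obarray (obarray-make)
  "Obarray private to the child process.")

(defun child-symbol-value (name)
  "Return the value of NAME, preferring the child-local obarray."
  (let ((sym (intern-soft name child-obarray)))
    (if (and sym (boundp sym))
        (symbol-value sym)
      ;; Not known locally: ask the parent Emacs synchronously.
      (remote-symbol-value name))))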

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>




* Re: Concurrency via isolated process/thread
  2023-07-05 11:23           ` Ihor Radchenko
@ 2023-07-05 11:49             ` Eli Zaretskii
  2023-07-05 12:40               ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-05 11:49 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: emacs-devel@gnu.org
> Date: Wed, 05 Jul 2023 11:23:40 +0000
> 
> AFAIU, it is possible to create shared memory only readable by child
> processes.
> 
> Then, exchanging data between the two Emacs processes may be done using
> memcpy to/from shared memory.
> 
> It may be dumb (I have no experience with processes in C), but I have
> something like the following in mind:
> 
> 1. Main Emacs process has a normal Elisp thread that watches for async
>    Emacs process requests.
> 2. Once a request arrives, asking to get/modify main Emacs process data,
>    the request is fulfilled synchronously and signaled back by writing
>    to memory accessible by the async process.

That solves part of the problem, maybe (assuming we'd want to allow
shared memory in Emacs).  The other parts -- how to implement async
process requests so that they don't suffer from the same problem, and
how to reference objects outside of the shared memory -- are still
there.

> > I have, too many of them.  Some are semi-fixed, but I'm afraid we only
> > don't hear about them because threads are not used in serious,
> > production-quality programs.
> 
> You are talking about bugs?

Bugs that are there "by design".

> If nobody goes far enough to discover those,
> they are probably not the real reason why people do not use Elisp threads.

I'm saying that it could be the other way around: we don't hear about
those bugs because threads are not used seriously.




* Re: Concurrency via isolated process/thread
  2023-07-05 11:26           ` Ihor Radchenko
@ 2023-07-05 12:11             ` Po Lu
  2023-07-05 12:44               ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-05 12:11 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> I imagine that only a single, main Emacs process will truly own the
> obarray.
>
> The child async Emacs process will (1) partially have its own obarray
> for lexical bindings; (2) query the parent process when there is a need
> to read/write the shared obarray.  The query will be processed by the
> parent process synchronously, so the situation you described will be
> impossible.

The obarray contains symbols, not their values.  So I don't understand
what you refer to by a separate obarray for lexical bindings.





* Re: Concurrency via isolated process/thread
  2023-07-05 11:49             ` Eli Zaretskii
@ 2023-07-05 12:40               ` Ihor Radchenko
  2023-07-05 13:02                 ` Lynn Winebarger
  2023-07-05 13:33                 ` Eli Zaretskii
  0 siblings, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-05 12:40 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> It may be dumb (I have no experience with processes in C), but I have
>> something like the following in mind:
>> 
>> 1. Main Emacs process has a normal Elisp thread that watches for async
>>    Emacs process requests.
>> 2. Once a request arrives, asking to get/modify main Emacs process data,
>>    the request is fulfilled synchronously and signaled back by writing
>>    to memory accessible by the async process.
>
> That solves part of the problem, maybe (assuming we'd want to allow
> shared memory in Emacs).

My idea is basically similar to the current scheme of interaction between
process input/output and Emacs, but using a data stream rather than a
text stream.

Shared memory is one way.  Or it may be something like sockets.
It's just that shared memory will be faster, AFAIU.
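
For instance, a local-domain socket already provides a byte-oriented
channel today (a sketch; the socket path is made up):

;; A Unix-domain socket server in the parent Emacs; a child Emacs would
;; connect to the same path and exchange raw bytes.
(make-network-process
 :name "emacs-ipc-server"
 :server t
 :family 'local
 :service "/tmp/emacs-ipc.sock"
 :coding 'no-conversion
 :filter (lambda (proc bytes)
           ;; Placeholder: echo the raw bytes back to the client.
           (process-send-string proc bytes)))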

> ... The other parts -- how to implement async
> process requests so that they don't suffer from the same problem, and
> how to reference objects outside of the shared memory -- are still
> there.

I imagine that there will be a special "remote Lisp object" type.

1. Imagine that the child Emacs process asks for the value of the
   variable `foo', which is the list (1 2 3 4).
2. The child process requests that the parent Emacs put the variable's
   value into shared memory.
3. The parent process creates a new variable, say `foo#', holding a
   reference to (1 2 3 4), to protect the (1 . (2 3 4)) cons cell from
   GC in the parent process.  Then it informs the child process about
   this variable.
4. The child process creates a new remote Lisp object #<remote cons foo#>.

5. Now consider the child process calling (setcar #<remote cons foo#> value).
   `setcar' and the other primitives will be modified to query the parent
   process, which performs the actual modification, producing
   (#<remote value> . (2 3 4)).

6. Before the child thread exits, or every time we need to copy a remote
   object, #<remote ...> will be replaced by an actual, newly created
   ordinary object.
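
At the Lisp level, the "remote cons" from steps 4-5 might look something
like this (purely illustrative; `remote-eval' and `remote-ref' are
invented, and the real plumbing would have to live in C):

(require 'cl-lib)

(cl-defstruct (remote-cons (:constructor make-remote-cons (owner handle)))
  owner    ; the Emacs process that truly owns the cons
  handle)  ; the GC-protecting variable in the owner, e.g. `foo#'

(defun remote-setcar (rcons newcar)
  "Ask RCONS's owning process to perform the `setcar'."
  (remote-eval (remote-cons-owner rcons)
               `(setcar (remote-ref ',(remote-cons-handle rcons))
                        ',newcar)))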

>> If nobody goes far enough to discover those,
>> they are probably not the real reason why people do not use Elisp threads.
>
> I'm saying that it could be the other way around: we don't hear about
> those bugs because threads are not used seriously.

I feel like this part of the discussion is not contributing to the main
topic.  In the end, it is not critical whether Elisp threads, timers, or
something else are used to implement the discussed idea.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>




* Re: Concurrency via isolated process/thread
  2023-07-05 12:11             ` Po Lu
@ 2023-07-05 12:44               ` Ihor Radchenko
  2023-07-05 13:21                 ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-05 12:44 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

>> I imagine that only a single, main Emacs process will truly own the
>> obarray.
>>
>> The child async Emacs process will (1) partially have its own obarray
>> for lexical bindings; (2) query the parent process when there is a need
>> to read/write the shared obarray.  The query will be processed by the
>> parent process synchronously, so the situation you described will be
>> impossible.
>
> The obarray contains symbols, not their values.  So I don't understand
> what you refer to by a separate obarray for lexical bindings.

I imagine that the child Emacs will have a very limited obarray.
If a symbol is not found in that process-local obarray, a query will be
sent to the parent Emacs process to retrieve the symbol's slot values.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>




* Re: Concurrency via isolated process/thread
  2023-07-05 12:40               ` Ihor Radchenko
@ 2023-07-05 13:02                 ` Lynn Winebarger
  2023-07-05 13:10                   ` Ihor Radchenko
  2023-07-05 13:33                 ` Eli Zaretskii
  1 sibling, 1 reply; 192+ messages in thread
From: Lynn Winebarger @ 2023-07-05 13:02 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel


On Wed, Jul 5, 2023, 8:41 AM Ihor Radchenko <yantar92@posteo.net> wrote:

> Eli Zaretskii <eliz@gnu.org> writes:
>
> >> It may be dumb (I have no experience with processes in C), but I have
> >> something like the following in mind:
> >>
> >> 1. Main Emacs process has a normal Elisp thread that watches for async
> >>    Emacs process requests.
> >> 2. Once a request arrives, asking to get/modify main Emacs process data,
> >>    the request is fulfilled synchronously and signaled back by writing
> >>    to memory accessible by the async process.
> >
> > That solves part of the problem, maybe (assuming we'd want to allow
> > shared memory in Emacs).
>
> My idea is basically similar to the current schema of interacting
> between process input/output and Emacs. But using data stream rather
> than text stream.
>
> Shared memory is one way. Or it may be something like sockets.
> It's just that shared memory will be faster, AFAIU.
>
> > ... The other parts -- how to implement async
> > process requests so that they don't suffer from the same problem, and
> > how to reference objects outside of the shared memory -- are still
> > there.
>
> I imagine that there will be a special "remote Lisp object" type.
>
> 1. Imagine that child Emacs process asks for a value of variable `foo',
>    which is a list (1 2 3 4).
> 2. The child process requests parent Emacs to put the variable value
>    into shared memory.
> 3. The parent process creates a new variable storing a link to (1 2 3
>    4), to prevent (1 . (2 3 4)) cons cell from GC in the parent process
>    - `foo#'. Then, it informs the child process about this variable.
> 4. The child process creates a new remote Lisp object #<remote cons foo#>.
>
> 5. Now consider that child process tries (setcar #<remote cons foo#>
> value).
>    The `setcar' and other primitives will be modified to query parent
>    process to perform the actual modification to
>    (#<remote value> . (2 3 4))
>
> 6. Before exiting the child thread, or every time we need to copy remote
>    object, #<remote ...> will be replaced by an actual newly created
>    traditional object.


The best idea I've had for a general solution would be to make "concurrent"
versions of the fundamental Lisp objects that act like immutable git
repositories, with the traditional versions of the objects acting as
working copies that only record changes.  Each checked-out copy could then
push its changes back, and if the merge fails, an exception would be thrown
in the thread of that working copy, which the elisp code could decide how
to handle.  That would work for inter-process shared memory or plain
in-process memory between threads.  Then locks are only needed for updating
the main reference to the concurrent object.
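
As a very rough Lisp-level sketch of the working-copy idea (all names are
invented, and it glosses over the actual merge logic):

(require 'cl-lib)

(cl-defstruct (cobject (:constructor make-cobject (value)))
  value
  (version 0)
  (lock (make-mutex "cobject")))

(define-error 'cobject-conflict "Concurrent object changed underneath us")

(defun cobject-update (obj fn)
  "Apply FN to a snapshot of OBJ's value and try to push the result back.
Signal `cobject-conflict' if OBJ changed since the snapshot was taken."
  (let ((snapshot-version (cobject-version obj))
        (new-value (funcall fn (cobject-value obj))))
    (with-mutex (cobject-lock obj)
      (if (= snapshot-version (cobject-version obj))
          (setf (cobject-value obj) new-value
                (cobject-version obj) (1+ snapshot-version))
        (signal 'cobject-conflict (list obj))))))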

Lynn



* Re: Concurrency via isolated process/thread
  2023-07-05 13:02                 ` Lynn Winebarger
@ 2023-07-05 13:10                   ` Ihor Radchenko
  2023-07-06 18:35                     ` Lynn Winebarger
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-05 13:10 UTC (permalink / raw)
  To: Lynn Winebarger; +Cc: Eli Zaretskii, emacs-devel

Lynn Winebarger <owinebar@gmail.com> writes:

> The best idea I've had for a general solution would be to make "concurrent"
> versions of the fundamental lisp objects that act like immutable git
> repositories, with the traditional versions of the objects acting as
> working copies but only recording changes.  Then each checked out copy
> could push charges back, and if the merge fails an exception would be
> thrown in the thread of that working copy which the elisp code could decide
> how to handle.  That would work for inter-process shared memory or plain
> in-process memory between threads.  Then locks are only needed for updating
> the main reference to the concurrent object.

Honestly, it sounds overengineered.
Even if it is not, it is probably easier to implement a more limited
version first and only then think about fancier stuff like you described
(not that I understand your idea fully).

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>




* Re: Concurrency via isolated process/thread
  2023-07-05 12:44               ` Ihor Radchenko
@ 2023-07-05 13:21                 ` Po Lu
  2023-07-05 13:26                   ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-05 13:21 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> I imagine that the child Emacs will have a very limited obarray.
> If a symbol is not found in that process-local obarray, a query to
> parent Emacs process will be sent to retrieve the symbol slot values.

How will you retrieve the value of a symbol that does not exist in the
child process?  And what about the negative performance implications of
contacting another process to retrieve a symbol's value?  The object
will have to be copied from the parent, and the parent may also be busy.

Anyway, we already have sufficient mechanisms for communicating with
subprocesses.  If Emacs is to take any more advantage of SMP systems, it
must be properly interlocked, with multiple processors sharing the same
memory.  This lesson was learned decades ago with another program:
vmunix.




* Re: Concurrency via isolated process/thread
  2023-07-05 13:21                 ` Po Lu
@ 2023-07-05 13:26                   ` Ihor Radchenko
  2023-07-05 13:51                     ` Eli Zaretskii
  2023-07-06  0:27                     ` Po Lu
  0 siblings, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-05 13:26 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

> Ihor Radchenko <yantar92@posteo.net> writes:
>
>> I imagine that the child Emacs will have a very limited obarray.
>> If a symbol is not found in that process-local obarray, a query to
>> parent Emacs process will be sent to retrieve the symbol slot values.
>
> How will you retrieve the value of a symbol that does not exist in the
> child process?

See my other reply.
The idea is to create a special Lisp Object type.

> And what about the negative performance implications of
> contacting another process to retrieve a symbol's value?  The object
> will have to be copied from the parent, and the parent may also be busy.

Yes, such requests will be synchronous and will need to be avoided.
Ideally, the child process just needs to query its input at the beginning
and transfer the output at the end, without communicating with the parent
process.  Any other communication will be costly.

> Anyway, we already have sufficient mechanisms for communicating with
> subprocesses.

I am only aware of text-based communication. Is there anything else?

> If Emacs is to take any more advantage of SMP systems, it
> must be properly interlocked, with multiple processors sharing the same
> memory.  This lesson was learned decades ago with another program:
> vmunix.

AFAIU, one of the big blockers of this is the single-threaded GC.  Emacs
cannot free/allocate memory asynchronously.  Or am I missing something?

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>




* Re: Concurrency via isolated process/thread
  2023-07-05 12:40               ` Ihor Radchenko
  2023-07-05 13:02                 ` Lynn Winebarger
@ 2023-07-05 13:33                 ` Eli Zaretskii
  2023-07-05 13:35                   ` Ihor Radchenko
  1 sibling, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-05 13:33 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: emacs-devel@gnu.org
> Date: Wed, 05 Jul 2023 12:40:51 +0000
> 
> I imagine that there will be a special "remote Lisp object" type.
> 
> 1. Imagine that child Emacs process asks for a value of variable `foo',
>    which is a list (1 2 3 4).
> 2. The child process requests parent Emacs to put the variable value
>    into shared memory.
> 3. The parent process creates a new variable storing a link to (1 2 3
>    4), to prevent (1 . (2 3 4)) cons cell from GC in the parent process
>    - `foo#'. Then, it informs the child process about this variable.
> 4. The child process creates a new remote Lisp object #<remote cons foo#>.
> 
> 5. Now consider that child process tries (setcar #<remote cons foo#> value).
>    The `setcar' and other primitives will be modified to query parent
>    process to perform the actual modification to
>    (#<remote value> . (2 3 4))
> 
> 6. Before exiting the child thread, or every time we need to copy remote
>    object, #<remote ...> will be replaced by an actual newly created
>    traditional object.

How is this different from communicating via stdout, like we do with
start-process today?  You don't have to send only textual data via the
pipe; you can send a binary data stream as well, if what bothers you is
the conversion.




* Re: Concurrency via isolated process/thread
  2023-07-05 13:33                 ` Eli Zaretskii
@ 2023-07-05 13:35                   ` Ihor Radchenko
  0 siblings, 0 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-05 13:35 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> I imagine that there will be a special "remote Lisp object" type.
> ...
> How is this different from communicating via stdout, like we do with
> start-process today?  You don't have to send only textual data via the
> pipe, you can send binary data stream as well, if what bothers you is
> the conversion.

Piping binary data will also work.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>




* Re: Concurrency via isolated process/thread
  2023-07-05 13:26                   ` Ihor Radchenko
@ 2023-07-05 13:51                     ` Eli Zaretskii
  2023-07-05 14:00                       ` Ihor Radchenko
  2023-07-06  0:27                     ` Po Lu
  1 sibling, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-05 13:51 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: Eli Zaretskii <eliz@gnu.org>, emacs-devel@gnu.org
> Date: Wed, 05 Jul 2023 13:26:54 +0000
> 
> > If Emacs is to take any more advantage of SMP systems, it
> > must be properly interlocked, with multiple processors sharing the same
> > memory.  This lesson was learned decades ago with another program:
> > vmunix.
> 
> AFAIU, one of the big blockers of this is single-threaded GC. Emacs
> cannot free/allocate memory asynchronously. Or do I miss something?

Why is that a "big blocker"?  GC runs "whenever it is convenient"
anyway, so it can be delayed until that time, and run from the main
thread, perhaps sacrificing the memory footprint a bit.

The real "big blocker" is that a typical Emacs session has a huge
global state, which cannot be safely modified by more than a single
thread at a time, and even if one thread writes while the other reads
is in many cases problematic.




* Re: Concurrency via isolated process/thread
  2023-07-05 13:51                     ` Eli Zaretskii
@ 2023-07-05 14:00                       ` Ihor Radchenko
  2023-07-06  0:32                         ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-05 14:00 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> AFAIU, one of the big blockers of this is single-threaded GC. Emacs
>> cannot free/allocate memory asynchronously. Or do I miss something?
>
> Why is that a "big blocker"?  GC runs "whenever it is convenient"
> anyway, so it can be delayed until that time, and run from the main
> thread, perhaps sacrificing the memory footprint a bit.

Emm.  I meant memory allocation.  AFAIK, just like GC, allocating from
the heap cannot be done asynchronously.

As for GC, most of the memory objects in long-running Emacs sessions are
short-lived.  So, having a separate GC and memory heap in the async
process will likely improve the main thread's performance GC-wise, as a
side effect.

> The real "big blocker" is that a typical Emacs session has a huge
> global state, which cannot be safely modified by more than a single
> thread at a time, and even if one thread writes while the other reads
> is in many cases problematic.

This too, although isn't it already solved by mutexes?

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>




* Re: Concurrency via isolated process/thread
  2023-07-05 13:26                   ` Ihor Radchenko
  2023-07-05 13:51                     ` Eli Zaretskii
@ 2023-07-06  0:27                     ` Po Lu
  2023-07-06 10:48                       ` Ihor Radchenko
  1 sibling, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-06  0:27 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> I am only aware of text-based communication. Is there anything else?

It doesn't have to be text, no, it could be binary data as well, on top
of which any RPC mechanism could be built.

> AFAIU, one of the big blockers of this is single-threaded GC. Emacs
> cannot free/allocate memory asynchronously. Or do I miss something?

Garbage collection can be made to suspend all other threads and ``pin''
string blocks that are referenced from other threads as a temporary
measure.  It's hardly the reason that makes Emacs difficult to
interlock.




* Re: Concurrency via isolated process/thread
  2023-07-05 14:00                       ` Ihor Radchenko
@ 2023-07-06  0:32                         ` Po Lu
  2023-07-06 10:46                           ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-06  0:32 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> Emm. I meant memory allocation. AFAIK, just like GC allocating heap
> cannot be asynchronous.

The garbage collector and object allocation can be interlocked, as with
everything else...

> This too, although isn't is already solved by mutexes?

... which you proceed to admit here, and is the crux of the problem.
Getting rid of the Lisp interpreter state that is still not thread-local
(BLV redirects come to mind) is only a minor challenge, compared to the
painstaking and careful work that will be required to interlock access
to the rest of the global state and objects like buffers.




* Re: Concurrency via isolated process/thread
  2023-07-06  0:32                         ` Po Lu
@ 2023-07-06 10:46                           ` Ihor Radchenko
  2023-07-06 12:24                             ` Po Lu
  2023-07-06 14:08                             ` Eli Zaretskii
  0 siblings, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-06 10:46 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

>> Emm. I meant memory allocation. AFAIK, just like GC allocating heap
>> cannot be asynchronous.
>
> The garbage collector and object allocation can be interlocked, as with
> everything else...

I may be wrong, but from my previous experience with performance
benchmarks, memory allocation often takes a significant fraction of CPU
time, and it happens on pretty much every iteration of CPU-intensive code.

I am afraid that interlocking object allocation will, in practice, create
contention between threads and make Emacs unresponsive.

Or am I missing something?
Is there a way to measure how much CPU time is spent allocating memory?
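
The closest thing I can see from Lisp is `benchmark-run', which reports
the time spent in GC but not in allocation itself:

(require 'benchmark)
;; `benchmark-run' returns (ELAPSED GC-RUNS GC-ELAPSED), so it shows how
;; much of the elapsed time went to GC, though not to allocation proper.
(benchmark-run 1000
  (make-list 1000 'x))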

>> This too, although isn't is already solved by mutexes?
>
> ... which you proceed to admit here, and is the crux of the problem.
> Getting rid of the Lisp interpreter state that is still not thread-local
> (BLV redirects come to mind) is only a minor challenge, compared to the
> painstaking and careful work that will be required to interlock access
> to the rest of the global state and objects like buffers.

Would it be of interest to allow locking objects for read/write using
semantics similar to `with-mutex'?
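
I mean the object-level analogue of what `with-mutex' already provides at
the Lisp level, e.g.:

(defvar my-counter 0)
(defvar my-counter-mutex (make-mutex "my-counter"))

(defun my-increment-counter ()
  "Increment `my-counter' while holding its mutex."
  (with-mutex my-counter-mutex
    (setq my-counter (1+ my-counter))))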

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>




* Re: Concurrency via isolated process/thread
  2023-07-06  0:27                     ` Po Lu
@ 2023-07-06 10:48                       ` Ihor Radchenko
  2023-07-06 12:15                         ` Po Lu
  2023-07-06 14:10                         ` Eli Zaretskii
  0 siblings, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-06 10:48 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

> Ihor Radchenko <yantar92@posteo.net> writes:
>
>> I am only aware of text-based communication. Is there anything else?
>
> It doesn't have to be text, no, it could be binary data as well, on top
> of which any RPC mechanism could be built.

Do you mean that binary communication is already possible? If so, is it
documented somewhere?

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>




* Re: Concurrency via isolated process/thread
  2023-07-06 10:48                       ` Ihor Radchenko
@ 2023-07-06 12:15                         ` Po Lu
  2023-07-06 14:10                         ` Eli Zaretskii
  1 sibling, 0 replies; 192+ messages in thread
From: Po Lu @ 2023-07-06 12:15 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> Do you mean that binary communication is already possible? If so, is it
> documented somewhere?

Binary data can be sent to and read from any subprocess, just as with
text input.  The only special requirement is that the process must not
have an associated coding system, which is achieved by setting its coding
system to `no-conversion', AFAIK.
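
For example (a rough sketch; the worker command is made up):

;; A subprocess pipe with no coding conversion: the filter receives raw
;; bytes as a unibyte string, on top of which any binary framing can be
;; layered.
(make-process :name "worker"
              :command '("emacs" "--batch" "-l" "worker.el")
              :coding 'no-conversion
              :filter (lambda (_proc bytes)
                        (message "received %d raw bytes" (length bytes))))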




* Re: Concurrency via isolated process/thread
  2023-07-06 10:46                           ` Ihor Radchenko
@ 2023-07-06 12:24                             ` Po Lu
  2023-07-06 12:31                               ` Ihor Radchenko
  2023-07-06 14:08                             ` Eli Zaretskii
  1 sibling, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-06 12:24 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> I may be wrong, but from my previous experience with performance
> benchmarks, memory allocation often takes a significant fraction of CPU
> time. And memory allocation is a routine process on pretty much every
> iteration of CPU-intensive code.
>
> I am afraid that interlocking object allocation will practically create
> race condition between threads and make Emacs unresponsive.
>
> Or am I missing something?
> Is there a way to measure how much CPU time is spent allocating memory?

The granularity of the interlocking can be increased if and when this is
demonstrated to be problematic.  Allocating individual Lisp objects
usually takes a short amount of time: even if no two threads can do so
at the same time, they will all have ample opportunities to run in
between consings.

> Would it be of interest to allow locking objects for read/write using
> semantics similar to `with-mutex'?

The problem is interlocking access to low level C state within objects
and not from Lisp code itself, and also avoiding constructs such as:

  CHECK_STRING (XCAR (foo));
  foo = XSTRING (XCAR (foo));

where the second load from XCAR (foo)->u.s.car might load a different
pointer from the one whose type was checked.




* Re: Concurrency via isolated process/thread
  2023-07-06 12:24                             ` Po Lu
@ 2023-07-06 12:31                               ` Ihor Radchenko
  2023-07-06 12:41                                 ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-06 12:31 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

>> Or am I missing something?
>> Is there a way to measure how much CPU time is spent allocating memory?
>
> The detail of the interlocking can be increased if and when this is
> demonstrated to be problematic.  Allocating individual Lisp objects
> usually takes a short amount of time: even if no two threads can do so
> at the same time, they will all have ample opportunities to run in
> between consings.

That's why I asked if there is a way to measure how much CPU time
allocation takes.

>> Would it be of interest to allow locking objects for read/write using
>> semantics similar to `with-mutex'?
>
> The problem is interlocking access to low level C state within objects
> and not from Lisp code itself, and also avoiding constructs such as:
>
>   CHECK_STRING (XCAR (foo));
>   foo = XSTRING (XCAR (foo));
>
> where the second load from XCAR (foo)->u.s.car might load a different
> pointer from the one whose type was checked.

I am thinking about some kind of extra flag that will mark an object
locked:

   LOCK_OBJECT (foo);
   LOCK_OBJECT (XCAR (foo));
   CHECK_STRING (XCAR (foo));
   foo = XSTRING (XCAR (foo));
   UNLOCK_OBJECT (XCAR (foo));
   UNLOCK_OBJECT (foo);

LOCK_OBJECT will block until the object is available for use.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>




* Re: Concurrency via isolated process/thread
  2023-07-06 12:31                               ` Ihor Radchenko
@ 2023-07-06 12:41                                 ` Po Lu
  2023-07-06 12:51                                   ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-06 12:41 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> I am thinking about some kind of extra flag that will mark an object
> locked:
>
>    LOCK_OBJECT (foo);
>    LOCK_OBJECT (XCAR (foo));
>    CHECK_STRING (XCAR (foo));
>    foo = XSTRING (XCAR (foo));
>    UNLOCK_OBJECT (XCAR (foo));
>    UNLOCK_OBJECT (foo);
>
> LOCK_OBJECT will block until the object is available for use.

This is unnecessary.  Loads and stores of Lisp_Object values are cache
coherent except on 32 bit systems --with-wide-int.  XCAR (foo) will
always load one of the values previously written.




* Re: Concurrency via isolated process/thread
  2023-07-06 12:41                                 ` Po Lu
@ 2023-07-06 12:51                                   ` Ihor Radchenko
  2023-07-06 12:58                                     ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-06 12:51 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

>>    LOCK_OBJECT (foo);
>>    LOCK_OBJECT (XCAR (foo));
> ...
> This is unnecessary.  Loads and stores of Lisp_Object values are cache
> coherent except on 32 bit systems --with-wide-int.  XCAR (foo) will
> always load one of the values previously written.

Do you mean that locking XCAR (foo) is unnecessary when foo is locked?

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>




* Re: Concurrency via isolated process/thread
  2023-07-06 12:51                                   ` Ihor Radchenko
@ 2023-07-06 12:58                                     ` Po Lu
  2023-07-06 13:13                                       ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-06 12:58 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> Po Lu <luangruo@yahoo.com> writes:
>
>>>    LOCK_OBJECT (foo);
>>>    LOCK_OBJECT (XCAR (foo));
>> ...
>> This is unnecessary.  Loads and stores of Lisp_Object values are cache
>> coherent except on 32 bit systems --with-wide-int.  XCAR (foo) will
>> always load one of the values previously written.
>
> Do you mean that locking XCAR (foo) is unnecessary when foo is locked?

No, that there is no need to lock a cons (or a vector, or anything else
with a fixed number of Lisp_Object slots) before reading or writing to
it.




* Re: Concurrency via isolated process/thread
  2023-07-06 12:58                                     ` Po Lu
@ 2023-07-06 13:13                                       ` Ihor Radchenko
  2023-07-06 14:13                                         ` Eli Zaretskii
  2023-07-07  0:21                                         ` Po Lu
  0 siblings, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-06 13:13 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

>>>>    LOCK_OBJECT (foo);
>>>>    LOCK_OBJECT (XCAR (foo));
>>> ...
>> ...
>> Do you mean that locking XCAR (foo) is unnecessary when foo is locked?
>
> No, that there is no need to lock a cons (or a vector, or anything else
> with a fixed number of Lisp_Object slots) before reading or writing to
> it.

I feel confused here.

My understanding is

  CHECK_STRING (XCAR (foo));
  <we do not want XCAR (foo) to be altered here>
  foo = XSTRING (XCAR (foo));

So, locking is needed to ensure that the CHECK_STRING assertion remains valid.

Or did you refer to something else?

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>




* Re: Concurrency via isolated process/thread
  2023-07-06 10:46                           ` Ihor Radchenko
  2023-07-06 12:24                             ` Po Lu
@ 2023-07-06 14:08                             ` Eli Zaretskii
  2023-07-06 15:01                               ` Ihor Radchenko
  1 sibling, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-06 14:08 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: Eli Zaretskii <eliz@gnu.org>, emacs-devel@gnu.org
> Date: Thu, 06 Jul 2023 10:46:47 +0000
> 
> Po Lu <luangruo@yahoo.com> writes:
> 
> >> Emm. I meant memory allocation. AFAIK, just like GC allocating heap
> >> cannot be asynchronous.
> >
> > The garbage collector and object allocation can be interlocked, as with
> > everything else...
> 
> I may be wrong, but from my previous experience with performance
> benchmarks, memory allocation often takes a significant fraction of CPU
> time. And memory allocation is a routine process on pretty much every
> iteration of CPU-intensive code.

Do you have any evidence for that which you can share?  GC indeed
takes significant time, but memory allocation? never heard of that.

> Is there a way to measure how much CPU time is spent allocating memory?

If you don't know about such a way, then how did you conclude it could
take significant time?

> Would it be of interest to allow locking objects for read/write using
> semantics similar to `with-mutex'?

Locking slows down software and should be avoided, I'm sure you know
it.  But the global lock used by the Lisp threads we have is actually
such a lock, and the results are well known.




* Re: Concurrency via isolated process/thread
  2023-07-06 10:48                       ` Ihor Radchenko
  2023-07-06 12:15                         ` Po Lu
@ 2023-07-06 14:10                         ` Eli Zaretskii
  2023-07-06 15:09                           ` Ihor Radchenko
  1 sibling, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-06 14:10 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: Eli Zaretskii <eliz@gnu.org>, emacs-devel@gnu.org
> Date: Thu, 06 Jul 2023 10:48:04 +0000
> 
> Po Lu <luangruo@yahoo.com> writes:
> 
> > Ihor Radchenko <yantar92@posteo.net> writes:
> >
> >> I am only aware of text-based communication. Is there anything else?
> >
> > It doesn't have to be text, no, it could be binary data as well, on top
> > of which any RPC mechanism could be built.
> 
> Do you mean that binary communication is already possible? If so, is it
> documented somewhere?

What is the difference between binary and text in this context, in
your interpretation?  (I'm surprised to hear they are perceived as
different by someone who comes from a Posix background, not an
MS-Windows background.)

Because there's no difference, there's nothing to document.  For the
same reason we don't document that "C-x C-f" can visit binary files,
not just text files.




* Re: Concurrency via isolated process/thread
  2023-07-06 13:13                                       ` Ihor Radchenko
@ 2023-07-06 14:13                                         ` Eli Zaretskii
  2023-07-06 14:47                                           ` Ihor Radchenko
  2023-07-07  0:21                                         ` Po Lu
  1 sibling, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-06 14:13 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: Eli Zaretskii <eliz@gnu.org>, emacs-devel@gnu.org
> Date: Thu, 06 Jul 2023 13:13:07 +0000
> 
> Po Lu <luangruo@yahoo.com> writes:
> 
> >>>>    LOCK_OBJECT (foo);
> >>>>    LOCK_OBJECT (XCAR (foo));
> >>> ...
> >> ...
> >> Do you mean that locking XCAR (foo) is unnecessary when foo is locked?
> >
> > No, that there is no need to lock a cons (or a vector, or anything else
> > with a fixed number of Lisp_Object slots) before reading or writing to
> > it.
> 
> I feel confused here.
> 
> My understanding is
> 
>   CHECK_STRING (XCAR (foo));
>   <we do not want XCAR (foo) to be altered here>
>   foo = XSTRING (XCAR (foo));
> 
> So, locking is needed to ensure that CHECK_STRING assertion remains valid.
> 
> Or did you refer to something else?

I don't know what Po Lu had in mind, but one aspect of this is that a
string object might keep its memory address, but its data could be
relocated.  This can happen as part of GC, and is the reason why
string data is kept separate from the string itself.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 14:13                                         ` Eli Zaretskii
@ 2023-07-06 14:47                                           ` Ihor Radchenko
  2023-07-06 15:10                                             ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-06 14:47 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> My understanding is
>> 
>>   CHECK_STRING (XCAR (foo));
>>   <we do not want XCAR (foo) to be altered here>
>>   foo = XSTRING (XCAR (foo));
>> 
>> So, locking is needed to ensure that CHECK_STRING assertion remains valid.
>> 
>> Or did you refer to something else?
>
> I don't know what Po Lu had in mind, but one aspect of this is that a
> string object might keep its memory address, but its data could be
> relocated.  This can happen as part of GC, and is the reason why
> string data is kept separate from the string itself.

This does not change my understanding.
Locking should prevent manipulations with object data over the time span
the code expects the object to be unchanged.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 14:08                             ` Eli Zaretskii
@ 2023-07-06 15:01                               ` Ihor Radchenko
  2023-07-06 15:16                                 ` Eli Zaretskii
  2023-07-07  0:27                                 ` Po Lu
  0 siblings, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-06 15:01 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> I may be wrong, but from my previous experience with performance
>> benchmarks, memory allocation often takes a significant fraction of CPU
>> time. And memory allocation is a routine process on pretty much every
>> iteration of CPU-intensive code.
>
> Do you have any evidence for that which you can share?  GC indeed
> takes significant time, but memory allocation? never heard of that.

This is from my testing of Org parser.
I noticed that storing a pair of buffer positions is noticeably faster
compared to storing string copies of buffer text.
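
A rough, self-contained way to see that difference (not the actual Org
parser; the regexp, repeat count, and function names below are made up) is
`benchmark-run', which reports elapsed time, the number of GCs, and GC time:

(defun my-collect-as-strings ()
  "Collect each heading-like match as a freshly allocated string."
  (save-excursion
    (goto-char (point-min))
    (let (acc)
      (while (re-search-forward "^\\*+ " nil t)
        (push (buffer-substring (match-beginning 0) (match-end 0)) acc))
      acc)))

(defun my-collect-as-positions ()
  "Collect each heading-like match as a pair of buffer positions."
  (save-excursion
    (goto-char (point-min))
    (let (acc)
      (while (re-search-forward "^\\*+ " nil t)
        (push (cons (match-beginning 0) (match-end 0)) acc))
      acc)))

;; In a large Org buffer: each result is (ELAPSED N-GCS GC-TIME).
(list (benchmark-run 100 (my-collect-as-strings))
      (benchmark-run 100 (my-collect-as-positions)))

If my reading is right, the extra allocations of the string-collecting
version mostly show up in the third element (GC time) of its result.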

The details usually do not show up in M-x profiler reports, but I now
tried perf out of curiosity:

    14.76%  emacs         emacs                                  [.] re_match_2_internal
     9.39%  emacs         emacs                                  [.] re_compile_pattern
     4.45%  emacs         emacs                                  [.] re_search_2
     3.98%  emacs         emacs                                  [.] funcall_subr

     
     AFAIU, this is memory allocation. Taking a good one second in this case.
     3.37%  emacs         emacs                                  [.] allocate_vectorlike


     3.17%  emacs         emacs                                  [.] Ffuncall
     3.01%  emacs         emacs                                  [.] exec_byte_code
     2.90%  emacs         emacs                                  [.] buf_charpos_to_bytepos
     2.82%  emacs         emacs                                  [.] find_interval
     2.74%  emacs         emacs                                  [.] re_iswctype
     2.57%  emacs         emacs                                  [.] set_default_internal
     2.48%  emacs         emacs                                  [.] plist_get
     2.24%  emacs         emacs                                  [.] Fmemq
     1.95%  emacs         emacs                                  [.] process_mark_stack

These are just CPU cycles. I am not sure if there are any other
overheads related to memory allocation that translate into extra user time. 

>> Would it be of interest to allow locking objects for read/write using
>> semantics similar to `with-mutex'?
>
> Locking slows down software and should be avoided, I'm sure you know
> it.

I am not sure anymore.
Po Lu appears to advocate for locking instead of the isolated-process
approach, referring to other software.

> ... But the global lock used by the Lisp threads we have is actually
> such a lock, and the results are well known.

To be fair, global lock is an extreme worst-case scenario.
Locking specific Lisp objects is unlikely to block normal Emacs usage,
especially when the async code is written carefully. Except when there
is a need to lock something from the global, frequently used Emacs state,
like the heap or the obarray. Which is why I asked about memory allocation.

The need to avoid a locked heap is the main reason I proposed isolated
processes.

... well, also things like lisp_eval_depth and other global interpreter
state variables.

But, AFAIU, Po Lu is advocating that trying isolated processes is not
worth the effort and it is better to bite the bullet and implement
proper locking.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 14:10                         ` Eli Zaretskii
@ 2023-07-06 15:09                           ` Ihor Radchenko
  2023-07-06 15:18                             ` Eli Zaretskii
  2023-07-07  0:22                             ` Po Lu
  0 siblings, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-06 15:09 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> Do you mean that binary communication is already possible? If so, is it
>> documented somewhere?
>
> What is the difference between binary and text in this context, in
> your interpretation?

AFAIK, process communication is now implemented using buffers that, even
in the absence of coding system, index the data stream into byte array.
I am not sure if it is something that can be directly fed to memcpy (thus
avoiding too much of extra cost for passing Lisp data around).

> (I'm surprised to hear they are perceived as
> different by someone who comes from Posix background, not MS-Windows
> background.)

I was looking at this from C perspective.

And I am used to seeing Posix as text-based. At most, structured text
like https://www.nushell.sh/. Maybe I am too young.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 14:47                                           ` Ihor Radchenko
@ 2023-07-06 15:10                                             ` Eli Zaretskii
  2023-07-06 16:17                                               ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-06 15:10 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Thu, 06 Jul 2023 14:47:13 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> My understanding is
> >> 
> >>   CHECK_STRING (XCAR (foo));
> >>   <we do not want XCAR (foo) to be altered here>
> >>   foo = XSTRING (XCAR (foo));
> >> 
> >> So, locking is needed to ensure that CHECK_STRING assertion remains valid.
> >> 
> >> Or did you refer to something else?
> >
> > I don't know what Po Lu had in mind, but one aspect of this is that a
> > string object might keep its memory address, but its data could be
> > relocated.  This can happen as part of GC, and is the reason why
> > string data is kept separate from the string itself.
> 
> This does not change my understanding.
> Locking should prevent manipulations with object data over the time span
> the code expects the object to be unchanged.

That could defeat GC, should Emacs decide to run it while the lock is
in place.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 15:01                               ` Ihor Radchenko
@ 2023-07-06 15:16                                 ` Eli Zaretskii
  2023-07-06 16:32                                   ` Ihor Radchenko
  2023-07-07  0:27                                 ` Po Lu
  1 sibling, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-06 15:16 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Thu, 06 Jul 2023 15:01:39 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> I may be wrong, but from my previous experience with performance
> >> benchmarks, memory allocation often takes a significant fraction of CPU
> >> time. And memory allocation is a routine process on pretty much every
> >> iteration of CPU-intensive code.
> >
> > Do you have any evidence for that which you can share?  GC indeed
> > takes significant time, but memory allocation? never heard of that.
> 
> This is from my testing of Org parser.
> I noticed that storing a pair of buffer positions is noticeably faster
> compared to storing string copies of buffer text.
> 
> The details usually do not show up in M-x profiler reports, but I now
> tried perf out of curiosity:
> 
>     14.76%  emacs         emacs                                  [.] re_match_2_internal
>      9.39%  emacs         emacs                                  [.] re_compile_pattern
>      4.45%  emacs         emacs                                  [.] re_search_2
>      3.98%  emacs         emacs                                  [.] funcall_subr
> 
>      
>      AFAIU, this is memory allocation. Taking a good one second in this case.
>      3.37%  emacs         emacs                                  [.] allocate_vectorlike

It is?  Which part(s) of allocate_vectorlike take these 3.37% of run
time?  It does much more than just allocate memory.

> These are just CPU cycles. I am not sure if there are any other
> overheads related to memory allocation that translate into extra user time. 

Well, we need to be pretty damn sure before we consider this a fact,
don't we?

> > ... But the global lock used by the Lisp threads we have is actually
> > such a lock, and the results are well known.
> 
> To be fair, global lock is an extreme worst-case scenario.

If you consider the fact that the global state in Emacs is huge, maybe
it is a good approximation to what will need to be locked anyway?

> Locking specific Lisp objects is unlikely to block normal Emacs usage,
> especially when the async code is written carefully. Except when there
> is a need to lock something from the global, frequently used Emacs state,
> like the heap or the obarray. Which is why I asked about memory allocation.

You forget buffers, windows, frames, variables, and other global stuff.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 15:09                           ` Ihor Radchenko
@ 2023-07-06 15:18                             ` Eli Zaretskii
  2023-07-06 16:36                               ` Ihor Radchenko
  2023-07-07  0:22                             ` Po Lu
  1 sibling, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-06 15:18 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Thu, 06 Jul 2023 15:09:06 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> Do you mean that binary communication is already possible? If so, is it
> >> documented somewhere?
> >
> > What is the difference between binary and text in this context, in
> > your interpretation?
> 
> AFAIK, process communication is now implemented using buffers that, even
> in the absence of coding system, index the data stream into byte array.

Yes, but isn't binary data also a stream of bytes?

> I am not sure if it is something that can be directly fed to memcpy (thus
> avoiding too much of extra cost for passing Lisp data around).

If you don't want the incoming data to be inserted into a buffer or
produce a string from it, then what do you want to do with it instead?
To use something in Emacs, we _must_ make some Lisp object out of it,
right?

> > (I'm surprised to hear they are perceived as
> > different by someone who comes from Posix background, not MS-Windows
> > background.)
> 
> I was looking at this from C perspective.

There's no difference there as well.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 15:10                                             ` Eli Zaretskii
@ 2023-07-06 16:17                                               ` Ihor Radchenko
  2023-07-06 18:19                                                 ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-06 16:17 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> This does not change my understanding.
>> Locking should prevent manipulations with object data over the time span
>> the code expects the object to be unchanged.
>
> That could defeat GC, should Emacs decide to run it while the lock is
> in place.

May you elaborate?

From my understanding, GC is now called in specific places in subr code.
If we consider scenario when multiple Emacs threads are running and one
is requesting GC, it should be acceptable to delay that request and wait
until all other threads eventually arrive at a GC call. Once that is done,
GC is safe to run.

Of course, GC calls must not be done while an object lock is in place. But
that's not too different from the existing requirement for GC calls -
they are not sprinkled in arbitrary places.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 15:16                                 ` Eli Zaretskii
@ 2023-07-06 16:32                                   ` Ihor Radchenko
  2023-07-06 17:50                                     ` Eli Zaretskii
  2023-07-07  0:41                                     ` Po Lu
  0 siblings, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-06 16:32 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>>      AFAIU, this is memory allocation. Taking a good one second in this case.
>>      3.37%  emacs         emacs                                  [.] allocate_vectorlike
>
> It is?  Which part(s) of allocate_vectorlike take these 3.37% of run
> time?  It does much more than just allocate memory.

Sorry, but I have no idea. The above is what I see from perf report.

For comparison, this is how things look with the Org parser version
that allocated 1.5-2x more memory (proper strings instead of buffer
positions and proper strings instead of interned constant strings):

    18.39%  emacs         emacs                           [.] exec_byte_code
    13.80%  emacs         emacs                           [.] re_match_2_internal
     6.56%  emacs         emacs                           [.] re_compile_pattern

     5.09%  emacs         emacs                           [.] allocate_vectorlike

     4.35%  emacs         emacs                           [.] re_search_2
     3.57%  emacs         emacs                           [.] Fmemq
     3.13%  emacs         emacs                           [.] find_interval

So, my efforts did reduce the time spent in allocate_vectorlike.
Note, however, that these two datapoints differ more than just by how
memory is allocated.

But 5% CPU time spent allocating memory is not insignificant.

>> These are just CPU cycles. I am not sure if there are any other
>> overheads related to memory allocation that translate into extra user time. 
>
> Well, we need to be pretty damn sure before we consider this a fact,
> don't we?

Sure. Though my argument was less about how long Emacs spends allocating
memory and more about how frequently typical Elisp code requests such
allocations. I have a gut feeling that, even if each allocation takes
little time, frequent interrupts may create intermittent typing delays.

>> > ... But the global lock used by the Lisp threads we have is actually
>> > such a lock, and the results are well known.
>> 
>> To be fair, global lock is an extreme worst-case scenario.
>
> If you consider the fact that the global state in Emacs is huge, maybe
> it is a good approximation to what will need to be locked anyway?

Not every thread will need to use global state, except maybe memory
allocation. Or am I missing an elephant in the room?

> You forget buffers, windows, frames, variables, and other global stuff.

Those will only matter when we try to access them from multiple threads,
no? If a thread is working with a temporary buffer and locks it, that
buffer has almost 0 chance to be accessed by another thread.
Same with variables - even if some global variable needs to be locked,
it is unlikely that it will need to be accessed by another thread.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 15:18                             ` Eli Zaretskii
@ 2023-07-06 16:36                               ` Ihor Radchenko
  2023-07-06 17:53                                 ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-06 16:36 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> AFAIK, process communication is now implemented using buffers that, even
>> in the absence of coding system, index the data stream into byte array.
>
> Yes, but isn't binary data also a stream of bytes?

It is, but storing that data in buffer will involve non-trivial extra
memory allocation.

>> I am not sure if it is something that can be directly fed to memcpy (thus
>> avoiding too much of extra cost for passing Lisp data around).
>
> If you don't want the incoming data to be inserted into a buffer or
> produce a string from it, then what do you want to do with it instead?
> To use something in Emacs, we _must_ make some Lisp object out of it,
> right?

What I had in mind is, for example, memcpy wide_int_object -> child
process -> memcpy to child process own heap.

So that we do not need to go through creating a new wide_int object in
the child process.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 16:32                                   ` Ihor Radchenko
@ 2023-07-06 17:50                                     ` Eli Zaretskii
  2023-07-07 12:30                                       ` Ihor Radchenko
  2023-07-07  0:41                                     ` Po Lu
  1 sibling, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-06 17:50 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Thu, 06 Jul 2023 16:32:03 +0000
> 
> > It is?  Which part(s) of allocate_vectorlike take these 3.37% of run
> > time?  It does much more than just allocate memory.
> 
> Sorry, but I have no idea. The above is what I see from perf report.
> 
> For comparison, this is how things look with the Org parser version
> that allocated 1.5-2x more memory (proper strings instead of buffer
> positions and proper strings instead of interned constant strings):
> 
>     18.39%  emacs         emacs                           [.] exec_byte_code
>     13.80%  emacs         emacs                           [.] re_match_2_internal
>      6.56%  emacs         emacs                           [.] re_compile_pattern
> 
>      5.09%  emacs         emacs                           [.] allocate_vectorlike
> 
>      4.35%  emacs         emacs                           [.] re_search_2
>      3.57%  emacs         emacs                           [.] Fmemq
>      3.13%  emacs         emacs                           [.] find_interval
> 
> So, my efforts did reduce the time spent in allocate_vectorlike.
> Note, however, that these two datapoints differ more than just by how
> memory is allocated.
> 
> But 5% CPU time spent allocating memory is not insignificant.

Once again, it isn't necessarily memory allocation per se.  For
example, it could be find_suspicious_object_in_range, called from
allocate_vectorlike.

> Sure. Though my argument was less about how long Emacs spends allocating
> memory and more about how frequently typical Elisp code requests such
> allocations. I have a gut feeling that, even if each allocation takes
> little time, frequent interrupts may create intermittent typing delays.

I very much doubt these interrupts are because Emacs waits for memory
allocation.

> >> > ... But the global lock used by the Lisp threads we have is actually
> >> > such a lock, and the results are well known.
> >> 
> >> To be fair, global lock is an extreme worst-case scenario.
> >
> > If you consider the fact that the global state in Emacs is huge, maybe
> > it is a good approximation to what will need to be locked anyway?
> 
> Not every thread will need to use global state, except maybe memory
> allocation. Or am I missing an elephant in the room?
> 
> > You forget buffers, windows, frames, variables, and other global stuff.
> 
> Those will only matter when we try to access them from multiple threads,
> no?

Aren't we talking precisely about several threads running
concurrently?

> If a thread is working with a temporary buffer and locks it, that
> buffer has almost 0 chance to be accessed by another thread.

But "working on a buffer" requires access and modification of many
global structures.  Just walk the code in set-buffer and its
subroutines, and you will see that.

> Same with variables - even if some global variable needs to be locked,
> it is unlikely that it will need to be accessed by another thread.

I think you misunderstand the frequency of such collisions.
case-fold-search comes to mind.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 16:36                               ` Ihor Radchenko
@ 2023-07-06 17:53                                 ` Eli Zaretskii
  0 siblings, 0 replies; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-06 17:53 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Thu, 06 Jul 2023 16:36:04 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> AFAIK, process communication is now implemented using buffers that, even
> >> in the absence of coding system, index the data stream into byte array.
> >
> > Yes, but isn't binary data also a stream of bytes?
> 
> It is, but storing that data in buffer will involve non-trivial extra
> memory allocation.

And in your memcpy idea, where will the buffer to which you copy come
from?  Won't you need to allocate memory?

Besides, if you need to insert reasonably small amounts of data, the
Emacs buffer-with-gap model avoids allocating too much, sometimes
nothing at all.

> >> I am not sure if it is something that can be directly fed to memcpy (thus
> >> avoiding too much of extra cost for passing Lisp data around).
> >
> > If you don't want the incoming data to be inserted into a buffer or
> > produce a string from it, then what do you want to do with it instead?
> > To use something in Emacs, we _must_ make some Lisp object out of it,
> > right?
> 
> What I had in mind is, for example, memcpy wide_int_object -> child
> process -> memcpy to child process own heap.

Emacs Lisp objects are never just one int.  The integers you see in C
are just a kind of handle -- they are pointers in disguise, and the
pointer points to a reasonably large struct.

> So that we do not need to go through creating a new wide_int object in
> the child process.

I don't see how you can avoid that.  I feel that there's some serious
misunderstanding here.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 16:17                                               ` Ihor Radchenko
@ 2023-07-06 18:19                                                 ` Eli Zaretskii
  2023-07-07 12:04                                                   ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-06 18:19 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Thu, 06 Jul 2023 16:17:02 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> This does not change my understanding.
> >> Locking should prevent manipulations with object data over the time span
> >> the code expects the object to be unchanged.
> >
> > That could defeat GC, should Emacs decide to run it while the lock is
> > in place.
> 
> May you elaborate?

GC doesn't only free memory used by dead objects.  It also performs
bookkeeping on live objects: compacts data of strings, relocates text
of buffers, compacts the gap in buffers where it became too large,
etc.  This bookkeeping is more important when Emacs is short on
memory: in those cases these bookkeeping tasks might mean the
difference between being able to keep the session healthy enough to
allow the user to shut down in an orderly fashion, and not being able to.

Locking objects means these bookkeeping tasks will be disabled.  That
could adversely affect the available memory and the memory footprint
in general.

> Of course, GC calls must not be done while an object lock is in place. But
> that's not too different from the existing requirement for GC calls -
> they are not sprinkled in arbitrary places.

Currently, once GC starts it is free to do all of its parts.  If
locking prevents GC altogether, your proposal is even worse than I
feared.  Especially since as long as some thread runs, some objects
will be locked, and GC will not be able to run at all.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-05 13:10                   ` Ihor Radchenko
@ 2023-07-06 18:35                     ` Lynn Winebarger
  2023-07-07 11:48                       ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Lynn Winebarger @ 2023-07-06 18:35 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

On Wed, Jul 5, 2023 at 9:10 AM Ihor Radchenko <yantar92@posteo.net> wrote:
>
> Lynn Winebarger <owinebar@gmail.com> writes:
>
> > The best idea I've had for a general solution would be to make "concurrent"
> > versions of the fundamental lisp objects that act like immutable git
> > repositories, with the traditional versions of the objects acting as
> > working copies but only recording changes.  Then each checked out copy
> > could push charges back, and if the merge fails an exception would be
> > thrown in the thread of that working copy which the elisp code could decide
> > how to handle.  That would work for inter-process shared memory or plain
> > in-process memory between threads.  Then locks are only needed for updating
> > the main reference to the concurrent object.
>
> Honestly, it sounds overengineered.
> Even if not, it is probably easier to implement a more limited version
> first and only then think about fancier staff like you described (not
> that I understand your idea fully).
>

Maybe - I'm not claiming it's trivial.  I do think the locking for
that kind of design is generally much simpler and less prone to
deadlocks than trying to make operations on the current set of
fundamental objects individually atomic.  Given how many lisp objects,
e.g. buffers and functions, are referenceable by name through global
tables, it's difficult to see how emacs could ever have fine-grained
parallelism that's efficient, correct, and deadlock-free any other
way.
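
A very rough, runnable sketch of that checked-out-copy/push idea in today's
Elisp, assuming conflicts are detected with a version counter rather than a
real merge (every name below is made up for illustration):

(require 'cl-lib)

(cl-defstruct my-vcell (version 0) value)

(defvar my-vcell-mutex (make-mutex "my-vcell"))

(defun my-vcell-checkout (cell)
  "Return a (VERSION . VALUE) snapshot of CELL."
  (with-mutex my-vcell-mutex
    (cons (my-vcell-version cell) (my-vcell-value cell))))

(defun my-vcell-push (cell snapshot new-value)
  "Publish NEW-VALUE unless CELL changed since SNAPSHOT was taken.
Signal an error -- the \"failed merge\" -- otherwise."
  (with-mutex my-vcell-mutex
    (unless (= (car snapshot) (my-vcell-version cell))
      (error "my-vcell: concurrent modification, merge needed"))
    (setf (my-vcell-value cell) new-value
          (my-vcell-version cell) (1+ (my-vcell-version cell)))))

This is only the bookkeeping skeleton; the hard part alluded to above -- an
actual merge of conflicting edits -- is not modeled here.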

The inspiration for this approach (for me) was from reviewing the
version control section of the emacs manual on approaches to
concurrent access.  Version control systems like git are essentially
"concurrent editing in the large".  What is required for emacs is
"concurrent editing in the small", but the issues with sharing and
updating are very much the same.

Lynn



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 13:13                                       ` Ihor Radchenko
  2023-07-06 14:13                                         ` Eli Zaretskii
@ 2023-07-07  0:21                                         ` Po Lu
  1 sibling, 0 replies; 192+ messages in thread
From: Po Lu @ 2023-07-07  0:21 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> Po Lu <luangruo@yahoo.com> writes:
>
>>>>>    LOCK_OBJECT (foo);
>>>>>    LOCK_OBJECT (XCAR (foo));
>>>> ...
>>> ...
>>> Do you mean that locking XCAR (foo) is unnecessary when foo is locked?
>>
>> No, that there is no need to lock a cons (or a vector, or anything else
>> with a fixed number of Lisp_Object slots) before reading or writing to
>> it.
>
> I feel confused here.
>
> My understanding is
>
>   CHECK_STRING (XCAR (foo));
>   <we do not want XCAR (foo) to be altered here>
>   foo = XSTRING (XCAR (foo));
>
> So, locking is needed to ensure that CHECK_STRING assertion remains valid.

No, what we want to make sure is that the same string whose type was
checked is extracted.  By not loading XCAR (foo) twice.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 15:09                           ` Ihor Radchenko
  2023-07-06 15:18                             ` Eli Zaretskii
@ 2023-07-07  0:22                             ` Po Lu
  1 sibling, 0 replies; 192+ messages in thread
From: Po Lu @ 2023-07-07  0:22 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> I was looking at this from C perspective.

C distinguishes between text and binary data even less than POSIX...



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 15:01                               ` Ihor Radchenko
  2023-07-06 15:16                                 ` Eli Zaretskii
@ 2023-07-07  0:27                                 ` Po Lu
  2023-07-07 12:45                                   ` Ihor Radchenko
  1 sibling, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-07  0:27 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

>      3.37%  emacs         emacs                                  [.] allocate_vectorlike
>      2.90%  emacs         emacs                                  [.] buf_charpos_to_bytepos
>      2.82%  emacs         emacs                                  [.] find_interval

Out of all those functions, I think only these three will require some
form of interlocking.  So assuming that the Org parser is being run
concurrently, less than 10% of it will be unable to run simultaneously.

Admittedly these ballpark estimates are somewhat contrived, but they're
enough to get my point across.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 16:32                                   ` Ihor Radchenko
  2023-07-06 17:50                                     ` Eli Zaretskii
@ 2023-07-07  0:41                                     ` Po Lu
  2023-07-07 12:42                                       ` Ihor Radchenko
  1 sibling, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-07  0:41 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> Same with variables - even if some global variable needs to be locked,
> it is unlikely that it will need to be accessed by another thread.

I doubt symbol value cells will need to be individually interlocked.
Emacs Lisp code will need to insert the appropriate instructions to
prevent the CPU from undertaking optimizations on load and store
operations on those cells when necessary.  Imagine a situation where a
secondary thread writes the result of an expensive computation to X, and
then sets a flag to indicate its completion:

(defvar X nil)
(defvar Y nil)

thread 1:

       (setq X (some-expensive-computation))
       ;; __machine_w_barrier ()
       (setq Y t)
       ;; On machines that don't do this automatically, flush the cache
       ;; eventually.

A second thread then waits for Y to be set, indicating completion:

       (when Y
	 ;; __machine_r_barrier ()
	 (do-something-with X))

If the barrier instructions are not inserted, Y could be set to t before
X is set to the result of the computation, and the main thread could
also load X prior to loading (and inspecting) Y.  But there would be no
chance of a previously read value of either X or Y appearing after a new
value becoming visible or a partially written value of X or Y being
read, forgoing the need for any locking being performed on the
individual symbols themselves.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 18:35                     ` Lynn Winebarger
@ 2023-07-07 11:48                       ` Ihor Radchenko
  0 siblings, 0 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-07 11:48 UTC (permalink / raw)
  To: Lynn Winebarger; +Cc: Eli Zaretskii, emacs-devel

Lynn Winebarger <owinebar@gmail.com> writes:

>> Honestly, it sounds overengineered.
>> Even if not, it is probably easier to implement a more limited version
>> first and only then think about fancier staff like you described (not
>> that I understand your idea fully).
>>
>
> Maybe - I'm not claiming it's trivial.

It is not very clear to me how to implement your suggestion at all.
So, unless you do it yourself or elaborate in much more detail, it will
not happen.
I am not even sure how it can work, especially given that the Elisp heap
is not a persistent structure.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 18:19                                                 ` Eli Zaretskii
@ 2023-07-07 12:04                                                   ` Ihor Radchenko
  2023-07-07 13:16                                                     ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-07 12:04 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> > That could defeat GC, should Emacs decide to run it while the lock is
>> > in place.
>> 
>> May you elaborate?
>
> GC doesn't only free memory used by dead objects.  It also performs
> bookkeeping on live objects: compacts data of strings, relocates text
> of buffers, compacts the gap in buffers where it became too large,
> etc.  This bookkeeping is more important when Emacs is short on
> memory: in those cases these bookkeeping tasks might mean the
> difference between being able to keep the session healthy enough to
> allow the user to shut down in an orderly fashion, and not being able to.

What you are describing will only affect subr primitives that work
directly with C structs and address space.

So, we can distinguish two locks: (1) low-level, only available to C
subroutines; (2) Elisp-level, where the lock merely prevents other Elisp
code from modifying the data. GC is safe to run when type-2 lock is in
place as it will never clear the data in use and never alter the data in
any way visible on Elisp level.

> Locking objects means these bookkeeping tasks will be disabled.  That
> could adversely affect the available memory and the memory footprint
> in general.

I do not think that it is that bad if we consider type-1 locks.

Consider two parallel threads:

----- 1 ------
(let ((i 0)) (while t (cl-incf i)))
--------------

----- 2 -----
(while t (read-char))
-------------

Both threads will call eval_sub frequently, which will trigger
maybe_gc.

My idea is that maybe_gc, when it decides that GC is necessary but cannot
proceed because an object is locked with a type-1 lock, will pause the
current thread.

Let's say the current thread is thread 2, paused because thread 1
is doing (setq i ...) at the same time and has locked the object
corresponding to the obarray value slot for "i".

Thread 1 will continue executing until (very soon) it calls maybe_gc
itself. This time, no object lock is active and GC may proceed,
resuming both threads once GC is done.
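
A minimal sketch of that rendezvous, written with today's thread primitives
purely for illustration (a real implementation would live in C inside
maybe_gc; every name below is made up, and the barrier is single-use to
keep it short):

(defvar my-gc-threads 2)                      ; number of cooperating threads
(defvar my-gc-mutex (make-mutex "my-gc"))
(defvar my-gc-cond (make-condition-variable my-gc-mutex "my-gc"))
(defvar my-gc-waiting 0)

(defun my-gc-safe-point ()
  "Block until every cooperating thread has reached a GC-safe point."
  (with-mutex my-gc-mutex
    (setq my-gc-waiting (1+ my-gc-waiting))
    (if (< my-gc-waiting my-gc-threads)
        ;; Not everyone is here yet; wait (the mutex is released while
        ;; waiting and re-acquired on wake-up).
        (while (> my-gc-waiting 0)
          (condition-wait my-gc-cond))
      ;; Last thread to arrive: this is where GC itself would run.
      (setq my-gc-waiting 0)
      (condition-notify my-gc-cond t))))

A reusable barrier would additionally need a generation counter; this
single-shot version only illustrates the "wait until everyone reaches a
safe point" step.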

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-06 17:50                                     ` Eli Zaretskii
@ 2023-07-07 12:30                                       ` Ihor Radchenko
  2023-07-07 13:34                                         ` Eli Zaretskii
  2023-07-07 13:35                                         ` Po Lu
  0 siblings, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-07 12:30 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> So, my efforts did reduce the time spent in allocate_vectorlike.
>> Note, however, that these two datapoints differ more than just by how
>> memory is allocated.
>> 
>> But 5% CPU time spent allocating memory is not insignificant.
>
> Once again, it isn't necessarily memory allocation per se.  For
> example, it could be find_suspicious_object_in_range, called from
> allocate_vectorlike.

I did not have ENABLE_CHECKING in this benchmark.
It is just ./configure --with-native-compilation.
So, find_suspicious_object_in_range should not run at all.

>> Sure. Though my argument was less about how long Emacs spends allocating
>> memory and more about how frequently typical Elisp code requests such
>> allocations. I have a gut feeling that, even if each allocation takes
>> little time, frequent interrupts may create intermittent typing delays.
>
> I very much doubt these interrupts are because Emacs waits for memory
> allocation.

I guess we can be optimistic. And if not, maybe we need to have multiple heaps.

>> If a thread is working with a temporary buffer and locks it, that
>> buffer has almost 0 chance to be accessed by another thread.
>
> But "working on a buffer" requires access and modification of many
> global structures.  Just walk the code in set-buffer and its
> subroutines, and you will see that.

I was only able to identify the following:

interrupt_input_blocked
current_buffer
last_known_column_point

AFAIU, current_buffer might be made thread-local and
last_known_column_point can be made buffer-local.

interrupt_input_blocked is more tricky. But it is just one global state
variable. Surely we can find a solution to make it work with multiple threads.

>> Same with variables - even if some global variable needs to be locked,
>> it is unlikely that it will need to be accessed by another thread.
>
> I think you misunderstand the frequency of such collisions.
> case-fold-search comes to mind.

How so? What is the problem with a buffer-local variable that is rarely
set directly (other than by major modes)? let-binding is common, but it
is not a problem.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07  0:41                                     ` Po Lu
@ 2023-07-07 12:42                                       ` Ihor Radchenko
  2023-07-07 13:31                                         ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-07 12:42 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

> Ihor Radchenko <yantar92@posteo.net> writes:
>
>> Same with variables - even if some global variable needs to be locked,
>> it is unlikely that it will need to be accessed by another thread.
>
> I doubt symbol value cells will need to be individually interlocked.
> Emacs Lisp code will need to insert the appropriate instructions to
> prevent the CPU from undertaking optimizations on load and store
> operations on those cells when necessary. ...

I had a different simple scenario in mind:

(let ((oldval case-fold-search))
  (unwind-protect
    (progn (setq case-fold-search nil)
      ;; We surely do not want the value of `case-fold-search' to be
      ;; changed in the middle of `expensive-computation'.
      (expensive-computation))
    (setq case-fold-search oldval)))

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07  0:27                                 ` Po Lu
@ 2023-07-07 12:45                                   ` Ihor Radchenko
  0 siblings, 0 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-07 12:45 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

> Ihor Radchenko <yantar92@posteo.net> writes:
>
>>      3.37%  emacs         emacs                                  [.] allocate_vectorlike
>>      2.90%  emacs         emacs                                  [.] buf_charpos_to_bytepos
>>      2.82%  emacs         emacs                                  [.] find_interval
>
> Out of all those functions, I think only these three will require some
> form of interlocking.  So assuming that the Org parser is being run
> concurrently, less than 10% of it will be unable to run simultaneously.

No, the whole buffer will need to be locked against modification for the
duration of the parsing. Otherwise, the parser's state info about AST
buffer positions will be broken.

But my point was not about the details of how the Org parser works. I just
wanted to show that memory allocation is not necessarily negligible.
Is it slow enough to block the interlocking idea? Maybe, maybe not.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 12:04                                                   ` Ihor Radchenko
@ 2023-07-07 13:16                                                     ` Eli Zaretskii
  2023-07-07 14:29                                                       ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-07 13:16 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Fri, 07 Jul 2023 12:04:36 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> > GC doesn't only free memory used by dead objects.  It also performs
> > bookkeeping on live objects: compacts data of strings, relocates text
> > of buffers, compacts the gap in buffers where it became too large,
> > etc.  This bookkeeping is more important when Emacs is short on
> > memory: in those cases these bookkeeping tasks might mean the
> > difference between being able to keep the session healthy enough to
> > allow the user to shut down in an orderly fashion, and not being able to.
> 
> What you are describing will only affect subr primitives that work
> directly with C structs and address space.

But that's how _everything_ works in Emacs.  No Lisp runs except by
calling primitives.

> So, we can distinguish two locks: (1) low-level, only available to C
> subroutines; (2) Elisp-level, where the lock merely prevents other Elisp
> code from modifying the data. GC is safe to run when type-2 lock is in
> place as it will never clear the data in use and never alter the data in
> any way visible on Elisp level.

Emacs doesn't know whether some C code which runs was invoked from C
or from Lisp.  (Basically, everything is invoked from Lisp, one way or
another, as soon as we call recursive-edit from 'main' for the first
time after startup.)

> > Locking objects means these bookkeeping tasks will be disabled.  That
> > could adversely affect the available memory and the memory footprint
> > in general.
> 
> I do not think that it is that bad if we consider type-1 locks.

There are no type-1 and type-2 locks.  They are indistinguishable.

> Let's say the current thread is thread 2, paused because thread 1
> is doing (setq i ...) at the same time and has locked the object
> corresponding to the obarray value slot for "i".
> 
> Thread 1 will continue executing until (very soon) it calls maybe_gc
> itself. This time, no object lock is active and GC may proceed,
> resuming both threads once GC is done.

You are trying to solve what constitutes a very small, almost
negligible, part of the problem.  The elephant in the room is
something else.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 12:42                                       ` Ihor Radchenko
@ 2023-07-07 13:31                                         ` Po Lu
  0 siblings, 0 replies; 192+ messages in thread
From: Po Lu @ 2023-07-07 13:31 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> Po Lu <luangruo@yahoo.com> writes:
>
>> Ihor Radchenko <yantar92@posteo.net> writes:
>>
>>> Same with variables - even if some global variable needs to be locked,
>>> it is unlikely that it will need to be accessed by another thread.
>>
>> I doubt symbol value cells will need to be individually interlocked.
>> Emacs Lisp code will need to insert the appropriate instructions to
>> prevent the CPU from undertaking optimizations on load and store
>> operations on those cells when necessary. ...
>
> I had a different simple scenario in mind:
>
> (let ((oldval case-fold-search))
>   (unwind-protect
>     (progn (setq case-fold-search nil)
>       ;; We surely do not want the value of `case-fold-search' to be
>       ;; changed in the middle of `expensive-computation'.
>       (expensive-computation))
>     (setq case-fold-search oldval)))

That's easy: case-fold-search can be made local to each thread.
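
As I understand the current cooperative implementation, dynamic `let'
bindings are already private to the thread that creates them, so the effect
Po Lu describes can be sketched with today's primitives (using a made-up
plain global here, since case-fold-search is additionally buffer-local):

(defvar my-fold-case t)                 ; stand-in for `case-fold-search'

(make-thread
 (lambda ()
   ;; As I understand it, this binding is not visible to other threads;
   ;; they keep seeing the global value t.
   (let ((my-fold-case nil))
     (thread-yield)
     my-fold-case))
 "case-sensitive-worker")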



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 12:30                                       ` Ihor Radchenko
@ 2023-07-07 13:34                                         ` Eli Zaretskii
  2023-07-07 15:17                                           ` Ihor Radchenko
  2023-07-07 13:35                                         ` Po Lu
  1 sibling, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-07 13:34 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Fri, 07 Jul 2023 12:30:16 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> But 5% CPU time spent allocating memory is not insignificant.
> >
> > Once again, it isn't necessarily memory allocation per se.  For
> > example, it could be find_suspicious_object_in_range, called from
> > allocate_vectorlike.
> 
> I did not have ENABLE_CHECKING in this benchmark.
> It is just ./configure --with-native-compilation
> So, find_suspicious_object_in_range should not run at all.

Then maybe you should invest some serious time looking into this and
figuring out why this happens.  Although in my book 5% of run time or
even 10% of run time is not the first place where I'd look for
optimizations.

> maybe we need to have multiple heaps.

All modern implementation of malloc already do use several different
heaps internally.

> >> If a thread is working with a temporary buffer and locks it, that
> >> buffer has almost 0 chance to be accessed by another thread.
> >
> > But "working on a buffer" requires access and modification of many
> > global structures.  Just walk the code in set-buffer and its
> > subroutines, and you will see that.
> 
> I was only able to identify the following:
> 
> interrupt_input_blocked
> current_buffer
> last_known_column_point

There are much more:

  buffer-alist
  buffer's base-buffer
  buffer's undo-list
  buffer's point and begv/zv markers
  buffer's marker list
  buffer's local variables

(Where the above says "buffer's", it means the buffer that was current
before set-buffer.)

> AFAIU, current_buffer might be made thread-local and
> last_known_column_point can be made buffer-local.

The current buffer is already thread-local.

> interrupt_input_blocked is more tricky. But it is just one global state
> variable. Surely we can find a solution to make it work with multiple threads.

Yes, but we have just looked at a single primitive: set-buffer.  Once
in the buffer, any useful Lisp program will do gobs of stuff, and each
one of those accesses more and more globals.  How do you protect all
that in a 100% reliable way? by identifying the variables and
structures one by one? what if tomorrow some change in Emacs adds one
more?

> >> Same with variables - even if some global variable needs to be locked,
> >> it is unlikely that it will need to be accessed by another thread.
> >
> > I think you misunderstand the frequency of such collisions.
> > case-fold-search comes to mind.
> 
> How so? What is the problem with a buffer-local variable that is rarely
> set directly (other than by major modes)? let-binding is common, but it
> is not a problem.

Searching for "setq case-fold-search" finds more than 30 hits in Emacs
alone.  And this variable is just an example.

Like I said: your mental model of the Emacs global state is too
optimistic.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 12:30                                       ` Ihor Radchenko
  2023-07-07 13:34                                         ` Eli Zaretskii
@ 2023-07-07 13:35                                         ` Po Lu
  2023-07-07 15:31                                           ` Ihor Radchenko
  1 sibling, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-07 13:35 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> AFAIU, current_buffer might be made thread-local and

It _is_ thread local.

> interrupt_input_blocked is more tricky. But it is just one global state
> variable. Surely we can find a solution to make it work with multiple threads.

interrupt_input_blocked should only be relevant to the thread reading
input, i.e. the main thread.

The problem with buffer modification lies in two threads writing to the
same buffer at the same time, changing not only the buffer text, but
also PT, ZV, GPT, its markers and its overlay lists.  No two threads may
be allowed to modify the buffer simultaneously.
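
At the Lisp level, that "one writer at a time" discipline can already be
sketched with a per-buffer mutex; this is only an illustration, not a
proposal for the C-level interlocking, and all names are made up:

(defvar my-log-buffer (generate-new-buffer " *my-log*"))
(defvar my-log-mutex (make-mutex "my-log"))

(defun my-log-line (text)
  "Append TEXT to `my-log-buffer', serializing writers with a mutex."
  (with-mutex my-log-mutex
    (with-current-buffer my-log-buffer
      (goto-char (point-max))
      (insert text "\n"))))

(make-thread (lambda () (my-log-line "from thread A")))
(make-thread (lambda () (my-log-line "from thread B")))

With today's cooperative threads the two writers cannot actually interleave
inside `insert', but the mutex marks where exclusion would have to sit in a
truly concurrent Emacs.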



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 13:16                                                     ` Eli Zaretskii
@ 2023-07-07 14:29                                                       ` Ihor Radchenko
  2023-07-07 14:47                                                         ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-07 14:29 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> What you are describing will only affect subr primitives that work
>> directly with C structs and address space.
>
> But that's how _everything_ works in Emacs.  No Lisp runs except by
> calling primitives.

Looks like we have some misunderstanding here, because I do not clearly
see how this kind of catch-all argument can be obvious.

>> So, we can distinguish two locks: (1) low-level, only available to C
>> subroutines; (2) Elisp-level, where the lock merely prevents other Elisp
>> code from modifying the data. GC is safe to run when type-2 lock is in
>> place as it will never clear the data in use and never alter the data in
>> any way visible on Elisp level.
>
> Emacs doesn't know whether some C code which runs was invoked from C
> or from Lisp.  (Basically, everything is invoked from Lisp, one way or
> another, as soon as we call recursive-edit from 'main' for the first
> time after startup.)

Let me elaborate.

What GC is doing may affect C pointers to internal representations of
Elisp objects. But never the Lisp representations.

So, GC running only matters during a subroutine execution. And not every
subroutine - just for a subset where we directly work with internal
object structs.

The subroutines that are GC-sensitive will need to set and release the
object lock before/after they are done working with that object. That
object lock type will be set in C code directly and will not be
available from Elisp.

This approach will, in the worst case, delay the GC by N_threads *
time_between_maybe_gc_calls_in_code. This is a rather small price to pay
in my book. GC runs much less often (by orders of magnitude) than
maybe_gc is called.

>> I do not think that it is that bad if we consider type-1 locks.
>
> There are no type-1 and type-2 locks.  They are indistinguishable.

I suggest creating two distinguishable locks - one that prevents GC and
one that does not. Type-1 will only ever be set from some subset of
C subroutines and will generally not be held for long. Type-2 can be
set from Elisp code and can be held for a long time, but will not prevent
GC.
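
For illustration, a type-2 (Elisp-level) lock would behave much like the
mutexes that already exist for the cooperative Lisp threads: cooperating
Elisp code serializes its access to the data, while GC remains free to
run at any time. A minimal sketch (the `my/' names are made up for the
example):

  (defvar my/table-lock (make-mutex "my-table-lock"))
  (defvar my/table (make-hash-table :test #'equal))

  (defun my/table-put (key value)
    "Update the shared table; other Lisp threads wait, but GC may still run."
    (with-mutex my/table-lock
      (puthash key value my/table)))

The type-1 lock would have no Elisp counterpart at all; it would exist
only inside the GC-sensitive C subroutines.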

> You are trying to solve what constitutes a very small, almost
> negligible, part of the problem.  The elephant in the room is
> something else.

Ok. Please describe the elephant in detail. Then we will be able to
focus on this real big problem.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 14:29                                                       ` Ihor Radchenko
@ 2023-07-07 14:47                                                         ` Eli Zaretskii
  2023-07-07 15:21                                                           ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-07 14:47 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Fri, 07 Jul 2023 14:29:48 +0000
> 
> What GC is doing may affect C pointers to internal representations of
> Elisp objects. But never the Lisp representations.

What do you mean by "Lisp representation"?

> So, GC running only matters during a subroutine execution. And not every
> subroutine - just for a subset where we directly work with internal
> object structs.

We always do work with the internal object structs.

> The subroutines that are GC-sensitive will need to set and release the
> object lock before/after they are done working with that object. That
> object lock type will be set in C code directly and will not be
> available from Elisp.

You are describing something that is not Emacs.

> > You are trying to solve what constitutes a very small, almost
> > negligible, part of the problem.  The elephant in the room is
> > something else.
> 
> Ok. Please, describe the elephant in details.

I already did: it's the huge global state used implicitly by every
Lisp program out there.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 13:34                                         ` Eli Zaretskii
@ 2023-07-07 15:17                                           ` Ihor Radchenko
  2023-07-07 19:31                                             ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-07 15:17 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> maybe need to have multiple heaps.
>
> All modern implementation of malloc already do use several different
> heaps internally.

I was talking about Elisp heaps.
AFAIU, Elisp memory management is conceptually single-threaded.

>> I was only able to identify the following:
>> 
>> interrupt_input_blocked
>> current_buffer
>> last_known_column_point
>
> There are much more:
>
>   buffer-alist

I do not see why it would be a problem to lock/unlock this variable.

>   buffer's base-buffer

I do not see it. Could you point me to where this is changed?

>   buffer's undo-list

That's just a synchronization between old_buffer and
old_buffer->base_buffer.
I am not 100% sure why it is necessary to be done this way and manually
instead of making undo-list values in indirect buffers point to base
buffer.

>   buffer's point and begv/zv markers

AFAIU, these store the last point position and narrowing state.
I do not see much problem here, except a need to lock these variables
while writing them. They will not affect PT, BEGV, and ZV in other
threads, even if those operate on the same buffer now.

>   buffer's marker list

Could you point me to where it is?

>   buffer's local variables

I admit that I do not understand what the following comment is talking
about:

  /* Look down buffer's list of local Lisp variables
     to find and update any that forward into C variables.  */

>> AFAIU, current_buffer might be made thread-local and
>> last_known_column_point can be made buffer-local.
>
> The current buffer is already thread-local.

Thanks for the pointer. I did not expect to find:
#define current_buffer (current_thread->m_current_buffer)

>> interrupt_input_blocked is more tricky. But it is just one global state
>> variable. Surely we can find a solution to make it work with multiple threads.
>
> Yes, but we have just looked at a single primitive: set-buffer.  Once
> in the buffer, any useful Lisp program will do gobs of stuff, and each
> one of those accesses more and more globals.  How do you protect all
> that in a 100% reliable way? by identifying the variables and
> structures one by one? what if tomorrow some change in Emacs adds one
> more?

This sounds like a problem that is already solved by any program that
uses async threads. Maybe Po Lu can provide good insights.

>> > I think you misunderstand the frequency of such collisions.
>> > case-fold-search comes to mind.
>> 
>> How so? What is the problem with a buffer-local variable that is rarely
>> set directly (other than by major modes)? let-binding is common, but it
>> is not a problem.
>
> Searching for "setq case-fold-search" finds more than 30 hits in Emacs
> alone.  And this variable is just an example.

These are mostly major/minor modes and constructs like

(setq case-fold-search ...)
(do stuff)
(setq case-fold-search old-value)

The last one is a legitimate problem with logic. Although not much
different from Elisp threads doing the same and changing buffer-local
variables in the midst of another thread running in the same buffer.
So, I do not see how this should prevent async threads. We just need to
take care that two threads do not setq the same buffer-local variable at
the same time, which is solved by locking.
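
To illustrate the difference (just an example, not code from any
package): the `let' form is undone automatically, and with the existing
cooperative threads the binding is unwound and re-wound around thread
switches, so the override stays with the thread that made it; the
setq/restore form leaves the changed value visible to everything that
runs in the buffer until the manual restore happens:

  ;; Binding form: automatically undone, per-thread with today's threads.
  (let ((case-fold-search nil))
    (re-search-forward "TODO" nil t))

  ;; Save-and-restore form: the temporary value stays in effect in the
  ;; buffer until the restore runs, and the restore itself can race with
  ;; another thread doing the same thing.
  (let ((old case-fold-search))
    (setq case-fold-search nil)
    (unwind-protect
        (re-search-forward "TODO" nil t)
      (setq case-fold-search old)))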

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 14:47                                                         ` Eli Zaretskii
@ 2023-07-07 15:21                                                           ` Ihor Radchenko
  2023-07-07 18:04                                                             ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-07 15:21 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> What GC is doing may affect C pointers to internal representations of
>> Elisp objects. But never the Lisp representations.
>
> What do you mean by "Lisp representation"?

We are clearly misunderstanding each other.
Could you please provide a concrete example of how running GC breaks
things?

>> Ok. Please, describe the elephant in details.
>
> I already did: it's the huge global state used implicitly by every
> Lisp program out there.

I am interested in specific details. The way you say it now is general
and sounds like "it is impossible and there is no point trying to do
anything".

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 13:35                                         ` Po Lu
@ 2023-07-07 15:31                                           ` Ihor Radchenko
  2023-07-08  0:44                                             ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-07 15:31 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

>> interrupt_input_blocked is more tricky. But it is just one global state
>> variable. Surely we can find a solution to make it work with multiple threads.
>
> interrupt_input_blocked should only be relevant to the thread reading
> input, i.e. the main thread.

So, can we hold off changing interrupt_input_blocked until the thread
becomes the main thread?

> The problem with buffer modification lies in two threads writing to the
> same buffer at the same time, changing not only the buffer text, but
> also PT, ZV, GPT, its markers and its overlay lists.  No two threads may
> be allowed to modify the buffer simultaneously.

Hmm. What about locking thread->current_buffer for all other threads?
This appears to solve the majority of the discussed problems.

I am not sure how to deal with buffer->pt for multiple threads running
in the same buffer.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 15:21                                                           ` Ihor Radchenko
@ 2023-07-07 18:04                                                             ` Eli Zaretskii
  2023-07-07 18:24                                                               ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-07 18:04 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Fri, 07 Jul 2023 15:21:04 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> What GC is doing may affect C pointers to internal representations of
> >> Elisp objects. But never the Lisp representations.
> >
> > What do you mean by "Lisp representation"?
> 
> We clearly misunderstand each other.
> May you please provide a concrete example how running GC breaks things?

I already did: see my description of compacting strings, relocating
buffer text, and other stuff GC does.

> >> Ok. Please, describe the elephant in details.
> >
> > I already did: it's the huge global state used implicitly by every
> > Lisp program out there.
> 
> I am interested in specific details. The way you say it now is general
> and sounds like "it is impossible and there is no point trying to do
> anything".

That's indeed what I want to say.  You cannot have true concurrency as
long as Emacs Lisp programs use this huge global state.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 18:04                                                             ` Eli Zaretskii
@ 2023-07-07 18:24                                                               ` Ihor Radchenko
  2023-07-07 19:36                                                                 ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-07 18:24 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> May you please provide a concrete example how running GC breaks things?
>
> I already did: see my description of compacting strings, relocating
> buffer text, and other stuff GC does.

But how exactly can, say, relocating buffer text break things? Could you
provide a pseudo-code example?

>> I am interested in specific details. The way you say it now is general
>> and sounds like "it is impossible and there is no point trying to do
>> anything".
>
> That's indeed what I want to say.  You cannot have true concurrency as
> long as Emacs Lisp programs use this huge global state.

Then the question is: can the global state be reduced?
Might it be acceptable to have limited concurrency where async threads
are only allowed to modify a portion of the global state?

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 15:17                                           ` Ihor Radchenko
@ 2023-07-07 19:31                                             ` Eli Zaretskii
  2023-07-07 20:01                                               ` Ihor Radchenko
  2023-07-08  0:51                                               ` Po Lu
  0 siblings, 2 replies; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-07 19:31 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Fri, 07 Jul 2023 15:17:23 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> maybe need to have multiple heaps.
> >
> > All modern implementation of malloc already do use several different
> > heaps internally.
> 
> I was talking about Elisp heaps.

I don't understand what you mean by "Elisp heaps".  Emacs allocates
memory by using the system's memory-allocation routines.  We don't
have our own heaps.

> AFAIU, Elisp memory management is conceptually single-threaded.

Again, I don't understand why you say that and what you mean by that.
We just call malloc, and malloc on modern systems is thread-safe.

> > There are much more:
> >
> >   buffer-alist
> 
> I do not see how this is a problem to lock/unlock this variable.

So your solution to each such problem is to lock variables?  If so,
you will end up locking a lot of them, and how is this different from
using the global lock we do today with Lisp threads?

> >   buffer's base-buffer
> 
> I do not see it. May you point me to where this is changed?

See set_buffer_internal_2.

How do you investigate this stuff?  I type M-. on every macro and
function call I see, recursively, and look what they do.  If you do
the same, how come you overlook all these details?  And if you do not
use M-., how do you expect to learn what the code does internally?

> >   buffer's undo-list
> 
> That's just a synchronization between old_buffer and
> old_buffer->base_buffer.
> I am not 100% sure why it is necessary to be done this way and manually
> instead of making undo-list values in indirect buffers point to base
> buffer.

So you are now saying that code which worked in Emacs for decades does
unnecessary stuff, and should be removed or ignored?

How is it useful, in the context of this discussion, to take such a
stance?  IMO, we should assume that whatever the current code does it
does for a reason, and look at the effects of concurrency on the code
as it is.

> >   buffer's point and begv/zv markers
> 
> AFAIU, these store the last point position and narrowing state.
> I do not see much problem here, except a need to lock these variables
> while writing them. They will not affect PT, BEGZ, and ZV in other
> threads, even if those operate on the same buffer now.

Oh, yes, they will: see fetch_buffer_markers, called by
set_buffer_internal_2.

> >   buffer's marker list
> 
> May you point me where it is?

In fetch_buffer_markers.  Again, I don't understand how you missed
that.

> >   buffer's local variables
> 
> I admit that I do not understand what the following comment is talking
> about:
> 
>   /* Look down buffer's list of local Lisp variables
>      to find and update any that forward into C variables.  */

The C code accesses some buffer-local variables via Vfoo_bar C
variables.  Those need to be updated when the current buffer changes.

> > Yes, but we have just looked at a single primitive: set-buffer.  Once
> > in the buffer, any useful Lisp program will do gobs of stuff, and each
> > one of those accesses more and more globals.  How do you protect all
> > that in a 100% reliable way? by identifying the variables and
> > structures one by one? what if tomorrow some change in Emacs adds one
> > more?
> 
> This sounds like a problem that is already solved by any program that
> uses async threads. Maybe Po Lu can provide good insights.

Programs that use async threads avoid global variables like the
plague.  Emacs is full of them.

> > Searching for "setq case-fold-search" finds more than 30 hits in Emacs
> > alone.  And this variable is just an example.
> 
> These are mostly major/minor modes and constructs like
> 
> (setq case-fold-search ...)
> (do stuff)
> (setq case-fold-search old-value)
> 
> The last one is legitimate problem with logic. Although not much
> different from Elisp threads doing the same and changing buffer-local
> variables in the midst of other thread running in the same buffer.
> So, I do not see how this should prevent async threads. We just need to
> take care about setq not attempting to write into buffer-local variables
> at the same time, which is solved by locking.

That "we just need to" is the problem, because it is multiplied by the
number of such variables, and they are a lot in Emacs.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 18:24                                                               ` Ihor Radchenko
@ 2023-07-07 19:36                                                                 ` Eli Zaretskii
  2023-07-07 20:05                                                                   ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-07 19:36 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Fri, 07 Jul 2023 18:24:04 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> May you please provide a concrete example how running GC breaks things?
> >
> > I already did: see my description of compacting strings, relocating
> > buffer text, and other stuff GC does.
> 
> But how exactly does, say, relocating buffer text can break things? May
> you provide a pseudo-code example?

If by "lock the buffer" you mean that buffer text cannot be relocated,
then you hurt GC and eventually the Emacs memory footprint.  If you
consider allowing relocation when the buffer is locked, then some of
the threads will be surprised when they try to resume accessing the
buffer text.

> >> I am interested in specific details. The way you say it now is general
> >> and sounds like "it is impossible and there is no point trying to do
> >> anything".
> >
> > That's indeed what I want to say.  You cannot have true concurrency as
> > long as Emacs Lisp programs use this huge global state.
> 
> Then the question is: can the global state be reduced?

By what measures?  Please suggest something concrete here.

> May it be acceptable to have a limited concurrency where async threads
> are only allowed to modify a portion of the global state?

I don't see how one can write a useful Lisp program with such
restrictions.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 19:31                                             ` Eli Zaretskii
@ 2023-07-07 20:01                                               ` Ihor Radchenko
  2023-07-08  6:50                                                 ` Eli Zaretskii
  2023-07-08  0:51                                               ` Po Lu
  1 sibling, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-07 20:01 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> I was talking about Elisp heaps.
>
> I don't understand what you mean by "Elisp heaps".  Emacs allocates
> memory by using the system's memory-allocation routines.  We don't
> have our own heaps.
>
>> AFAIU, Elisp memory management is conceptually single-threaded.
>
> Again, I don't understand why you say that and what you mean by that.
> We just call malloc, and malloc on modern systems is thread-safe.

Does it mean that we can safely call, for example, Fcons asynchronously?

(I am saying the above because my understanding is limited, hoping that
you can give some pointers when I happen to be wrong.)

>> >   buffer-alist
>> 
>> I do not see how this is a problem to lock/unlock this variable.
>
> So your solution to each such problem is to lock variables?  If so,
> you will end up locking a lot of them, and how is this different from
> using the global lock we do today with Lisp threads?

The idea is to prevent simultaneous writes, which will only hold the
lock for a small fraction of the time.

It is not always sufficient, of course. When the code expects the value
to remain unchanged, the lock will have to be held much longer, and may
eventually lead to global locking.
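
To make the distinction concrete, here is a small sketch with the mutex
API of the existing cooperative threads (all the `my/' names are invented
for the example):

  (defvar my/alist-lock (make-mutex "alist-lock"))
  (defvar my/buffer-alist nil)

  ;; A pure write needs the lock only for the duration of the update:
  (defun my/register-buffer (name buf)
    (with-mutex my/alist-lock
      (push (cons name buf) my/buffer-alist)))

  ;; But code that needs the alist to stay unchanged while it works has
  ;; to hold the lock for the whole operation - this is where long waits
  ;; and, in the limit, something close to a global lock come from:
  (defun my/with-registered-buffer (name fn)
    (with-mutex my/alist-lock
      (let ((buf (cdr (assoc name my/buffer-alist))))
        (when buf (funcall fn buf)))))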

>> >   buffer's base-buffer
>> 
>> I do not see it. May you point me to where this is changed?
>
> See set_buffer_internal_2.
>
> How do you investigate this stuff?  I type M-. on every macro and
> function call I see, recursively, and look what they do.  If you do
> the same, how come you overlook all these details?  And if you do not
> use M-., how do you expect to learn what the code does internally?

Yes, I use M-. and C-x p g. And I do follow the code. But it does not
mean that I fully understand it, sorry.

And I still fail to see where base-buffer is _changed_. Is base buffer
ever supposed to be changed?

>> >   buffer's undo-list
>> 
>> That's just a synchronization between old_buffer and
>> old_buffer->base_buffer.
>> I am not 100% sure why it is necessary to be done this way and manually
>> instead of making undo-list values in indirect buffers point to base
>> buffer.
>
> So you are now saying that code which worked in Emacs for decades does
> unnecessary stuff, and should be removed or ignored?

No, I am saying that the current logic of updating the undo-list will not work
when multiple async threads are involved. It will no longer be safe to
assume that we can safely update undo-list right before/after switching
current_buffer.

So, I asked if an alternative approach could be used instead.

>> I admit that I do not understand what the following comment is talking
>> about:
>> 
>>   /* Look down buffer's list of local Lisp variables
>>      to find and update any that forward into C variables.  */
>
> The C code accesses some buffer-local variables via Vfoo_bar C
> variables.  Those need to be updated when the current buffer changes.

Now that you have explained this, it is also a big problem. Such C
variables are global state that needs to be kept up to date. Async will
break the existing logic of these updates.

>> >   buffer's point and begv/zv markers
>> 
>> AFAIU, these store the last point position and narrowing state.
>> I do not see much problem here, except a need to lock these variables
>> while writing them. They will not affect PT, BEGV, and ZV in other
>> threads, even if those operate on the same buffer now.
>
> Oh, yes, they will: see fetch_buffer_markers, called by
> set_buffer_internal_2.

Do you mean that in the existing cooperative Elisp threads, if one
thread moves the point and yields to another thread, the other thread
will be left with point in the same position (arbitrary, from the point
of view of this other thread)?

>> >   buffer's marker list
>> 
>> May you point me where it is?
>
> In fetch_buffer_markers.  Again, I don't understand how you missed
> that.

Is that the buffer's marker list? I thought you were referring to
BUF_MARKERS, not to PT, BEGV, and ZV.

>> This sounds like a problem that is already solved by any program that
>> uses async threads. Maybe Po Lu can provide good insights.
>
> Programs that use async threads avoid global variables like the
> plague.  Emacs is full of them.

Fair. I still hope that Po Lu can comment on this.

(Do note that my original proposal is not about "true" async threads,
and it is Po Lu who was pushing for "proper interlocking". But the
current discussion is still helpful either way.)

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 19:36                                                                 ` Eli Zaretskii
@ 2023-07-07 20:05                                                                   ` Ihor Radchenko
  2023-07-08  7:05                                                                     ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-07 20:05 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> But how exactly does, say, relocating buffer text can break things? May
>> you provide a pseudo-code example?
>
> ...  If you
> consider allowing relocation when the buffer is locked, then some of
> the threads will be surprised when they try to resume accessing the
> buffer text.

Can you please provide an example about "surprised"? Do you mean that
buffer->pt will no longer be accurate? Something else?

>> Then the question is: can the global state be reduced?
>
> By what measures?  Please suggest something concrete here.

By transforming some of the global state variables into thread-local
variables.

>> May it be acceptable to have a limited concurrency where async threads
>> are only allowed to modify a portion of the global state?
>
> I don't see how one can write a useful Lisp program with such
> restrictions.

Pure, side-effect-free functions, for example.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 15:31                                           ` Ihor Radchenko
@ 2023-07-08  0:44                                             ` Po Lu
  2023-07-08  4:29                                               ` tomas
  2023-07-08  7:21                                               ` Eli Zaretskii
  0 siblings, 2 replies; 192+ messages in thread
From: Po Lu @ 2023-07-08  0:44 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> So, we can hold when interrupt_input_blocked is changed until the thread
> becomes main thread?

IMHO there should only be a single main thread processing input and
display, since that's required by most GUI toolkits.

> Hmm. What about locking thread->current_buffer for all other threads?
> This appears to solve the majority of the discussed problems.

If you're referring to prohibiting two threads from sharing the same
current_buffer, I doubt that's necessary.

> I am not sure how to deal with buffer->pt for multiple threads running
> in the same buffer.

C functions which modify the buffer should be interlocked to prevent
them from running simultaneously with other modifications to the same
buffer.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 19:31                                             ` Eli Zaretskii
  2023-07-07 20:01                                               ` Ihor Radchenko
@ 2023-07-08  0:51                                               ` Po Lu
  2023-07-08  4:18                                                 ` tomas
  2023-07-08  6:25                                                 ` Eli Zaretskii
  1 sibling, 2 replies; 192+ messages in thread
From: Po Lu @ 2023-07-08  0:51 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: Ihor Radchenko, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

> Programs that use async threads avoid global variables like the
> plague.  Emacs is full of them.

That's not true.  Look at any modern Unix kernel, and their detailed
locking around traditional Unix data structures, such as allproc, the
run queue, the vnode cache, and et cetera.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08  0:51                                               ` Po Lu
@ 2023-07-08  4:18                                                 ` tomas
  2023-07-08  5:51                                                   ` Po Lu
  2023-07-08  6:25                                                 ` Eli Zaretskii
  1 sibling, 1 reply; 192+ messages in thread
From: tomas @ 2023-07-08  4:18 UTC (permalink / raw)
  To: emacs-devel

On Sat, Jul 08, 2023 at 08:51:48AM +0800, Po Lu wrote:
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> > Programs that use async threads avoid global variables like the
> > plague.  Emacs is full of them.
> 
> That's not true.  Look at any modern Unix kernel, and their detailed
> locking around traditional Unix data structures, such as allproc, the
> run queue, the vnode cache, and et cetera.

The tendency, though, seems to be to avoid interlocking as much as possible
and use "transactional" data structures [1]. Which is an order of magnitude
more "interesting" :-)

But this is a kernel. I have the impression that this discussion has exploded
in scope, from taking the blocking out of "long" (network, external procs)
waits to fine-grained parallelism and multithreading.

I think the first makes sense in Emacs, the second... not so much. But that's
just one random opinion :-)

Cheers

[1] https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html
-- 
t


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08  0:44                                             ` Po Lu
@ 2023-07-08  4:29                                               ` tomas
  2023-07-08  7:21                                               ` Eli Zaretskii
  1 sibling, 0 replies; 192+ messages in thread
From: tomas @ 2023-07-08  4:29 UTC (permalink / raw)
  To: Po Lu; +Cc: Ihor Radchenko, Eli Zaretskii, emacs-devel

On Sat, Jul 08, 2023 at 08:44:17AM +0800, Po Lu wrote:
> Ihor Radchenko <yantar92@posteo.net> writes:
> 
> > So, we can hold when interrupt_input_blocked is changed until the thread
> > becomes main thread?
> 
> IMHO there should only be a single main thread processing input and
> display, since that's required by most GUI toolkits.

This is a hard lesson Sun had to learn while developing the GUI toolkits
for the Java platform. After trying hard to have a multithreaded GUI
(Swing, if I remember correctly [1]) they had to back out due to lots of
hard-to-chase bugs.

Cheers

[1] They had a strong motivation, since their hardware (remember Niagara?)
   was going the massive-parallel path
-- 
t


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08  4:18                                                 ` tomas
@ 2023-07-08  5:51                                                   ` Po Lu
  2023-07-08  6:01                                                     ` tomas
  0 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-08  5:51 UTC (permalink / raw)
  To: tomas; +Cc: emacs-devel

<tomas@tuxteam.de> writes:

> The tendency, though, seems to be to avoid interlocking as much as possible
> and use "transactional" data structures [1]. Which is an order of magnitude
> more "interesting" :-)

They are unfortunately not relevant to Emacs, since their use pertains
to programs that were designed to run in multiple processor environments
almost from the very start.  On the other hand, Unix is a large and
ancient program with vast amounts of global state, that was still
modified to run in SMP environments, making its development experiences
directly relevant to Emacs.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08  5:51                                                   ` Po Lu
@ 2023-07-08  6:01                                                     ` tomas
  2023-07-08 10:02                                                       ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: tomas @ 2023-07-08  6:01 UTC (permalink / raw)
  To: Po Lu; +Cc: emacs-devel

On Sat, Jul 08, 2023 at 01:51:42PM +0800, Po Lu wrote:
> <tomas@tuxteam.de> writes:
> 
> > The tendency, though, seems to be to avoid interlocking as much as possible
> > and use "transactional" data structures [1]. Which is an order of magnitude
> > more "interesting" :-)
> 
> They are unfortunately not relevant to Emacs, since their use pertains
> to programs that were designed to run in multiple processor environments
> almost from the very start.  On the other hand, Unix is a large and
> ancient program with vast amounts of global state, that was still
> modified to run in SMP environments, making its development experiences
> directly relevant to Emacs.

My point, too. Very few user space applications need the techniques
discussed in that book. I'd even venture that very few need heavy
interlocking, but this has been a topic in this thread.

Reducing blocking waits for external stuff (processes, network) would
already be a lofty goal for Emacs, and far more attainable, I think.

Cheers
-- 
t


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08  0:51                                               ` Po Lu
  2023-07-08  4:18                                                 ` tomas
@ 2023-07-08  6:25                                                 ` Eli Zaretskii
  2023-07-08  6:38                                                   ` Ihor Radchenko
  1 sibling, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-08  6:25 UTC (permalink / raw)
  To: Po Lu; +Cc: yantar92, emacs-devel

> From: Po Lu <luangruo@yahoo.com>
> Cc: Ihor Radchenko <yantar92@posteo.net>,  emacs-devel@gnu.org
> Date: Sat, 08 Jul 2023 08:51:48 +0800
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> > Programs that use async threads avoid global variables like the
> > plague.  Emacs is full of them.
> 
> That's not true.  Look at any modern Unix kernel, and their detailed
> locking around traditional Unix data structures, such as allproc, the
> run queue, the vnode cache, and et cetera.

I said "programs", not "OSes".

It _is_ possible to have threads with global variables, but that
requires locking, which punishes performance.  We even do that in
Emacs, with Lisp threads.  I thought this discussion was about less
painful implementations.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08  6:25                                                 ` Eli Zaretskii
@ 2023-07-08  6:38                                                   ` Ihor Radchenko
  2023-07-08  7:45                                                     ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-08  6:38 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: Po Lu, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

> ....  I thought this discussion was about less
> painful implementations.

My idea with an isolated thread is similar to having a bunch of state
variables copied to the thread before executing it. Interlocking will
still be necessary if the isolated thread wants to do anything with the
actual global state (like buffer modification, for example).

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 20:01                                               ` Ihor Radchenko
@ 2023-07-08  6:50                                                 ` Eli Zaretskii
  2023-07-08 11:55                                                   ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-08  6:50 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Fri, 07 Jul 2023 20:01:24 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> AFAIU, Elisp memory management is conceptually single-threaded.
> >
> > Again, I don't understand why you say that and what you mean by that.
> > We just call malloc, and malloc on modern systems is thread-safe.
> 
> Does it mean that we can safely call, for example, Fcons asynchronously?

Not necessarily, because Fcons is not just about allocating memory.

I think we once again have a misunderstanding here, because when you
say "memory allocation" you mean something very different than I do,
which is a call to malloc to get more memory from the system.  It
sounds like you think that Fcons _is_ memory allocation?  But if so,
this terminology is so confusing that it is not useful in a detailed
technical discussion such as this one.  We use the term "consing" to
refer to creation of Lisp objects, which includes memory allocation,
but also other stuff.

In particular, consing modifies memory blocks (already available to
Emacs, so no "memory allocation" per se) used to keep track of live
and dead Lisp objects, and those modifications cannot be concurrently
done by more than one thread, at least in some cases.

> (I am saying the above because my understanding is limited, hoping that
> you can give some pointers when I happen to be wrong.)

I'm happy to give pointers, once I understand pointers to what.
Before I have a chance of understanding that, we need to have a common
terminology, though.

> >> >   buffer-alist
> >> 
> >> I do not see how this is a problem to lock/unlock this variable.
> >
> > So your solution to each such problem is to lock variables?  If so,
> > you will end up locking a lot of them, and how is this different from
> > using the global lock we do today with Lisp threads?
> 
> The idea is to prevent simultaneous write, which will only lock for a
> small fraction of time.

If one thread writes to a data structure, reading from it could also
need to block, or else the reader will risk getting inconsistent data.
So this is not just about simultaneous writing, it's much more
general.

> And I still fail to see where base-buffer is _changed_. Is base buffer
> ever supposed to be changed?

Another thread might change it while this thread examines it.

> >> >   buffer's undo-list
> >> 
> >> That's just a synchronization between old_buffer and
> >> old_buffer->base_buffer.
> >> I am not 100% sure why it is necessary to be done this way and manually
> >> instead of making undo-list values in indirect buffers point to base
> >> buffer.
> >
> > So you are now saying that code which worked in Emacs for decades does
> > unnecessary stuff, and should be removed or ignored?
> 
> No, I am saying that the current logic of updating the undo-list will not work
> when multiple async threads are involved. It will no longer be safe to
> assume that we can safely update undo-list right before/after switching
> current_buffer.
> 
> So, I asked if an alternative approach could be used instead.

Undo records changes in text properties and markers, and those are
different in the indirect buffers from the base buffers.  Does this
explain why we cannot simply point to the base buffer?

If this is clear, then what other approach except locking do you
suggest for that?

> >> I admit that I do not understand what the following comment is talking
> >> about:
> >> 
> >>   /* Look down buffer's list of local Lisp variables
> >>      to find and update any that forward into C variables.  */
> >
> > The C code accesses some buffer-local variables via Vfoo_bar C
> > variables.  Those need to be updated when the current buffer changes.
> 
> Now, when you explained this, it is also a big problem. Such C variables
> are a global state that needs to be kept up to date. Async will break
> the existing logic of these updates.

Exactly.

> >> >   buffer's point and begv/zv markers
> >> 
> >> AFAIU, these store the last point position and narrowing state.
> >> I do not see much problem here, except a need to lock these variables
> >> while writing them. They will not affect PT, BEGV, and ZV in other
> >> threads, even if those operate on the same buffer now.
> >
> > Oh, yes, they will: see fetch_buffer_markers, called by
> > set_buffer_internal_2.
> 
> Do you mean that in the existing cooperative Elisp threads, if one
> thread moves the point and yields to other thread, the other thread will
> be left with point in the same position (arbitrary, from the point of
> view of this other thread)?

That's one problem, yes.  There are others.  Emacs Lisp uses point,
both explicitly and implicitly, all over the place.  It is unthinkable
that a thread should find point somewhere other than where it last
moved it.

> >> >   buffer's marker list
> >> 
> >> May you point me where it is?
> >
> > In fetch_buffer_markers.  Again, I don't understand how you missed
> > that.
> 
> Is it buffer's marker list? I thought that you are referring to
> BUF_MARKERS, not to PT, BEGV, and ZV.

The buffer's marker list is referenced in subroutines of
record_buffer_markers.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-07 20:05                                                                   ` Ihor Radchenko
@ 2023-07-08  7:05                                                                     ` Eli Zaretskii
  2023-07-08 10:53                                                                       ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-08  7:05 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Fri, 07 Jul 2023 20:05:53 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> But how exactly does, say, relocating buffer text can break things? May
> >> you provide a pseudo-code example?
> >
> > ...  If you
> > consider allowing relocation when the buffer is locked, then some of
> > the threads will be surprised when they try to resume accessing the
> > buffer text.
> 
> Can you please provide an example about "surprised"? Do you mean that
> buffer->pt will no longer be accurate? Something else?

Not pt but the pointers to buffer text and the gap.  Those determine
the address of a given buffer position in memory, and are used when a
Lisp program accesses buffer text in any way.  GC can change them if
it decides to relocate buffer text or compact the gap.

> >> Then the question is: can the global state be reduced?
> >
> > By what measures?  Please suggest something concrete here.
> 
> By transforming some of the global state variables into thread-local
> variables.

Which variables can safely and usefully be made thread-local?  I
invite you to look at all the defvar's in the ELisp manual that are
not buffer-local, and consider whether making them thread-local will
make sense, i.e. will still allow you to write useful Lisp programs.
(And if we are thinking about more than one thread working on the same
buffer, then buffer-local variables are also part of this.)

As an exercise, how about finding _one_ variable routinely used in
Lisp programs, which you think can be made thread-local?  Then let's
talk about it.

> >> May it be acceptable to have a limited concurrency where async threads
> >> are only allowed to modify a portion of the global state?
> >
> > I don't see how one can write a useful Lisp program with such
> > restrictions.
> 
> Pure, side-effect-free functions, for example.

I don't see how this could be practically useful.  Besides the basic
question of whether a useful Lisp program can be written in Emacs
using only side-effect-free functions, there's the large body of
subroutines and primitives any Lisp program uses to do its job, and
how do you know which ones of them are side-effect-free or async-safe?
To take just one example which came up in recent discussions, look at
string-pixel-width.  Or even at string-width.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08  0:44                                             ` Po Lu
  2023-07-08  4:29                                               ` tomas
@ 2023-07-08  7:21                                               ` Eli Zaretskii
  2023-07-08  7:48                                                 ` Po Lu
  1 sibling, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-08  7:21 UTC (permalink / raw)
  To: Po Lu; +Cc: yantar92, emacs-devel

> From: Po Lu <luangruo@yahoo.com>
> Cc: Eli Zaretskii <eliz@gnu.org>,  emacs-devel@gnu.org
> Date: Sat, 08 Jul 2023 08:44:17 +0800
> 
> Ihor Radchenko <yantar92@posteo.net> writes:
> 
> > So, we can hold when interrupt_input_blocked is changed until the thread
> > becomes main thread?
> 
> IMHO there should only be a single main thread processing input and
> display, since that's required by most GUI toolkits.

That is already a problem, as long as we are talking about leaving
most of Emacs application code intact.  How do you ensure only the
main thread can process input and display?  A non-main thread can
easily call some function which prompts the user, e.g., with
yes-or-no-p, or force redisplay with sit-for, and what do you do when
that happens?

This is a problem even with the current Lisp threads, and we don't
have a satisfactory solution for it.  (We discussed this a couple of
years back, but didn't arrive at any useful conclusions, AFAIR.)

Personally, I don't believe this aspect can be solved without very
significant redesign of Emacs and -- what's more important -- without
rewriting many Lisp programs to adhere to the new design.  Searching
for yes-or-no-p and y-or-n-p in Emacs alone brings 1500 hits.

> > Hmm. What about locking thread->current_buffer for all other threads?
> > This appears to solve the majority of the discussed problems.
> 
> If you're referring to prohibiting two threads from sharing the same
> current_buffer, I doubt that's necessary.
> 
> > I am not sure how to deal with buffer->pt for multiple threads running
> > in the same buffer.
> 
> C functions which modify the buffer should be interlocked to prevent
> them from running simultaneously with other modifications to the same
> buffer.

That's not enough!  Interlocking will prevent disastrous changes to
the buffer object which risk leaving the buffer in an inconsistent state,
but it cannot prevent one thread from changing point under the feet of
another thread.  Consider this sequence:

  . thread A moves point to position P1
  . thread A yields
  . thread B moves point of the same buffer to position P2
  . thread B yields
  . thread A resumes and performs some processing assuming point is at P1

Without some kind of critical section, invoked on the Lisp level,
whereby moving point and the subsequent processing cannot be
interrupted, how will asynchronous processing by several threads that
use the same buffer ever work?  And please note that the above is
problematic even if none of the threads change buffer text, i.e. they
all are just _reading_ buffer text.
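
For concreteness, the same sequence can be written down with the
cooperative Lisp threads we already have ("thread A" is the main thread
here; the buffer and thread names are made up):

  (with-current-buffer (get-buffer-create "*shared*")
    (erase-buffer)
    (insert "0123456789")
    (goto-char 3)                   ; thread A moves point to P1
    (thread-join                    ; thread A yields and waits
     (make-thread
      (lambda ()
        (with-current-buffer "*shared*"
          (goto-char 8)))           ; thread B moves point to P2
      "thread-B"))
    ;; thread A resumes; it can no longer assume point is still at 3.
    (point))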

It follows that such asynchronous processing will have to be
explicitly accounted for at the level of Lisp programs, which means a
thorough rewrite of most Lisp code (and also a lot of C code).
IOW, we are no longer talking about some change to Emacs, we are
talking about rewriting most or all of Emacs!



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08  6:38                                                   ` Ihor Radchenko
@ 2023-07-08  7:45                                                     ` Eli Zaretskii
  2023-07-08  8:16                                                       ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-08  7:45 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: Po Lu <luangruo@yahoo.com>, emacs-devel@gnu.org
> Date: Sat, 08 Jul 2023 06:38:26 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> > ....  I thought this discussion was about less
> > painful implementations.
> 
> My idea with isolated thread is similar to having a bunch of state
> variables copied to the thread before executing it. Interlocking will
> still be necessary if the isolated thread wants to do anything with the
> actual global state (like buffer modification, for example).

How would we know which part(s) of the global state to copy, and how
will the Lisp program running in the thread know which variables it
can safely access?  If I am the Lisp programmer writing code for such
a thread, how can I know what is and what isn't allowed?  And what
happens if I do something that is not allowed?  And finally, does it
mean we cannot run existing Lisp programs in such threads, but must
program for them from scratch?



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08  7:21                                               ` Eli Zaretskii
@ 2023-07-08  7:48                                                 ` Po Lu
  2023-07-08 10:02                                                   ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-08  7:48 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: yantar92, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

> That is already a problem, as long as we are talking about leaving
> most of Emacs application code intact.  How do you ensure only the
> main thread can process input and display?  A non-main thread can
> easily call some function which prompts the user, e.g., with
> yes-or-no-p, or force redisplay with sit-for, and what do you do when
> that happens?

To signal an error.  Threads other than the main thread should not call
such functions, which is the approach taken by most toolkits and window
systems.  (X being a notable exception, where every Xlib display is
surrounded by one massive lock.)

This will mean that most user facing Lisp won't be able to run in
parallel with the main thread, but that can be fixed, given enough time.
And it's not disastrous, seeing as _no_ Lisp can presently run in
parallel with the main thread.

> That's not enough!  Interlocking will prevent disastrous changes to
> the buffer object which risk leaving the buffer in inconsistent state,
> but it cannot prevent one thread from changing point under the feet of
> another thread.  Consider this sequence:
>
>   . thread A moves point to position P1
>   . thread A yields
>   . thread B moves point of the same buffer to position P2
>   . thread B yields
>   . thread A resumes and performs some processing assuming point is at P1

Lisp code that is interested in making edits this way will need to
utilize synchronization mechanisms of its own, yes.

> Without some kind of critical section, invoked on the Lisp level,
> whereby moving point and the subsequent processing cannot be
> interrupted, how will asynchronous processing by several threads that
> use the same buffer ever work?  And please note that the above is
> problematic even if none of the threads change buffer text, i.e. they
> all are just _reading_ buffer text.
>
> It follows that such asynchronous processing will have to be
> explicitly accounted for on the level of Lisp programs, which means
> thorough rewrite of most of Lisp code (and also a lot of C code).

Insofar as that Lisp code is actually interested in making use of
multiple threads.

> IOW, we are no longer talking about some change to Emacs, we are
> talking about rewriting most or all of Emacs!

I think that as long as we make the C text editing primitives robust
against such problems, authors of Lisp code that needs to edit buffer
text from multiple threads will devise appropriate synchronization
procedures for their specific use cases.  For example, to parse a buffer
in the background, Semantic may choose to create a new thread with a
temporary copy of the buffer that is being read.  Or, Gnus might do the
same to fontify a Diff attachment in an article.  Lisp changes can all
be accomplished gradually, of course, whereas the C-level changes will
have to be completed all in one go.
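
Something along these lines can already be sketched with the existing
thread API (cooperative today, so it buys structure rather than real
parallelism; the function name and the choice of diff-mode are just for
illustration):

  (defun my/fontify-diff-copy (source-buffer)
    "Fontify a private copy of SOURCE-BUFFER in another thread (sketch)."
    (let ((copy (generate-new-buffer " *diff copy*")))
      (with-current-buffer copy
        (insert-buffer-substring source-buffer))
      (make-thread
       (lambda ()
         (with-current-buffer copy
           (delay-mode-hooks (diff-mode))
           (font-lock-ensure)))
       "fontify-diff-copy")))

Since the copy is private to the new thread, it needs no interlocking
with the original buffer; only handing the result back has to be
coordinated.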

Thanks.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08  7:45                                                     ` Eli Zaretskii
@ 2023-07-08  8:16                                                       ` Ihor Radchenko
  2023-07-08 10:13                                                         ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-08  8:16 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> My idea with isolated thread is similar to having a bunch of state
>> variables coped to the thread before executing it. Interlocking will
>> still be necessary if the isolated thread wants to do anything with the
>> actual global state (like buffer modification, for example).
>
> How would we know which part(s) of the global state to copy, and how
> will the Lisp program running in the thread know which variables it
> can safely access?

This is a difficult question.
On one hand, it does not make sense to copy everything - that would
multiply the Emacs memory footprint.
On the other hand, we cannot know in advance which variables a thread
will actually use.

So, it could be something like copy-on-read - the child thread copies
the necessary values from the parent thread as the running Elisp code
needs them.

By default, such a copy will not be synchronized with the parent Emacs
thread, so that we do not need to worry about the parent thread changing
the values asynchronously.

Manually, it may also be possible to create "remote" values that query
the other thread: thread 1 will hold (foo = '(1 2 3)) and thread 2
will hold (foo = #<remote foo>). Then, when thread 2 needs to read or
write the value, it will query thread 1.
The range of variables that can be made remote should be minimized and
possibly also restricted to a safe subset of variables.
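
To make the copy-on-read part concrete, here is a purely illustrative
sketch using only existing Elisp; nothing below is a proposed API, and
the `my/' names are invented:

  (defvar my/child-env (make-hash-table :test #'eq)
    "The child thread's private copies of parent variables (sketch).")

  (defun my/child-ref (symbol)
    "Return SYMBOL's value as the child sees it, copying it on first read."
    (let ((cached (gethash symbol my/child-env 'my/unbound)))
      (if (eq cached 'my/unbound)
          (puthash symbol (copy-tree (symbol-value symbol)) my/child-env)
        cached)))

  (defun my/child-set (symbol value)
    "Write SYMBOL only into the child's copy; the parent never sees it."
    (puthash symbol value my/child-env))

A "remote" value would differ only in that my/child-ref and my/child-set
would forward the request to the thread that owns the variable instead of
touching a local copy.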

> ... If I am the Lisp programmer writing code for such
> a thread, how can I know what is and what isn't allowed?

Everything is allowed, but global state changes will not necessarily
propagate to the parent Emacs thread. They will be confined to that
child async thread.

> ...And what
> happens if I do something that is not allowed?  And finally, does it
> mean we cannot run existing Lisp programs in such threads, but must
> program for them from scratch?

Non-trivial programs that need to modify the global Emacs state will
have to be written specially. But programs that only return a value will
work as usual, without a need to modify them.

The idea is similar to https://github.com/jwiegley/emacs-async (or to
`org-export-async-start'), but with tighter communication between the
Emacs processes. The original emacs-async requires a new Emacs instance
that also has to reload all the necessary packages from startup.
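
For comparison, this is roughly what handing state over looks like today:
the child Emacs sees only what the parent splices into the form, and only
printable values can cross (a minimal emacs-async sketch):

  (require 'async)  ; https://github.com/jwiegley/emacs-async

  (async-start
   ;; Runs in a fresh child Emacs: no parent buffers, no parent variables.
   `(lambda ()
      (length ,(buffer-string)))  ; value copied in by the parent at call time
   ;; Runs back in the parent, with the printed-and-read result.
   (lambda (result)
     (message "Child saw %d characters" result)))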

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08  6:01                                                     ` tomas
@ 2023-07-08 10:02                                                       ` Ihor Radchenko
  2023-07-08 19:39                                                         ` tomas
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-08 10:02 UTC (permalink / raw)
  To: tomas; +Cc: Po Lu, emacs-devel

tomas@tuxteam.de writes:

> Reducing blocking waits for external stuff (processes, network) would
> already be a lofty goal for Emacs, and far more attainable, I think.

Would you mind elaborating?
AFAIU, the bottlenecks in these scenarios are still on the Lisp side and
are not specific to process/network handling.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08  7:48                                                 ` Po Lu
@ 2023-07-08 10:02                                                   ` Eli Zaretskii
  2023-07-08 11:54                                                     ` Po Lu
  2023-07-08 12:01                                                     ` Ihor Radchenko
  0 siblings, 2 replies; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-08 10:02 UTC (permalink / raw)
  To: Po Lu; +Cc: yantar92, emacs-devel

> From: Po Lu <luangruo@yahoo.com>
> Cc: yantar92@posteo.net,  emacs-devel@gnu.org
> Date: Sat, 08 Jul 2023 15:48:02 +0800
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> > That is already a problem, as long as we are talking about leaving
> > most of Emacs application code intact.  How do you ensure only the
> > main thread can process input and display?  A non-main thread can
> > easily call some function which prompts the user, e.g., with
> > yes-or-no-p, or force redisplay with sit-for, and what do you do when
> > that happens?
> 
> To signal an error.

Great! that means in practice no existing Lisp program could ever run
in a non-main thread.  It isn't a very practical solution.

Besides, non-main threads do sometimes legitimately need to prompt
the user.  It is not a programmer's error when they do.

> Threads other than the main thread should not call
> such functions, which is the approach taken by most toolkits and window
> systems.  (X being a notable exception, where every Xlib display is
> surrounded by one massive lock.)

I don't think such a simplistic solution suits a program such as
Emacs.

> This will mean that most user facing Lisp won't be able to run in
> parallel with the main thread, but that can be fixed, given enough time.

Fixed how?

> > That's not enough!  Interlocking will prevent disastrous changes to
> > the buffer object which risk leaving the buffer in inconsistent state,
> > but it cannot prevent one thread from changing point under the feet of
> > another thread.  Consider this sequence:
> >
> >   . thread A moves point to position P1
> >   . thread A yields
> >   . thread B moves point of the same buffer to position P2
> >   . thread B yields
> >   . thread A resumes and performs some processing assuming point is at P1
> 
> Lisp code that is interested in making edits this way will need to
> utilize synchronization mechanisms of their own, yes.

The above doesn't do any editing, it just accesses buffer text without
changing it.

> > Without some kind of critical section, invoked on the Lisp level,
> > whereby moving point and the subsequent processing cannot be
> > interrupted, how will asynchronous processing by several threads that
> > use the same buffer ever work?  And please note that the above is
> > problematic even if none of the threads change buffer text, i.e. they
> > all are just _reading_ buffer text.
> >
> > It follows that such asynchronous processing will have to be
> > explicitly accounted for on the level of Lisp programs, which means
> > thorough rewrite of most of Lisp code (and also a lot of C code).
> 
> Insofar as that Lisp code is actually interested in making use of
> multiple threads.

Why are we talking about multiple threads at all?  Don't we want to
allow some Lisp code to run from non-main threads?

> > IOW, we are no longer talking about some change to Emacs, we are
> > talking about rewriting most or all of Emacs!
> 
> I think that as long as we make the C text editing primitives robust
> against such problems, authors of Lisp code that needs to edit buffer
> text from multiple threads will devise appropriate synchronization
> procedures for their specific use cases.  For example, to parse a buffer
> in the background, Semantic may choose to create a new thread with a
> temporary copy of the buffer that is being read.  Or, Gnus might do the
> same to fontify a Diff attachment in an article.  Lisp changes can all
> be accomplished gradually, of course, whereas the C-level changes will
> have to be completed all in one go.

Using a snapshot of some global resource, such as buffer text, works
only up to a point, and basically prohibits many potentially
interesting uses of threads.  That's because such snapshotting assumes
no significant changes happen in the original objects while processing
the snapshot, and that is only sometimes true.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08  8:16                                                       ` Ihor Radchenko
@ 2023-07-08 10:13                                                         ` Eli Zaretskii
  0 siblings, 0 replies; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-08 10:13 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Sat, 08 Jul 2023 08:16:39 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> So, it can be something like copy-on-read - the child thread will copy
> the necessary values from parent thread as running Elisp code needs
> them.
> 
> By default, such copy will not be synchronized with the parent Emacs
> thread, so that we do not need to worry about parent thread changing the
> values asynchronously. 

When does such synchronization happen, though?

Imagine some long-running thread which does some periodic
processing: when and how will such a thread synchronize with the
actual state of the variables it needs?  How will this look from the
POV of Lisp code running in this thread?

(This proposal also means significant implementation problems.)

> Manually, it may also be allowed to create "remote" values that will
> query the other thread: thread 1 will hold (foo = '(1 2 3)) and thread 2
> will hold (foo = #<remote foo>). Then, thread 2 requesting to read/write
> value will query thread 1.

A thread running Lisp cannot be queried, unless it somehow "listens"
to such queries.  This isn't magic, it has to be implemented, and how
do you envision implementing it?  Do you envision some kind of scheduler
as part of Emacs?  Something else?

> The range of variables that can be made remote should be minimized and
> possibly also restricted to a safe subset of variables.

I very much doubt such a "safe subset" exists that can support useful
Lisp programs.

> > ... If I am the Lisp programmer writing code for such
> > a thread, how can I know what is and what isn't allowed?
> 
> Everything is allowed, but global state changes will not necessarily
> propagate to the parent Emacs thread. They will be confined to that
> child async thread.

So it could be that a thread decides, for example, that some buffer B
is the return value of its function, whereas buffer B no longer exists
because it was killed?  How's that useful?

> > ...And what
> > happens if I do something that is not allowed?  And finally, does it
> > mean we cannot run existing Lisp programs in such threads, but must
> > program for them from scratch?
> 
> Non-trivial programs that need to modify the global Emacs state will
> have to be written specially.

So you basically agree that to have useful enough multi-threading,
most of Emacs will have to be redesigned and rewritten?

> But programs that only return a value will work as usual, without a
> need to modify them.

How many programs like that exist and are useful?

> The idea is similar to https://github.com/jwiegley/emacs-async (or to
> `org-export-async-start'), but with more tight communication between
> Emacs processes. The original emacs-async requires a new Emacs instance
> that will also need to re-load all the necessary packages manually from
> startup.

If we will have the same disadvantages and restrictions as in
emacs-async, then why bother? emacs-async already exists.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08  7:05                                                                     ` Eli Zaretskii
@ 2023-07-08 10:53                                                                       ` Ihor Radchenko
  2023-07-08 14:26                                                                         ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-08 10:53 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> Can you please provide an example about "surprised"? Do you mean that
>> buffer->pt will no longer be accurate? Something else?
>
> Not pt but the pointers to buffer text and the gap.  Those determine
> the address of a given buffer position in memory, and are used when a
> Lisp program accesses buffer text in any way.  GC can change them if
> it decides to relocate buffer text or compact the gap.

Ok. I tried to find an example myself by looking into `char-after'.
I can see CHAR_TO_BYTE->buf_charpos_to_bytepos->buf_next_char_len->BUF_BYTE_ADDRESS
and
FETCH_CHAR->FETCH_MULTIBYTE_CHAR->BYTE_POS_ADDR

Will blocking GC for the duration of CHAR_TO_BYTE and buf_next_char_len
be a problem? A GC between the calls to CHAR_TO_BYTE and
buf_next_char_len, if it relocates the buffer text or the gap, will not
break anything, AFAIU.

>> >> Then the question is: can the global state be reduced?
>> >
>> > By what measures?  Please suggest something concrete here.
>> 
>> By transforming some of the global state variables into thread-local
>> variables.
>
> Which variables can safely and usefully be made thread-local?

PT, ZV, BEGV, and the buffer-local variables that are represented by the
global C variables.

>> > I don't see how one can write a useful Lisp program with such
>> > restrictions.
>> 
>> Pure, side-effect-free functions, for example.
>
> I don't see how this could be practically useful.

For example, `org-element-interpret-data' converts an Org mode AST to a
string. Just now, I tried it using the AST of one of my large Org
buffers. It took 150 seconds to complete, while blocking Emacs.

Or consider building a large completion table.
Or parsing an HTML DOM string into a sexp.
Or parsing JSON passed by an LSP server.
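
As a sketch of how such a computation could be offloaded (here `ast' is
assumed to be a variable already holding the parsed tree, and with
today's cooperative threads this gains no speed; it only shows the
intended usage):

(let ((worker (make-thread
               (lambda () (org-element-interpret-data ast))
               "org-interpret")))
  ;; ... the main thread could keep serving the user here ...
  (thread-join worker))  ; returns the worker's value (Emacs 27 and later)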

> ... Besides the basic
> question of whether a useful Lisp program can be written in Emacs
> using only side-effect-free functions

Yes. I mean... look at Haskell. There is no shortage of pure functional
libraries there.

> .. , there's the large body of
> subroutines and primitives any Lisp program uses to do its job, and
> how do you know which ones of them are side-effect-free or async-safe?

(declare (pure t))
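
For instance, a small helper can advertise this about itself (a made-up
example, not taken from any package):

(defun my-mean (numbers)
  "Return the arithmetic mean of NUMBERS."
  (declare (pure t) (side-effect-free t))
  (/ (float (apply #'+ numbers)) (length numbers)))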

> To take just one example which came up in recent discussions, look at
> string-pixel-width.  Or even at string-width.

Those must either block or throw an error when called from an async
thread.

---

Just to be clear, it is not a useful aim to make any arbitrary Elisp
code asynchronous. But if we can at least allow specially designed Elisp
to be asynchronous, it will already be a good breakthrough.

And later, in the future, if we can manage to reduce the amount of
global state in Emacs, more Elisp code may be converted (or even work
as is) asynchronously.

Remember the lexical binding transition. It was not refused because some
Elisp code failed to work with it.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08 10:02                                                   ` Eli Zaretskii
@ 2023-07-08 11:54                                                     ` Po Lu
  2023-07-08 14:12                                                       ` Eli Zaretskii
  2023-07-08 12:01                                                     ` Ihor Radchenko
  1 sibling, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-08 11:54 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: yantar92, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

> Great! that means in practice no existing Lisp program could ever run
> in a non-main thread.  It isn't a very practical solution.

Number and text crunching tasks (think Semantic, or JSON parsing for
LSP) don't need to sleep or read keyboard input.

> Besides, non-main threads do sometimes legitimately need to prompt
> the user.  It is not a programmer's error when they do.

They should then devise mechanisms for communicating with the main
thread.

> I don't think such a simplistic solution suits a program such as
> Emacs.

It is the only possible solution, as long as Emacs wants to keep working
with other window systems.  Even our limited threads cannot work with NS
and GTK in their present state: the toolkit aborts or enters an
inconsistent state the instant a GUI function is called from a thread
other than the main thread.

> Fixed how?

By replacing `sit-for' with `sleep-for' (and in general avoiding
functions that call redisplay or GUI functions.)
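
For example (a trivial sketch, timings arbitrary):

;; In code meant for a non-main thread, instead of
;;   (sit-for 0.5)   ; waits, but also triggers redisplay
;; one would write
(sleep-for 0.5)       ; just pauses the calling thread, no redisplay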

> The above doesn't do any editing, it just accesses buffer text without
> changing it.

I intended to include ``changing point'' in my definition of ``modifying
the buffer''.

> Why are we talking about multiple threads at all? don't we want to
> allow some Lisp code run from non-main threads?

That code will have to be specifically written for running outside the
main thread, of course, obviating the need to rewrite all of our
existing code.

> Using a snapshot of some global resource, such as buffer text, works
> only up to a point, and basically prohibits many potentially
> interesting uses of threads.  That's because such snapshotting assumes
> no significant changes happen in the original objects while processing
> the snapshot, and that is only sometimes true.

We could also allow Lisp to lock a buffer by hand.
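
As a sketch of what locking a buffer "by hand" could look like with the
mutex primitives we already have; `my-buffer-mutex' and
`my-with-locked-buffer' are invented for this illustration:

(defvar my-buffer-lock-table-mutex (make-mutex "buffer-lock-table"))
(defvar-local my-buffer-mutex nil)

(defmacro my-with-locked-buffer (buffer &rest body)
  "Run BODY in BUFFER while holding that buffer's private mutex."
  (declare (indent 1))
  `(with-current-buffer ,buffer
     ;; Create this buffer's mutex once, under a global lock.
     (with-mutex my-buffer-lock-table-mutex
       (unless my-buffer-mutex
         (setq my-buffer-mutex (make-mutex (buffer-name)))))
     (with-mutex my-buffer-mutex
       ,@body)))

;; Usage:
;; (my-with-locked-buffer "some-buffer"
;;   (goto-char (point-min))
;;   (insert "..."))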



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08  6:50                                                 ` Eli Zaretskii
@ 2023-07-08 11:55                                                   ` Ihor Radchenko
  2023-07-08 14:43                                                     ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-08 11:55 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> Does it mean that we can safely call, for example, Fcons asynchronously?
>
> Not necessarily, because Fcons is not just about allocating memory.
>
> I think we once again have a misunderstanding here, because when you
> say "memory allocation" you mean something very different than I do,
> which is a call to malloc to get more memory from the system.  It
> sounds like you think that Fcons _is_ memory allocation?  But if so,
> this terminology is so confusing that it is not useful in a detailed
> technical discussion such as this one.  We use the term "consing" to
> refer to creation of Lisp objects, which includes memory allocation,
> but also other stuff.

> In particular, consing modifies memory blocks (already available to
> Emacs, so no "memory allocation" per se) used to keep track of live
> and dead Lisp objects, and those modifications cannot be concurrently
> done by more than one thread, at least in some cases.

Thanks for the clarification.
I heard this term, but was unsure what exactly it refers to.

>> > So your solution to each such problem is to lock variables?  If so,
>> > you will end up locking a lot of them, and how is this different from
>> > using the global lock we do today with Lisp threads?
>> 
>> The idea is to prevent simultaneous write, which will only lock for a
>> small fraction of time.
>
> If one thread writes to a data structure, reading from it could also
> need to block, or else the reader will risk getting inconsistent data.
> So this is not just about simultaneous writing, it's much more
> general.

Sure. Of course, locking should be on write.
Could you elaborate on what you mean by inconsistent data?

>> And I still fail to see where base-buffer is _changed_. Is base buffer
>> ever supposed to be changed?
>
> Another thread might change it while this thread examines it.

I was able to identify a single place in the C code where a buffer's base
buffer is being set: in make-indirect-buffer, when the buffer is just
created. So, it is safe to assume that buffer->base_buffer remains
constant for any given live buffer. Unless I am missing something.

>> No, I am saying that the current logic of updating the undo-list will not work
>> when multiple async threads are involved. It will no longer be safe to
>> assume that we can safely update undo-list right before/after switching
>> current_buffer.
>> 
>> So, I asked if an alternative approach could be used instead.
>
> Undo records changes in text properties and markers, and those are
> different in the indirect buffers from the base buffers.  Does this
> explain why we cannot simply point to the base buffer?

Are you sure? Text properties are certainly shared between indirect buffers.

bset_undo_list (old_buf->base_buffer, BVAR (old_buf, undo_list));

INLINE void
bset_undo_list (struct buffer *b, Lisp_Object val)
{
  b->undo_list_ = val;
}

The markers that are not shared are pt_marker, begv_marker, and
zv_marker. But those could probably be attached to a given thread.

>> >>   /* Look down buffer's list of local Lisp variables
>> >>      to find and update any that forward into C variables.  */
>> >
>> > The C code accesses some buffer-local variables via Vfoo_bar C
>> > variables.  Those need to be updated when the current buffer changes.
>> 
>> Now, when you explained this, it is also a big problem. Such C variables
>> are a global state that needs to be kept up to date. Async will break
>> the existing logic of these updates.
>
> Exactly.

I now looked a bit further, and what you are talking about are the
variables defined via DEFVAR_PER_BUFFER. These global variables have the
following type:

/* Forwarding pointer to a Lisp_Object variable.
   This is allowed only in the value cell of a symbol,
   and it means that the symbol's value really lives in the
   specified variable.  */
struct Lisp_Objfwd
  {
    enum Lisp_Fwd_Type type;	/* = Lisp_Fwd_Obj */
    Lisp_Object *objvar;
  };

The code in set_buffer calls
Fsymbol_value->find_symbol_value->swap_in_symval_forwarding for every
symbol that has a C variable equivalent.

These calls update the internal pointer to the Lisp object corresponding
to the variable's value in current_buffer.

If my understanding is correct, it should be safe to convert these into
thread-local variables and update them within the current thread when
current_buffer (already thread-local) is altered.

>> > Oh, yes, they will: see fetch_buffer_markers, called by
>> > set_buffer_internal_2.
>> 
>> Do you mean that in the existing cooperative Elisp threads, if one
>> thread moves the point and yields to other thread, the other thread will
>> be left with point in the same position (arbitrary, from the point of
>> view of this other thread)?
>
> That's one problem, yes.  There are others.  Emacs Lisp uses point,
> both explicitly and implicitly, all over the board.  It is unthinkable
> that a thread will find point not in a place where it last moved it.

That is exactly what happens with the current cooperative threads, AFAIK.
Would it make sense to convert PT, ZV, and BEGV into thread-local variables?

>> Is it buffer's marker list? I thought that you are referring to
>> BUF_MARKERS, not to PT, BEGV, and ZV.
>
> Buffer's marker list are referenced in subroutines of
> record_buffer_markers.

Do you mean record_buffer_markers->set_marker_both->attach_marker->
  if (m->buffer != b)
    {
      unchain_marker (m);
      m->buffer = b;
      m->next = BUF_MARKERS (b);
      BUF_MARKERS (b) = m;
    }

But will this `if' ever trigger for PT, BEGV, and ZV?

Also, it looks reasonable to lock BUF_MARKERS when we need to change
it.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08 10:02                                                   ` Eli Zaretskii
  2023-07-08 11:54                                                     ` Po Lu
@ 2023-07-08 12:01                                                     ` Ihor Radchenko
  2023-07-08 14:45                                                       ` Eli Zaretskii
  1 sibling, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-08 12:01 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: Po Lu, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> > That is already a problem, as long as we are talking about leaving
>> > most of Emacs application code intact.  How do you ensure only the
>> > main thread can process input and display?  A non-main thread can
>> > easily call some function which prompts the user, e.g., with
>> > yes-or-no-p, or force redisplay with sit-for, and what do you do when
>> > that happens?
>> 
>> To signal an error.
>
> Great! that means in practice no existing Lisp program could ever run
> in a non-main thread.  It isn't a very practical solution.
>
> Besides, non-main threads do sometimes legitimately need to prompt
> the user.  It is not a programmer's error when they do.

As an alternative, async threads could be temporarily switched to
cooperative mode in such a scenario.

They will already have to wait synchronously when consing new objects
anyway.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08 11:54                                                     ` Po Lu
@ 2023-07-08 14:12                                                       ` Eli Zaretskii
  2023-07-09  0:37                                                         ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-08 14:12 UTC (permalink / raw)
  To: Po Lu; +Cc: yantar92, emacs-devel

> From: Po Lu <luangruo@yahoo.com>
> Cc: yantar92@posteo.net,  emacs-devel@gnu.org
> Date: Sat, 08 Jul 2023 19:54:59 +0800
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> > Great! that means in practice no existing Lisp program could ever run
> > in a non-main thread.  It isn't a very practical solution.
> 
> Number and text crunching tasks (think Semantic, or JSON parsing for
> LSP) don't need to sleep or read keyboard input.

Are they allowed to, say, write some data to a file?  If so, they
might need to ask the user whether it's okay to overwrite an existing
file.

IOW, I think you have a very narrow idea of "number and text crunching
tasks" that could benefit from threads.  For example, one of the
frequent requests is to run the part of Gnus that fetches email and
articles in a separate thread -- if this is okay for "number and text
crunching tasks", then it is likely to prompt users.

> > Besides, non-main threads do sometimes legitimately need to prompt
> > the user.  It is not a programmer's error when they do.
> 
> They should then devise mechanisms for communicating with the main
> thread.

We are mis-communicating.  My point is that it is almost impossible to
take an existing non-trivial Lisp program and let it run from a
non-main thread without bumping into this issue.  Your responses
indicate that you agree with me: such Lisp programs need to be written
from scratch under the assumption that they will run from a non-main
thread.

How is this compatible with the goal of having threads in Emacs, which
are to allow running Lisp code with less hair than the existing timers
or emacs-async?

> > Fixed how?
> 
> By replacing `sit-for' with `sleep-for' (and in general avoiding
> functions that call redisplay or GUI functions.)

So programs running in non-main threads will be unable to do stuff
like show progress etc.?  That's not very encouraging, to say the
least.

> That code will have to be specifically written for running outside the
> main thread, of course, obviating the need to rewrite all of our
> existing code.

QED

This basically means rewriting most of Emacs.  Because most APIs and
subroutines we use in every Lisp program were not "specifically
written for running outside the main thread".  So we'll need special
variants of all of those to do those simple jobs.

> > Using a snapshot of some global resource, such as buffer text, works
> > only up to a point, and basically prohibits many potentially
> > interesting uses of threads.  That's because such snapshotting assumes
> > no significant changes happen in the original objects while processing
> > the snapshot, and that is only sometimes true.
> 
> We could also allow Lisp to lock a buffer by hand.

Same problem here.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08 10:53                                                                       ` Ihor Radchenko
@ 2023-07-08 14:26                                                                         ` Eli Zaretskii
  2023-07-09  9:36                                                                           ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-08 14:26 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Sat, 08 Jul 2023 10:53:59 +0000
> 
> Ok. I tried to find an example myself by looking into `char-after'.
> I can see CHAR_TO_BYTE->buf_charpos_to_bytepos->buf_next_char_len->BUF_BYTE_ADDRESS
> and
> FETCH_CHAR->FETCH_MULTIBYTE_CHAR->BYTE_POS_ADDR
> 
> Will blocking GC for the duration of CHAR_TO_BYTE and buf_next_char_len
> be a problem?

Not by itself, no.  But if you go this way, you eventually will lock
everything all the time.

> >> By transforming some of the global state variables into thread-local
> >> variables.
> >
> > Which variables can safely and usefully be made thread-local?
> 
> PT, ZV, BEGV

Even that is not enough: you forgot the gap.

> and the buffer-local variables that are represented by the
> global C variables.

That's a lot!

> >> > I don't see how one can write a useful Lisp program with such
> >> > restrictions.
> >> 
> >> Pure, side-effect-free functions, for example.
> >
> > I don't see how this could be practically useful.
> 
> For example, `org-element-interpret-data' converts Org mode AST to
> string. Just now, I tried it using AST of one of my large Org buffers.
> It took 150 seconds to complete, while blocking Emacs.

It isn't side-effect-free, though.

I don't believe any useful Lisp program in Emacs can be
side-effect-free, for the purposes of this discussion.  Every single
one of them accesses the global state and changes the global state.

> > ... Besides the basic
> > question of whether a useful Lisp program can be written in Emacs
> > using only side-effect-free functions
> 
> Yes. I mean... look at Haskell. There is no shortage of pure functional
> libraries there.

I cannot follow you there: I don't know Haskell.

> > .. , there's the large body of
> > subroutines and primitives any Lisp program uses to do its job, and
> > how do you know which ones of them are side-effect-free or async-safe?
> 
> (declare (pure t))

How many of these do we have, and can useful programs be written using
only those?  More importantly, when you call some function from
simple.el, how do you know whether all of its subroutines and
primitives are 'pure'?

> > To take just one example which came up in recent discussions, look at
> > string-pixel-width.  Or even at string-width.
> 
> Those must either be blocking or throw an error when called from async
> thread.

So we will always block.  Which is what we already have with the Lisp
threads.

> Just to be clear, it is not a useful aim to make any arbitrary Elisp
> code asynchronous. But if we can at least allow specially designed Elisp
> to be asynchronous, it will already be a good breakthrough.

No, it will not.  It will be a fancy feature with little or no use.

> And later, in future, if we can manage to reduce the amount global state
> in Emacs, more Elisp code may be converted (or even work as is)
> asynchronously.

If we don't have a clear path towards that goal today, and don't even
know whether such a path is possible, it will never happen.

> Remember the lexical binding transition. It was not refused because some
> Elisp code failed to work with it.

Lexical binding is nothing compared to this.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08 11:55                                                   ` Ihor Radchenko
@ 2023-07-08 14:43                                                     ` Eli Zaretskii
  2023-07-09  9:57                                                       ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-08 14:43 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Sat, 08 Jul 2023 11:55:34 +0000
> 
> >> And I still fail to see where base-buffer is _changed_. Is base buffer
> >> ever supposed to be changed?
> >
> > Another thread might change it while this thread examines it.
> 
> I was able to identify a single place in C code where buffer's base
> buffer is being set: in make-indirect-buffer, when the buffer is just
> created. So, it is safe to assume that buffer->base_buffer remain
> constant for any given live buffer. Unless I miss something.

C code can change.  It is not carved in stone.  Are we going to treat
the current state of the code as if it can never change?  That's
unwise.

> > Undo records changes in text properties and markers, and those are
> > different in the indirect buffers from the base buffers.  Does this
> > explain why we cannot simply point to the base buffer?
> 
> Are you sure? Text properties are certainly shared between indirect buffers.

That's not what the documentation says.

> >> > The C code accesses some buffer-local variables via Vfoo_bar C
> >> > variables.  Those need to be updated when the current buffer changes.
> >> 
> >> Now, when you explained this, it is also a big problem. Such C variables
> >> are a global state that needs to be kept up to date. Async will break
> >> the existing logic of these updates.
> >
> > Exactly.
> 
> I now looked a bit further, and what you are talking about are the
> variables defined via DEFVAR_PER_BUFFER.

Not necessarily.  Example: show-trailing-whitespace.

> If my understanding is correct, it should be safe to convert them into
> thread-local variables and update them within current thread when
> current_buffer (already thread-local) is altered.

It is only safe if no other thread will access the same buffer.  For
example, redisplay will be unable to show that buffer if it is visible
in some window, because its notion of the buffer-local values might be
inaccurate.

> >> > Oh, yes, they will: see fetch_buffer_markers, called by
> >> > set_buffer_internal_2.
> >> 
> >> Do you mean that in the existing cooperative Elisp threads, if one
> >> thread moves the point and yields to other thread, the other thread will
> >> be left with point in the same position (arbitrary, from the point of
> >> view of this other thread)?
> >
> > That's one problem, yes.  There are others.  Emacs Lisp uses point,
> > both explicitly and implicitly, all over the board.  It is unthinkable
> > that a thread will find point not in a place where it last moved it.
> 
> It is exactly what happens with current cooperative threads, AFAIK.

With the existing threads, this will never happen, because a thread
will never yield between moving point to some position and accessing
buffer text at that position.

> Will it make sense to convert PT, ZV, and BEGV into thread-local variables?

What do you expect redisplay to do when some thread moves point in a
way that it is no longer in the window?

> >> Is it buffer's marker list? I thought that you are referring to
> >> BUF_MARKERS, not to PT, BEGV, and ZV.
> >
> > Buffer's marker list are referenced in subroutines of
> > record_buffer_markers.
> 
> Do you mean record_buffer_markers->set_marker_both->attach_marker->
>   if (m->buffer != b)
>     {
>       unchain_marker (m);
>       m->buffer = b;
>       m->next = BUF_MARKERS (b);
>       BUF_MARKERS (b) = m;
>     }
> 
> But will this `if' ever trigger for PT, BEGV, and ZV?

I don't know!  You cannot possibly have code where you need to reason,
for every single line, about whether something can or cannot happen there.
You need a relatively small set of basic assumptions that _always_
hold.  Anything more complex makes the task of developing and
maintaining this code an impossible job.

> Also, it looks reasonable to block BUF_MARKERS when we need to change
> BUF_MARKERS.

Sure.  Like I said: we'd need to lock everything.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08 12:01                                                     ` Ihor Radchenko
@ 2023-07-08 14:45                                                       ` Eli Zaretskii
  0 siblings, 0 replies; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-08 14:45 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: Po Lu <luangruo@yahoo.com>, emacs-devel@gnu.org
> Date: Sat, 08 Jul 2023 12:01:00 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> > That is already a problem, as long as we are talking about leaving
> >> > most of Emacs application code intact.  How do you ensure only the
> >> > main thread can process input and display?  A non-main thread can
> >> > easily call some function which prompts the user, e.g., with
> >> > yes-or-no-p, or force redisplay with sit-for, and what do you do when
> >> > that happens?
> >> 
> >> To signal an error.
> >
> > Great! that means in practice no existing Lisp program could ever run
> > in a non-main thread.  It isn't a very practical solution.
> >
> > Besides, non-main threads do sometimes legitimately need to prompt
> > the user.  It is not a programmer's error when they do.
> 
> As an alternative async threads can be temporarily changed to
> cooperative in such scenario.

That doesn't help, because, as I said, we don't have a good solution
for this dilemma even for the current Lisp threads.

> They will already have to wait synchronously when consing new objects
> anyway.

This issue about display and prompting the user is not the same as
interlocking.  The latter can be solved by waiting, but the former
cannot.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08 10:02                                                       ` Ihor Radchenko
@ 2023-07-08 19:39                                                         ` tomas
  0 siblings, 0 replies; 192+ messages in thread
From: tomas @ 2023-07-08 19:39 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Po Lu, emacs-devel


On Sat, Jul 08, 2023 at 10:02:18AM +0000, Ihor Radchenko wrote:
> tomas@tuxteam.de writes:
> 
> > Reducing blocking waits for external stuff (processes, network) would
> > already be a lofty goal for Emacs, and far more attainable, I think.
> 
> Would you mind elaborating?
> AFAIU, the bottlenecks in these scenarios are still on Lisp side and are
> not specific to process/network handling.

I think we agree: the basic building blocks are there, but not all
Lisp code uses them. Writing explicit parallelism is a bit strange
(have a look at the JavaScript code used in browsers to see what
I mean): whenever they write a function to fetch something from the
server, they provide a callback as a parameter for the function to
know what to do when the network request succeeds. Sometimes there
are several callbacks (one for the error path).

Of course the function doesn't get called right away, but something
is put somewhere into the structure managing the select/poll/epoll
machinery.

I've done that in C: it's doable, but definitely more error-prone
than the "normal" way of doing things. Of course in JavaScript and
Emacs Lisp, with closures (thanks, Stefan!) it is a tad easier.
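
For instance, Emacs Lisp already offers this callback style in
`url-retrieve' (the URL below is just an example):

(require 'url)

(url-retrieve "https://example.org/"
              (lambda (status)
                ;; Called later, in the buffer holding the response.
                (if (plist-get status :error)
                    (message "fetch failed: %S" (plist-get status :error))
                  (message "fetched %d bytes" (buffer-size)))))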

The elephant in the room is, as Eli says, all this existing code.

I think that (if going this way at all) the interesting challenge
would be to find a strategy which can bring gradual improvements
without breaking "the rest of the world".

Cheers
-- 
t


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08 14:12                                                       ` Eli Zaretskii
@ 2023-07-09  0:37                                                         ` Po Lu
  2023-07-09  7:01                                                           ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-09  0:37 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: yantar92, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

> Are they allowed to, say, write some data to a file?  If so, they
> might need to ask the user whether its okay to overwrite an existing
> file.

They might choose to use `write-region' instead, or save the file from
the main thread.

> IOW, I think you have a very narrow idea of "number and text crunching
> tasks" that could benefit from threads.  For example, one of the
> frequent requests is to run the part of Gnus that fetches email and
> articles in a separate thread -- if this is okay for "number and text
> crunching tasks", then it is likely to prompt users.

Prompting for options should take place before the thread is started, or
after the data is retrieved and about to be displayed.
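
A sketch of that ordering, assuming lexical binding; `my-fetch-from-server'
and `my-fetched-mail' are invented for the illustration:

(defvar my-fetched-mail nil
  "Where the worker thread leaves the fetched messages.")

(defun my-fetch-mail ()
  (interactive)
  ;; Ask everything we need up front, in the main thread...
  (let ((delete-after (y-or-n-p "Delete mail on the server after fetching? ")))
    ;; ...then let the worker run without ever prompting.
    (make-thread
     (lambda ()
       (setq my-fetched-mail (my-fetch-from-server delete-after)))
     "fetch-mail")))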

> We are mis-communicating.  My point is that it is almost impossible to
> take an existing non-trivial Lisp program and let it run from a
> non-main thread without bumping into this issue.  Your responses
> indicate that you agree with me: such Lisp programs need to be written
> from scratch under the assumption that they will run from a non-main
> thread.

I agree with this completely.  From my POV, such requirements are
reasonable and not very different from the requirements for doing so in
other GUI toolkits and programs.

> How is this compatible with the goal of having threads in Emacs, which
> are to allow running Lisp code with less hair than the existing timers
> or emacs-async?

I thought the concern was one of efficiency, and not ease of use:
process IO is slow compared to sharing the same VM space, and
subprocesses also utilize more memory.

>> > Fixed how?
>> 
>> By replacing `sit-for' with `sleep-for' (and in general avoiding
>> functions that call redisplay or GUI functions.)
>
> So programs running in non-main threads will be unable to do stuff
> like show progress etc.?  That's not very encouraging, to say the
> least.

They should be able to run code from the main thread's command loop, via
mechanisms such as `unread-command-events'.

> This basically means rewriting most of Emacs.  Because most APIs and
> subroutines we use in every Lisp program were not "specifically
> written for running outside the main thread".  So we'll need special
> variants of all of those to do those simple jobs.

I wasn't thinking about Lisp functions used for text editing tasks, but
rather raw data crunching functions.  Most of these are already
side-effect free, and some are even truly reentrant.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09  0:37                                                         ` Po Lu
@ 2023-07-09  7:01                                                           ` Eli Zaretskii
  2023-07-09  7:14                                                             ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-09  7:01 UTC (permalink / raw)
  To: Po Lu; +Cc: yantar92, emacs-devel

> From: Po Lu <luangruo@yahoo.com>
> Cc: yantar92@posteo.net,  emacs-devel@gnu.org
> Date: Sun, 09 Jul 2023 08:37:47 +0800
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> > IOW, I think you have a very narrow idea of "number and text crunching
> > tasks" that could benefit from threads.  For example, one of the
> > frequent requests is to run the part of Gnus that fetches email and
> > articles in a separate thread -- if this is okay for "number and text
> > crunching tasks", then it is likely to prompt users.
> 
> Prompting for options should take place before the thread is started, or
> after the data is retrieved and about to be displayed.

That is of course not always possible, or even desirable.  For
example, some prompts are caused by specific aspects of the
processing, which may or may not happen, depending on the data.
Prompting for options unrelated to the actual processing would mean
annoying users unnecessarily.  Etc. etc.

> > We are mis-communicating.  My point is that it is almost impossible to
> > take an existing non-trivial Lisp program and let it run from a
> > non-main thread without bumping into this issue.  Your responses
> > indicate that you agree with me: such Lisp programs need to be written
> > from scratch under the assumption that they will run from a non-main
> > thread.
> 
> I agree with this completely.  From my POV, such requirements are
> reasonable and not very different from the requirements for doing so in
> other GUI toolkits and programs.

So basically, what you have in mind is a way of accruing a body of
special-purpose Lisp programs written specifically for running from
non-main threads.  Which means reusing existing code or packages not
written to these specifications will be impossible, and we will in
effect have two completely separate flavors of Emacs Lisp programs.
It would mean, in particular, that many functions in simple.el,
subr.el, and other similar infrastructure packages will need to have
specialized variants suitable for running in non-main threads.

And all this will be possible if -- and it's a large "if" -- the
necessary support on the C level will be written and prove reliable.

If this is the plan, it might be possible, at least in principle, but
is it really what is on people's mind when they dream about "more
concurrent Emacs"?  I doubt that.

> > How is this compatible with the goal of having threads in Emacs, which
> > are to allow running Lisp code with less hair than the existing timers
> > or emacs-async?
> 
> I thought the concern was one of efficiency, and not ease of use:
> process IO is slow compared to sharing the same VM space, and
> subprocesses also utilize more memory.

Memory is cheap these days, and "slow IO" is still a gain when it
allows us to use more than a single CPU execution unit at the same
time.  So yes, efficiency is desirable, but ease of use is also
important.  What's more, I don't think anyone really wants to have to
write Lisp programs in a completely different way when we want them to
run from threads.

> >> > Fixed how?
> >> 
> >> By replacing `sit-for' with `sleep-for' (and in general avoiding
> >> functions that call redisplay or GUI functions.)
> >
> > So programs running in non-main threads will be unable to do stuff
> > like show progress etc.?  That's not very encouraging, to say the
> > least.
> 
> They should be able to run code from the main thread's command loop, via
> mechanisms such as `unread-command-events'.

Those mechanisms only work when Emacs is idle, which is bad for
features like progress reporting.  Doing this right requires to
redesign how redisplay kicks in, and probably have several different
kinds of redisplay, not one.

> > This basically means rewriting most of Emacs.  Because most APIs and
> > subroutines we use in every Lisp program were not "specifically
> > written for running outside the main thread".  So we'll need special
> > variants of all of those to do those simple jobs.
> 
> I wasn't thinking about Lisp functions used for text editing tasks, but
> rather raw data crunching functions.  Most of these are already
> side-effect free, and some are even truly reentrant.

I think none of them are side-effect free, if you look closely.  They
access buffer text, they move point, they temporarily change the
selected window and the current buffer, they access and many times
modify the obarray, etc. etc.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09  7:01                                                           ` Eli Zaretskii
@ 2023-07-09  7:14                                                             ` Po Lu
  2023-07-09  7:35                                                               ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-09  7:14 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: yantar92, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

> That is of course not always possible, or even desirable.  For
> example, some prompts are caused by specific aspects of the
> processing, which may or may not happen, depending on the data.
> Prompting for options unrelated to the actual processing would mean
> annoying users unnecessarily.  Etc. etc.

In that case, the thread will have to suspend itself until the main
thread can read a response from the user.

> So basically, what you have in mind is a way of accruing a body of
> special-purpose Lisp programs written specifically for running from
> non-main threads.  Which means reusing existing code or packages not
> written to these specifications will be impossible, and we will in
> effect have two completely separate flavors of Emacs Lisp programs.
> It would mean, in particular, that many functions in simple.el,
> subr.el, and other similar infrastructure packages will need to have
> specialized variants suitable for running in non-main threads.

Yes.

> And all this will be possible if -- and it's a large "if" -- the
> necessary support on the C level will be written and prove reliable.
>
> If this is the plan, it might be possible, at least in principle, but
> is it really what is on people's mind when they dream about "more
> concurrent Emacs"?  I doubt that.

I don't know what other people think, but it's what I would find
desirable, as my gripe with tools like Semantic is that they cannot run
expensive text crunching tasks in the background.  Having shared-memory
multiprocessing would allow these tasks to be implemented efficiently.

> Memory is cheap these days, and "slow IO" is still a gain when it
> allows us to use more than a single CPU execution unit at the same
> time.  So yes, efficiency is desirable, but ease of use is also
> important.  What's more, I don't think anyone really wants to have to
> write Lisp programs in a completely different way when we want them to
> run from threads.

When programmers write such code for other interactive programs, they
are comfortable with the limitations of running code outside of the UI
thread.  Why should writing new, thread-safe Lisp for Emacs be any more
difficult?

> Those mechanisms only work when Emacs is idle, which is bad for
> features like progress reporting.  Doing this right requires to
> redesign how redisplay kicks in, and probably have several different
> kinds of redisplay, not one.

Maybe.  I haven't worked out the details of that yet, but in most other
GUI programs and toolkits, messages from other threads can only be
processed by the UI thread while it is idle.

> I think none of them are side-effect free, if you look closely.  They
> access buffer text, they move point, they temporarily change the
> selected window and the current buffer, they access and many times
> modify the obarray, etc. etc.

`intern' should be interlocked so that only one thread is accessing the
obarray at any given time.  Point and buffer text shouldn't prove a
problem, provided that the buffer being used is not accessed from any
other thread.

But yes, many of these functions will have to be audited and possibly
rewritten for multi-threaded correctness.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09  7:14                                                             ` Po Lu
@ 2023-07-09  7:35                                                               ` Eli Zaretskii
  2023-07-09  7:57                                                                 ` Ihor Radchenko
  2023-07-09  9:25                                                                 ` Po Lu
  0 siblings, 2 replies; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-09  7:35 UTC (permalink / raw)
  To: Po Lu; +Cc: yantar92, emacs-devel

> From: Po Lu <luangruo@yahoo.com>
> Cc: yantar92@posteo.net,  emacs-devel@gnu.org
> Date: Sun, 09 Jul 2023 15:14:36 +0800
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> > What's more, I don't think anyone really wants to have to
> > write Lisp programs in a completely different way when we want them to
> > run from threads.
> 
> When programmers write such code for other interactive programs, they
> are comfortable with the limitations of running code outside of the UI
> thread.  Why should writing new, thread-safe Lisp for Emacs be any more
> difficult?

Because we'd need to throw away 40 years of Lisp programming, and
rewrite almost every bit of what was written since then.  It's a huge
setback for writing Emacs applications.

> > Those mechanisms only work when Emacs is idle, which is bad for
> > features like progress reporting.  Doing this right requires to
> > redesign how redisplay kicks in, and probably have several different
> > kinds of redisplay, not one.
> 
> Maybe.  I haven't worked out the details of that yet, but in most other
> GUI programs and toolkits, messages from other threads can only be
> processed by the UI thread while it is idle.

Emacs doesn't have a UI thread, as you know.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09  7:35                                                               ` Eli Zaretskii
@ 2023-07-09  7:57                                                                 ` Ihor Radchenko
  2023-07-09  8:41                                                                   ` Eli Zaretskii
  2023-07-09  9:25                                                                 ` Po Lu
  1 sibling, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-09  7:57 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: Po Lu, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> When programmers write such code for other interactive programs, they
>> are comfortable with the limitations of running code outside of the UI
>> thread.  Why should writing new, thread-safe Lisp for Emacs be any more
>> difficult?
>
> Because we'd need to throw away 40 years of Lisp programming, and
> rewrite almost every bit of what was written since then.  It's a huge
> setback for writing Emacs applications.

Could you please elaborate on why exactly we would need to rewrite everything?
If we agree that async threads will have limitations, we may as well
start small, allowing a limited number of functions to be used. The
functions that do not support true async will then switch to the
cooperative mode, possibly throwing a warning.

Later, if it is truly necessary to rewrite things for async, it can be
done gradually.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09  7:57                                                                 ` Ihor Radchenko
@ 2023-07-09  8:41                                                                   ` Eli Zaretskii
  2023-07-10 14:53                                                                     ` Dmitry Gutov
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-09  8:41 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: Po Lu <luangruo@yahoo.com>, emacs-devel@gnu.org
> Date: Sun, 09 Jul 2023 07:57:47 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> When programmers write such code for other interactive programs, they
> >> are comfortable with the limitations of running code outside of the UI
> >> thread.  Why should writing new, thread-safe Lisp for Emacs be any more
> >> difficult?
> >
> > Because we'd need to throw away 40 years of Lisp programming, and
> > rewrite almost every bit of what was written since then.  It's a huge
> > setback for writing Emacs applications.
> 
> May you please elaborate why exactly do we need to rewrite everything?

We already did, please read the previous messages.  In a nutshell:
because most of the Lisp code we have cannot be run from an async
thread.

> If we agree that async threads will have limitations, we may as well
> start small, allowing a limited number of functions to be used. The
> functions that do not support true async will then switch to the
> cooperative mode, possibly throwing a warning.
> 
> Later, if it is truly necessary to rewrite things for async, it can be
> done gradually.

The above means exactly that: a massive rewrite of a very large portion
of the Lisp code we have today and use every day.  As long as such a
rewrite is not done, "switching to cooperative mode" means that the
Lisp program runs the way it does today, so there will be almost no
gain until enough code has been rewritten.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09  7:35                                                               ` Eli Zaretskii
  2023-07-09  7:57                                                                 ` Ihor Radchenko
@ 2023-07-09  9:25                                                                 ` Po Lu
  2023-07-09 11:14                                                                   ` Eli Zaretskii
  1 sibling, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-09  9:25 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: yantar92, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

> Because we'd need to throw away 40 years of Lisp programming, and
> rewrite almost every bit of what was written since then.  It's a huge
> setback for writing Emacs applications.

Programmers who are not comfortable with that can continue to utilize
asynchronous subprocesses or to run their code in the main thread.

> Emacs doesn't have a UI thread, as you know.

Emacs's equivalent is the main thread, which is the only thread that can
safely call redisplay.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08 14:26                                                                         ` Eli Zaretskii
@ 2023-07-09  9:36                                                                           ` Ihor Radchenko
  2023-07-09  9:56                                                                             ` Po Lu
                                                                                               ` (2 more replies)
  0 siblings, 3 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-09  9:36 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> > Which variables can safely and usefully be made thread-local?
>> 
>> PT, ZV, BEGV
>
> Even that is not enough: you forgot the gap.

Am I missing something? Does the gap remain intact when we just move
point? AFAIU, the gap only needs to move when we do edits.

I was thinking about thread-local variables just to move around and read
buffer. Asynchronous writing is probably a poor idea anyway - the very
idea of a gap does not work well when we need to write in multiple
far-away places in buffer.

>> and the buffer-local variables that are represented by the
>> global C variables.
>
> That's a lot!

Do you mean that "a lot" is bad?

>> > I don't see how this could be practically useful.
>> 
>> For example, `org-element-interpret-data' converts Org mode AST to
>> string. Just now, I tried it using AST of one of my large Org buffers.
>> It took 150 seconds to complete, while blocking Emacs.
>
> It isn't side-effect-free, though.

It is, just not declared so.

> I don't believe any useful Lisp program in Emacs can be
> side-effect-free, for the purposes of this discussion.  Every single
> one of them accesses the global state and changes the global state.

As I said, I hope that we can convert the important parts of the global
state into thread-local state.

>> Yes. I mean... look at Haskell. There is no shortage of pure functional
>> libraries there.
>
> I cannot follow you there: I don't know Haskell.

In short, pure functions in Haskell can utilize multiple CPUs
automatically, without programmers explicitly writing code for
multi-threading support.
https://wiki.haskell.org/Parallelism

     In Haskell we provide two ways to achieve parallelism:

     - Pure parallelism, which can be used to speed up non-IO parts of the program.
     - Concurrency, which can be used for parallelising IO.
     
     Pure Parallelism (Control.Parallel): Speeding up a pure computation
     using multiple processors. Pure parallelism has these advantages:

     - Guaranteed deterministic (same result every time)
     - no race conditions or deadlocks
     
     Concurrency (Control.Concurrent): Multiple threads of control that execute "at the same time".

     - Threads are in the IO monad
     - IO operations from multiple threads are interleaved non-deterministically
     - communication between threads must be explicitly programmed
     - Threads may execute on multiple processors simultaneously
     - Dangers: race conditions and deadlocks
     
     *Rule of thumb: use Pure Parallelism if you can, Concurrency otherwise.*

>> (declare (pure t))
>
> How many of these do we have, and can useful programs be written using
> only those?

I think I did provide several examples. Po Lu also did.
One additional example: Org mode export (the most CPU-heavy part of it).

> ... More importantly, when you call some function from
> simple.el, how do you know whether all of its subroutines and
> primitives are 'pure'?

We do not, but it may be possible to add assertions that will ensure
purity in whatever sense we need.

Having pure functions is not enough by itself - async support is still
needed and not trivial. However, supporting asynchronous pure functions
is easier than more general async support. See the above quote from the
Haskell wiki.
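
To make "pure" concrete, here is a small made-up example that can
already be declared pure and side-effect-free with the existing
`declare' forms; only the automatic parallel evaluation of such
functions is hypothetical:

  (defun my-org-heading-level (line)
    "Return the number of leading stars if LINE is an Org heading, else nil."
    (declare (pure t) (side-effect-free t))
    (let ((n 0))
      (while (and (< n (length line)) (eq (aref line n) ?*))
        (setq n (1+ n)))
      (and (> n 0)
           (< n (length line))
           (eq (aref line n) ?\s)
           n)))

  (my-org-heading-level "** TODO test")  ; => 2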

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09  9:36                                                                           ` Ihor Radchenko
@ 2023-07-09  9:56                                                                             ` Po Lu
  2023-07-09 10:04                                                                               ` Ihor Radchenko
  2023-07-09 11:59                                                                             ` Eli Zaretskii
  2023-07-09 17:13                                                                             ` Gregory Heytings
  2 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-09  9:56 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> Am I missing something? Does the gap remain intact when we just move
> point? AFAIU, the gap only needs to move when we do edits.
>
> I was thinking about thread-local variables just to move around and read
> buffer. Asynchronous writing is probably a poor idea anyway - the very
> idea of a gap does not work well when we need to write in multiple
> far-away places in buffer.

Thread-local state is NOT cheap!  We should not jump at every
opportunity to add new thread-local state.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-08 14:43                                                     ` Eli Zaretskii
@ 2023-07-09  9:57                                                       ` Ihor Radchenko
  2023-07-09 12:08                                                         ` Eli Zaretskii
  2023-07-09 12:22                                                         ` Po Lu
  0 siblings, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-09  9:57 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> I was able to identify a single place in C code where buffer's base
>> buffer is being set: in make-indirect-buffer, when the buffer is just
>> created. So, it is safe to assume that buffer->base_buffer remain
>> constant for any given live buffer. Unless I miss something.
>
> C code can change.  It is not carved in stone.  Are we going to treat
> the current state of the code as if it can never change?  That's
> unwise.

Drastic changes as big as breaking the concept that an indirect buffer
has a single fixed parent are not likely.

Of course, any future breaking changes at the C level will need to
account for concurrency.

>> > Undo records changes in text properties and markers, and those are
>> > different in the indirect buffers from the base buffers.  Does this
>> > explain why we cannot simply point to the base buffer?
>> 
>> Are you sure? Text properties are certainly shared between indirect buffers.
>
> That's not what the documentation says.

Could you please point me to this documentation?

>> I now looked a bit further, and what you are talking about are the
>> variables defined via DEFVAR_PER_BUFFER.
>
> Non necessarily.  Example: show-trailing-whitespace.

It has XSYMBOL (sym)->u.s.redirect = SYMBOL_FORWARDED;

and the loop in set_buffer_internal_2 has
if (sym->u.s.redirect == SYMBOL_LOCALIZED

>> If my understanding is correct, it should be safe to convert them into
>> thread-local variables and update them within current thread when
>> current_buffer (already thread-local) is altered.
>
> It is only safe if no other thread will access the same buffer.  For
> example, redisplay will be unable to show that buffer if it is visible
> in some window, because its notion of the buffer-local values might be
> inaccurate.

Another thread will have its own local set of Vfoo. When that thread
switches to a buffer, it will update its local Vfoo values. So,
redisplay will have access to correct local values.

>> Will it make sense to convert PT, ZV, and BEGV into thread-local variables?
>
> What do you expect redisplay to do when some thread moves point in a
> way that it is no longer in the window?

Async threads will not trigger redisplay. And they will have their own
PT, BEGV, and ZV.

Basically, I propose that async threads not set buffer->pt in the buffer
object. They will operate using their own local excursion and
restriction.
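
At the Lisp level this is essentially the discipline that is already
possible (though not enforced) today: do all movement and narrowing
inside `save-excursion'/`save-restriction', so the buffer's own point
and restriction are never touched. A made-up illustration:

  (defun my-count-matches-quietly (buffer regexp)
    "Count matches of REGEXP in BUFFER without touching its point or narrowing."
    (with-current-buffer buffer
      (save-excursion
        (save-restriction
          (widen)
          (goto-char (point-min))
          (let ((count 0))
            (while (re-search-forward regexp nil t)
              (setq count (1+ count)))
            count)))))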

>> > Buffer's marker list are referenced in subroutines of
>> > record_buffer_markers.
>> 
>> Do you mean record_buffer_markers->set_marker_both->attach_marker->
>>   if (m->buffer != b)
>>     {
>>       unchain_marker (m);
>>       m->buffer = b;
>>       m->next = BUF_MARKERS (b);
>>       BUF_MARKERS (b) = m;
>>     }
>> 
>> But will this `if' ever trigger for PT, BEGV, and ZV?
>
> I don't know!  You cannot possibly have code where you need to reason
> about every single line whether something can or cannot happen there.
> You need a relatively small set of basic assumptions that _always_
> hold.  Anything more complex makes the task of developing and
> maintaining this code an impossible job.

Fair.
Then, we can block if we need to store thread markers.

>> Also, it looks reasonable to block BUF_MARKERS when we need to change
>> BUF_MARKERS.
>
> Sure.  Like I said: we'd need to lock everything.

I respectfully disagree. It is not like a global lock. Yes, there will
be a lot of locking of individual Elisp objects. But that does not mean
that everything will be locked.

I think there is a good way to tentatively check whether everything will
be locked or not - just check what happens with consing. Consing appears
to be one of the biggest bottlenecks that could effectively cause a
global lock. If it can be demonstrated that consing is manageable, other
things will pose less of an issue.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09  9:56                                                                             ` Po Lu
@ 2023-07-09 10:04                                                                               ` Ihor Radchenko
  0 siblings, 0 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-09 10:04 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

> Ihor Radchenko <yantar92@posteo.net> writes:
>
>> Am I missing something? Does the gap remain intact when we just move
>> point? AFAIU, the gap only needs to move when we do edits.
>>
>> I was thinking about thread-local variables just to move around and read
>> buffer. Asynchronous writing is probably a poor idea anyway - the very
>> idea of a gap does not work well when we need to write in multiple
>> far-away places in buffer.
>
> Thread-local state is NOT cheap!  We should not jump at every
> opportunity to add new thread-local state.

I see.
I still think that making PT, BEGV, and ZV thread-local makes sense
(even for the current cooperative threads; I tried to write something
using them some time ago, and having to constantly switch buffers and
set point was a big annoyance).
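
For illustration, this is the kind of boilerplate one ends up wrapping
into helpers today; the macro is made up and uses only existing
primitives:

  (defmacro my-with-point-in-buffer (buffer pos &rest body)
    "Evaluate BODY in BUFFER with point at POS, restoring point afterwards."
    (declare (indent 2) (debug t))
    `(with-current-buffer ,buffer
       (save-excursion
         (goto-char ,pos)
         ,@body)))

(With thread-local PT, at least the position such a wrapper sets could
not be clobbered by another thread in the middle of BODY.)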

As for the global C variable bindings, could we get rid of them?

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09  9:25                                                                 ` Po Lu
@ 2023-07-09 11:14                                                                   ` Eli Zaretskii
  2023-07-09 11:23                                                                     ` Ihor Radchenko
  2023-07-09 12:10                                                                     ` Po Lu
  0 siblings, 2 replies; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-09 11:14 UTC (permalink / raw)
  To: Po Lu; +Cc: yantar92, emacs-devel

> From: Po Lu <luangruo@yahoo.com>
> Cc: yantar92@posteo.net,  emacs-devel@gnu.org
> Date: Sun, 09 Jul 2023 17:25:21 +0800
> 
> > Emacs doesn't have a UI thread, as you know.
> 
> Emacs's equivalent is the main thread, which is the only thread that can
> safely call redisplay.

Which makes it impossible to display indications that are unrelated to
buffer text displayed in some window.  We use the "normal"
buffer/window/frame/redisplay machinery for showing such indications
(a good example is progress report), but the downsides of this are:

  . we cannot show the indications asynchronously, although they have
    absolutely no relation to any other buffer or window, and
  . they look bad when compared to modern GUI facilities



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09 11:14                                                                   ` Eli Zaretskii
@ 2023-07-09 11:23                                                                     ` Ihor Radchenko
  2023-07-09 12:10                                                                     ` Po Lu
  1 sibling, 0 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-09 11:23 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: Po Lu, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> Emacs's equivalent is the main thread, which is the only thread that can
>> safely call redisplay.
>
> Which makes it impossible to display indications that are unrelated to
> buffer text displayed in some window.  We use the "normal"
> buffer/window/frame/redisplay machinery for showing such indications
> (a good example is progress report), but the downsides of this are:

Progress reports and other indications that do not require user input
make more sense if they are converted to async calls. For example,
`make-progress-reporter' might create a cooperative thread (one that is
able to trigger redisplay), and `progress-reporter-update' can signal
that thread to display the updated progress.
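
A rough approximation is already possible with the existing cooperative
threads; the explicit `thread-yield' calls are exactly what a real async
design would make unnecessary (the job below is a made-up stand-in):

  (defun my-slow-job-with-progress ()
    "Run a dummy job in a cooperative thread, reporting progress as it goes."
    (let ((reporter (make-progress-reporter "Crunching..." 0 100)))
      (make-thread
       (lambda ()
         (dotimes (i 100)
           (sleep-for 0.05)                  ; stand-in for real work
           (progress-reporter-update reporter i)
           (thread-yield))                   ; let the main thread redisplay
         (progress-reporter-done reporter)))))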

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09  9:36                                                                           ` Ihor Radchenko
  2023-07-09  9:56                                                                             ` Po Lu
@ 2023-07-09 11:59                                                                             ` Eli Zaretskii
  2023-07-09 13:58                                                                               ` Ihor Radchenko
  2023-07-09 17:13                                                                             ` Gregory Heytings
  2 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-09 11:59 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Sun, 09 Jul 2023 09:36:15 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> > Which variables can safely and usefully be made thread-local?
> >> 
> >> PT, ZV, BEGV
> >
> > Even that is not enough: you forgot the gap.
> 
> Am I missing something? Does the gap remain intact when we just move
> point? AFAIU, the gap only needs to move when we do edits.

That's not the issue.  The issue is that without the gap being
accurate Emacs cannot convert buffer positions to pointers to buffer
text.  So not saving the gap is asking for trouble.

Once again: please do NOT try designing Emacs features based on what
you happen to see in the current code.  First, it is easy to miss
important stuff that invalidates your design; and second, code does
change over time, and if you introduce a feature that assumes some
changes will never happen, you are introducing ticking time bombs into
Emacs.  So instead, you need to understand the assumptions and the
invariants on which the code relies, and either keep them or go over
everything and adapt the code to assumptions that are no longer true.
In this case, the assumption is that the gap is always accurate except
for short periods of time, during which buffer text cannot be accessed
via buffer positions.

> I was thinking about thread-local variables just to move around and read
> buffer.

You will see in xml.c that we move the gap even though we do not
"edit" (i.e. do not modify the buffer).  We do this so that a pointer
to a portion of buffer text could be passed to an external library as
a simple C char array -- a legitimate technique that must be
available.  So even reading the buffer sometimes might require moving
the gap.

> Asynchronous writing is probably a poor idea anyway - the very
> idea of a gap does not work well when we need to write in multiple
> far-away places in buffer.

What if the main thread modifies buffer text, while one of the other
threads wants to read from it?

> >> and the buffer-local variables that are represented by the
> >> global C variables.
> >
> > That's a lot!
> 
> Do you mean that "a lot" is bad?

Yes, because it will require a huge thread-local storage.

> >> > I don't see how this could be practically useful.
> >> 
> >> For example, `org-element-interpret-data' converts Org mode AST to
> >> string. Just now, I tried it using AST of one of my large Org buffers.
> >> It took 150 seconds to complete, while blocking Emacs.
> >
> > It isn't side-effect-free, though.
> 
> It is, just not declared so.

No, it isn't.  For starters, it changes obarray.

> >> Yes. I mean... look at Haskell. There is no shortage of pure functional
> >> libraries there.
> >
> > I cannot follow you there: I don't know Haskell.
> 
> In short, pure functions in Haskell can utilize multiple CPUs
> automatically, without programmers explicitly writing code for
> multi-threading support.
> https://wiki.haskell.org/Parallelism

Thanks, but I'm afraid this all is a bit academic.  Haskell is a
language, whereas Emacs is a text-processing program.  So Emacs
doesn't only define a programming language, it also implements gobs of
APIs and low-level subroutines whose purpose is to facilitate a
specific class of applications.  The huge global state that we have is
due to this latter aspect of Emacs, not to the design of the language.

> > ... More importantly, when you call some function from
> > simple.el, how do you know whether all of its subroutines and
> > primitives are 'pure'?
> 
> We do not, but it may be possible to add assertions that will ensure
> purity in whatever sense we need.

Those assertions will fire in any useful program with 100% certainty.
Imagine the plight of an Emacs Lisp programmer who has to write and
debug such programs.

We have in Emacs gazillion lines of Lisp code, written, debugged, and
tested during 4 decades.  We use those, almost without thinking, every
day for writing Lisp programs.  What you suggest means throwing away
most of that and starting from scratch.

I mean, take the simplest thing, like save-buffer-excursion or
with-selected-window, something that we use all the time, and look how
much of the global state they access and change.  Then imagine that
you don't have these and need to write programs that switch buffers
and windows temporarily in thread-safe way.  Then reflect on what this
means for all the other useful APIs and subroutines we have.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09  9:57                                                       ` Ihor Radchenko
@ 2023-07-09 12:08                                                         ` Eli Zaretskii
  2023-07-09 14:16                                                           ` Ihor Radchenko
  2023-07-09 12:22                                                         ` Po Lu
  1 sibling, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-09 12:08 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Sun, 09 Jul 2023 09:57:20 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> I was able to identify a single place in C code where buffer's base
> >> buffer is being set: in make-indirect-buffer, when the buffer is just
> >> created. So, it is safe to assume that buffer->base_buffer remain
> >> constant for any given live buffer. Unless I miss something.
> >
> > C code can change.  It is not carved in stone.  Are we going to treat
> > the current state of the code as if it can never change?  That's
> > unwise.
> 
> Drastic changes as big as breaking the concept that indirect buffer has
> a single fixed parent are not likely.

Famous last words ;-)

> Of course, any future breaking changes at the C level will need to
> account for concurrency.

Who will remember that we at some point "assumed" buffer->base_buffer
can change _only_ in make-indirect-buffer?

> >> > Undo records changes in text properties and markers, and those are
> >> > different in the indirect buffers from the base buffers.  Does this
> >> > explain why we cannot simply point to the base buffer?
> >> 
> >> Are you sure? Text properties are certainly shared between indirect buffers.
> >
> > That's not what the documentation says.
> 
> May you please point me to this documentation?

     In all other respects, the indirect buffer and its base buffer are
  completely separate.  They have different names, independent values of
  point, independent narrowing, independent markers and overlays (though
  inserting or deleting text in either buffer relocates the markers and
  overlays for both), independent major modes, and independent
  buffer-local variable bindings.

Or did you exclude overlays and their properties from the above?

> >> If my understanding is correct, it should be safe to convert them into
> >> thread-local variables and update them within current thread when
> >> current_buffer (already thread-local) is altered.
> >
> > It is only safe if no other thread will access the same buffer.  For
> > example, redisplay will be unable to show that buffer if it is visible
> > in some window, because its notion of the buffer-local values might be
> > inaccurate.
> 
> Another thread will have its own local set of Vfoo. When that thread
> switches to a buffer, it will update its local Vfoo values.

And what happens if that thread also changes Vfoo?

> So, redisplay will have access to correct local values.

No, it won't, because redisplay runs only in the main thread,
remember?  So it will not see changes to Vfoo done by other threads.
This could sometimes be good (e.g., if the changes are temporary), but
sometimes bad (if the changes are permanent).

> >> Will it make sense to convert PT, ZV, and BEGV into thread-local variables?
> >
> > What do you expect redisplay to do when some thread moves point in a
> > way that it is no longer in the window?
> 
> Async threads will not trigger redisplay. And they will have their own
> PT, BEGV, and ZV.

This goes back to the other sub-thread, where we discussed how to show
and prompt the user from non-main threads.  The conclusion was that
there is no good solution to that.  The best proposal, wait for the
main thread, would mean that stuff like stealth fontifications, which
currently run from timers, cannot be run from a thread.

> >> Also, it looks reasonable to block BUF_MARKERS when we need to change
> >> BUF_MARKERS.
> >
> > Sure.  Like I said: we'd need to lock everything.
> 
> I kindly do not agree. It is not like a global lock. Yes, there will be
> a lot of blocking of individual Elisp objects. But it does not mean that
> everything will be locked.

I see no difference between locking everything and locking just 95%.

> I think there is a good way to tentatively check if everything will be
> locked or not - just check what will happen with consing. Consing
> appears to be one of the biggest bottlenecks that will basically cause
> global lock. If it can be demonstrated that consing is manageable, other
> things will pose less of an issue.

Consing is not even the tip of the iceberg.  The real bad problems are
elsewhere: in the global objects we access and modify.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09 11:14                                                                   ` Eli Zaretskii
  2023-07-09 11:23                                                                     ` Ihor Radchenko
@ 2023-07-09 12:10                                                                     ` Po Lu
  2023-07-09 13:03                                                                       ` Eli Zaretskii
  1 sibling, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-09 12:10 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: yantar92, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>   . they look badly when compared to modern GUI facilities

Other GUI toolkits also cannot display progress indications outside the
main thread, so I'm confused as to how this comparison was made.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09  9:57                                                       ` Ihor Radchenko
  2023-07-09 12:08                                                         ` Eli Zaretskii
@ 2023-07-09 12:22                                                         ` Po Lu
  2023-07-09 13:12                                                           ` Eli Zaretskii
  1 sibling, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-09 12:22 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> Another thread will have its own local set of Vfoo. When that thread
> switches to a buffer, it will update its local Vfoo values. So,
> redisplay will have access to correct local values.

How do you propose to associate different bindings for global variables
with each thread?  Remember that access to even a C variable in TLS is
significantly more expensive than an access to a normal variable; for
example, in MIPS systems, such an access results in an instruction
emulation trap to the Unix kernel, and on many other systems requires
multiple system and function calls.

Multiple value cells will have to be maintained for each symbol that is
locally bound.  This alone will already be very expensive; treating all
symbols this way is definitely unacceptable.

> Async threads will not trigger redisplay. And they will have their own
> PT, BEGV, and ZV.
>
> Basically, I propose async threads not to set buffer->pt in the buffer
> object. They will operate using their own local excursion and
> restriction.

We already have window points that are distinct from PT and PT_BYTE.
Adding thread-local points and narrowing would only contribute more to
the confusion.

The straightforward solution is rather for Lisp to avoid editing buffers
that are currently being displayed from outside the main thread.

>> Sure.  Like I said: we'd need to lock everything.

But the interlocking will be specific to the object being locked.  Two
threads will be able to modify the marker lists of two distinct buffers
simultaneously.
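
At the Lisp level, the shape of such per-object locking can already be
sketched with the existing mutex primitives; the table and macro below
are purely illustrative, not an existing facility:

  (defvar my-buffer-locks (make-hash-table :test #'eq :weakness 'key)
    "Map each buffer onto its own mutex.")

  (defun my-buffer-lock (buffer)
    "Return the mutex for BUFFER, creating one if necessary."
    (or (gethash buffer my-buffer-locks)
        (puthash buffer (make-mutex (buffer-name buffer)) my-buffer-locks)))

  (defmacro my-with-locked-buffer (buffer &rest body)
    "Run BODY in BUFFER with that buffer's lock held."
    (declare (indent 1) (debug t))
    `(with-mutex (my-buffer-lock ,buffer)
       (with-current-buffer ,buffer
         ,@body)))

Two threads taking my-with-locked-buffer on two distinct buffers do not
contend with each other at all.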



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09 12:10                                                                     ` Po Lu
@ 2023-07-09 13:03                                                                       ` Eli Zaretskii
  0 siblings, 0 replies; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-09 13:03 UTC (permalink / raw)
  To: Po Lu; +Cc: yantar92, emacs-devel

> From: Po Lu <luangruo@yahoo.com>
> Cc: yantar92@posteo.net,  emacs-devel@gnu.org
> Date: Sun, 09 Jul 2023 20:10:06 +0800
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >   . they look badly when compared to modern GUI facilities
> 
> Other GUI toolkits also cannot display progress indications outside the
> main thread, so I'm confused as to how this comparison was made.

In this particular point, I meant visually, not thread-wise.  We use
the "normal" Emacs display facilities: windows and frames, and those
don't look like what modern users expect to see for such widgets.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09 12:22                                                         ` Po Lu
@ 2023-07-09 13:12                                                           ` Eli Zaretskii
  2023-07-10  0:18                                                             ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-09 13:12 UTC (permalink / raw)
  To: Po Lu; +Cc: yantar92, emacs-devel

> From: Po Lu <luangruo@yahoo.com>
> Cc: Eli Zaretskii <eliz@gnu.org>,  emacs-devel@gnu.org
> Date: Sun, 09 Jul 2023 20:22:30 +0800
> 
> >> Sure.  Like I said: we'd need to lock everything.
> 
> But the interlocking will be specific to the object being locked.  Two
> threads will be able to modify the marker lists of two distinct buffers
> simultaneously.

For this particular example, yes.  But we also have a lot of global
objects that are not specific to a buffer or a window.  Example:
buffer-list.  Another example: Vwindow_list (not exposed to Lisp).
Etc. etc.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09 11:59                                                                             ` Eli Zaretskii
@ 2023-07-09 13:58                                                                               ` Ihor Radchenko
  2023-07-09 14:52                                                                                 ` Eli Zaretskii
                                                                                                   ` (2 more replies)
  0 siblings, 3 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-09 13:58 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> >> > Which variables can safely and usefully be made thread-local?
> ...
> In this case, the assumption is that the gap is always accurate except
> for short periods of time, during which buffer text cannot be accessed
> via buffer positions.

Thanks for the clarification.
But note that I was suggesting which global state variables may be
converted to thread-local.

I now understand that the gap can be moved by the code that is not
actually writing text in buffer. However, I do not see how this is a
problem we need to care about more than about generic problem with
simultaneous write.

If a variable or object value is being written, we need to block it.
If a buffer object is being written (like when moving the gap or writing
text), we need to block it. And this blocking will generally pose a
problem only when multiple threads try to access the same object, which
is generally unlikely.

The global state is another story. Redisplay, consing data, current
buffer, point, buffer narrowing, and C variables corresponding to
buffer-local Elisp variables are shared across all the threads, and
are often set by all the threads. And because point, narrowing, and some
buffer-locals (like `case-fold-search') are so ubiquitous, blocking them
will block everything. (Also, unwinding may contribute here, judging
from how thread.c juggles it)

So, if we want to have async support in Emacs, we need to find out how
to deal with each component of the global state without global locking:

1. There is consing, which is inherently not asynchronous.

   It is not guaranteed, but I _hope_ that locking during consing
   can be manageable.

   We need to ensure that simultaneous consing will never happen. AFAIU,
   it should be ok if something that does not involve consing is running
   at the same time as a cons (correct me if I am wrong here).

2. Redisplay cannot be asynchronous, in the sense that it does not make
   sense for multiple threads, possibly working with different buffers
   and different points in those buffers, to request redisplay
   simultaneously. Of course, it is impossible to display several places
   in a buffer at once.

   Only a single `main-thread' should be allowed to modify frames,
   window configurations, and generally trigger redisplay. And thread
   that attempts to do such modifications must wait to become
   `main-thread' first.

   This means that any code that uses things like
   `save-window-excursion', `display-buffer', and other display-related
   stuff cannot run asynchronously.

   But I still believe that useful Elisp code can be written without
   needing to trigger redisplay. I have seen plenty of examples in Org,
   and I have refactored a number of functions to avoid stuff like
   `switch-to-buffer' in favour of `with-current-buffer'.

3. Current buffer, point position, and narrowing.

   By the current design, Emacs always has a single global current
   buffer, point position, and narrowing state in that buffer.
   Even when we switch cooperative threads, a thread must update its
   thread->current_buffer to previous_thread->current_buffer, and update
   point and narrowing by calling set_buffer_internal_2.

   The current design is incompatible with async threads - they must be
   able to have different buffers, points, and narrowing states current
   within each thread.

   That's why I suggested converting PT, BEGV, and ZV into
   thread-locals.

   Note that PT, BEGV, and ZV are currently stored in the buffer object
   before leaving a buffer and recovered when setting a new buffer.
   Async threads will invalidate the assumption that
   (set-buffer "1") (goto-char 100) (set-buffer "2") (set-buffer "1")
   (= (point) 100) holds (see the sketch after this list).

4. Buffer-local variables defined in C have C variable equivalents that
   are updated as Emacs changes current_buffer.

   AFAIU, their purpose is to make buffer-local variables and normal
   Elisp variables uniformly accessible from C code - C code does not
   need to worry about whether Vfoo is buffer-local or not, and can just
   set it.

   This is not compatible with async threads that work with several buffers.

   I currently do not fully understand how defining C variables works in
   DEFVAR_LISP.
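
Here is the sketch referred to in item 3, using only today's primitives.
The first form spells out the assumption; the second shows code that
does not depend on the buffer's point surviving a buffer switch, by
carrying the position in a marker owned by the caller:

  ;; The assumption (would become fragile under the proposal):
  (let ((b1 (generate-new-buffer "demo-1"))
        (b2 (generate-new-buffer "demo-2")))
    (with-current-buffer b1 (insert (make-string 200 ?x)))
    (with-current-buffer b1 (goto-char 100))
    (with-current-buffer b2 (ignore))         ; unrelated work elsewhere
    (with-current-buffer b1 (= (point) 100))) ; t today, not guaranteed then

  ;; A version that does not rely on it:
  (let* ((b1 (generate-new-buffer "demo-1"))
         (b2 (generate-new-buffer "demo-2"))
         (pos (with-current-buffer b1
                (insert (make-string 200 ?x))
                (copy-marker 100))))
    (with-current-buffer b2 (ignore))         ; unrelated work elsewhere
    (with-current-buffer (marker-buffer pos)
      (goto-char pos)
      (= (point) 100)))                       ; t regardless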

>> Asynchronous writing is probably a poor idea anyway - the very
>> idea of a gap does not work well when we need to write in multiple
>> far-away places in buffer.
>
> What if the main thread modifies buffer text, while one of the other
> threads wants to read from it?

Reading and writing should be blocked while buffer is being modified.

>> >> For example, `org-element-interpret-data' converts Org mode AST to
>> >> string. Just now, I tried it using AST of one of my large Org buffers.
>> >> It took 150 seconds to complete, while blocking Emacs.
>> >
>> > It isn't side-effect-free, though.
>> 
>> It is, just not declared so.
>
> No, it isn't.  For starters, it changes obarray.

Do you mean `intern'? `intern-soft' would be equivalent there.

>> We do not, but it may be possible to add assertions that will ensure
>> purity in whatever sense we need.
>
> Those assertions will fire in any useful program with 100% certainty.
> Imagine the plight of an Emacs Lisp programmer who has to write and
> debug such programs.
>
> We have in Emacs gazillion lines of Lisp code, written, debugged, and
> tested during 4 decades.  We use those, almost without thinking, every
> day for writing Lisp programs.  What you suggest means throwing away
> most of that and starting from scratch.

There will indeed be a lot of work to make the range of Lisp functions
available for async code large enough. But it does not have to be done
all at once.

Of course, we first need to make sure that there are no hard blockers,
like global state. I do not think that Elisp code will be the blocker if
we find out how to deal with Emacs global state on C level.

> I mean, take the simplest thing, like save-buffer-excursion or
> with-selected-window, something that we use all the time, and look how
> much of the global state they access and change.  Then imagine that
> you don't have these and need to write programs that switch buffers
> and windows temporarily in thread-safe way.  Then reflect on what this
> means for all the other useful APIs and subroutines we have.

These examples touch on very basic aspects that we need to take care
of for async: (1) point/buffer; (2) unwind; (3) redisplay.
I think that (3) is not something that should be allowed as async. (1)
and (2) are to be discussed.

P.S. I am struggling to understand swap_in_symval_forwarding:

      /* Unload the previously loaded binding.  */
      tem1 = blv->valcell;

Is the above assignment redundant?

      if (blv->fwd.fwdptr)
	set_blv_value (blv, do_symval_forwarding (blv->fwd));

      /* Choose the new binding.  */
      {
	Lisp_Object var;
	XSETSYMBOL (var, symbol);
	tem1 = assq_no_quit (var, BVAR (current_buffer, local_var_alist));

This second assignment always executes after the first one, overwriting it.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09 12:08                                                         ` Eli Zaretskii
@ 2023-07-09 14:16                                                           ` Ihor Radchenko
  2023-07-09 15:00                                                             ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-09 14:16 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> Of course, if there are breaking future changes on C level will need to
>> account for concurrency.
>
> Who will remember that we at some point "assumed" buffer->base_buffer
> can change _only_ in make-indirect-buffer?

Ok. So, in other words, we need to make sure that any change to a
buffer, or to any of its indirect buffers, locks all of them.
Trying to be more granular is not reliable.

>> So, redisplay will have access to correct local values.
>
> No, it won't, because redisplay runs only in the main thread,
> remember?  So it will not see changes to Vfoo done by other threads.
> This could sometimes be good (e.g., if the changes are temporary), but
> sometimes bad (if the changes are permanent).

If redisplay is about to display a buffer that is being modified
(including its buffer-local values), it will have to wait until the
modification is done. Same for global Lisp variables.

>> > What do you expect redisplay to do when some thread moves point in a
>> > way that it is no longer in the window?
>> 
>> Async threads will not trigger redisplay. And they will have their own
>> PT, BEGV, and ZV.
>
> This goes back to the other sub-thread, where we discussed how to show
> and prompt the user from non-main threads.  The conclusion was that
> there is no good solution to that.  The best proposal, wait for the
> main thread, would mean that stuff like stealth fontifications, which
> currently run from timers, cannot be run from a thread.

Could you provide a link?
I am not sure how having independent PT+buffer in different threads
affects prompts.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09 13:58                                                                               ` Ihor Radchenko
@ 2023-07-09 14:52                                                                                 ` Eli Zaretskii
  2023-07-09 15:49                                                                                   ` Ihor Radchenko
  2023-07-16 14:58                                                                                 ` Ihor Radchenko
  2023-07-24  8:42                                                                                 ` Ihor Radchenko
  2 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-09 14:52 UTC (permalink / raw)
  To: Ihor Radchenko, Stefan Monnier; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Sun, 09 Jul 2023 13:58:51 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> >> > Which variables can safely and usefully be made thread-local?
> > ...
> > In this case, the assumption is that the gap is always accurate except
> > for short periods of time, during which buffer text cannot be accessed
> > via buffer positions.
> 
> Thanks for the clarification.
> But note that I was suggesting which global state variables may be
> converted to thread-local.
> 
> I now understand that the gap can be moved by the code that is not
> actually writing text in buffer. However, I do not see how this is a
> problem we need to care about more than about generic problem with
> simultaneous write.

Imagine a situation where we need to process XML or HTML, and since
that's quite expensive, we want to do that in a thread.  What you are
saying is that this will either be impossible/impractical to do from a
thread, or will require to lock the entire buffer from access, because
the above processing moves the gap.  If that is not a problem, I don't
know what is, because there could be a lot of such scenarios, and they
all will be either forbidden or very hard to implement.

> If a variable or object value is being written, we need to block it.
> If a buffer object is being written (like when moving the gap or writing
> text), we need to block it. And this blocking will generally pose a
> problem only when multiple threads try to access the same object, which
> is generally unlikely.

My impression is that this is very likely, because of the many global
objects in Emacs.  Moreover, if you intend to allow several threads
using the same buffer (and I'm not yet sure whether you want that or
not), then the buffer-local variables of that buffer present the same
problem as global variables.  Take the case-table or display-table,
for example: those are buffer-local in many cases, but their changes
will affect all the threads that work on the buffer.

> 1. There is consing, that is principally not asynchronous.
> 
>    It is not guaranteed, but I _hope_ that lock during consing
>    can be manageable.
> 
>    We need to ensure that simultaneous consing will never happen. AFAIU,
>    it should be ok if something that does not involve consing is running
>    at the same time with cons (correct me if I am wrong here).

What do you do if some thread hits the memory-full condition?  The
current handling includes GC.

> 2. Redisplay cannot be asynchronous in a sense that it does not make
>    sense that multiple threads, possibly working with different buffers
>    and different points in those buffers, request redisplay
>    simultaneously. Of course, it is impossible to display several places
>    in a buffer at once.

But what about different threads redisplaying different windows? is
that allowed?  If not, here goes one more benefit of concurrent
threads.

Also, that issue with prompting the user also needs some solution,
otherwise the class of jobs that non-main threads can do will be even
smaller.

>    Only a single `main-thread' should be allowed to modify frames,
>    window configurations, and generally trigger redisplay. And thread
>    that attempts to do such modifications must wait to become
>    `main-thread' first.

What about changes to frame-parameters?  Those don't necessarily
affect display.

>    This means that any code that is using things like
>    `save-window-excursion', `display-buffer', and other display-related
>    staff cannot run asynchronously.

What about with-selected-window? also forbidden?

>    Async threads will make an assumption that
>    (set-buffer "1") (goto-char 100) (set-buffer "2") (set-buffer "1")
>    (= (point) 100) invalid.

If this is invalid, I don't see how one can write useful Lisp
programs, except if we require Lisp programs to explicitly define
critical sections.

> > What if the main thread modifies buffer text, while one of the other
> > threads wants to read from it?
> 
> Reading and writing should be blocked while buffer is being modified.

This will basically mean many/most threads will be blocked most of the
time.  Lisp programs in Emacs read and write buffers a lot, and the
notion of forcing a thread to work only on its own single set of
buffers is quite a restriction, IMO.

> >> >> For example, `org-element-interpret-data' converts Org mode AST to
> >> >> string. Just now, I tried it using AST of one of my large Org buffers.
> >> >> It took 150 seconds to complete, while blocking Emacs.
> >> >
> >> > It isn't side-effect-free, though.
> >> 
> >> It is, just not declared so.
> >
> > No, it isn't.  For starters, it changes obarray.
> 
> Do you mean `intern'? `intern-soft' would be equivalent there.

"Equivalent" in what way?  AFAIU, the function does want to create a
symbol when it doesn't already exist.

> > Those assertions will fire in any useful program with 100% certainty.
> > Imagine the plight of an Emacs Lisp programmer who has to write and
> > debug such programs.
> >
> > We have in Emacs gazillion lines of Lisp code, written, debugged, and
> > tested during 4 decades.  We use those, almost without thinking, every
> > day for writing Lisp programs.  What you suggest means throwing away
> > most of that and starting from scratch.
> 
> There will indeed be a lot of work to make the range of Lisp functions
> available for async code large enough. But it does not have to be done
> all at once.

No, it doesn't.  But until we have enough of those functions
available, one will be unable to write applications without
implementing and debugging a lot of those new functions as part of the
job.  It will make simple programming jobs much larger and more
complicated, especially since it will require the programmers to
understand very well the limitations and requirements of concurrent
code programming, something Lisp programmers don't know very well, and
rightfully so.

> 
> Of course, we first need to make sure that there are no hard blockers,
> like global state. I do not think that Elisp code will be the blocker if
> we find out how to deal with Emacs global state on C level.
> 
> > I mean, take the simplest thing, like save-buffer-excursion or
> > with-selected-window, something that we use all the time, and look how
> > much of the global state they access and change.  Then imagine that
> > you don't have these and need to write programs that switch buffers
> > and windows temporarily in thread-safe way.  Then reflect on what this
> > means for all the other useful APIs and subroutines we have.
> 
> These examples are touching very basics aspects that we need to take
> care of for async: (1) point/buffer; (2) unwind; (3) redisplay.
> I think that (3) is not something that should be allowed as async. (1)
> and (2) are to be discussed.
> 
> P.S. I am struggling to understand swap_in_symval_forwarding:
> 
>       /* Unload the previously loaded binding.  */
>       tem1 = blv->valcell;
> 
> Is the above assignment redundant?

I'll let Stefan answer that, since he made the change in commit
ce5b453a449 that resulted in this code.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09 14:16                                                           ` Ihor Radchenko
@ 2023-07-09 15:00                                                             ` Eli Zaretskii
  0 siblings, 0 replies; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-09 15:00 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Sun, 09 Jul 2023 14:16:31 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> So, redisplay will have access to correct local values.
> >
> > No, it won't, because redisplay runs only in the main thread,
> > remember?  So it will not see changes to Vfoo done by other threads.
> > This could sometimes be good (e.g., if the changes are temporary), but
> > sometimes bad (if the changes are permanent).
> 
> If redisplay is about to display buffer that is being modified
> (including its buffer-local values), it will have to lock until the
> modification is done. Same for global Lisp variables.

Here goes one more advantage of concurrency, then: while redisplay
runs, all the threads that access buffers shown on display will have
to block.

> >> > What do you expect redisplay to do when some thread moves point in a
> >> > way that it is no longer in the window?
> >> 
> >> Async threads will not trigger redisplay. And they will have their own
> >> PT, BEGV, and ZV.
> >
> > This goes back to the other sub-thread, where we discussed how to show
> > and prompt the user from non-main threads.  The conclusion was that
> > there is no good solution to that.  The best proposal, wait for the
> > main thread, would mean that stuff like stealth fontifications, which
> > currently run from timers, cannot be run from a thread.
> 
> May you provide a link?
> I am not sure how independent PT+buffer in different threads affects
> prompts.

No, I meant the "Async threads will not trigger redisplay" part.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09 14:52                                                                                 ` Eli Zaretskii
@ 2023-07-09 15:49                                                                                   ` Ihor Radchenko
  2023-07-09 16:35                                                                                     ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-09 15:49 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: Stefan Monnier, luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> I now understand that the gap can be moved by the code that is not
>> actually writing text in buffer. However, I do not see how this is a
>> problem we need to care about more than about generic problem with
>> simultaneous write.
>
> Imagine a situation where we need to process XML or HTML, and since
> that's quite expensive, we want to do that in a thread.  What you are
> saying is that this will either be impossible/impractical to do from a
> thread, or will require to lock the entire buffer from access, because
> the above processing moves the gap.  If that is not a problem, I don't
> know what is, because there could be a lot of such scenarios, and they
> all will be either forbidden or very hard to implement.

In this particular example, the need to move the gap is there because
htmlReadMemory requires a contiguous memory segment as input. Obviously,
that requires blocking in the current implementation.

Can it be done in an async-safe way? We would need to memcpy the parsed
buffer region, or up to 3 memcpy calls if we do not want to move the gap.

>> If a variable or object value is being written, we need to block it.
>> If a buffer object is being written (like when moving the gap or writing
>> text), we need to block it. And this blocking will generally pose a
>> problem only when multiple threads try to access the same object, which
>> is generally unlikely.
>
> My impression is that this is very likely, because of the many global
> objects in Emacs.

There are many objects, but each individual thread will use a subset of
these objects. What are the odds that these subsets intersect
frequently? Not high, except for certain frequently used objects. And we
need to focus on identifying those likely-to-clash objects and figuring
out what to do with them.

> ... Moreover, if you intend to allow several threads
> using the same buffer (and I'm not yet sure whether you want that or
> not),

It would be nice if multiple threads could work with the same buffer in
read-only mode, maybe with a single main thread editing the buffer (and
pausing the async read-only threads while doing so).
Writing simultaneously is a much bigger ask.
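
For the read-mostly case, the usual readers/writer pattern can at least
be expressed with the existing thread primitives. A minimal sketch (it
favours readers and can starve the writer; all names are made up):

  (defvar my-rw-mutex (make-mutex "rw"))
  (defvar my-rw-cond (make-condition-variable my-rw-mutex "rw"))
  (defvar my-rw-readers 0)

  (defun my-call-as-reader (thunk)
    "Run THUNK as a reader; many readers may overlap, but never with a writer."
    (with-mutex my-rw-mutex
      (setq my-rw-readers (1+ my-rw-readers)))
    (unwind-protect (funcall thunk)
      (with-mutex my-rw-mutex
        (setq my-rw-readers (1- my-rw-readers))
        (condition-notify my-rw-cond t))))

  (defun my-call-as-writer (thunk)
    "Run THUNK as the sole writer, waiting until no reader is active."
    (with-mutex my-rw-mutex
      (while (> my-rw-readers 0)
        (condition-wait my-rw-cond))
      (funcall thunk)))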

> ... then the buffer-local variables of that buffer present the same
> problem as global variables.  Take the case-table or display-table,
> for example: those are buffer-local in many cases, but their changes
> will affect all the threads that work on the buffer.

And how frequently are case-table and display-table changed? AFAIK, not
frequently at all.

>>    We need to ensure that simultaneous consing will never happen. AFAIU,
>>    it should be ok if something that does not involve consing is running
>>    at the same time with cons (correct me if I am wrong here).
>
> What do you do if some thread hits the memory-full condition?  The
> current handling includes GC.

Could you please explain a bit more about the situation you are
referring to? My statement above was about consing, not GC.

For GC, as I mentioned earlier, we can pause each thread once maybe_gc()
determines that GC is necessary, until all the threads are paused. Then,
GC is executed and the threads continue.

>> 2. Redisplay cannot be asynchronous in a sense that it does not make
>>    sense that multiple threads, possibly working with different buffers
>>    and different points in those buffers, request redisplay
>>    simultaneously. Of course, it is impossible to display several places
>>    in a buffer at once.
>
> But what about different threads redisplaying different windows? is
> that allowed?  If not, here goes one more benefit of concurrent
> threads.

I think I need to elaborate on what I mean by "redisplay cannot be
asynchronous".

If an async thread wants to request redisplay, that should be possible.
But the redisplay itself must not be done by that same thread. Instead,
the thread will send a request that Emacs needs a redisplay, and
optionally block until that redisplay finishes (optionally, because
something like displaying a notification may not require waiting). The
redisplay requests will be processed separately.
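
A toy version of such a request channel is possible even now, with a
repeating timer (run by the main thread's command loop) acting as the
processor of pending requests; everything here is illustrative only:

  (defvar my-redisplay-requested nil
    "Non-nil when some thread has asked the main thread to redisplay.")

  (defun my-request-redisplay ()
    "Callable from any thread: note that a redisplay is wanted."
    (setq my-redisplay-requested t))

  (run-with-timer 0 0.2
                  (lambda ()
                    ;; Only this timer function, in the main thread,
                    ;; actually touches the display.
                    (when my-redisplay-requested
                      (setq my-redisplay-requested nil)
                      (force-mode-line-update t)
                      (redisplay))))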

Is the Emacs display code even capable of redisplaying two different
windows at the same time?

> Also, that issue with prompting the user also needs some solution,
> otherwise the class of jobs that non-main threads can do will be even
> smaller.

We can make reading input using similar idea to the above, but it will
always block until the response.

For non-blocking input, you said that it has been discussed.
I do vaguely recall such discussion in the past and I even recall some
ideas about it, but it would be better if you can link to that
discussion, so that the participants of this thread can review the
previously proposed ideas.

>>    Only a single `main-thread' should be allowed to modify frames,
>>    window configurations, and generally trigger redisplay. And thread
>>    that attempts to do such modifications must wait to become
>>    `main-thread' first.
>
> What about changes to frame-parameters?  Those don't necessarily
> affect display.

But doesn't it depend on the graphics toolkit? I got the impression
(from Po Lu's replies) that graphics toolkits generally do not handle
async requests well.

>>    This means that any code that is using things like
>>    `save-window-excursion', `display-buffer', and other display-related
>>    staff cannot run asynchronously.
>
> What about with-selected-window? also forbidden?

Yes. A given frame must always have a single window active, which is not
compatible with async threads.
In addition, `with-selected-window' triggers redisplay. In particular,
it triggers redisplaying mode-lines.

It is a problem similar to async redisplay.

>>    Async threads will render the assumption that
>>    (set-buffer "1") (goto-char 100) (set-buffer "2") (set-buffer "1")
>>    (= (point) 100) invalid.
>
> If this is invalid, I don't see how one can write useful Lisp
> programs, except of we request Lisp to explicitly define critical
> sections.

Hmm. I realized that it is already invalid. At least, if `thread-yield'
is triggered somewhere between `set-buffer' calls and another thread
happens to move point in buffer "1".

But I realize that something like

(while (re-search-forward "foo") nil t)
  (with-current-buffer "bar" (insert (match-string 0))))

may be broken if point is moved when switching between "bar" and "foo".

Maybe the last PT, ZV, and BEGV should not be stored in the buffer
object upon switching away, and instead be recorded in a thread-local
((buffer PT ZV BEGV) ...) alist. Then, a thread would set PT, ZV, and BEGV
from its local alist rather than by reading the buffer->... values.
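
As a very rough illustration of that bookkeeping, expressed in Lisp only
for readability (the real data would live in the C-level thread objects;
the `my/...' names are made up):

(defvar my/thread-positions (make-hash-table :test #'eq)
  "Map each thread onto an alist of (BUFFER PT ZV BEGV) entries.")

(defun my/remember-positions ()
  "Record point and the accessible region of the current buffer
for the current thread."
  (let* ((alist (gethash (current-thread) my/thread-positions))
         (entry (assq (current-buffer) alist)))
    (unless entry
      (setq entry (list (current-buffer) nil nil nil))
      (push entry alist))
    (setcdr entry (list (point) (point-max) (point-min)))
    (puthash (current-thread) alist my/thread-positions)))

(defun my/restore-positions ()
  "Restore what `my/remember-positions' recorded, if anything."
  (let ((entry (assq (current-buffer)
                     (gethash (current-thread) my/thread-positions))))
    (when entry
      (widen)
      (narrow-to-region (nth 3 entry) (nth 2 entry)) ; BEGV .. ZV
      (goto-char (nth 1 entry)))))                   ; PT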

>> > What if the main thread modifies buffer text, while one of the other
>> > threads wants to read from it?
>> 
>> Reading and writing should be blocked while buffer is being modified.
>
> This will basically mean many/most threads will be blocked most of the
> time.  Lisp programs in Emacs read and write buffers a lot, and the
> notion of forcing a thread to work only on its own single set of
> buffers is quite a restriction, IMO.

But not the same buffers!

>> >> >> For example, `org-element-interpret-data' converts Org mode AST to
>> >> >> string. Just now, I tried it using AST of one of my large Org buffers.
>> >> >> It took 150seconds to complete, while blocking Emacs.
>> >> >
>> >> > It isn't side-effect-free, though.
>> >> 
>> >> It is, just not declared so.
>> >
>> > No, it isn't.  For starters, it changes obarray.
>> 
>> Do you mean `intern'? `intern-soft' would be equivalent there.
>
> "Equivalent" in what way?  AFAIU, the function does want to create a
> symbol when it doesn't already exist.

No.
(intern (format "org-element-%s-interpreter" type)) is just to retrieve
existing function symbol used for a given AST element type.

(interpret
 (let ((fun (intern-soft
             (format "org-element-%s-interpreter" type))))
   (if (and fun (fboundp fun)) fun (lambda (_ contents) contents))))

would also work.

To be clear, I do know how this function is designed to work.
It may not be de-facto pure, but that's just because nobody tried to
ensure it - the usefulness of pure declarations is questionable in Emacs
now.

>> There will indeed be a lot of work to make the range of Lisp functions
>> available for async code large enough. But it does not have to be done
>> all at once.
>
> No, it doesn't.  But until we have enough of those functions
> available, one will be unable to write applications without
> implementing and debugging a lot of those new functions as part of the
> job.  It will make simple programming jobs much larger and more
> complicated, especially since it will require the programmers to
> understand very well the limitations and requirements of concurrent
> code programming, something Lisp programmers don't know very well, and
> rightfully so.

I disagree.
If Emacs supports async threads, it does not mean that every single
piece of Elisp should be async-compatible.
But if a programmer is explicitly writing async code, it is natural to
expect limitations.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09 15:49                                                                                   ` Ihor Radchenko
@ 2023-07-09 16:35                                                                                     ` Eli Zaretskii
  2023-07-10 11:30                                                                                       ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-09 16:35 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: monnier, luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: Stefan Monnier <monnier@iro.umontreal.ca>, luangruo@yahoo.com,
>  emacs-devel@gnu.org
> Date: Sun, 09 Jul 2023 15:49:41 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> > ... then the buffer-local variables of that buffer present the same
> > problem as global variables.  Take the case-table or display-table,
> > for example: those are buffer-local in many cases, but their changes
> > will affect all the threads that work on the buffer.
> 
> And how frequently are case-table and display-table changed? AFAIK, not
> frequently at all.

Why does it matter?  Would you disallow programs doing that just
because you think it's infrequent?

Processing of various protocols frequently requires doing stuff like

   (with-case-table ascii-case-table
     DO SOMETHING...)

because otherwise some locales, such as the Turkish one, cause trouble
with case conversion of the upper-case 'I'.  So changing the
case-table of a buffer is not such an outlandish operation to do.

> >>    We need to ensure that simultaneous consing will never happen. AFAIU,
> >>    it should be ok if something that does not involve consing is running
> >>    at the same time with cons (correct me if I am wrong here).
> >
> > What do you do if some thread hits the memory-full condition?  The
> > current handling includes GC.
> 
> Could you please explain a bit more about the situation you are referring
> to? My above statement was about consing, not GC.

Consing can trigger GC if we detect the memory-full situation, see
alloc.c:memory_full.

> For GC, as I mentioned earlier, we can pause each thread once maybe_gc()
> determines that GC is necessary, until all the threads are paused. Then,
> GC is executed and the threads continue.

If all the threads are paused, which thread will run GC?

> If an async thread wants to request redisplay, it should be possible. But
> the redisplay itself must not be done by this same thread. Instead, the
> thread will send a request that Emacs needs redisplay and optionally
> block until that redisplay finishes (optionally, because something like
> displaying a notification may not require waiting). The redisplay requests
> will be processed separately.

Ouch! this again kills opportunities for gains from concurrent
processing.  It means, for example, that we will be unable to run in a
thread some processing that affects a window and reflect that
processing on display when it's ready.

> Is Emacs display code even capable of redisplaying two different windows
> at the same time?

Maybe.  At least the GUI redisplay proceeds one window at a time, so
we already deal with each window separately.  There are some caveats,
naturally: redisplay runs hooks, which could access other windows,
redisplaying a window also updates the frame title, etc. -- those will
need to be carefully examined.

> > Also, that issue with prompting the user also needs some solution,
> > otherwise the class of jobs that non-main threads can do will be even
> > smaller.
> 
> We can make reading input use a similar idea to the above, but it will
> always block until the response.

This will have to be designed and implemented first, since we
currently have no provision for multiple prompt sources.

> For non-blocking input, you said that it has been discussed.
> I do vaguely recall such discussion in the past and I even recall some
> ideas about it, but it would be better if you can link to that
> discussion, so that the participants of this thread can review the
> previously proposed ideas.

  https://lists.gnu.org/archive/html/emacs-devel/2018-08/msg00456.html

> >>    Only a single `main-thread' should be allowed to modify frames,
> >>    window configurations, and generally trigger redisplay. And thread
> >>    that attempts to do such modifications must wait to become
> >>    `main-thread' first.
> >
> > What about changes to frame-parameters?  Those don't necessarily
> > affect display.
> 
> But doesn't it depend on the graphics toolkit?

Not necessarily: frame parameters are also used for "frame-local"
variables, and those have nothing to do with GUI toolkits.

> >>    This means that any code that is using things like
> >>    `save-window-excursion', `display-buffer', and other display-related
> >>    stuff cannot run asynchronously.
> >
> > What about with-selected-window? also forbidden?
> 
> Yes.

Too bad.

> In addition, `with-selected-window' triggers redisplay. In particular,
> it triggers redisplaying mode-lines.

No, with-selected-window doesn't do any redisplay, it only marks the
window for redisplay, i.e. it suppresses the redisplay optimization
which could decide that this window doesn't need to be redrawn.
Redisplay itself will happen normally, usually when Emacs is idle,
i.e. when the command which called with-selected-window finishes.

> >>    Async threads will render the assumption that
> >>    (set-buffer "1") (goto-char 100) (set-buffer "2") (set-buffer "1")
> >>    (= (point) 100) invalid.
> >
> > If this is invalid, I don't see how one can write useful Lisp
> > programs, except of we request Lisp to explicitly define critical
> > sections.
> 
> Hmm. I realized that it is already invalid. At least, if `thread-yield'
> is triggered somewhere between `set-buffer' calls and another thread
> happens to move point in buffer "1".

But the programmer is in control!  If no such API is called, point
stays put.  And if such APIs are called, the program can save and
restore point around the calls.  By contrast, you want to be able to
pause and resume threads at will, which means a thread can be
suspended at any time.  So in this case, the programmer will be unable
to do anything against such calamities.

> >> > What if the main thread modifies buffer text, while one of the other
> >> > threads wants to read from it?
> >> 
> >> Reading and writing should be blocked while buffer is being modified.
> >
> > This will basically mean many/most threads will be blocked most of the
> > time.  Lisp programs in Emacs read and write buffers a lot, and the
> > notion of forcing a thread to work only on its own single set of
> > buffers is quite a restriction, IMO.
> 
> But not the same buffers!

I don't see why not.

> To be clear, I do know how this function is designed to work.
> It may not be de-facto pure, but that's just because nobody tried to
> ensure it - the usefulness of pure declarations is questionable in Emacs
> now.

But that's the case with most Emacs Lisp programs: we rarely try too
hard to make functions pure even if they can be pure.  What's more
important, most useful programs cannot be pure at all, because Emacs
is about text processing, which means modifying text, and good Emacs
Lisp programs process text in buffers, instead of returning strings,
which makes them not pure.

> >> There will indeed be a lot of work to make the range of Lisp functions
> >> available for async code large enough. But it does not have to be done
> >> all at once.
> >
> > No, it doesn't.  But until we have enough of those functions
> > available, one will be unable to write applications without
> > implementing and debugging a lot of those new functions as part of the
> > job.  It will make simple programming jobs much larger and more
> > complicated, especially since it will require the programmers to
> > understand very well the limitations and requirements of concurrent
> > code programming, something Lisp programmers don't know very well, and
> > rightfully so.
> 
> I disagree.

So let's agree to disagree.  Because I don't see how it will benefit
anything to continue this dispute.  I've said everything I have to
say, sometimes more than once.  At least Po Lu seems to agree that
concurrent Emacs means we will have to write a lot of new routines,
and that code which is meant to run in non-main threads will have to
be written very specially.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09  9:36                                                                           ` Ihor Radchenko
  2023-07-09  9:56                                                                             ` Po Lu
  2023-07-09 11:59                                                                             ` Eli Zaretskii
@ 2023-07-09 17:13                                                                             ` Gregory Heytings
  2023-07-10 11:37                                                                               ` Ihor Radchenko
  2 siblings, 1 reply; 192+ messages in thread
From: Gregory Heytings @ 2023-07-09 17:13 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, luangruo, emacs-devel


>> I don't believe any useful Lisp program in Emacs can be 
>> side-effect-free, for the purposes of this discussion.  Every single 
>> one of them accesses the global state and changes the global state.
>
> As I said, I hope that we can convert the important parts of the global 
> state into thread-local state.
>

That's not possible alas.  Basically Emacs' global state is the memory it 
has allocated, which is an enormous graph in which objects are the nodes 
and pointers are the edges.  Any object may be linked to any number of 
other objects (which means that you cannot isolate a subgraph, or even a 
single object, of the graph), and any non-trivial Elisp function (as well 
as garbage collection and redisplay) can change any of the objects and the 
structure of the graph at any time (which means that two threads cannot 
use the same graph concurrently without both of them taking the risk of 
finding the graph suddenly different from what it was during the execution 
of the previous opcode).

Given that, the only thing that can be done in practice (which is what 
emacs-async does) is to create another Emacs instance, with another global 
state, in which some processing is done, and report the result back to the 
main Emacs instance.

The only alternative is to create another Emacs from scratch, with 
concurrency as one of its design principles.

>>> Yes. I mean... look at Haskell. There is no shortage of pure 
>>> functional libraries there.
>>
>> I cannot follow you there: I don't know Haskell.
>
> In short, pure functions in Haskell can utilize multiple CPUs 
> automatically, without programmers explicitly writing code for 
> multi-threading support.
>

The problem is that Elisp is not Haskell, and cannot be converted into 
something that resembles Haskell.  Haskell functions are pure by default 
(impurity is the rare exception), Elisp functions are impure by default 
(purity is the rare exception).




^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09 13:12                                                           ` Eli Zaretskii
@ 2023-07-10  0:18                                                             ` Po Lu
  0 siblings, 0 replies; 192+ messages in thread
From: Po Lu @ 2023-07-10  0:18 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: yantar92, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

> For this particular example, yes.  But we also have a lot of global
> objects that are not specific to a buffer or a window.  Example:
> buffer-list.  Another example: Vwindow_list (not exposed to Lisp).
> Etc. etc.

They should have individual interlocks of their own, instead of sharing
a large global lock.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09 16:35                                                                                     ` Eli Zaretskii
@ 2023-07-10 11:30                                                                                       ` Ihor Radchenko
  2023-07-10 12:13                                                                                         ` Po Lu
  2023-07-10 13:09                                                                                         ` Eli Zaretskii
  0 siblings, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-10 11:30 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: monnier, luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> And how frequently are case-table and display-table changed? AFAIK, not
>> frequently at all.
>
> Why does it matter?  Would you disallow programs doing that just
> because you think it's infrequent?

Not disallow. But programs setting case-table and display-table will
block, and those will be only a fraction of programs.

> Processing of various protocols frequently requires to do stuff like
>
>    (with-case-table ascii-case-table
>      DO SOMETHING...)
>
> because otherwise some locales, such as the Turkish one, cause trouble
> with case conversion of the upper-case 'I'.  So changing the
> case-table of a buffer is not such an outlandish operation to do.

Yet, a lot of Elisp programs do not need to deal with case tables.

And if we talk about this particular use case, may we simply allow
let-binding case tables?

> >> Could you please explain a bit more about the situation you are referring
>> to? My above statement was about consing, not GC.
>
> Consing can trigger GC if we detect the memory-full situation, see
> alloc.c:memory_full.

I see.
AFAIU, we can then raise a flag indicating that GC is necessary, so that
other threads will stop the next time they reach maybe_gc, until GC is
complete.

>> For GC, as I mentioned earlier, we can pause each thread once maybe_gc()
>> determines that GC is necessary, until all the threads are paused. Then,
>> GC is executed and the threads continue.
>
> If all the threads are paused, which thread will run GC?

I imagine that a dedicated thread will be used for GC. It will do
nothing until signaled, then acquire a global lock, perform GC, release
the lock, and continue waiting.

>> If an async thread wants to request redisplay, it should be possible. But
>> the redisplay itself must not be done by this same thread. Instead, the
>> thread will send a request that Emacs needs redisplay and optionally
>> block until that redisplay finishes (optionally, because something like
>> displaying a notification may not require waiting). The redisplay requests
>> will be processed separately.
>
> Ouch! this again kills opportunities for gains from concurrent
> processing.  It means, for example, that we will be unable to run in a
> thread some processing that affects a window and reflect that
> processing on display when it's ready.

We may or may not be able to get async redisplay, depending on the Emacs
display implementation:

1. A thread that requests redisplay will send a request to perform
   redisplay.
2. Another thread, responsible for redisplay, will retrieve the request
   and process it. That thread may or may not do so asynchronously.

AFAIU, it is currently not possible to redisplay asynchronously.
But if we can somehow make redisplay (or parts of it) asynchronous, we
will just need to adjust the redisplay thread implementation.
Other async threads will not need to be changed.

I just do not know enough about redisplay to dive into the possibility of
making it asynchronous. I am afraid that this discussion is already complex
enough, and discussing redisplay would add yet another layer of complexity
on top.

>> We can make reading input use a similar idea to the above, but it will
>> always block until the response.
>
> This will have to be designed and implemented first, since we
> currently have no provision for multiple prompt sources.

>> For non-blocking input, you said that it has been discussed.
>> I do vaguely recall such discussion in the past and I even recall some
>> ideas about it, but it would be better if you can link to that
>> discussion, so that the participants of this thread can review the
>> previously proposed ideas.
>
>   https://lists.gnu.org/archive/html/emacs-devel/2018-08/msg00456.html

Thanks!
After re-reading that discussion, it looks like the most discussed idea
was about maintaining a queue of input requests and prompting the user at
appropriate times. This is similar to what I proposed - async threads
would queue their input requests, and the order and timing of the prompts
would be decided elsewhere.

The rest of the discussion revolved around the following points:

1. Input may involve non-trivial Elisp that requires thread context - so
   the input processing must know the context of the original thread.
   Ergo, the thread itself may need to be involved in processing input,
   not some other thread.

   For async threads, this means that input may need to pause the whole
   async thread until the scheduler (or whatever code decides when and
   where to read/display input or queries) decides that it is an
   appropriate time to query the user.

   We may also need to provide some API to make another thread read input
   independently while the current thread continues. But that's not a
   requirement - I imagine that it can be done with `make-thread' from
   inside the async thread.

2. There should be some kind of indication for the user of which thread
   is requesting input:

   - Emacs may display that input is pending in a mode-line indicator
     (akin to "unread" notifications); or Emacs may display the upcoming
     query in the minibuffer until the user explicitly switches there and
     inputs the answer.

   - When an input query is actually displayed, there should be an
     indication of which thread the query belongs to - either a thread
     name or an appropriately designed prompt.

     Possibly, the prompt history for the relevant thread may be
     displayed.

3. Threads may want to group their prompts into chunks that should be
   read without interruption, so that we do not interleave two threads
   like "Login for host1: " -> "Password for host1: " and "Login for
   host2: " -> "Password for host2: " (a rough sketch of such grouping
   follows below).

>> > What about changes to frame-parameters?  Those don't necessarily
>> > affect display.
>> 
>> But doesn't it depend on the graphics toolkit?
>
> Not necessarily: frame parameters are also used for "frame-local"
> variables, and those have nothing to do with GUI toolkits.

I see.
Then, when an async thread modifies frame parameters, it should be
allowed to do so. However, the redisplay code should lock the frame
parameters during redisplay - they should not be allowed to change while
redisplay is in progress.

>> >>    This means that any code that is using things like
>> >>    `save-window-excursion', `display-buffer', and other display-related
>> >>    stuff cannot run asynchronously.
>> >
>> > What about with-selected-window? also forbidden?
>> 
>> Yes.
>
> Too bad.

Well. In theory, this is a problem similar to set-buffer - we need to
deal with the fact that Emacs assumes that there is always a single
"current" window and frame.

I assume that this problem is not more difficult than with set-buffer.
If we can solve the problem with global state related to buffers, it
should be doable to solve async window/frame selection as well. (I am
being optimistic here.)

>> >>    Async threads will render the assumption that
>> >>    (set-buffer "1") (goto-char 100) (set-buffer "2") (set-buffer "1")
>> >>    (= (point) 100) invalid.
>> >
> ... 
>> Hmm. I realized that it is already invalid. At least, if `thread-yield'
>> is triggered somewhere between `set-buffer' calls and another thread
>> happens to move point in buffer "1".
>
> But the programmer is in control!  If no such API is called, point
> stays put.  And if such APIs are called, the program can save and
> restore point around the calls.  By contrast, you want to be able to
> pause and resume threads at will, which means a thread can be
> suspended at any time.  So in this case, the programmer will be unable
> to do anything against such calamities.

Right. So we may need to store a per-thread record of PT, BEGV, and ZV
for each buffer that was current during the thread's execution.

>> >> > What if the main thread modifies buffer text, while one of the other
>> >> > threads wants to read from it?
>> >> 
>> >> Reading and writing should be blocked while buffer is being modified.
>> >
>> > This will basically mean many/most threads will be blocked most of the
>> > time.  Lisp programs in Emacs read and write buffers a lot, and the
>> > notion of forcing a thread to work only on its own single set of
>> > buffers is quite a restriction, IMO.
>> 
>> But not the same buffers!
>
> I don't see why not.

Of course, one may want to run two async threads that modify the same
buffer simultaneously. Not all Elisp code will need this, but some
may.

And it is indeed a restriction. But I do not see that it should be the
restriction that stops us from implementing async support, even if we
cannot solve it.

> But that's the case with most Emacs Lisp programs: we rarely try too
> hard to make functions pure even if they can be pure.  What's more
> important, most useful programs cannot be pure at all, because Emacs
> is about text processing, which means modifying text, and good Emacs
> Lisp programs process text in buffers, instead of returning strings,
> which makes them not pure.

I am not sure if it is a good aim to design async threads in such a way
that _any_ (rather than explicitly designed) Elisp can work
asynchronously. It would be cool if we could achieve this, but I do not
feel that most of Elisp actually requires async.

Yes, most of Elisp is about text processing. But when we really need to
utilize asynchronous code, it is usually not about reading/writing text
- it is about CPU-heavy analysis of text. This analysis is what truly
needs async threads. Writing back, if necessary, may be separated
from the analysis code or even done in a separate buffer followed by
`buffer-swap-text' or `replace-buffer-contents'.
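
Here is a small sketch of that pattern, using the existing cooperative
`make-thread' only to show the shape of the code (`my/heavy-rewrite'
stands for a hypothetical CPU-heavy transformation):

(defun my/async-rewrite (buffer)
  "Transform BUFFER in a worker thread, installing the result at the end."
  (let ((text (with-current-buffer buffer (buffer-string))))
    (make-thread
     (lambda ()
       (let ((work (generate-new-buffer " *rewrite*")))
         (with-current-buffer work
           (insert text)
           (my/heavy-rewrite))      ; hypothetical CPU-heavy analysis
         ;; Only this final step touches the user-visible buffer.
         (with-current-buffer buffer
           (replace-buffer-contents work))
         (kill-buffer work)))
     "rewrite-worker")))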

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09 17:13                                                                             ` Gregory Heytings
@ 2023-07-10 11:37                                                                               ` Ihor Radchenko
  2023-07-13 13:54                                                                                 ` Gregory Heytings
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-10 11:37 UTC (permalink / raw)
  To: Gregory Heytings; +Cc: Eli Zaretskii, luangruo, emacs-devel

Gregory Heytings <gregory@heytings.org> writes:

>> As I said, I hope that we can convert the important parts of the global 
>> state into thread-local state.
>>
>
> That's not possible alas.  Basically Emacs' global state is the memory it 
> has allocated, which is an enormous graph in which objects are the nodes 
> and pointers are the edges.  Any object may be linked to any number of 
> other objects (which means that you cannot isolate a subgraph, or even a 
> single object, of the graph), and any non-trivial Elisp function (as well 
> as garbage collection and redisplay) can change any of the objects and the 
> structure of the graph at any time (which means that two threads cannot 
> use the same graph concurrently without both of them taking the risk of 
> finding the graph suddenly different from what it was during the execution 
> of the previous opcode).

That is already largely the case with cooperative threads. The difference
is that sudden changes are limited to `thread-yield' (and other yield
points), while async threads will have to be written keeping in mind that
global variables may be changed during execution unless explicitly locked.

And yes, we will need to implement locking on the Elisp object level and
also on the Elisp variable level.

But the very need to access global Elisp variables/objects from async
threads should not be considered good practice (except near the start/end
of thread execution). Most processing should be done within the thread's
local lexical scope.
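
As a sketch of what variable-level locking could look like from Lisp,
reusing the mutex API that already exists for cooperative threads (the
`my/...' names are made up):

(defvar my/results nil
  "Shared list of results; only touch it with `my/results-lock' held.")
(defvar my/results-lock (make-mutex "my/results"))

(defun my/record-result (item)
  "Add ITEM to `my/results' without racing against other threads."
  (with-mutex my/results-lock
    (push item my/results)))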

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-10 11:30                                                                                       ` Ihor Radchenko
@ 2023-07-10 12:13                                                                                         ` Po Lu
  2023-07-10 12:28                                                                                           ` Ihor Radchenko
  2023-07-10 13:09                                                                                         ` Eli Zaretskii
  1 sibling, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-10 12:13 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, monnier, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> I see.
> AFAIU, we can then raise a flag that GC is necessary, so that other
> threads will stop next time they reach maybe_gc, until GC is complete.

What if even one of those other threads never calls maybe_gc?  This can
easily happen if it is blocked by a long-running operation (domain
name resolution comes to mind), and that will result in all threads waiting
for it to complete, defeating the purpose of having threads in the first
place.

TRT on GNU and other Mach-based systems (OSF/1, OS X, etc.) is to suspend
all other threads using `thread_suspend' and then run GC from whichever
thread is the first to discover that the consing threshold has been
exceeded.  Unix systems typically provide other platform-specific
functions to suspend threads or LWPs.

As a consequence of this approach, GC can take place even if those other
threads hold pointers to relocatable string data and buffer text.  I've
experimentally verified that this is rare, and that not compacting
string blocks which are referenced from the stack or in registers
doesn't significantly affect string data fragmentation.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-10 12:13                                                                                         ` Po Lu
@ 2023-07-10 12:28                                                                                           ` Ihor Radchenko
  2023-07-10 12:48                                                                                             ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-10 12:28 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, monnier, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

> Ihor Radchenko <yantar92@posteo.net> writes:
>
>> AFAIU, we can then raise a flag that GC is necessary, so that other
>> threads will stop next time they reach maybe_gc, until GC is complete.
>
> What if even one of those other threads never calls maybe_gc?  This can
> easily happen if they are blocked by a long running operation (domain
> name resolution comes to mind) and will result in all threads waiting
> for it to complete, defeating the purpose of having threads in the first
> place.

Good point.
Maybe something similar to how `thread-yield' is triggered by
`accept-process-output' - add extra maybe_gc() calls to the process code
as necessary.

> TRT on GNU and other Mach based systems (OSF/1, OS X, etc) is to suspend
> all other threads using `thread_suspend' and then run GC from whichever
> thread is the first to discover that the consing threshold has been
> exceeded.  Unix systems typically provide other platform specific
> functions to suspend threads or LWPs.
>
> As a consequence of this approach, GC can take place even if those other
> threads hold pointers to relocatable string data and buffer text.  I've
> experimentally verified that this is rare, and that not compacting
> string blocks which are referenced from the stack or in registers
> doesn't significantly affect string data fragmentation.

Sounds not 100% reliable.
Might it be possible to restart a blocking operation if GC is triggered in
the middle of it?

What I have in mind is something similar to

(setq success nil)
(while (not success)
  (while-no-input
    (do-stuff-that-knows-it-can-be-restarted-sometimes)
    (setq success t)))

but for low-level code that requires object locks.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-10 12:28                                                                                           ` Ihor Radchenko
@ 2023-07-10 12:48                                                                                             ` Po Lu
  2023-07-10 12:53                                                                                               ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-10 12:48 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, monnier, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> Sounds not 100% reliable.

It is 100% reliable, at least when implemented with system-specific
code.  POSIX is iffy in this regard.

> Might it be possible to restart a blocking operation if GC is triggered in
> the middle of it?

You cannot make an LWP that is performing name resolution call
`maybe_gc', but it can be suspended outright.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-10 12:48                                                                                             ` Po Lu
@ 2023-07-10 12:53                                                                                               ` Ihor Radchenko
  2023-07-10 13:18                                                                                                 ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-10 12:53 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, monnier, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

> Ihor Radchenko <yantar92@posteo.net> writes:
>
>> Sounds not 100% reliable.
>
> It is 100% reliable, at least when implemented with system specific
> code.  POSIX is iffy in this regard.

Then, may GC simply suspend all the threads every time, not just when we
have the memory_full condition?

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-10 11:30                                                                                       ` Ihor Radchenko
  2023-07-10 12:13                                                                                         ` Po Lu
@ 2023-07-10 13:09                                                                                         ` Eli Zaretskii
  2023-07-10 13:58                                                                                           ` Ihor Radchenko
  1 sibling, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-10 13:09 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: monnier, luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: monnier@iro.umontreal.ca, luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Mon, 10 Jul 2023 11:30:10 +0000
> 
> AFAIU, it is currently not possible to redisplay asynchronously.

The main reason for that is that redisplay accesses the global state
in many places, and so it needs that global state to stay put.  We
already have trouble keeping this so, because we allow running Lisp
from various hooks called by redisplay and via :eval in the mode line.
Quite a few bugs were caused by these, and had to be fixed by "fixing
up" the state, like making sure the selected frame/window were not
deleted under your feet etc.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-10 12:53                                                                                               ` Ihor Radchenko
@ 2023-07-10 13:18                                                                                                 ` Po Lu
  0 siblings, 0 replies; 192+ messages in thread
From: Po Lu @ 2023-07-10 13:18 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, monnier, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> Then, may GC simply suspend all the threads every time, not just when we
> have the memory_full condition?

The easiest way to make Emacs's GC thread-safe is to make it suspend
every thread other than the one performing garbage collection.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-10 13:09                                                                                         ` Eli Zaretskii
@ 2023-07-10 13:58                                                                                           ` Ihor Radchenko
  2023-07-10 14:37                                                                                             ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-10 13:58 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: monnier, luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> AFAIU, it is currently not possible to redisplay asynchronously.
>
> The main reason for that is that redisplay accesses the global state
> in many places, and so it needs that global state to stay put.  We
> already have trouble keeping this so, because we allow running Lisp
> from various hooks called by redisplay and via :eval in the mode line.
> Quite a few bugs were caused by these, and had to be fixed by "fixing
> up" the state, like making sure the selected frame/window were not
> deleted under your feet etc.

Do you know which particular parts of the global state are necessary for
redisplay? You mentioned current_frame and current_window. What else?

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-10 13:58                                                                                           ` Ihor Radchenko
@ 2023-07-10 14:37                                                                                             ` Eli Zaretskii
  2023-07-10 14:55                                                                                               ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-10 14:37 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: monnier, luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: monnier@iro.umontreal.ca, luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Mon, 10 Jul 2023 13:58:48 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> AFAIU, it is currently not possible to redisplay asynchronously.
> >
> > The main reason for that is that redisplay accesses the global state
> > in many places, and so it needs that global state to stay put.  We
> > already have trouble keeping this so, because we allow running Lisp
> > from various hooks called by redisplay and via :eval in the mode line.
> > Quite a few bugs were caused by these, and had to be fixed by "fixing
> > up" the state, like making sure the selected frame/window were not
> > deleted under your feet etc.
> 
> Do you know which particular parts of the global state are necessary for
> redisplay? You mentioned current_frame and current_window. What else?

You are asking questions that would require me to do a lot of code
scanning and investigating to produce a full answer.  So please
forgive me if I can afford only some random examples that pop up in my
mind, not even close to the exhaustive list:

  . the window tree (redisplay traverses it depth-first)
  . point position of displayed buffers (needed to decide whether to
    scroll the window)
  . narrowing of each displayed buffer
  . text properties and overlays of each displayed buffer
  . which window is selected on each frame
  . window-start position of each window
  . variables affecting display: display-line-numbers,
    show-trailing-whitespace, line-spacing, auto-compression-mode,
    auto-hscroll-mode, mode-line-format, etc.

And that is just a few seconds of thinking, I'm sure I forget a lot of
others.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09  8:41                                                                   ` Eli Zaretskii
@ 2023-07-10 14:53                                                                     ` Dmitry Gutov
  0 siblings, 0 replies; 192+ messages in thread
From: Dmitry Gutov @ 2023-07-10 14:53 UTC (permalink / raw)
  To: Eli Zaretskii, Ihor Radchenko; +Cc: luangruo, emacs-devel

On 09/07/2023 11:41, Eli Zaretskii wrote:
>> From: Ihor Radchenko <yantar92@posteo.net>
>> Cc: Po Lu <luangruo@yahoo.com>, emacs-devel@gnu.org
>> Date: Sun, 09 Jul 2023 07:57:47 +0000
>>
>> Eli Zaretskii <eliz@gnu.org> writes:
>>
>>>> When programmers write such code for other interactive programs, they
>>>> are comfortable with the limitations of running code outside of the UI
>>>> thread.  Why should writing new, thread-safe Lisp for Emacs be any more
>>>> difficult?
>>> Because we'd need to throw away 40 years of Lisp programming, and
>>> rewrite almost every bit of what was written since then.  It's a huge
>>> setback for writing Emacs applications.
>> May you please elaborate why exactly do we need to rewrite everything?
> We already did, please read the previous messages.  In a nutshell:
> because most of the Lisp code we have cannot be run from an async
> thread.

IME most of the code we have works decently already, and what we've been 
missing is a way to speed up certain crunchy bits without spawning an 
additional Emacs process (with all the coding pain and overhead that 
that entails). Workloads, for example, like creating a buffer (pinned to 
the thread), calling a process (asynchronously or not), getting JSON 
from it, parsing said JSON, processing the result, and returning it in 
some shape to the parent (probably main) thread. Most of the work that 
Gnus does, I think, also fits in that rough category.

Reporting on progress can be done by sending messages to the main 
thread, either via a dedicated mechanism ("messages" like in JavaScript 
Workers), or by changing some global variable as decided by the code 
author. The variable access would have to be synchronized, of course.

The worker thread will need an efficient way to return the computation 
result too (which still might be large, memory-wise), though. Maybe 
it'll use the same communication mechanism as progress report, maybe 
not. But it's something to keep in mind.
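
To make the shape of such a workload concrete, here is a minimal sketch
against today's cooperative `make-thread' (so the synchronous
`call-process' below does not actually run in parallel with the main
thread; the function name is made up, and `json-parse-buffer' assumes an
Emacs built with native JSON support):

(defun my/call-json-command (program args callback)
  "Run PROGRAM with ARGS, parse its JSON output, pass the result to CALLBACK."
  (make-thread
   (lambda ()
     (let (result)
       (with-temp-buffer               ; buffer private to this worker
         (apply #'call-process program nil t nil args)
         (goto-char (point-min))
         (setq result (json-parse-buffer :object-type 'alist)))
       ;; "Return" the result; here simply by calling CALLBACK, which
       ;; could also enqueue a message for the main thread instead.
       (funcall callback result)))
   "json-worker"))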



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-10 14:37                                                                                             ` Eli Zaretskii
@ 2023-07-10 14:55                                                                                               ` Ihor Radchenko
  2023-07-10 16:03                                                                                                 ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-10 14:55 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: monnier, luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> Do you know which particular parts of the global state are necessary for
>> redisplay? You mentioned current_frame and current_window. What else?
>
> You are asking questions that would require me to do a lot of code
> scanning and investigating to produce a full answer.  So please
> forgive me if I can afford only some random examples that pop up in my
> mind, not even close to the exhaustive list:

Thanks, that is good enough.
I mostly wanted to get pointers to interesting places in that 1.2Mb
xdisp.c file to save myself from reading it from top to bottom.

>   . the window tree (redisplay traverses it depth-first)
>   . point position of displayed buffers (needed to decide whether to
>     scroll the window)

Maybe point position in window?

>   . narrowing of each displayed buffer
>   . text properties and overlays of each displayed buffer
>   . which window is selected on each frame
>   . window-start position of each window
>   . variables affecting display: display-line-numbers,
>     show-trailing-whitespace, line-spacing, auto-compression-mode,
>     auto-hscroll-mode, mode-line-format, etc.

These fall into 4 categories:
1. Data attached to the frame object (window tree)
2. Data attached to the window object (point position, window-start, etc.)
3. Data attached to the buffer object (buffer-local variables, narrowing)
4. Global Lisp variables

I have a suspicion that at least windows in a frame might be redisplayed
in parallel, unless some strange Elisp code is allowed to modify things
affecting redisplay.

May I know what happens if redisplay code sets variables like
line-spacing or display-line-numbers? Is it allowed?

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-10 14:55                                                                                               ` Ihor Radchenko
@ 2023-07-10 16:03                                                                                                 ` Eli Zaretskii
  0 siblings, 0 replies; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-10 16:03 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: monnier, luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: monnier@iro.umontreal.ca, luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Mon, 10 Jul 2023 14:55:36 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> > You are asking questions that would require me to do a lot of code
> > scanning and investigating to produce a full answer.  So please
> > forgive me if I can afford only some random examples that pop up in my
> > mind, not even close to the exhaustive list:
> 
> Thanks, that is good enough.
> I mostly wanted to get pointers to interesting places in that 1.2Mb
> xdisp.c file to save myself from reading it from top to bottom.

You'd still need to do that, because the list I posted is nowhere near
complete.

> >   . the window tree (redisplay traverses it depth-first)
> >   . point position of displayed buffers (needed to decide whether to
> >     scroll the window)
> 
> Maybe point position in window?

No, point of the buffer.  We copy the window-point to the buffer point
once, but assume it stays put thereafter, until we are done with the
window.

> >   . narrowing of each displayed buffer
> >   . text properties and overlays of each displayed buffer
> >   . which window is selected on each frame
> >   . window-start position of each window
> >   . variables affecting display: display-line-numbers,
> >     show-trailing-whitespace, line-spacing, auto-compression-mode,
> >     auto-hscroll-mode, mode-line-format, etc.
> 
> These lay in 4 categories:
> 1. Data attached to frame object (window tree)
> 2. Data attached to window object (point position, window-start, etc)
> 3. Data attached to buffer object (buffer-local variables, narrowing)
> 4. Global Lisp variables

Beware: you are categorizing and drawing conclusions based on incomplete
information.

> I have a suspicion that at least windows in a frame might be redisplayed
> in parallel, unless some strange Elisp code is allowed to modify things
> affecting redisplay.

Mostly, yes.  But there are things like resize-mini-windows that can
affect other windows.  Also, this is limited to GUI redisplay; TTY
frames have additional subtleties.

> May I know what happens if redisplay code sets variables like
> line-spacing or display-line-numbers? Is it allowed?

In a nutshell, we need to abort the current redisplay cycle and start
anew.  So doing that is a very bad idea.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-10 11:37                                                                               ` Ihor Radchenko
@ 2023-07-13 13:54                                                                                 ` Gregory Heytings
  2023-07-13 14:23                                                                                   ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Gregory Heytings @ 2023-07-13 13:54 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, luangruo, emacs-devel


>
> while async threads will have to be written keeping in mind that global 
> variables may be changed during execution unless explicitly locked.
>
> And yes, we will need to implement locking on Elisp object level and 
> also on Elisp variable level.
>

It's not completely clear (to me) what you mean by "locking".  But I 
assume you mean two operations, lock/unlock, with which a thread can request 
and waive exclusive access to an object, sleeping until that exclusive 
access is granted.  IOW, mutexes.  If my guess is correct, that is not 
possible.  You cannot use mutexes in a program whose data structures have 
not been organized in a way that makes the use of such synchronization 
primitives possible, without having deadlocks.  Given that in Emacs 
objects are part of an enormous unstructured graph, with pointers leading 
from anywhere to anywhere, that's clearly not the case, and all you can 
use is a single global lock.

But...

>
> But the very need to access global Elisp variables/objects from async 
> threads should not be considered a good practice (except near start/end 
> of thread execution). Most of processing should be done using threads' 
> local lexical scope.
>

... if what you have in mind are async threads that should in fact not 
access objects of the main thread, why is it necessary to lock objects? 
You can prepare the arguments to the async thread in the main thread, 
start the async thread, and when its execution completes process the 
return values of the async thread in the main thread, which is what 
emacs-async already does.

May I ask you if you have a concrete example of a task that you would like 
to perform with such threads, and that cannot already be done with 
emacs-async?  In other words, what are the limitations of emacs-async you 
try to overcome?




^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-13 13:54                                                                                 ` Gregory Heytings
@ 2023-07-13 14:23                                                                                   ` Ihor Radchenko
  0 siblings, 0 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-13 14:23 UTC (permalink / raw)
  To: Gregory Heytings; +Cc: Eli Zaretskii, luangruo, emacs-devel

Gregory Heytings <gregory@heytings.org> writes:

> It's not completely clear (to me) what you mean by "locking".  But I 
> assume you mean two operations lock/unlock with which a thread can request 
> and waive exclusive access to an object, and sleeps until that exclusive 
> access is granted.  IOW, mutexes.

Yes. Although mutexes are not the only way to achieve this.

> If my guess is correct, that is not 
> possible.  You cannot use mutexes in a program whose data structures have 
> not been organized in a way that makes the use of such synchronization 
> primitives possible, without having deadlocks.  Given that in Emacs 
> objects are part of an enormous unstructured graph, with pointers leading 
> from anywhere to anywhere, that's clearly not the case, and all you can 
> use is a single global lock.

I do not think so.
There is usually no need to ensure that a given Elisp object is locked
recursively, with all the linked objects (like all elements in a list).
We just need to ensure that we protect individual Lisp_Objects from
data races.

>> But the very need to access global Elisp variables/objects from async 
>> threads should not be considered a good practice (except near start/end 
>> of thread execution). Most of processing should be done using threads' 
>> local lexical scope.
>
> ... if what you have in mind are async threads that should in fact not 
> access objects of the main thread, why is it necessary to lock
> objects?
> You can prepare the arguments to the async thread in the main thread, 
> start the async thread, and when its execution completes process the 
> return values of the async thread in the main thread, which is what 
> emacs-async already does.

> May I ask you if you have a concrete example of a task that you would like 
> to perform with such threads, and that cannot already be done with 
> emacs-async?  In other words, what are the limitations of emacs-async you 
> try to overcome?

You cannot pass non-printable objects (like markers or buffers). You
cannot pass the Emacs-local state, like user customizations, except for a
small subset. You have to go through a print/read loop to pass the
objects around. And you have to deal with the startup overhead and the
large memory overhead associated with creating a new Emacs process.

Examples:

1. Background parsing of large buffers, where you cannot afford to pass
   the whole buffer and parser state back-and-forth every time the user
   makes changes.
   
2. Searching across multiple buffers, while keeping all the buffer-local
   customizations (including runtime, this-session-only, customizations)
   honored.

3. Stealth fontification of buffers. Not just on idle, but
   asynchronously.

4. Doing something as basic as loop parallelism on multiple CPUs for
   CPU-heavy processing. (Passing the data between Emacs processes would
   defeat the whole purpose here - that's inter-process communication
   overhead + sexp parsing!)

5. Async process communication itself, where process sentinels are not
   trivial and have to do significant CPU-heavy processing.

6. Basically, anything more complex than what can be done using a bunch
   of foo& + wait() in bash.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09 13:58                                                                               ` Ihor Radchenko
  2023-07-09 14:52                                                                                 ` Eli Zaretskii
@ 2023-07-16 14:58                                                                                 ` Ihor Radchenko
  2023-07-17  7:55                                                                                   ` Ihor Radchenko
  2023-07-24  8:42                                                                                 ` Ihor Radchenko
  2 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-16 14:58 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> 4. Buffer-local variables, defined in C have C variable equivalents that
>    are updated as Emacs changes current_buffer.
>    
>    AFAIU, their purpose is to make buffer-local variables and normal
>    Elisp variables uniformly accessible from C code - C code does not
>    need to worry about Vfoo being buffer-local or not, and just set it.
>
>    This is not compatible with async threads that work with several buffers.
>
>    I currently do not fully understand how defining C variables works in
>    DEFVAR_LISP.

Currently, Emacs has a huge `globals' struct, with one field for each
variable defined from C with DEFVAR_LISP and friends.

On the C side, we have macros like
#define Vdebug_on_error globals.f_Vdebug_on_error
so Vdebug_on_error can be used as if it were an ordinary C variable.
The location of the stored Elisp value is thus fixed in memory.

On the Elisp side, the value cells of such symbols point to a special
internal type, called an "object forwarder", which redirects reads and
writes of the symbol value to the place where the value is actually
stored.

For global variables, the above scheme is simple.
Tricky things start to happen when we make a variable that has a C
reference buffer-local - the Vfoo C variable will always point to
the same address in ~globals~, but that address can only hold a
single value.

So, Emacs has to keep updating ~globals~ every time the current
buffer changes:

1. The old value stored in ~globals~ is recorded in the buffer object
being switched away from.
2. ~globals~ is reassigned to the value loaded from the new buffer.

(The situation is a bit more complex, because the update is done
lazily, only when the new buffer actually has a separate
buffer-local binding for the given variable; but the basic scheme is
as described.)
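
In rough sketch form (illustrative names only, not the actual
identifiers from the Emacs sources):

  struct globals_sketch { Lisp_Object f_Vfoo; };
  struct buffer_sketch  { Lisp_Object foo_value; };
  struct globals_sketch globals_sketch;
  #define Vfoo globals_sketch.f_Vfoo	/* C code just uses Vfoo */

  /* Conceptually run on every buffer switch.  */
  void
  sketch_swap_buffer_local (struct buffer_sketch *from,
			    struct buffer_sketch *to)
  {
    from->foo_value = Vfoo;	/* step 1: save into the old buffer */
    Vfoo = to->foo_value;	/* step 2: load from the new buffer */
  }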

The same kind of update happens when Emacs enters/exits a let-binding
or switches between the current cooperative threads.

---

The easiest way not to break the current Emacs logic would be to make
~globals~ thread-local. Then the current code could remain intact, if
we are content with global variables being synchronized (or not)
between threads manually (say, when a thread exits or when it calls
something like `thread-synchronize-globals').

~globals~ currently has 457 variables, which is \approx{}
=sizeof (Lisp_Object) * 457= memory per thread.

If memory is an issue, we may want to store only a list of actually
changed variables in each thread, replacing the current Vfoo macros
with something like VSETfoo + VGETfoo. VSETfoo would always assign the
thread-local binding, adding it as necessary. VGETfoo would first
check the thread-local binding and fall back to the global struct.
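
In sketch form (the helper names below are made up, this is not a
patch):

  /* VSETfoo always writes the thread-local binding, creating it if
     needed; VGETfoo prefers the thread-local binding and falls back
     to the shared struct.  */
  #define VSETfoo(val) \
    thread_local_set (current_thread, GLOBAL_ID_foo, (val))
  #define VGETfoo() \
    (thread_local_boundp (current_thread, GLOBAL_ID_foo) \
     ? thread_local_get (current_thread, GLOBAL_ID_foo)  \
     : globals.f_Vfoo)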

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-16 14:58                                                                                 ` Ihor Radchenko
@ 2023-07-17  7:55                                                                                   ` Ihor Radchenko
  2023-07-17  8:36                                                                                     ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-17  7:55 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> The same update is happening when Emacs enters/exits let-binding or
> switches between current cooperative threads.

Which is actually quite a big issue.

When Emacs descends into a let-binding and dynamic scope is active, it
stores the existing variable value on a stack and sets the new value
globally, in the symbol's value slot.

When ascending out of the let, the new value is discarded and the old
value, recovered from the stack, is set globally again.

And the cooperative threads simply manipulate this stack to store the
thread-local values before a thread yields and to load the values from
the next active thread's stack. (Though I did not look too closely here.)
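
In simplified sketch form (the helper names are made up; the real code
is specbind/unbind_to in eval.c):

  /* Entering a dynamic let: save the old value on the (per-thread)
     binding stack, then write the new value into the symbol's global
     value slot, visible to everyone.  */
  void
  sketch_specbind (struct Lisp_Symbol *sym, Lisp_Object new_value)
  {
    push_binding (sym, sketch_symbol_value (sym));
    sketch_set_symbol_value (sym, new_value);
  }

  /* Leaving the let: pop the saved value and write it back globally.  */
  void
  sketch_unbind (struct Lisp_Symbol *sym)
  {
    sketch_set_symbol_value (sym, pop_binding (sym));
  }

With truly parallel threads, two threads let-binding the same symbol
would interleave these global writes.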

--

The conclusion so far: it will be difficult to support dynamic scope
if we want things to run in parallel.

We would need to implement some way to store symbol values in every
thread (not necessarily all values, but at least those of the symbols
changed by the thread function).

I can see several ways to approach this:

1. We can maintain a separate, sparse (just for the changed variables)
   obarray inside each thread, so that `intern'
   returns thread-local symbol objects.

2. Force lexical scope inside threads, so that the values do not have to
   be stored in symbol value slots, but within a per-thread lexical
   scope structure.
   This, however, will not work with things like `cl-letf', where Elisp
   code may want to temporarily set symbol function slots (see the
   example after this list).

3. Store multiple value and function slots in symbol objects.
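
For example, code like

  (cl-letf (((symbol-function 'message) #'ignore))
    (do-something-silently))   ; hypothetical function

temporarily replaces a function slot, which is inherently a global,
dynamic operation that a purely lexical per-thread scope cannot
express.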

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-17  7:55                                                                                   ` Ihor Radchenko
@ 2023-07-17  8:36                                                                                     ` Po Lu
  2023-07-17  8:52                                                                                       ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-17  8:36 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> I can see several ways to approach this:
>
> 1. We can maintain a separate, sparse (just for changed variables)
>    obarray inside threads, so that `intern'
>    will point to thread-local symbol objects.

Interning happens at read (and thus load) time.  Byte-code vectors and
lambdas given to make-thread contain symbols, and not their names.  A
thread-local obarray will not be helpful.

> 2. Force lexical scope inside threads, so that the values do not have to
>    be stored in object value slots, but within per-thread lexical scope
>    structure.
>    This however, will not work when using things like `cl-letf' where
>    Elisp code may want to temporarily set symbol function slots.

This is not realistic.

> 3. Store multiple value and function slots in symbol objects.

Why would this be difficult?  It would slow down accesses to value and
function slots (especially those which aren't bound dynamically), but
that's an unavoidable drawback of multiprocessing.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-17  8:36                                                                                     ` Po Lu
@ 2023-07-17  8:52                                                                                       ` Ihor Radchenko
  2023-07-17  9:39                                                                                         ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-17  8:52 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

>> 1. We can maintain a separate, sparse (just for changed variables)
>>    obarray inside threads, so that `intern'
>>    will point to thread-local symbol objects.
>
> Interning happens at read (and thus load) time.  Byte-code vectors and
> lambdas given to make-thread contain symbols, and not their names.  A
> thread-local obarray will not be helpful.

Fair point. Alas.

>> 3. Store multiple value and function slots in symbol objects.
>
> Why would this be difficult?  It would slow down accesses to value and
> function slots (especially those which aren't bound dynamically), but
> that's an unavoidable drawback of multiprocessing.

Mostly because it will involve more changes than I was hoping for. And I
am not sure how things will affect memory alignment of symbol objects (I
do not fully understand the relevant comments in lisp.h) - we should be
careful about concurrent read access to struct slots in shared objects.
If there is some memory re-alignment happening concurrently, we may
either have to use READ_ONCE (and degrade performance) or be extra
careful to ensure that memory does not get shifted around, so that we
do not end up reading a wrong memory address because some other thread
wrote stuff into the same symbol during the read.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-17  8:52                                                                                       ` Ihor Radchenko
@ 2023-07-17  9:39                                                                                         ` Po Lu
  2023-07-17  9:54                                                                                           ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-17  9:39 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> Mostly because it will involve more changes than I was hoping for. 

Fundamental changes to the Emacs Lisp runtime will unavoidably involve
an immense number of changes, of course.

> And I am not sure how things will affect memory alignment of symbol
> objects (I do not fully understand the relevant comments in lisp.h)

How is any of this relevant to symbol alignment?  Please tell us which
comments you're referring to.

> we should be careful about concurrent read access to struct slots in
> shared objects.

Fortunately, there are very few direct references to fields within
Lisp_Symbol.

> If there is some memory re-alignment happening concurrently, we may
> either have to use READ_ONCE (and degrade performance) or be extra
> careful to ensure that memory does not get shifted around, and we do
> not end up trying to read wrong memory address because some other
> thread wrote staff into the same symbol during the read.

Memory re-alignment?



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-17  9:39                                                                                         ` Po Lu
@ 2023-07-17  9:54                                                                                           ` Ihor Radchenko
  2023-07-17 10:08                                                                                             ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-17  9:54 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

>> And I am not sure how things will affect memory alignment of symbol
>> objects (I do not fully understand the relevant comments in lisp.h)
>
> How is any of this relevant to symbol alignment?  Please tell us which
> comments you're referring to.

GCALIGNED_UNION_MEMBER

>> we should be careful about concurrent read access to struct slots in
>> shared objects.
>
> Fortunately, there are very few direct references to fields within
> Lisp_Symbol.

What I mean is a situation where we try to read sym->u.s.val.value, but
the value slot becomes an array, Lisp_Object value[].

Then, realloc calls in another thread may create a race condition:
accessing an array element may dereference an obsolete memory address
that was only valid prior to the realloc.

Of course, this is just a trivial example. I am worried about less
obvious scenarios (those I can't predict in advance).

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-17  9:54                                                                                           ` Ihor Radchenko
@ 2023-07-17 10:08                                                                                             ` Po Lu
  0 siblings, 0 replies; 192+ messages in thread
From: Po Lu @ 2023-07-17 10:08 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> GCALIGNED_UNION_MEMBER

Its purpose is to ensure that Lisp structures are aligned sufficiently
for mark_memory to locate them when they are placed on the stack,
through the optional AUTO_CONS or AUTO_STRING mechanism.

> What I mean is a situation when we try to read sym->u.s.val.value, but
> the value becomes Lisp_Object value[].
>
> Then, realloc calls in other thread may create a race condition when
> accessing array element may point to obsolete memory address that was
> only valid prior to realloc.

My implementation used a lock around a linked list of value and function
cells, only taken if the thread sees that the list is not empty.
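
In outline (a sketch only; the field and lock names below are
invented, not the actual code):

  static sys_mutex_t symbol_cells_lock;	/* invented lock */

  struct thread_value_cell
  {
    struct thread_state *thread;
    Lisp_Object value;
    struct thread_value_cell *next;
  };

  Lisp_Object
  thread_symbol_value (struct Lisp_Symbol *sym)
  {
    Lisp_Object val = sym->u.s.val.value;	/* global cell */
    if (sym->thread_cells)			/* invented field */
      {
	sys_mutex_lock (&symbol_cells_lock);
	for (struct thread_value_cell *c = sym->thread_cells;
	     c; c = c->next)
	  if (c->thread == current_thread)
	    val = c->value;
	sys_mutex_unlock (&symbol_cells_lock);
      }
    return val;
  }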



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-05  2:31   ` Eli Zaretskii
@ 2023-07-17 20:43     ` Hugo Thunnissen
  2023-07-18  4:51       ` tomas
  2023-07-18  5:25       ` Ihor Radchenko
  0 siblings, 2 replies; 192+ messages in thread
From: Hugo Thunnissen @ 2023-07-17 20:43 UTC (permalink / raw)
  To: Eli Zaretskii, emacs-devel, yantar92


On 7/5/23 04:31, Eli Zaretskii wrote:
> No, because we already handle sub-process output in a way that doesn't
> require true concurrency.
>
Not to hijack this thread, and this is only tangentially related to
the message I'm responding to, but asynchronous IO like Emacs has for
subprocesses, network connections and the like is, in my opinion, a
more important feature than true concurrency. Asynchronous IO is also
what can make the current lisp threads useful. I think that most of the
time when people want to reach for concurrency in an Emacs lisp program
it is to retrieve, process, store or emit data in the background.
Correct me if you think I'm wrong, but I suspect that execution time,
parallelism or performance isn't usually much of an issue: I think what
people usually want is a way of processing or synchronizing things
without having the user wait on an unresponsive Emacs. Combining lisp
threads, asynchronous IO (not async execution) facilities and a little
bit of polling of `input-pending-p' can, in my opinion, already get
this job done most of the time.
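
The pattern I have in mind is roughly this (a simplified sketch, not
actual package code; `work-items' and `process-item' are placeholders):

  (make-thread
   (lambda ()
     (dolist (item work-items)
       (process-item item)
       (when (input-pending-p)
         (thread-yield)))))       ; let the main thread handle input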

Ihor Radchenko wrote:

> Yes, most of Elisp is about text processing. But when we really need to
> utilize asynchronous code, it is usually not about reading/writing text
> - it is about CPU-heavy analysis of text. This analysis is what truly
> needs async threads. Writing back, if it is necessary, may be separated
> from the analysis code or even done in separate buffer followed by
> `buffer-swap-text' or `replace-buffer-contents'.

I am skeptical that there are all that many use cases for the
asynchronous analysis of text in buffers that are currently being
edited. Correct me if I'm wrong, but AFAIU programs that analyze text
as it is being edited will usually need to re-parse after every edit
anyway, making a parse during the edit itself not all that useful:
after all, the result is no longer accurate for the buffer contents by
the time it becomes available. A synchronous parser can probably do
just as good a job in such a scenario, as long as it is interruptible
by user input.

I'm not familiar with the C core at all, but do you think it might be
more realistic to add a few more facilities for asynchronous data
streams than to rework the execution model of Emacs to add
multithreading? I'm mainly thinking of something like "channels" or
"queues" for lisp objects, with an easy-to-use scheduling mechanism
that makes it straightforward for people to not stall the main thread
for too long.

And another (very) nice-to-have feature: asynchronous filesystem IO.
Consider the code below, which I use in a package I'm working on. My
package reads and parses a large number of files in the background
within a short time frame. Parsing/processing the files in a timely
manner is never really an issue, but the blocking IO of
`insert-file-contents' does often take so long that it is impossible
for the user not to notice, even when polling `input-pending-p' and
yielding in between operations. The workaround below makes the thread
virtually unnoticeable in the majority of cases, but it would have been
nice to have a native elisp facility for this as opposed to using `cat`
in an asynchronous process.

----

(defconst phpinspect--cat-executable (executable-find "cat")
  "The executable used to read files asynchronously from the filesystem.")

(defsubst phpinspect--insert-file-contents-asynchronously (file)
  "Insert FILE contents into the current buffer asynchronously, while
yielding the current thread.

Errors when executed in the main thread, as it should be used to make
background operations less invasive. Usage in the main thread can
only be the result of a logic error."
  (let* ((thread (current-thread))
         (mx (make-mutex))
         (condition (make-condition-variable mx))
         (err)
         ;; The sentinel wakes this thread up again (via the condition
         ;; variable) once `cat' has finished or failed.
         (sentinel
          (lambda (process event)
            (with-mutex mx
              (if (string-match-p "^\\(deleted\\|exited\\|failed\\|connection\\)" event)
                  (progn
                    (setq err (format "cat process %s failed with event: %s"
                                      process event))
                    (condition-notify condition))
                (when (string-match-p "^finished" event)
                  (condition-notify condition)))))))
    (when (not phpinspect--cat-executable)
      (error
       "ERROR: phpinspect--insert-file-contents-asynchronously called when cat-executable is not set"))

    (when (eq thread main-thread)
      (error "ERROR: phpinspect--insert-file-contents-asynchronously called from main-thread"))

    (with-mutex mx
      (make-process :name "phpinspect--insert-file-contents-asynchronously"
                    :command `(,phpinspect--cat-executable ,file)
                    :buffer (current-buffer)
                    :sentinel sentinel)

      ;; `condition-wait' releases the mutex while waiting, so the
      ;; sentinel can acquire it and notify us.
      (condition-wait condition)
      (when err (error err)))))

(cl-defmethod phpinspect-fs-insert-file-contents ((fs phpinspect-fs) file &optional prefer-async)
  "Insert file contents from FILE."
  (if (and prefer-async (not (eq (current-thread) main-thread))
           phpinspect--cat-executable)
      (phpinspect--insert-file-contents-asynchronously file)
    (insert-file-contents-literally file)))




^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-17 20:43     ` Hugo Thunnissen
@ 2023-07-18  4:51       ` tomas
  2023-07-18  5:25       ` Ihor Radchenko
  1 sibling, 0 replies; 192+ messages in thread
From: tomas @ 2023-07-18  4:51 UTC (permalink / raw)
  To: emacs-devel

On Mon, Jul 17, 2023 at 10:43:26PM +0200, Hugo Thunnissen wrote:
> 
> On 7/5/23 04:31, Eli Zaretskii wrote:
> > No, because we already handle sub-process output in a way that doesn't
> > require true concurrency.
> > 
> Not to hijack this thread, and this is only tangentially related to the
> message I'm responding to, but asynchronous IO like Emacs has for sub
> processes, network connections and the like are, in my opinion, more
> important features than true concurrency. Asynchronous IO is also what can
> make current lisp threads useful. I think that most of the time when people
> want to reach for concurrency in an Emacs lisp program it is to retrieve,
> process, store or emit data in the background [...]

I couldn't agree more. I'd add: true parallelism is a whole order of
magnitude more complex than just avoiding blocking operations. You
want the former when you need to harness more than one CPU in your
process. Like you, I don't think we need that in Emacs just yet.

But I think that there's a lot of the latter on the table for Emacs
until we reach a point of diminishing returns (and, btw, tackling the
easier part prepares the code for the more difficult one).

I've been watching the Java story from the sidelines for a long time
(there was a huge push towards true parallelism, because Sun back then
was envisioning massively parallel processors). They tried with
"classical" interlocking. It was a world of pain. Their attempt to
develop a parallel GUI toolkit failed miserably (it's tempting: the
idea that each widget lives a life of its own and interacts with the
others and with the user is compelling) and they had to shove the
whole thing into one thread after all (Po Lu is right).

There are other approaches which feel more manageable (communicating
processes over channels, à la Erlang or Concurrent ML). There is a
very nice series of blog posts by Andy Wingo, Guile's maintainer, on
that topic (e.g. [1], [2]). But Emacs's architecture is just too far
away from that for it to be a viable option, methinks.

The other extreme is to design lockless algorithms from the ground
up (cf. the ref to Paul McKenney I posted elsewhere) as it is
being done in the Linux kernel.

Still I think those approaches aren't (yet?) for Emacs. Of course,
it makes a lot of sense to "draw a future picture" to have an idea
of what direction one wants to take...

Cheers

[1] https://wingolog.org/archives/2016/09/20/concurrent-ml-versus-go
[2] https://wingolog.org/archives/2017/06/29/a-new-concurrent-ml

-- 
t


^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-17 20:43     ` Hugo Thunnissen
  2023-07-18  4:51       ` tomas
@ 2023-07-18  5:25       ` Ihor Radchenko
  2023-07-18  5:39         ` Po Lu
  2023-07-18 12:14         ` Hugo Thunnissen
  1 sibling, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-18  5:25 UTC (permalink / raw)
  To: Hugo Thunnissen; +Cc: Eli Zaretskii, emacs-devel

Hugo Thunnissen <devel@hugot.nl> writes:

>> Yes, most of Elisp is about text processing. But when we really need to
>> utilize asynchronous code, it is usually not about reading/writing text
>> - it is about CPU-heavy analysis of text. This analysis is what truly
>> needs async threads.
>
> I am skeptical that there are all that many use cases for the 
> asynchronous analysis of text in buffers that are currently being 
> edited.

Org does it via idle timers.
Also, consider searching across many buffers - that's a legitimate use
case. Think about AG/RG and other multi-threaded text searchers.
If the buffers are also structured and need to be parsed, we arrive at
a situation where parallelism can be quite useful.

> ... Correct me if I'm wrong, AFAIU programs that analyze text as it 
> is being edited will usually need to re-parse after every edit anyways, 
> making a parse during the edit itself not all that useful: After all, 
> the result is no longer accurate for the buffer contents by the time it 
> becomes available. A synchronous parser can probably do just as good of 
> a job in such a scenario, as long as it is interruptible by user input.

No. It still makes sense to re-parse incrementally before the point
where the edits are being made. For example, the Org parser has to
trash everything that may theoretically be affected by the current
edits and re-parse later, synchronously, on demand. This trashing
sometimes involves rather big chunks of text, and it would
significantly speed things up if we could do part of the incremental
parsing in parallel.

> I'm not familiar with the C core at all, do you think it might be more 
> realistic to add a little more facilities for asynchronous data streams 
> than to rework the execution model of Emacs to add multi threading? I'm 
> mainly thinking of something like "channels" or "queues" for lisp 
> objects, with an easy to use scheduling mechanism that makes it 
> straightforward for people to not stall the main thread for too long.

AFAIK, sentinels do this already. Of course, they are quite basic.
As for queues, see https://emacsconf.org/2022/talks/async/

> And another (very) nice to have feature: asynchronous filesystem IO. 
> Consider the code below which I use in a package I'm working on. My 
> package reads and parses a large amount of files in the background 
> within a short time frame. Parsing/processing the files in a timely 
> manner is never really an issue, but the blocking IO of 
> `insert-file-contents' does often take so long that it is  impossible to 
> not have the user notice, even if polling `input-pending-p' and yielding 
> in between operations.

Well. I have the opposite problem, where `read' takes seconds to read a
few Mb of Elisp data, while inserting it into a temporary buffer is
instant.

Also, did you actually profile your use case? I doubt that reading
files takes a long time on modern systems. In my previous tests, I
tried to open many tens of thousands of files using

(let ((buffer (get-buffer-create " *Processing*")))
  (with-current-buffer buffer
    (let ((buffer-undo-list t))  ; disable undo recording
      (dolist (f files)
        (insert-file-contents f nil nil nil 'replace)
        (do-stuff)))))

And it was very, very fast.
What was not fast (by orders of magnitude) was opening the files
directly, which triggers all kinds of user hooks.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-18  5:25       ` Ihor Radchenko
@ 2023-07-18  5:39         ` Po Lu
  2023-07-18  5:49           ` Ihor Radchenko
  2023-07-18 12:14         ` Hugo Thunnissen
  1 sibling, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-18  5:39 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Hugo Thunnissen, Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> Also, did you actually profile your use case? I actually doubt that
> reading file takes a long time on modern systems. In my previous tests,
> I tried to open multiple tens of thousands files using
>
> (let ((buffer (get-buffer-create " *Processing*")))
>   (with-current-buffer buffer
>     (let ((buffer-undo-list t))
>       (dolist (f files)
>         (insert-file-contents f nil nil nil 'replace)
>         (do-stuff)))))
>
> And it was very, very fast.

Now do this over NFS, and see how much slower it becomes.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-18  5:39         ` Po Lu
@ 2023-07-18  5:49           ` Ihor Radchenko
  0 siblings, 0 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-18  5:49 UTC (permalink / raw)
  To: Po Lu; +Cc: Hugo Thunnissen, Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

> Now do this over NFS, and see how much slower it becomes.

Fair point.
That said, I myself am not very interested in async IO. Others may take
up this work, if they wish to.

Though I still have a feeling that Hugo had different reasons for the
slowdown - on the Elisp side. We can probably identify them.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-18  5:25       ` Ihor Radchenko
  2023-07-18  5:39         ` Po Lu
@ 2023-07-18 12:14         ` Hugo Thunnissen
  2023-07-18 12:39           ` Async IO and queing process sentinels (was: Concurrency via isolated process/thread) Ihor Radchenko
  1 sibling, 1 reply; 192+ messages in thread
From: Hugo Thunnissen @ 2023-07-18 12:14 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: emacs-devel


On 7/18/23 07:25, Ihor Radchenko wrote:
> Hugo Thunnissen <devel@hugot.nl> writes:
>
>>> Yes, most of Elisp is about text processing. But when we really need to
>>> utilize asynchronous code, it is usually not about reading/writing text
>>> - it is about CPU-heavy analysis of text. This analysis is what truly
>>> needs async threads.
>> I am skeptical that there are all that many use cases for the
>> asynchronous analysis of text in buffers that are currently being
>> edited.
> Org does it via idle timers.
> Also, consider searching across many buffers - that's a legitimate use
> case. Think about AG/RG and other multi-threaded text searchers.
> If the buffers are also structured and need to be parsed, we arrive at
> the situation when parallelism can be quite useful.

True, for that scenario parallelism could be useful.

>
>> ... Correct me if I'm wrong, AFAIU programs that analyze text as it
>> is being edited will usually need to re-parse after every edit anyways,
>> making a parse during the edit itself not all that useful: After all,
>> the result is no longer accurate for the buffer contents by the time it
>> becomes available. A synchronous parser can probably do just as good of
>> a job in such a scenario, as long as it is interruptible by user input.
> No. It still makes sense to re-parse incrementally before the point
> where the edits are being made. For example, Org parser has to trash
> everything that may theoretically be affected by current edits and
> re-parse later, synchronously, on demand. This trashing sometimes
> involve rather big chunks of text, and it would significantly speed
> things up if we could do part of the incremental parsing in parallel.

I see. I am not familiar with org-mode's code so I'll take your word for it.

From my own limited experience, I think that most of the time the user
isn't looking for "instant" feedback, but just wants feedback to appear
shortly after they stop typing. I would expect a parser, even if not
incremental, to be able to parse most buffers within 10 to maybe 200
milliseconds. I think this is an acceptable wait time for feedback,
provided that the parse is interruptible by user input, so that it can
never make the user wait for an unresponsive Emacs. Having less wait
time through the use of multithreading might be slightly nicer, but I
don't see it making that much of a difference for the user experience
in most modes. Would you agree with that as a general statement?

>> I'm not familiar with the C core at all, do you think it might be more
>> realistic to add a little more facilities for asynchronous data streams
>> than to rework the execution model of Emacs to add multi threading? I'm
>> mainly thinking of something like "channels" or "queues" for lisp
>> objects, with an easy to use scheduling mechanism that makes it
>> straightforward for people to not stall the main thread for too long.
> AFAIK, sentinels do this already. Of course, they are quite basic.
> As for queues, see https://emacsconf.org/2022/talks/async/

Sentinels are for external processes. I was thinking more in the vein
of a queue that is owned by two lisp threads, where one thread can
write and prepare input for the other, and the other thread yields as
long as there is no new input to process. When a thread receives an
input message from the other, it will come alive at the next idle
moment and process the queue while yielding in between messages. This
would make it easier to process data in chunks without halting the
main thread long enough to bother the user, as long as the main thread
is prioritized over the others.


>> And another (very) nice to have feature: asynchronous filesystem IO.
>> Consider the code below which I use in a package I'm working on. My
>> package reads and parses a large amount of files in the background
>> within a short time frame. Parsing/processing the files in a timely
>> manner is never really an issue, but the blocking IO of
>> `insert-file-contents' does often take so long that it is  impossible to
>> not have the user notice, even if polling `input-pending-p' and yielding
>> in between operations.
> Well. I have an opposite problem when `read' takes second to read few Mb
> of Elisp data, while inserting it into a temporary buffer is instant.
>
> Also, did you actually profile your use case?

I am ashamed to admit that I did not: I did not know about the profiler
at the time, so I just did guesswork until I seemed to be getting
results. There seemed to be a noticeable difference at the time, but at
the moment I'm having trouble reproducing it, so needless to say I did 
not find a large difference re-running both solutions with the profiler 
enabled. I did make some other large changes in the codebase since I 
made this change, so it could very well be that the thing that made 
Emacs hang at the time was not filesystem IO at all. Or there was 
something else going on with the specific files that were being parsed 
(I don't remember which folder I did my tests on). Sorry for being the 
typical guy who makes claims about performance without doing his due 
diligence ;)

I still suspect that async filesystem IO could make a considerable 
difference in some scenarios. My package is for PHP projects and it is 
not unheard of for PHP files to be edited over network mounts where IO 
could have a lot more latency. For example, some people have asked me 
whether my package works over tramp. I haven't profiled this scenario 
yet as I have to make some tweaks first to make it work at all, but I 
fear that blocking on `insert-file-contents' may not be ideal in that 
scenario.

And then there also is `directory-files-recursively':

          457  77%         - phpinspect-index-current-project
          457  77%          - let*
          371  63%           - if
          371  63%            - progn
          371  63%             - let
          371  63%              - while
          371  63%               - let
          371  63%                - if
          371  63%                 - progn
          371  63%                  - let
          371  63%                   - while
          371  63%                    - let
          371  63%                     - if
          361  61%                      - progn
          361  61%                       - let*
          312  53%                        - if
          312  53%                         - progn
          308  52%                          - maphash
          308  52%                           - #<lambda -0x2edd06728f18cc0>
          308  52%                            - let
          305  51%                             - if
          305  51%                              - progn
          305  51%                               - phpinspect-al-strategy-fill-typehash
          305  51%                                - apply
          292  49%                                 - #<lambda 0x725f4ad90660>
          292  49%                                  - progn
          292  49%                                   - let
          292  49%                                    - let
          292  49%                                     - while
          292  49%                                      - let
          292  49%                                       - let
          172  29%                                        - while
          172  29%                                         + let
          120  20%                                        - phpinspect-fs-directory-files-recursively
          120  20%                                         - apply
          120  20%                                          - #<lambda 0xf3386f2a7932164>
          120  20%                                           - progn
          120  20%                                           - progn
          120  20%                                            - directory-files-recursively
           79  13%                                             - directory-files-recursively
           21   3%                                              - directory-files-recursively
           11   1%                                                directory-files-recursively
            4   0%                                              sort
            3   0%                                              file-remote-p






^ permalink raw reply	[flat|nested] 192+ messages in thread

* Async IO and queing process sentinels (was: Concurrency via isolated process/thread)
  2023-07-18 12:14         ` Hugo Thunnissen
@ 2023-07-18 12:39           ` Ihor Radchenko
  2023-07-18 12:49             ` Ihor Radchenko
  2023-07-18 14:12             ` Async IO and queing process sentinels Michael Albinus
  0 siblings, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-18 12:39 UTC (permalink / raw)
  To: Hugo Thunnissen; +Cc: emacs-devel, Michael Albinus

Hugo Thunnissen <devel@hugot.nl> writes:

>>> I'm not familiar with the C core at all, do you think it might be more
>>> realistic to add a little more facilities for asynchronous data streams
>>> than to rework the execution model of Emacs to add multi threading? I'm
>>> mainly thinking of something like "channels" or "queues" for lisp
>>> objects, with an easy to use scheduling mechanism that makes it
>>> straightforward for people to not stall the main thread for too long.
>> AFAIK, sentinels do this already. Of course, they are quite basic.
>> As for queues, see https://emacsconf.org/2022/talks/async/
>
> Sentinels are for external processes. I was thinking more in the vein of 
> a queue that is owned by two lisp threads, where one thread can write 
> and prepare input for the other, and the other thread yields as long as 
> there is no new input to process. When a thread receives an input 
> message from the other, it will come alive at the next idling moment and 
> process the queue while yielding in between messages. This would make it 
> easier to process data in chunks without halting the main thread long 
> enough to bother the user, as long as the main thread is prioritized 
> over others.

What you describe sounds very similar to `condition-wait'/`condition-notify'.
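
For example, a minimal queue built on the existing primitives could
look like this (a sketch only; the names are made up):

  (defvar my-queue nil)
  (defvar my-queue-mutex (make-mutex))
  (defvar my-queue-cond (make-condition-variable my-queue-mutex))

  (defun my-queue-push (item)
    (with-mutex my-queue-mutex
      (setq my-queue (nconc my-queue (list item)))
      (condition-notify my-queue-cond)))

  (defun my-queue-pop ()
    (with-mutex my-queue-mutex
      (while (null my-queue)
        (condition-wait my-queue-cond))  ; releases the mutex while waiting
      (pop my-queue)))

The consumer thread blocks in `condition-wait' until the producer
pushes something and notifies.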

That said, I have seen scenarios where I have several dozen network
connections open, receiving data from servers, and all their sentinels
fire at nearly the same time. This effectively blocks Emacs, because
all the sentinels are suddenly scheduled to run together.
It would be nice to have some better balancing in such cases.

> I still suspect that async filesystem IO could make a considerable 
> difference in some scenarios. My package is for PHP projects and it is 
> not unheard of for PHP files to be edited over network mounts where IO 
> could have a lot more latency. For example, some people have asked me 
> whether my package works over tramp. I haven't profiled this scenario 
> yet as I have to make some tweaks first to make it work at all, but I 
> fear that blocking on `insert-file-contents' may not be ideal in that 
> scenario.

AFAIR, Michael Albinus (the maintainer of TRAMP) is working on better
support for async processes in TRAMP. And he had troubles with managing
async process queues as well.

CCing him, as this discussion might be relevant.

> And then there also is `directory-files-recursively':
>
>           457  77% - phpinspect-index-current-project
>           120  20%    -directory-files-recursively
>            79  13%     -directory-files-recursively
>            21   3%        -directory-files-recursively
>            11 1%            directory-files-recursively

Note how functions involving actual IO do not show up in the backtrace:
`file-name-all-completions' and `file-symlink-p' are not there.

Most of the 120 ms spent inside `directory-files-recursively' is Elisp
recursive calls.

I also did testing locally, on my $HOME dir, and got similar results
with recursive calls taking most of the CPU time:

         463  29%            - directory-files-recursively
         434  27%             - directory-files-recursively
         305  19%              - directory-files-recursively
         111   7%               - directory-files-recursively
          85   5%                - directory-files-recursively
          62   3%                 - directory-files-recursively
          24   1%                  - directory-files-recursively
          10   0%                   - directory-files-recursively
          10   0%                    - directory-files-recursively
           7   0%                     - directory-files-recursively
           7   0%                        directory-files-recursively
           3   0%                    sort
           3   0%                   sort
           3   0%                sort

If you want to improve performance here, you likely need to rewrite
`directory-files-recursively' without recursion. Doable, and nothing to
do with IO slowness. 

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Async IO and queing process sentinels (was: Concurrency via isolated process/thread)
  2023-07-18 12:39           ` Async IO and queing process sentinels (was: Concurrency via isolated process/thread) Ihor Radchenko
@ 2023-07-18 12:49             ` Ihor Radchenko
  2023-07-18 14:12             ` Async IO and queing process sentinels Michael Albinus
  1 sibling, 0 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-18 12:49 UTC (permalink / raw)
  To: Hugo Thunnissen; +Cc: emacs-devel, Michael Albinus

Ihor Radchenko <yantar92@posteo.net> writes:

> I also did testing locally, on my $HOME dir, and got similar results
> with recursive calls taking most of the CPU time:
>
>          463  29%            - directory-files-recursively
>          434  27%             - directory-files-recursively
>          305  19%              - directory-files-recursively
> ...
>
> If you want to improve performance here, you likely need to rewrite
> `directory-files-recursively' without recursion. Doable, and nothing to
> do with IO slowness. 

Not recursion, actually. I did a bit more elaborate profiling with perf,
and it looks like most of the time is spent matching regexps against
file names:

The actual tested command was (ignore (directory-files-recursively "/home/yantar92/.data" ".+"))

    33.82%  emacs         emacs                        [.] re_match_2_internal
    14.87%  emacs         emacs                        [.] process_mark_stack
     8.47%  emacs         emacs                        [.] Fnconc
     6.07%  emacs         emacs                        [.] re_search_2
     2.36%  emacs         emacs                        [.] unbind_to
     2.08%  emacs         emacs                        [.] sweep_strings
     1.98%  emacs         emacs                        [.] compile_pattern
     1.64%  emacs         emacs                        [.] execute_charset
     1.64%  emacs         emacs                        [.] assq_no_quit
     1.40%  emacs         emacs                        [.] sweep_conses
     1.01%  emacs         emacs                        [.] plist_get
     0.97%  emacs         emacs                        [.] set_buffer_internal_2
     0.86%  emacs         emacs                        [.] RE_SETUP_SYNTAX_TABLE_FOR_OBJECT
     0.84%  emacs         emacs                        [.] mark_interval_tree_1
     0.75%  emacs         emacs                        [.] internal_equal
     0.73%  emacs         emacs                        [.] Ffind_file_name_handler

There was quite a bit of GC (not included in the Elisp profile), which
shows up as *mark* and *sweep* calls.

No IO at all shows up in the backtrace. Even Ffind_file_name_handler is
simply matching the filename against `file-name-handler-alist', calling
`insert-directory-program' somewhere in the process (the call does not
show up anywhere high in the profile).

Most of the time is spent doing regexp matching in various places and
building the actual (long) list of files.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Async IO and queing process sentinels
  2023-07-18 12:39           ` Async IO and queing process sentinels (was: Concurrency via isolated process/thread) Ihor Radchenko
  2023-07-18 12:49             ` Ihor Radchenko
@ 2023-07-18 14:12             ` Michael Albinus
  1 sibling, 0 replies; 192+ messages in thread
From: Michael Albinus @ 2023-07-18 14:12 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Hugo Thunnissen, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> AFAIR, Michael Albinus (the maintainer of TRAMP) is working on better
> support for async processes in TRAMP. And he had troubles with managing
> async process queues as well.

Some years ago, I tried to add threads to Tramp in order to unblock it
for remote file access. It worked somehow, but I couldn't solve the
problem of reacting to user input in a thread (for example, asking for
passwords and the like). So I stopped this.

There were still problems where several operations could interfere with
each other, when a Tramp file operation is interrupted by another Tramp
file operation due to timers, process sentinels, process filters and
the like. I have defended against this in Tramp by suspending timers
while accepting process output. Sentinels and filters are less
disturbing.

So I cannot offer something useful for the general case.

Best regards, Michael.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-09 13:58                                                                               ` Ihor Radchenko
  2023-07-09 14:52                                                                                 ` Eli Zaretskii
  2023-07-16 14:58                                                                                 ` Ihor Radchenko
@ 2023-07-24  8:42                                                                                 ` Ihor Radchenko
  2023-07-24  9:52                                                                                   ` Po Lu
  2023-07-24 12:44                                                                                   ` Eli Zaretskii
  2 siblings, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-24  8:42 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> 3. Current buffer, point position, and narrowing.
>
>    By current design, Emacs always have a single global current buffer,
>    current point position, and narrowing state in that buffer.
>    Even when we switch cooperative threads, a thread must update its
>    thread->current_buffer to previous_thread->current_buffer; and update
>    point and narrowing by calling set_buffer_internal_2.
>
>    Current design is incompatible with async threads - they must be able
>    to have different buffers, points, and narrowing states current
>    within each thread.
>
>    That's why I suggested to convert PT, BEGV, and ZV into
>    thread-locals.

Would it be acceptable to convert buffer PT, BEGV, and ZV into
thread-locals for the current cooperative threads?

I am thinking about:

1. Removing pt, pt_byte, begv, begv_byte, zv, zv_byte, pt_marker_,
   begv_marker_, and zv_marker_ from buffer objects.
2. Adding pt/begv/zv to thread object.
3. Adding an alist linking buffers and past
   pt/begv/zv positions visited by a given thread.

This way, when a thread yields and later continues executing, its point
and restriction will not be changed.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24  8:42                                                                                 ` Ihor Radchenko
@ 2023-07-24  9:52                                                                                   ` Po Lu
  2023-07-24 10:09                                                                                     ` Ihor Radchenko
  2023-07-24 12:44                                                                                   ` Eli Zaretskii
  1 sibling, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-24  9:52 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> Ihor Radchenko <yantar92@posteo.net> writes:
>
>> 3. Current buffer, point position, and narrowing.
>>
>>    By current design, Emacs always have a single global current buffer,
>>    current point position, and narrowing state in that buffer.
>>    Even when we switch cooperative threads, a thread must update its
>>    thread->current_buffer to previous_thread->current_buffer; and update
>>    point and narrowing by calling set_buffer_internal_2.
>>
>>    Current design is incompatible with async threads - they must be able
>>    to have different buffers, points, and narrowing states current
>>    within each thread.
>>
>>    That's why I suggested to convert PT, BEGV, and ZV into
>>    thread-locals.
>
> Would it be acceptable to convert buffer PT, BEGV, and ZV into
> thread-local for current cooperative threads?
>
> I am thinking about:
>
> 1. Removing pt, pt_byte, begv, begv_byte, zv, zv_byte, pt_marker_,
>    begv_marker_, and zv_marker_ from buffer objects.
> 2. Adding pt/begv/zv to thread object.
> 3. Adding an alist linking buffers and past
>    pt/begv/zv positions visited by a given thread.
>
> This way, when a thread yields and later continues executing, its point
> and restriction will not be changed.

It may be possible to implement this with our current yielding system.

But how much do you want to slow down accesses to PT in the future, when
multiple threads will run simultaneously?  Operations that use and
modify PT are essentially the bread and butter of Emacs, and as a result
accesses to PT must be very fast.

In addition to that, we already have separate window and buffer points.
It would confound users even more to add a third kind of object with its
own point to the mix.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24  9:52                                                                                   ` Po Lu
@ 2023-07-24 10:09                                                                                     ` Ihor Radchenko
  2023-07-24 12:15                                                                                       ` Po Lu
  2023-07-24 12:50                                                                                       ` Eli Zaretskii
  0 siblings, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-24 10:09 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

>> Would it be acceptable to convert buffer PT, BEGV, and ZV into
>> thread-local for current cooperative threads?
> ...
> But how much do you want to slow down accesses to PT in the future, when
> multiple threads will run simultaneously?  Operations that use and
> modify PT are essentially the bread and butter of Emacs, and as a result
> accesses to PT must be very fast.

I do not plan to slow down access to PT.
The basic idea is to have
#define PT (current_thread->m_pt + 0)
#define PT_BYTE (current_thread->m_pt_byte + 0)

Basically, use a thread object slot instead of a buffer object slot to
store point and restriction. This will not cause any performance
degradation and will allow multiple points in the same buffer.

What may be slightly slower is setting/getting points in other (not
current) buffers.
We will need to store a point and restriction history for each thread.
Searching this history will scale with the number of buffers that have
previously been current during thread execution (though we may use a
hash table if necessary).

However, accessing and changing point in a buffer that is not current
is not very common. AFAIU, it is done in (1) read operations, when
reading from a stream represented by a buffer; and (2) when switching
buffers, when we need to transfer the current PT into the history for
the current buffer and retrieve PT from the history for the buffer
that will become current.
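
In sketch form (the field names below are tentative, not a patch):

  struct thread_state
  {
    /* ... existing fields ... */
    ptrdiff_t m_pt, m_pt_byte, m_begv, m_begv_byte, m_zv, m_zv_byte;
    /* Alist mapping buffers to the positions this thread last used in
       them; consulted only when the thread switches buffers.  */
    Lisp_Object m_buffer_position_history;
  };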

> In addition to that, we already have separate window and buffer
> points. It would confound users even more to add a third kind of
> object with its own point to the mix.

I propose to remove buffer points completely and use thread points
instead.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24 10:09                                                                                     ` Ihor Radchenko
@ 2023-07-24 12:15                                                                                       ` Po Lu
  2023-07-24 12:25                                                                                         ` Ihor Radchenko
  2023-07-24 12:50                                                                                       ` Eli Zaretskii
  1 sibling, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-24 12:15 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> I do not plan to slow down access to PT.
> The basic idea is to have
> #define PT (current_thread->m_pt + 0)
> #define PT_BYTE (current_thread->m_pt_byte + 0)
>
> Basically, use thread object slot instead of buffer object slot to store
> point and restriction. This will not cause any performance degradation
> and allow multiple points in the same buffer.

No matter where thread-local points are stored, each of these points
will need to be a marker, because text editing operations will otherwise
cause PT in other threads to be desynchronized from PT_BYTE within
multibyte buffers.  Both unibyte and multibyte buffers will also
experience complications if a thread deletes text within a buffer,
reducing its size below the point of another thread.

Each text editing operation must then loop through and update each of
these markers.

> What may be slightly slower is setting/getting points in other (not
> current) buffers.
> We will need to store point and restriction history for each thread.
> Searching this history will scale with the number of buffers that have
> been current previously during thread execution (though we may use hash
> table if necessary).
>
> However, accessing and changing point in buffer that is not current is
> not very common. AFAIU, it is done in (1) read operation when reading
> from stream represented by a buffer; (2) when switching buffers when we
> need to transfer current PT to history for current buffer and retrieve
> PT from history for the buffer that will become current.

See above.  What you are proposing is a fundamental change to the
performance attributes of some of the most basic operations performed by
Emacs, and is not acceptable.

> I propose to remove buffer points completely and use thread points
> instead.

IMHO, any change that increases the number of objects that store points
will be a mistake.  There is only one buffer, but there can be many
threads.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24 12:15                                                                                       ` Po Lu
@ 2023-07-24 12:25                                                                                         ` Ihor Radchenko
  2023-07-24 13:31                                                                                           ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-24 12:25 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

> No matter where thread local points are stored, each of these points
> will need to be a marker, because text editing operations will otherwise
> cause PT in other threads to be desynchronized with PT_BYTE within
> multibyte buffers.  Both unibyte buffers and multibyte buffers will also
> experience complications if a thread deletes text within a buffer,
> reducing its size below point within another thread.
>
> Each text editing operation must then loop through and update each of
> these markers.

> ...  What you are proposing is a fundamental change to the
> performance attributes of some of the most basic operations performed by
> Emacs, and is not acceptable.

But editing operations already loop over all the buffer markers. If
just a dozen extra buffer markers (one for each thread, in the worst
case) are unacceptable, we should really do something about marker
performance.
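
For a rough sense of the cost, here is an illustrative micro-benchmark
(the helper name is made up, and the numbers will of course vary
between machines and builds); it measures the cost of 1000 insertions
in a buffer carrying a given number of extra markers:

(defun my/marker-insert-benchmark (n-markers)
  "Return seconds spent on 1000 insertions with N-MARKERS extra markers."
  (with-temp-buffer
    (insert (make-string 100000 ?x))
    (dotimes (_ n-markers)
      (set-marker (make-marker) (1+ (random (point-max))) (current-buffer)))
    (goto-char (/ (point-max) 2))
    (car (benchmark-run 1000 (insert "y")))))

;; (my/marker-insert-benchmark 0)
;; (my/marker-insert-benchmark 12)     ; a dozen, as with per-thread points
;; (my/marker-insert-benchmark 100000) ; a pathological marker count

Comparing the first two numbers against the last should show where the
real cost lies.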

>> I propose to remove buffer points completely and use thread points
>> instead.
>
> IMHO, any change that increases the number of objects that store points
> will be a mistake.  There is only one buffer, but there can be many
> threads.

Last time I wrote thread code that had to work with buffer text, I had
to store markers manually anyway. Otherwise, point positions were
always chaotic.

And threads that do not work with buffer text will only need to store
a single marker.

So, I do not see how the proposed approach will make things worse
memory-wise.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24  8:42                                                                                 ` Ihor Radchenko
  2023-07-24  9:52                                                                                   ` Po Lu
@ 2023-07-24 12:44                                                                                   ` Eli Zaretskii
  2023-07-24 13:02                                                                                     ` Ihor Radchenko
  1 sibling, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-24 12:44 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Mon, 24 Jul 2023 08:42:52 +0000
> 
> Ihor Radchenko <yantar92@posteo.net> writes:
> 
> > 3. Current buffer, point position, and narrowing.
> >
> >    By current design, Emacs always has a single global current buffer,
> >    current point position, and narrowing state in that buffer.
> >    Even when we switch cooperative threads, a thread must update its
> >    thread->current_buffer to previous_thread->current_buffer; and update
> >    point and narrowing by calling set_buffer_internal_2.
> >
> >    Current design is incompatible with async threads - they must be able
> >    to have different buffers, points, and narrowing states current
> >    within each thread.
> >
> >    That's why I suggested to convert PT, BEGV, and ZV into
> >    thread-locals.
> 
> Would it be acceptable to convert buffer PT, BEGV, and ZV into
> thread-local for current cooperative threads?

General note: it is very hard to have a serious discussion of this
kind of subject when the goals are not clearly announced, and there
are gaps of a week or two between messages.  Discussing several
separate aspects of this makes it even harder to follow and respond to
in a useful manner.

> I am thinking about:
> 
> 1. Removing pt, pt_byte, begv, begv_byte, zv, zv_byte, pt_marker_,
>    begv_marker_, and zv_marker_ from buffer objects.
> 2. Adding pt/begv/zv to thread object.
> 3. Adding an alist linking buffers and past
>    pt/begv/zv positions visited by a given thread.
> 
> This way, when a thread yields and later continues executing, its point
> and restriction will not be changed.

Why is the last sentence a worthy goal? what do you think will be the
advantage of that?



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24 10:09                                                                                     ` Ihor Radchenko
  2023-07-24 12:15                                                                                       ` Po Lu
@ 2023-07-24 12:50                                                                                       ` Eli Zaretskii
  2023-07-24 13:15                                                                                         ` Ihor Radchenko
  1 sibling, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-24 12:50 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: Eli Zaretskii <eliz@gnu.org>, emacs-devel@gnu.org
> Date: Mon, 24 Jul 2023 10:09:50 +0000
> 
> The basic idea is to have
> #define PT (current_thread->m_pt + 0)
> #define PT_BYTE (current_thread->m_pt_byte + 0)

Point is the attribute of a buffer.  The current definition of PT,
viz.:

  #define PT (current_buffer->pt + 0)

automagically makes PT refer to the current buffer, so the code only
needs to change current_buffer to have PT set correctly.

By contrast, you propose to have one value of point per thread, which
means a thread that switches buffers will have to manually change all
of these values, one by one.  Why is that a good idea?

And what about C code which copies/moves text between two buffers?
For example, some primitives in coding.c can decode text from one
buffer while writing the decoded text into another.

> Basically, use thread object slot instead of buffer object slot to store
> point and restriction. This will not cause any performance degradation
> and allow multiple points in the same buffer.
> 
> What may be slightly slower is setting/getting points in other (not
> current) buffers.
> We will need to store point and restriction history for each thread.
> Searching this history will scale with the number of buffers that have
> been current previously during thread execution (though we may use hash
> table if necessary).
> 
> However, accessing and changing point in buffer that is not current is
> not very common. AFAIU, it is done in (1) read operation when reading
> from stream represented by a buffer; (2) when switching buffers when we
> need to transfer current PT to history for current buffer and retrieve
> PT from history for the buffer that will become current.

Once again: please state the final goal, and please describe at least
in principle how these measures are steps toward that goal.

> I propose to remove buffer points completely and use thread points
> instead.

I don't think this could fly, because we must be able to copy text
from one buffer to another in a single thread, see above.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24 12:44                                                                                   ` Eli Zaretskii
@ 2023-07-24 13:02                                                                                     ` Ihor Radchenko
  2023-07-24 13:54                                                                                       ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-24 13:02 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> Would it be acceptable to convert buffer PT, BEGV, and ZV into
>> thread-local for current cooperative threads?
>
> General note: it is very hard to have a serious discussion of this
> kind of subject when the goals are not clearly announced, and there
> are gaps of a week or two between messages.  Discussing several
> separate aspects of this makes it even harder to follow and respond to
> in a useful manner.

I was initially exploring what should be done to allow async threads in
Emacs. The discussion established several important global states in
Emacs that must be changed. Then, I explored whether those blockers can
be solved. Now, I believe that all the blockers can be solved, if we
accept leaving GC and redisplay synchronous.

However, reducing the global state will not be easy and is better done
in steps. Ideally, these would be steps that can be tested using the
existing synchronous thread paradigm.

The first step, which may also be useful even if I fail to proceed far
enough to get async threads, is removing the global point and
restriction state.

>> 1. Removing pt, pt_byte, begv, begv_byte, zv, zv_byte, pt_marker_,
>>    begv_marker_, and zv_marker_ from buffer objects.
>> 2. Adding pt/begv/zv to thread object.
>> 3. Adding an alist linking buffers and past
>>    pt/begv/zv positions visited by a given thread.
>> 
>> This way, when a thread yields and later continues executing, its point
>> and restriction will not be changed.
>
> Why is the last sentence a worthy goal? what do you think will be the
> advantage of that?

Consider a simple regexp search running in a thread:

(while (re-search-forward re nil t)
  (do-useful-stuff)
  (thread-yield))

This search will fail if another thread ever moves point or changes
restriction in the same buffer.

The obvious

(while (re-search-forward re nil t)
  (do-useful-stuff)
  (save-restriction (save-excursion (thread-yield))))

won't work.

So, one would have to go through the awkward

(let ((marker (point-marker)))
 (while (re-search-forward re nil t)
  (do-useful-stuff)
  (move-marker marker (point))
  (thread-yield)
  (goto-char marker)))

at least. But it will still fail if we need to maintain the buffer
restriction. So, the code becomes

(let ((marker (point-marker)) (begv (point-min-marker)) (zv (point-max-marker)))
 (while (re-search-forward re nil t)
  (do-useful-staff)
  (move-marker marker (point))
  (thread-yield)
  (widen)
  (narrow-to-region begv zv)
  (goto-char marker)))

And this may still fail because of unreproducible bugs.

Not to mention that we do not always know where the thread will yield.
Any function may be advised by the user to contain some interactive
query, which will also trigger a thread yield.


The above example is just a trivial regexp search. When the code
becomes more complex, things get even more chaotic and threads become
simply unusable for anything that tries to scan buffer text.

That's why I think that having threads store their own point and
restriction is useful, even without considering that it will reduce the
global state.
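
For illustration, the juggling above could be packaged into a helper
macro (the name is made up; nothing like it exists today), though it
only papers over the problem rather than solving it:

(defmacro my/with-preserved-point-and-restriction (&rest body)
  "Run BODY, then restore point and restriction as they were before.
Assumes BODY does not switch the current buffer, and does not protect
against the buffer being edited in the meantime."
  (declare (indent 0) (debug t))
  (let ((pt (make-symbol "pt"))
        (beg (make-symbol "beg"))
        (end (make-symbol "end")))
    `(let ((,pt (point-marker))
           (,beg (point-min-marker))
           (,end (point-max-marker)))
       (unwind-protect
           (progn ,@body)
         (widen)
         (narrow-to-region ,beg ,end)
         (goto-char ,pt)
         (set-marker ,pt nil)
         (set-marker ,beg nil)
         (set-marker ,end nil)))))

;; (while (re-search-forward re nil t)
;;   (do-useful-stuff)
;;   (my/with-preserved-point-and-restriction (thread-yield)))

Even with such a wrapper, every piece of thread code has to remember to
use it around every possible yield point, which is exactly the
fragility described above.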

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24 12:50                                                                                       ` Eli Zaretskii
@ 2023-07-24 13:15                                                                                         ` Ihor Radchenko
  2023-07-24 13:41                                                                                           ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-24 13:15 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> The basic idea is to have
>> #define PT (current_thread->m_pt + 0)
>> #define PT_BYTE (current_thread->m_pt_byte + 0)
>
> Point is the attribute of a buffer.  The current definition of PT,
> viz.:
>
>   #define PT (current_buffer->pt + 0)
>
> automagically makes PT refer to the current buffer, so the code only
> needs to change current_buffer to have PT set correctly.
>
> By contrast, you propose to have one value of point per thread, which
> means a thread that switches buffers will have to manually change all
> of these values, one by one.  Why is that a good idea?

Switching buffers already involves juggling with pt_marker_ in
record_buffer_markers and fetch_buffer_markers.

So, I do not see much problem here.

> And what about C code which copies/moves text between two buffers?
> For example, some primitives in coding.c can decode text from one
> buffer while writing the decoded text into another.

They still use set_buffer_internal.

There are certain cases where the code needs to fetch point from a
buffer that is not current, but I did not yet look into them closely
before asking whether the whole idea is going to be acceptable.
(Roughly, I plan to use the thread point history or fall back to the
default point position, as in the buffer constructor.)
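
To make that plan a bit more concrete, here is a purely conceptual
Elisp model of the history lookup (the real thing would live in C; the
names below are invented for the illustration):

;; Each thread would keep an alist mapping buffers to the (PT BEGV ZV)
;; it last used there; fetching point for a non-current buffer consults
;; that history and falls back to a default position otherwise.
(defun my/thread-point-for (history buffer)
  "Return the point HISTORY records for BUFFER, or 1 as a fallback."
  (or (nth 0 (alist-get buffer history))
      1))

;; (let ((history (list (cons (current-buffer) (list 42 1 500)))))
;;   (my/thread-point-for history (current-buffer)))  ; => 42
;; (my/thread-point-for nil (current-buffer))         ; => 1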

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24 12:25                                                                                         ` Ihor Radchenko
@ 2023-07-24 13:31                                                                                           ` Po Lu
  2023-07-24 13:53                                                                                             ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-24 13:31 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> But editing operations already loop over all the buffer markers. If
> just a dozen extra buffer markers (one for each thread, in the worst
> case) are unacceptable, we should really do something about marker
> performance.

It is unacceptable because it is drastically more expensive than it is
today.  No degree of optimization to the marker code will eliminate this
problem.

> Last time I wrote thread code that had to work with buffer text, I had
> to store markers manually anyway. Otherwise, point positions were
> always chaotic.

What happens if another thread deletes the text that surrounds point in
the thread your code is running in?  Won't you have the same problem
then anyhow?

> And threads that do not work with buffer text will only need to store
> a single marker.
>
> So, I do not see how the proposed approach will make things worse
> memory-wise.

I'm not concerned about the memory consumption.  I'm concerned about
both the usability aspects of having even more disparate objects hold
point positions, and the slowdown.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24 13:15                                                                                         ` Ihor Radchenko
@ 2023-07-24 13:41                                                                                           ` Eli Zaretskii
  2023-07-24 14:13                                                                                             ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-24 13:41 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Mon, 24 Jul 2023 13:15:55 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >   #define PT (current_buffer->pt + 0)
> >
> > automagically makes PT refer to the current buffer, so the code only
> > needs to change current_buffer to have PT set correctly.
> >
> > By contrast, you propose to have one value of point per thread, which
> > means a thread that switches buffers will have to manually change all
> > of these values, one by one.  Why is that a good idea?
> 
> Switching buffers already involves juggling with pt_marker_ in
> record_buffer_markers and fetch_buffer_markers.

No, it doesn't, not if the code only sets current_buffer.

> > And what about C code which copies/moves text between two buffers?
> > For example, some primitives in coding.c can decode text from one
> > buffer while writing the decoded text into another.
> 
> They still use set_buffer_internal.

That's not necessary for some operations.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24 13:31                                                                                           ` Po Lu
@ 2023-07-24 13:53                                                                                             ` Ihor Radchenko
  2023-07-25  0:12                                                                                               ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-24 13:53 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

> Ihor Radchenko <yantar92@posteo.net> writes:
>
>> But editing operations already loop over all the buffer markers. If
>> just a dozen extra buffer markers (one for each thread, in the worst
>> case) are unacceptable, we should really do something about marker
>> performance.
>
> It is unacceptable because it is drastically more expensive than it is
> today.  No degree of optimization to the marker code will eliminate this
> problem.

Could you please elaborate?
I routinely deal with buffers having hundreds of markers.
How will adding a couple of markers from threads make things worse?

>> Last time I wrote thread code that had to work with buffer text, I had
>> to store markers manually anyway. Otherwise, point positions were
>> always chaotic.
>
> What happens if another thread deletes the text that surrounds point in
> the thread your code is running in?  Won't you have the same problem
> then anyhow?

Usually not. The worst case could be some match being skipped, which is
often acceptable. I have seen plenty of examples because Org provides
the `org-element-map' API, where we allow the user function to change
the buffer.

Point and restriction changing unpredictably is a much bigger problem
in practice, because it can be triggered even without editing the
buffer.
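
Here is a minimal illustration of that interference with the current
cooperative threads (all the demo- names are made up): a thread that
only reads the buffer still sees its point move across a yield when
another thread visits the same buffer.

(defvar demo-buffer (get-buffer-create "*thread-point-demo*"))
(defvar demo-log nil)

(defun demo-scanner ()
  (with-current-buffer demo-buffer
    (goto-char (point-min))
    (dotimes (_ 3)
      (push (cons 'before-yield (point)) demo-log)
      (thread-yield)                    ; other threads may run here
      (push (cons 'after-yield (point)) demo-log))))

(defun demo-intruder ()
  (with-current-buffer demo-buffer
    (dotimes (_ 3)
      (goto-char (point-max))           ; moves the shared buffer point
      (thread-yield))))

(with-current-buffer demo-buffer (insert "some text to scan"))
(let ((scanner (make-thread #'demo-scanner))
      (intruder (make-thread #'demo-intruder)))
  (thread-join scanner)
  (thread-join intruder))

;; demo-log will typically contain after-yield positions that differ
;; from the matching before-yield ones: point moved while the scanner
;; was suspended, even though the scanner edited nothing.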

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24 13:02                                                                                     ` Ihor Radchenko
@ 2023-07-24 13:54                                                                                       ` Eli Zaretskii
  2023-07-24 14:24                                                                                         ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-24 13:54 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Mon, 24 Jul 2023 13:02:26 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> Would it be acceptable to convert buffer PT, BEGV, and ZV into
> >> thread-local for current cooperative threads?
> >
> > General note: it is very hard to have a serious discussion of this
> > kind of subject when the goals are not clearly announced, and there
> > are gaps of a week or two between messages.  Discussing several
> > separate aspects of this makes it even harder to follow and respond to
> > in a useful manner.
> 
> I was initially exploring what should be done to allow async threads in
> Emacs. The discussion established several important global states in
> Emacs that must be changed. Then, I explored whether those blockers can
> be solved. Now, I believe that all the blockers can be solved, if we
> accept leaving GC and redisplay synchronous.

So the goal is to eventually support true concurrency in Emacs?

> However, reducing the global state will not be easy and is better done
> in steps. Ideally, these would be steps that can be tested using the
> existing synchronous thread paradigm.

But the goal is concurrency? so these steps only make sense if the
concurrency is attainable via these measures?

> >> This way, when a thread yields and later continues executing, its point
> >> and restriction will not be changed.
> >
> > Why is the last sentence a worthy goal? what do you think will be the
> > advantage of that?
> 
> Consider a simple regexp search running in a thread:
> 
> (while (re-search-forward re nil t)
>   (do-useful-stuff)
>   (thread-yield))
> 
> This search will fail if another thread ever moves point or changes
> restriction in the same buffer.

You are considering just the simplest scenario.  Usually, Lisp
programs in Emacs not only read text, they also write and delete it.
What if one thread makes changes to buffer text and another thread
then wants to access it using its state variables?  E.g., what do you
plan to do about the gap? will a thread switch move the gap or
something? that alone is a performance killer.

> That's why I think that having threads store their own point and
> restriction is useful, even without considering that it will reduce the
> global state.

To be a step in the general direction of achieving concurrency, we
need some plan that will at least in principle allow concurrent
editing.  Even if "concurrent" here means that only one thread can
write to a buffer at a time, we still need to see how this will work
when other threads are unblocked once the writer thread is done
writing.  Can you describe how this will work, assuming we keep the
current design of buffer with a gap?



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24 13:41                                                                                           ` Eli Zaretskii
@ 2023-07-24 14:13                                                                                             ` Ihor Radchenko
  0 siblings, 0 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-24 14:13 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> Switching buffers already involves juggling with pt_marker_ in
>> record_buffer_markers and fetch_buffer_markers.
>
> No, it doesn't, not if the code only sets current_buffer.
>> They still use set_buffer_internal.
>
> That's not necessary for some operations.

Then, we can store pt, zv, and begv as markers.

PT/ZV/BEGV macros will do something like

current_buffer == current_thread->m_pt->buffer?
  current_thread->m_pt->charpos:
  (<update current_thread->m_pt and return it>)

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24 13:54                                                                                       ` Eli Zaretskii
@ 2023-07-24 14:24                                                                                         ` Ihor Radchenko
  2023-07-24 16:00                                                                                           ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-24 14:24 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

> So the goal is to eventually support true concurrency in Emacs?

Yes.

> ... so these steps only make sense if the
> concurrency is attainable via these measures?

Yes, except that the part about thread-local point and restriction may
be more generally useful.

>> Consider a simple regexp search running in a thread:
>> 
>> (while (re-search-forward re nil t)
>>   (do-useful-stuff)
>>   (thread-yield))
>> 
>> This search will fail if another thread ever moves point or changes
>> restriction in the same buffer.
>
> You are considering just the simplest scenario.  Usually, Lisp
> programs in Emacs not only read text, they also write and delete it.
> What if one thread makes changes to buffer text and another thread
> then wants to access it using its state variables?  E.g., what do you
> plan to do about the gap? will a thread switch move the gap or
> something? that alone is a performance killer.

AFAIK, reading a buffer does not require moving the gap.

We only need to move the gap when the buffer is changed or before
copying a region of text from one buffer to another. Both such
operations should be considered buffer modifications and must be
blocking.

> To be a step in the general direction of achieving concurrency, we
> need some plan that will at least in principle allow concurrent
> editing.  Even if "concurrent" here means that only one thread can
> write to a buffer at a time, we still need to see how this will work
> when other threads are unblocked once the writer thread is done
> writing.  Can you describe how this will work, assuming we keep the
> current design of buffer with a gap?

The idea is the same as what is already done for indirect buffers.
Indirect buffer modifications will affect the buffer_text object in the
base buffer (automatically - the buffer_text object is shared). And
they will also affect the point position in the base buffer.

The point adjustment in the base buffer is done simply by storing point
as a marker. We can do the same for thread-local point positions.
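
As an illustration of the existing mechanism (the buffer names below
are made up): point in the base buffer already behaves like a marker
with respect to edits made through an indirect buffer.

(let* ((base (generate-new-buffer "point-demo-base"))
       (indirect (make-indirect-buffer
                  base (generate-new-buffer-name "point-demo-indirect"))))
  (with-current-buffer base
    (insert "0123456789")
    (goto-char 6))                 ; base point is between "4" and "5"
  (with-current-buffer indirect
    (goto-char (point-min))
    (insert "abc"))                ; edit the shared text via the indirect buffer
  (prog1 (with-current-buffer base (point))
    (kill-buffer indirect)
    (kill-buffer base)))

This should evaluate to 9: the base buffer's point was adjusted by the
insertion, just as a marker would have been.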

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24 14:24                                                                                         ` Ihor Radchenko
@ 2023-07-24 16:00                                                                                           ` Eli Zaretskii
  2023-07-24 16:38                                                                                             ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-24 16:00 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Mon, 24 Jul 2023 14:24:32 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> > So the goal is to eventually support true concurrency in Emacs?
> 
> Yes.

OK, thanks.

> > ... so these steps only make sense if the
> > concurrency is attainable via these measures?
> 
> Yes, except that the part about thread-local point and restriction may
> be more generally useful.

I suggest forgetting that latter part: we already have "generally
useful" code, so changing it without a very good reason is not wise,
from where I stand.

> > You are considering just the simplest scenario.  Usually, Lisp
> > programs in Emacs not only read text, they also write and delete it.
> > What if one thread makes changes to buffer text and another thread
> > then wants to access it using its state variables?  E.g., what do you
> > plan to do about the gap? will a thread switch move the gap or
> > something? that alone is a performance killer.
> 
> AFAIK, reading a buffer does not require moving the gap.

We've been through that: at least xml.c moves the gap to allow
external libraries access to buffer text as a single contiguous C
string.  This is a capability I wouldn't want to lose, because it
might come in handy in future developments.

> We only need to move the gap when the buffer is changed or before
> copying a region of text from one buffer to another. Both such
> operations should be considered buffer modifications and must be
> blocking.
> 
> > To be a step in the general direction of achieving concurrency, we
> > need some plan that will at least in principle allow concurrent
> > editing.  Even if "concurrent" here means that only one thread can
> > write to a buffer at a time, we still need to see how this will work
> > when other threads are unblocked once the writer thread is done
> > writing.  Can you describe how this will work, assuming we keep the
> > current design of buffer with a gap?
> 
> The idea is the same as what is already done for indirect buffers.
> Indirect buffer modifications will affect the buffer_text object in the
> base buffer (automatically - the buffer_text object is shared). And
> they will also affect the point position in the base buffer.
> 
> The point adjustment in the base buffer is done simply by storing point
> as a marker. We can do the same for thread-local point positions.

I still don't quite see how this will work.  Indirect buffers don't
introduce parallelism, and programs that modify indirect buffers
_know_ that the text of the base buffer will also be modified.  By
contrast, a thread that has been preempted won't know and won't expect
that.  It could, for example, keep buffer positions in simple
variables, not in markers; almost all Lisp programs do that, and use
markers only in very special situations.

In addition, on the C level, some code computes pointers to buffer
text via BYTE_POS_ADDR, and then uses the pointer as any C program
would.  If such a thread is suspended, and some other thread modifies
buffer text in the meantime, all those pointers will be invalid, and
we have bugs.  So it looks like, if we want to allow concurrent access
to buffers from several threads, we will have a lot of code rewriting
on our hands, and the rewritten code will be less efficient, because
it will have to always access buffer text via buffer positions and
macros like FETCH_BYTE and fetch_char_advance; access through char *
pointers will be lost forever.

So maybe we should take a step back and consider a restriction that
only one thread can access a buffer at any given time?  WDYT?



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24 16:00                                                                                           ` Eli Zaretskii
@ 2023-07-24 16:38                                                                                             ` Ihor Radchenko
  2023-07-25  0:20                                                                                               ` Po Lu
  2023-07-25 11:29                                                                                               ` Eli Zaretskii
  0 siblings, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-24 16:38 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> AFAIK, reading a buffer does not require moving the gap.
>
> We've been through that: at least xml.c moves the gap to allow
> external libraries access to buffer text as a single contiguous C
> string.  This is a capability I wouldn't want to lose, because it
> might come in handy in future developments.

move_gap_both should lock the buffer, as it is a buffer modification
(of the buffer_text object). htmlReadMemory should also lock it,
because it expects a constant string segment.

And if we really want xml.c:parse_region to be asynchronous (not
blocking other threads reading the same buffer), we can still do it at
the cost of an extra memcpy into a newly allocated contiguous memory
segment.

>> The point adjustment in the base buffer is done simply by storing point
>> as a marker. We can do the same for thread-local point positions.
>
> I still don't quite see how this will work.  Indirect buffers don't
> introduce parallelism, and programs that modify indirect buffers
> _know_ that the text of the base buffer will also be modified.  By
> contrast, a thread that has been preempted won't know and won't expect
> that.  It could, for example, keep buffer positions in simple
> variables, not in markers; almost all Lisp programs do that, and use
> markers only in very special situations.

Any async thread should expect that the current buffer might be
modified. Or lock the buffer against text modifications explicitly (a
feature we should probably provide - something more enforcing than the
read-only-mode we have now).
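
As a sketch of what the Lisp-facing side of such a lock could look like
(neither demo-buffer-lock nor demo-with-locked-buffer exists; real
enforcement against writers that do not cooperate would have to be done
in C):

(defvar-local demo-buffer-lock nil
  "Per-buffer mutex used by `demo-with-locked-buffer'.")

(defmacro demo-with-locked-buffer (buffer &rest body)
  "Run BODY in BUFFER while holding that buffer's lock.
Only threads that use this same macro are excluded; this is merely a
model of the proposed feature."
  (declare (indent 1) (debug t))
  `(with-current-buffer ,buffer
     (unless demo-buffer-lock
       (setq demo-buffer-lock (make-mutex (buffer-name))))
     (with-mutex demo-buffer-lock
       ,@body)))

;; (demo-with-locked-buffer "*scratch*"
;;   (goto-char (point-max))
;;   (insert "only one thread at a time gets here\n"))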

> In addition, on the C level, some code computes pointers to buffer
> text via BYTE_POS_ADDR, and then uses the pointer as any C program
> would.  If such a thread is suspended, and some other thread modifies
> buffer text in the meantime, all those pointers will be invalid, and
> we have bugs.  So it looks like, if we want to allow concurrent access
> to buffers from several threads, we will have a lot of code rewriting
> on our hands, and the rewritten code will be less efficient, because
> it will have to always access buffer text via buffer positions and
> macros like FETCH_BYTE and fetch_char_advance; access through char *
> pointers will be lost forever.

Not necessarily lost. We should provide facilities to prevent a buffer
from being modified (a "write mutex").

This problem is not limited to buffers - any low-level function that
modifies a C object struct must ensure that other threads cannot modify
the same object while it does so. For example, SETCAR will have to mark
the modified object non-writable first, set its car, and release the
lock.

So, any time we need a guarantee that an object remains unchanged, we
should acquire an object-specific write-preventing mutex.

Of course, such write locks should be held only for short periods of
time to be efficient.

Some of the existing uses of BYTE_POS_ADDR may be converted into
explicit dynamic calls to
 (n < GPT_BYTE ? 0 : GAP_SIZE) + n + BEG_ADDR - BEG_BYTE;
if necessary.

> So maybe we should take a step back and consider a restriction that
> only one thread can access a buffer at any given time?  WDYT?

Buffers are so central in Emacs that I do not want to give up before we
try our best.

Alternatively, I can try to look into other global states first and
leave async buffer access for later. If we can get rid of the truly
global states (which buffer point is not, given that each thread has an
exclusive lock on its buffer), we can later come back to per-thread
point and restriction.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24 13:53                                                                                             ` Ihor Radchenko
@ 2023-07-25  0:12                                                                                               ` Po Lu
  2023-07-25  4:28                                                                                                 ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-25  0:12 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> Could you please elaborate?
> I routinely deal with buffers having hundreds of markers.
> How will adding a couple of markers from threads make things worse?

See bug#64391 for a performance regression in Gnus resulting from a
_single_ marker.

> Usually not. The worst case could be some match being skipped, which is
> often acceptable. I have seen plenty of examples because Org provides
> the `org-element-map' API, where we allow the user function to change
> the buffer.

But Org doesn't run in another thread, does it?  Besides, text matching
is hardly the only task our users want to perform in a different
thread.

> Point and restriction changing unpredictably is a much bigger problem
> in practice, because it can be triggered even without editing the
> buffer.

Code that believes this is a problem should devise and make use of
additional synchronization protocols independent from Emacs's internal
buffer synchronization.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24 16:38                                                                                             ` Ihor Radchenko
@ 2023-07-25  0:20                                                                                               ` Po Lu
  2023-07-25  4:36                                                                                                 ` Ihor Radchenko
  2023-07-25 11:29                                                                                               ` Eli Zaretskii
  1 sibling, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-25  0:20 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> This problem is not limited to buffers - any low-level function that
> modifies a C object struct must ensure that other threads cannot modify
> the same object while it does so. For example, SETCAR will have to mark
> the modified object non-writable first, set its car, and release the
> lock.

No, because reading the car of a Lisp_Cons does not depend on any other
field within the Lisp_Cons.

No interlocking is required to read from or write to machine word-sized
fields, as changes to such fields are always propagated coherently to
other CPUs.  At worst, you will need to flush the write cache on any CPU
that has written to one such field.  (Even these machines are rare -- I
don't think Emacs currently supports any.)

Interlocking is only required when correctly interpreting the value of
one field requires the state of other field(s) to be the same as when
the first field was set.  For example, PT and PT_BYTE must always be
interlocked: a thread must not see one value of PT and then read an
earlier or later PT_BYTE corresponding to a different value.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-25  0:12                                                                                               ` Po Lu
@ 2023-07-25  4:28                                                                                                 ` Ihor Radchenko
  0 siblings, 0 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-25  4:28 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

> Ihor Radchenko <yantar92@posteo.net> writes:
>
>> Could you please elaborate?
>> I routinely deal with buffers having hundreds of markers.
>> How will adding a couple of markers from threads make things worse?
>
> See bug#64391 for a performance regression in Gnus resulting from a
> _single_ marker.

That's not a single marker. Quite far from it.

/* Record the accessible range of the buffer when narrow-to-region
     is called, that is, before applying the narrowing.  That
     information is used only by internal--label-restriction.  */
  Fset (Qoutermost_restriction, list3 (Qoutermost_restriction,
				       Fpoint_min_marker (),
				       Fpoint_max_marker ()));

will create a pair of __new__ markers every time it is called.
And, AFAIU, they were not cleared until the next GC.
So, the reproducer mentioned in the report was likely dealing with a
growing number of markers.

There is no doubt that processing markers takes time - Emacs goes
through the whole marker list on every buffer change. But it is
acceptable when the number of markers is moderate, as opposed to
pathological cases with a huge number of markers.

Note, however, that it can (and probably should) be improved.
As discussed in the past, we can utilize itree.c to store markers and
remove the need for O(N_markers) processing when updating marker
positions.

>> Usually not. The worst case could be some match being skipped, which is
>> often acceptable. I have seen plenty of examples because Org provides
>> the `org-element-map' API, where we allow the user function to change
>> the buffer.
>
> But Org doesn't run in another thread, does it?  Besides, text matching
> is hardly the only task our users want to perform in a different
> thread.

My point was to show that per-thread point can be quite useful. I did
not try to prove that all the possible tasks potentially done via
threads need it.

Text matching is one of the _common_ tasks when working with buffers,
don't you agree?

>> Point and restriction changing unpredictably is a much bigger problem
>> in practice, because it can be triggered even without editing the
>> buffer.
>
> Code that believes this is a problem should devise and make use of
> additional synchronization protocols independent from Emacs's internal
> buffer synchronization.

Please refer to my other message where I showed why synchronization is
extremely difficult with the available tools even for something as
simple as incremental regexp search.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-25  0:20                                                                                               ` Po Lu
@ 2023-07-25  4:36                                                                                                 ` Ihor Radchenko
  2023-07-25  7:27                                                                                                   ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-25  4:36 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

> Ihor Radchenko <yantar92@posteo.net> writes:
>
>> This problem is not limited to buffers - any low-level function that
>> modifies a C object struct must ensure that other threads cannot modify
>> the same object while it does so. For example, SETCAR will have to mark
>> the modified object non-writable first, set its car, and release the
>> lock.
>
> No, because reading the car of a Lisp_Cons does not depend on any other
> field within the Lisp_Cons.
>
> No interlocking is required to read from or write to machine word-sized
> fields, as changes to such fields are always propagated coherently to
> other CPUs.  At worst, you will need to flush the write cache on any CPU
> that has written to one such field.  (Even these machines are rare -- I
> don't think Emacs currently supports any.)

This is a dangerous assumption.
What if Emacs decides to support such an architecture in the future?

Simultaneous write and read generally has undefined outcome, unless we
use READ_ONCE and WRITE_ONCE. There are known architectures where
simultaneous read may result in mixing bits from the old and new values.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-25  4:36                                                                                                 ` Ihor Radchenko
@ 2023-07-25  7:27                                                                                                   ` Po Lu
  2023-07-25  7:59                                                                                                     ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-25  7:27 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> This is a dangerous assumption.
> What if Emacs decides to support such an architecture in the future?

No such architecture is likely to run Emacs in the future, because it
will fail to support every single piece of multiprocessing software
written up to now.

And if Emacs does need to support such an architecture in the future
(which is a very long shot), we can revisit our choices then.  Until
that time, I see no need to slow down reads and writes to conses or
vectors with interlocking on the basis of purely theoretical
considerations.

> Simultaneous write and read generally has undefined outcome

Only the order in which the reads and writes appear to take place is
undefined.  Provided that only valid Lisp_Objects are written, no
invalid Lisp_Object will ever be read.

>, unless we use READ_ONCE and WRITE_ONCE.

Emacs is not the Linux kernel and is not subject to its considerations.
Besides, READ_ONCE and WRITE_ONCE involve _NO_ interlocking!

> There are known architectures where simultaneous read may result in
> mixing bits from the old and new values.

Which architectures would that be?  Unless you're describing unaligned
reads and writes, or for data types larger or smaller (on the 21064)
than a machine word.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-25  7:27                                                                                                   ` Po Lu
@ 2023-07-25  7:59                                                                                                     ` Ihor Radchenko
  2023-07-25  8:27                                                                                                       ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-25  7:59 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

>> There are known architectures where simultaneous read may result in
>> mixing bits from the old and new values.
>
> Which architectures would that be?  Unless you're describing unaligned
> reads and writes, or for data types larger or smaller (on the 21064)
> than a machine word.
 
McKenney (2023) Is Parallel Programming Hard, And, If So, What Can You
Do About It? p. 483

    "For an example, consider
    embedded systems with 32-bit pointers and 16-bit busses.
    On such a system, a data race involving a store to and a
    load from a given pointer might well result in the load
    returning the low-order 16 bits of the old value of the
    pointer concatenated with the high-order 16 bits of the
    new value of the pointer."

>>, unless we use READ_ONCE and WRITE_ONCE.
>
> Emacs is not the Linux kernel and is not subject to its considerations.
> Besides, READ_ONCE and WRITE_ONCE involve _NO_ interlocking!

Sure. I just meant that we need to avoid data races. Using
READ/WRITE_ONCE or mutexes. READ/WRITE_ONCE should actually be avoided,
AFAIK. They will degrade performance significantly (one to two orders
of magnitude) for each individual operation (see Figure 5.1 in
McKenney's book).

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-25  7:59                                                                                                     ` Ihor Radchenko
@ 2023-07-25  8:27                                                                                                       ` Po Lu
  2023-07-25  8:45                                                                                                         ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-25  8:27 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> McKenney (2023) Is Parallel Programming Hard, And, If So, What Can You
> Do About It? p. 483
>
>     "For an example, consider
>     embedded systems with 32-bit pointers and 16-bit busses.
>     On such a system, a data race involving a store to and a
>     load from a given pointer might well result in the load
>     returning the low-order 16 bits of the old value of the
>     pointer concatenated with the high-order 16 bits of the
>     new value of the pointer."

Emacs supports such systems?  And they are SMPs as well?

> Sure. I just meant that we need to avoid data races.

Again, there are NO data races on systems Emacs supports.

> Using READ/WRITE_ONCE or mutexes.

So nothing needs to be done.

> READ/WRITE_ONCE should actually be avoided

Because on the hypothetical systems that Emacs will need to support in
the future _and_ do not coherently propagate word sized writes between
CPUs, they must be implemented with interlocking, either in hardware or
in software.  Mutexes, of course, will then need to be implemented on
top of that.

> AFAIK. They will degrade performance significantly (one to two orders
> of magnitude) for each individual operation (see Figure 5.1 in
> McKenney's book).

Figure 5.1 in that book illustrates the scalability of x86 interlocked
instructions by comparing an increment mechanism using plain loads and
stores to an atomic increment mechanism.  It is not relevant to the
subject at hand.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-25  8:27                                                                                                       ` Po Lu
@ 2023-07-25  8:45                                                                                                         ` Ihor Radchenko
  2023-07-25  8:53                                                                                                           ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-25  8:45 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

> Ihor Radchenko <yantar92@posteo.net> writes:
>
>> McKenney (2023) Is Parallel Programming Hard, And, If So, What Can You
>> Do About It? p. 483
>>
>>     "For an example, consider
>>     embedded systems with 32-bit pointers and 16-bit busses.
>>     On such a system, a data race involving a store to and a
>>     load from a given pointer might well result in the load
>>     returning the low-order 16 bits of the old value of the
>>     pointer concatenated with the high-order 16 bits of the
>>     new value of the pointer."
>
> Emacs supports such systems?  And they are SMPs as well?

My point is that it will be very difficult to support such systems in
the future if we decide that we want to rely upon the safety of
concurrent reads and writes.

>> AFAIK. They will degrade performance significantly (one to two orders
>> of magnitude) for each individual operation (see Figure 5.1 in
>> McKenney's book).
>
> Figure 5.1 in that book illustrates the scalability of x86 interlocked
> instructions by comparing an increment mechanism using plain loads and
> stores to an atomic increment mechanism.  It is not relevant to the
> subject at hand.

I understood that figure and the associated section differently.
Do you have some kind of reference showing the performance of
READ_ONCE/WRITE_ONCE?

If we need fewer interlocks, it will certainly make things easier, but
I do not want to run into hard-to-debug bugs caused by data races.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-25  8:45                                                                                                         ` Ihor Radchenko
@ 2023-07-25  8:53                                                                                                           ` Po Lu
  2023-07-25  9:03                                                                                                             ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-25  8:53 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> My point is that it will be very difficult to support such systems in
> the future if we decide that we want to rely upon the safety of
> concurrent reads and writes.

It's very unlikely that these systems will exist in the future and that
Emacs will want to run on them.  These assumptions are currently being
made by Linux and the JVM, just to name a few multiprocessor programs
that do so.

If, by some miracle, these systems do appear, Emacs can either run on a
single CPU, or we can revisit our decisions at that time.

> I understood that figure and the associated section differently.
> Do you have some kind of reference showing the performance of
> READ_ONCE/WRITE_ONCE?

The figure is titled ``Figure 5.1: Atomic Increment Scalability on
x86''.  The surrounding text and source code listings make it
unambiguous which comparison is taking place.

> If we need fewer interlocks, it will certainly make things easier, but
> I do not want to run into hard-to-debug bugs caused by data races.

And you won't, if you run your program on a machine Emacs currently
supports.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-25  8:53                                                                                                           ` Po Lu
@ 2023-07-25  9:03                                                                                                             ` Ihor Radchenko
  2023-07-25  9:17                                                                                                               ` Po Lu
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-25  9:03 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

>> I understood that figure and the associated section differently.
>> Do you have some kind of reference showing the performance of
>> READ_ONCE/WRITE_ONCE?
>
> The figure is titled ``Figure 5.1: Atomic Increment Scalability on
> x86''.  The surrounding text and source code listings make the
> comparison taking place unambiguous.

Sure, but the design of the whole chapter shows that
READ_ONCE/WRITE_ONCE is not the best approach. The authors go to great
lengths just to avoid using them too much.

That said, the chapter deals specifically with increments and frequent
writes.

In Elisp, we will deal with frequent writes into symbol objects, but
probably not into conses and other non-vectorlike objects.

>> If we need fewer interlocks, it will certainly make things easier, but
>> I do not want to run into hard-to-debug bugs caused by data races.
>
> And you won't, if you run your program on a machine Emacs currently
> supports.

Ok. I will take your word for it.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-25  9:03                                                                                                             ` Ihor Radchenko
@ 2023-07-25  9:17                                                                                                               ` Po Lu
  2023-07-25  9:27                                                                                                                 ` Ihor Radchenko
  0 siblings, 1 reply; 192+ messages in thread
From: Po Lu @ 2023-07-25  9:17 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> Sure, but the whole chapter design shows that READ_ONCE/WRITE_ONCE is
> not the best design. They go deep and complex just to avoid using it too
> much.

That's because READ_ONCE and WRITE_ONCE both have additional semantics
beyond ensuring that loads and stores are coherently propagated to and
observed by other CPUs.  For example, the compiler is not allowed to
merge loads and stores, although it and the CPU are both allowed to
reorder them.

We don't have similar requirements in Emacs.
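
For reference, a minimal sketch of roughly what these macros boil down
to for aligned scalar types (a simplification of my own; the kernel's
real definitions also handle other sizes and add sanity checks):

/* A simplification, not the kernel's actual code.  The volatile cast
   forbids the compiler from merging, splitting, or eliding the access;
   it does not order the access against other loads and stores.  */
#define READ_ONCE(x)      (*(const volatile __typeof__(x) *) &(x))
#define WRITE_ONCE(x, v)  (*(volatile __typeof__(x) *) &(x) = (v))

In this form, WRITE_ONCE (flag, 1) in one thread paired with
while (!READ_ONCE (flag)); in another is well-defined as far as the
compiler is concerned, but still unordered with respect to the
surrounding accesses.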



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-25  9:17                                                                                                               ` Po Lu
@ 2023-07-25  9:27                                                                                                                 ` Ihor Radchenko
  2023-07-25  9:37                                                                                                                   ` Po Lu
  2023-07-25 12:40                                                                                                                   ` Eli Zaretskii
  0 siblings, 2 replies; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-25  9:27 UTC (permalink / raw)
  To: Po Lu; +Cc: Eli Zaretskii, emacs-devel

Po Lu <luangruo@yahoo.com> writes:

>> Sure, but the whole chapter design shows that READ_ONCE/WRITE_ONCE is
>> not the best design. They go deep and complex just to avoid using it too
>> much.
>
> That's because READ_ONCE and WRITE_ONCE both have additional semantics
> beyond ensuring that loads and stores are coherently propagated and
> received to and from other CPUs.  For example, the compiler is not
> allowed to merge loads and stores, although it and the CPU are both
> allowed to reorder them.
>
> We don't have similar requirements in Emacs.

I've seen a couple of volatile variables in the code.
So, there is at least some fighting with GCC optimizations going on.

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-25  9:27                                                                                                                 ` Ihor Radchenko
@ 2023-07-25  9:37                                                                                                                   ` Po Lu
  2023-07-25 12:40                                                                                                                   ` Eli Zaretskii
  1 sibling, 0 replies; 192+ messages in thread
From: Po Lu @ 2023-07-25  9:37 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: Eli Zaretskii, emacs-devel

Ihor Radchenko <yantar92@posteo.net> writes:

> I've seen a couple of volatile variables in the code.
> So, there is at least some fighting with GCC optimizations going on.

sig_atomic_t used within signal handlers must be volatile.  Others are
due to interactions between longjmp and automatic register allocation,
not due to store merging or CSE.
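
For illustration, the standard signal-handler idiom (a generic sketch,
not code lifted from Emacs):

#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t pending_sigusr1;

static void
handle_sigusr1 (int sig)
{
  /* Only async-signal-safe work here: record the event and return.  */
  (void) sig;
  pending_sigusr1 = 1;
}

int
main (void)
{
  signal (SIGUSR1, handle_sigusr1);
  /* Without the volatile qualifier, the compiler could hoist the load
     out of this loop and spin forever even after the signal fires.  */
  while (!pending_sigusr1)
    pause ();
  return 0;
}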



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-24 16:38                                                                                             ` Ihor Radchenko
  2023-07-25  0:20                                                                                               ` Po Lu
@ 2023-07-25 11:29                                                                                               ` Eli Zaretskii
  2023-07-25 11:52                                                                                                 ` Ihor Radchenko
  1 sibling, 1 reply; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-25 11:29 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Mon, 24 Jul 2023 16:38:55 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> AFAIK, reading buffer does not require moving the gap.
> >
> > We've been through that: at least xml.c moves the gap to allow
> > external libraries access to buffer text as a single contiguous C
> > string.  This is a capability I wouldn't want to lose, because it
> > might come in handy in future developments.
> 
> move_gap_both should lock the buffer as it is buffer modification (of
> the buffer_text object). htmlReadMemory should also lock it because it
> expects constant string segment.

This basically means we will only allow a single thread to access a
given buffer; see below.

> > I still don't quite see how this will work.  Indirect buffers don't
> > introduce parallelism, and programs that modify indirect buffers
> > _know_ that the text of the base buffer will also be modified.  By
> > contrast, a thread that has been preempted won't know and won't expect
> > that.  It could, for example, keep buffer positions in simple
> > variables, not in markers; almost all Lisp programs do that, and use
> > markers only in very special situations.
> 
> Any async thread should expect that current buffer might be modified.

That's impossible without rewriting a lot of code.  And even after
that, how is a thread supposed to "expect" such changes, when they can
happen at any point in the Lisp program execution?  What kind of
measures can a Lisp program take to "expect" that?  The only one that
I could think of is to copy the entire buffer to another one, and work
on that.  (Which is also not fool-proof.)

> Or lock the buffer for text modifications explicitly (the feature we
> should probably provide - something more enforcing compared to
> read-only-mode we have now).

Locking while accessing a buffer would in practice mean only one
thread can access a given buffer at the same time.  Which is what I
suggested to begin with, but you said you didn't want to give up.

> > In addition, on the C level, some code computes pointers to buffer
> > text via BYTE_POS_ADDR, and then uses the pointer as any C program
> > would.  If such a thread is suspended, and some other thread modifies
> > buffer text in the meantime, all those pointers will be invalid, and
> > we have bugs.  So it looks like, if we want to allow concurrent access
> > to buffers from several threads, we will have a lot of code rewriting
> > on our hands, and the rewritten code will be less efficient, because
> > it will have to always access buffer text via buffer positions and
> > macros like FETCH_BYTE and fetch_char_advance; access through char *
> > pointers will be lost forever.
> 
> Not necessarily lost. We should provide facilities to prevent buffer
> from being modified ("write mutex").

That again means only one thread can access a given buffer; the rest
will be stuck waiting for the mutex.

> This problem is not limited to buffers - any low-level function that
> modifies C object struct must enforce the condition when other threads
> cannot modify the same object. For example SETCAR will have to mark the
> modified object non-writable first, set its car, and release the lock.
> 
> So, any time we need a guarantee that an object remains unchanged, we
> should acquire object-specific write-preventing mutex.

So we will allow only one thread at a time to access many/all objects.
How is this better than the current threads?

> Of course, such write locks should happen for short periods of time to
> be efficient.

How can this be done in practice?  Suppose a Lisp program needs to
access some object, so it locks it.  When will it be able to release
the lock, except after it is basically done?  Because accessing an
object is not contiguous: you access it, then do something else, then
access it again, etc., all the while assuming that the object will not
change between successive accesses.  If you release the lock after each
individual access, that assumption will be false, and all bets are off
again.
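
To make the failure mode concrete, here is a hypothetical sketch
(invented names, not Emacs code): two accesses to the same object,
each individually locked, still race if the lock is dropped in between:

#include <pthread.h>
#include <stddef.h>

struct locked_string            /* hypothetical object, not an Emacs type */
{
  pthread_mutex_t mutex;
  size_t size;
  char *data;
};

/* Return the last byte of S.  Each individual access is locked, yet
   the function still races: another thread may shrink S between the
   two critical sections, so SIZE is stale when DATA is read.  */
char
last_byte_racy (struct locked_string *s)
{
  pthread_mutex_lock (&s->mutex);
  size_t size = s->size;                /* first access */
  pthread_mutex_unlock (&s->mutex);

  /* ... other work: another thread can modify S here ...  */

  pthread_mutex_lock (&s->mutex);
  char c = s->data[size - 1];           /* second access: SIZE may be stale */
  pthread_mutex_unlock (&s->mutex);
  return c;
}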

> > So maybe we should take a step back and consider a restriction that
> > only one thread can access a buffer at any given time?  WDYT?
> 
> Buffers are so central in Emacs that I do not want to give up before we
> try our best.

But in practice, what you suggest instead does mean we must give up on
that; see above.

> Alternatively, I can try to look into other global states first and
> leave async buffer access to later. If we can get rid of the truly
> global states (which buffer point is not; given that each thread has an
> exclusive lock on its buffer), we can later come back to per-thread
> point and restriction.

That's up to you, although I don't see how the other objects are
different, as explained above.



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-25 11:29                                                                                               ` Eli Zaretskii
@ 2023-07-25 11:52                                                                                                 ` Ihor Radchenko
  2023-07-25 14:27                                                                                                   ` Eli Zaretskii
  0 siblings, 1 reply; 192+ messages in thread
From: Ihor Radchenko @ 2023-07-25 11:52 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: luangruo, emacs-devel

Eli Zaretskii <eliz@gnu.org> writes:

>> Any async thread should expect that current buffer might be modified.
>
> That's impossible without rewriting a lot of code.  And even after
> that, how is a thread supposed to "expect" such changes, when they can
> happen at any point in the Lisp program execution?  What kind of
> measures can a Lisp program take to 'expect" that?  The only one that
> I could think of is to copy the entire buffer to another one, and work
> on that.  (Which is also not fool-proof.)

>> Of course, such write locks should happen for short periods of time to
>> be efficient.
>
> How can this be done in practice?  Suppose a Lisp program needs to
> access some object, so it locks it.  When will it be able to release
> the lock, except after it is basically done?  because accessing an
> object is not contiguous: you access it, then do something else, then
> access it again, etc. -- and assume that the object will not change
> between successive accesses.  If you release the lock after each
> individual access, that assumption will be false, and all bets are off
> again.

This is just a basic problem with any kind of async code.  It should
(1) lock the shared object it works with to prevent async writes
(though not async reads); (2) copy the shared object's value and work
with the copy (sketched below); or (3) design the code logic taking
into account the possibility that any shared object might change at
any time.

All 3 approaches may be combined.
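
A sketch of approach (2), with invented names (nothing here is existing
Emacs code): take the lock only long enough to snapshot the shared
value, then work on the private copy:

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct shared_buf               /* hypothetical shared object */
{
  pthread_mutex_t mutex;
  size_t size;
  char *data;
};

/* Approach (2): hold the lock only long enough to copy; everything
   afterwards operates on private data and cannot race.  The price is
   that the copy may already be out of date when it is used.  */
char *
snapshot (struct shared_buf *b, size_t *size_out)
{
  pthread_mutex_lock (&b->mutex);
  size_t size = b->size;
  char *copy = malloc (size);
  if (copy)
    memcpy (copy, b->data, size);
  pthread_mutex_unlock (&b->mutex);
  *size_out = copy ? size : 0;
  return copy;                  /* caller frees */
}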

>> Or lock the buffer for text modifications explicitly (the feature we
>> should probably provide - something more enforcing compared to
>> read-only-mode we have now).
>
> Locking while accessing a buffer would in practice mean only one
> thread can access a given buffer at the same time.  Which is what I
> suggested to begin with, but you said you didn't want to give up.

Not necessarily.  Multiple threads may still read the buffer in
parallel, except in the special case when a read also involves moving
the gap (which is a write at the low level).  Or are you saying that it
is common for read primitives to move the gap?
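
In other words, what I have in mind is essentially a read/write lock on
the buffer, where readers proceed in parallel and only gap motion and
text modification take the exclusive side.  A minimal sketch with a
POSIX rwlock and invented names (not existing Emacs code):

#include <pthread.h>

struct locked_buffer            /* hypothetical, not Emacs's struct buffer */
{
  pthread_rwlock_t lock;
  /* ... buffer text, gap, markers ...  */
};

/* Any number of threads may hold the read side simultaneously.  */
void
scan_buffer (struct locked_buffer *b)
{
  pthread_rwlock_rdlock (&b->lock);
  /* ... read characters without moving the gap ...  */
  pthread_rwlock_unlock (&b->lock);
}

/* Moving the gap (or inserting/deleting text) is a write at this
   level, so it takes the exclusive side and waits for readers.  */
void
move_gap_locked (struct locked_buffer *b)
{
  pthread_rwlock_wrlock (&b->lock);
  /* ... relocate the gap ...  */
  pthread_rwlock_unlock (&b->lock);
}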

>> So, any time we need a guarantee that an object remains unchanged, we
>> should acquire object-specific write-preventing mutex.
>
> So we will allow access to many/all objects to only one thread at a
> time.

Only when modifying global objects by side effect, like `setcar',
`move-marker', or `puthash' (on shared objects), and only for the
duration of that modification.

I do not think that "many/all" objects will be locked in practice.

Symbol objects will be treated separately, with the value slot storing
multiple values for different threads.  (This is necessary because of
how specpdl and rewinding work for symbols.)
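
Roughly the kind of layout I am imagining for the value slot (purely a
sketch with invented names; the real struct Lisp_Symbol looks nothing
like this):

/* Purely illustrative, with invented names.  A thread that let-binds
   the symbol gets its own entry in VALS; threads without a binding
   keep seeing GLOBAL_VALUE, and unwinding one thread's specpdl only
   touches that thread's entry.  */
struct thread_value
{
  int thread_id;                 /* owning thread */
  void *value;                   /* would be a Lisp_Object in real code */
  struct thread_value *next;
};

struct multi_value_symbol
{
  void *global_value;            /* value seen when a thread has no binding */
  struct thread_value *vals;     /* per-thread let-bindings */
};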

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-25  9:27                                                                                                                 ` Ihor Radchenko
  2023-07-25  9:37                                                                                                                   ` Po Lu
@ 2023-07-25 12:40                                                                                                                   ` Eli Zaretskii
  1 sibling, 0 replies; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-25 12:40 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: Eli Zaretskii <eliz@gnu.org>, emacs-devel@gnu.org
> Date: Tue, 25 Jul 2023 09:27:03 +0000
> 
> I've seen a couple of volatile variables in the code.
> So, there is at least some fighting with GCC optimizations going on.

That's usually related to setjmp/longjmp, fork/vfork, etc.
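
For illustration, the classic setjmp case (a generic C example, not
taken from Emacs): a local that is modified between setjmp and longjmp
has an indeterminate value after the jump unless it is volatile:

#include <setjmp.h>
#include <stdio.h>

static jmp_buf env;

int
main (void)
{
  /* Without volatile, PROGRESS may live in a register that longjmp
     does not restore, so its value after the jump is indeterminate.  */
  volatile int progress = 0;
  if (setjmp (env) == 0)
    {
      progress = 1;              /* modified between setjmp and longjmp */
      longjmp (env, 1);
    }
  printf ("progress = %d\n", progress);
  return 0;
}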



^ permalink raw reply	[flat|nested] 192+ messages in thread

* Re: Concurrency via isolated process/thread
  2023-07-25 11:52                                                                                                 ` Ihor Radchenko
@ 2023-07-25 14:27                                                                                                   ` Eli Zaretskii
  0 siblings, 0 replies; 192+ messages in thread
From: Eli Zaretskii @ 2023-07-25 14:27 UTC (permalink / raw)
  To: Ihor Radchenko; +Cc: luangruo, emacs-devel

> From: Ihor Radchenko <yantar92@posteo.net>
> Cc: luangruo@yahoo.com, emacs-devel@gnu.org
> Date: Tue, 25 Jul 2023 11:52:32 +0000
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> >> Of course, such write locks should happen for short periods of time to
> >> be efficient.
> >
> > How can this be done in practice?  Suppose a Lisp program needs to
> > access some object, so it locks it.  When will it be able to release
> > the lock, except after it is basically done?  because accessing an
> > object is not contiguous: you access it, then do something else, then
> > access it again, etc. -- and assume that the object will not change
> > between successive accesses.  If you release the lock after each
> > individual access, that assumption will be false, and all bets are off
> > again.
> 
> This is just a basic problem with any kind of async code.

Async code avoids accessing the same object as much as possible, and
if that's not possible, uses synchronization features.  You also
suggest using synchronization features, and there's nothing wrong with
that -- except that if those are used to lock buffers and global
objects, we no longer have true concurrency, because in many cases
only one thread will be able to do something useful.  That is all I'm
saying.  If you agree, then this whole development sounds
questionable, because it will require massive changes for very little
practical gain, and with many disadvantages, like the necessity to
call synchronization primitives from Lisp (which makes it impractical
to run the same Lisp program with and without concurrency).

Bottom line: I think your view of what will need to change versus what
the costs will be is very optimistic.  We have only scratched the
surface of the subject, and already there are quite serious issues and
obstacles to be negotiated.  Feel free to work on devising solutions,
but my gut feeling is that this is not worth it, and that Emacs cannot
be adapted to true concurrency without a very thorough redesign of the
core data structures and rethinking of the main architectural
assumptions and models.  E.g., do we even believe that MVC is a proper
architecture for a multithreaded program?

And one other remark: please always keep in mind that Emacs is unusual
in the depth and intensity of the control it lets Lisp programs have
on both input and display.  Any significant architectural change will
have to live up to the expectations of Lisp programmers who are used
to the level of control they have in Emacs.  Providing the same level
of control when input and display run in separate threads from the
Lisp machine, let alone allowing several Lisp threads to run in parallel
and compete for such control, might prove much more difficult.  But
if not provided, this will no longer be Emacs, and will not be as
popular with the present community.  That's an additional uphill
battle any new design will have to fight.



^ permalink raw reply	[flat|nested] 192+ messages in thread

end of thread, other threads:[~2023-07-25 14:27 UTC | newest]

Thread overview: 192+ messages (links below jump to the message on this page)
2023-07-04 16:58 Concurrency via isolated process/thread Ihor Radchenko
2023-07-04 17:12 ` Eli Zaretskii
2023-07-04 17:29   ` Ihor Radchenko
2023-07-04 17:35     ` Eli Zaretskii
2023-07-04 17:52       ` Ihor Radchenko
2023-07-04 18:24         ` Eli Zaretskii
2023-07-05 11:23           ` Ihor Radchenko
2023-07-05 11:49             ` Eli Zaretskii
2023-07-05 12:40               ` Ihor Radchenko
2023-07-05 13:02                 ` Lynn Winebarger
2023-07-05 13:10                   ` Ihor Radchenko
2023-07-06 18:35                     ` Lynn Winebarger
2023-07-07 11:48                       ` Ihor Radchenko
2023-07-05 13:33                 ` Eli Zaretskii
2023-07-05 13:35                   ` Ihor Radchenko
2023-07-05  0:34         ` Po Lu
2023-07-05 11:26           ` Ihor Radchenko
2023-07-05 12:11             ` Po Lu
2023-07-05 12:44               ` Ihor Radchenko
2023-07-05 13:21                 ` Po Lu
2023-07-05 13:26                   ` Ihor Radchenko
2023-07-05 13:51                     ` Eli Zaretskii
2023-07-05 14:00                       ` Ihor Radchenko
2023-07-06  0:32                         ` Po Lu
2023-07-06 10:46                           ` Ihor Radchenko
2023-07-06 12:24                             ` Po Lu
2023-07-06 12:31                               ` Ihor Radchenko
2023-07-06 12:41                                 ` Po Lu
2023-07-06 12:51                                   ` Ihor Radchenko
2023-07-06 12:58                                     ` Po Lu
2023-07-06 13:13                                       ` Ihor Radchenko
2023-07-06 14:13                                         ` Eli Zaretskii
2023-07-06 14:47                                           ` Ihor Radchenko
2023-07-06 15:10                                             ` Eli Zaretskii
2023-07-06 16:17                                               ` Ihor Radchenko
2023-07-06 18:19                                                 ` Eli Zaretskii
2023-07-07 12:04                                                   ` Ihor Radchenko
2023-07-07 13:16                                                     ` Eli Zaretskii
2023-07-07 14:29                                                       ` Ihor Radchenko
2023-07-07 14:47                                                         ` Eli Zaretskii
2023-07-07 15:21                                                           ` Ihor Radchenko
2023-07-07 18:04                                                             ` Eli Zaretskii
2023-07-07 18:24                                                               ` Ihor Radchenko
2023-07-07 19:36                                                                 ` Eli Zaretskii
2023-07-07 20:05                                                                   ` Ihor Radchenko
2023-07-08  7:05                                                                     ` Eli Zaretskii
2023-07-08 10:53                                                                       ` Ihor Radchenko
2023-07-08 14:26                                                                         ` Eli Zaretskii
2023-07-09  9:36                                                                           ` Ihor Radchenko
2023-07-09  9:56                                                                             ` Po Lu
2023-07-09 10:04                                                                               ` Ihor Radchenko
2023-07-09 11:59                                                                             ` Eli Zaretskii
2023-07-09 13:58                                                                               ` Ihor Radchenko
2023-07-09 14:52                                                                                 ` Eli Zaretskii
2023-07-09 15:49                                                                                   ` Ihor Radchenko
2023-07-09 16:35                                                                                     ` Eli Zaretskii
2023-07-10 11:30                                                                                       ` Ihor Radchenko
2023-07-10 12:13                                                                                         ` Po Lu
2023-07-10 12:28                                                                                           ` Ihor Radchenko
2023-07-10 12:48                                                                                             ` Po Lu
2023-07-10 12:53                                                                                               ` Ihor Radchenko
2023-07-10 13:18                                                                                                 ` Po Lu
2023-07-10 13:09                                                                                         ` Eli Zaretskii
2023-07-10 13:58                                                                                           ` Ihor Radchenko
2023-07-10 14:37                                                                                             ` Eli Zaretskii
2023-07-10 14:55                                                                                               ` Ihor Radchenko
2023-07-10 16:03                                                                                                 ` Eli Zaretskii
2023-07-16 14:58                                                                                 ` Ihor Radchenko
2023-07-17  7:55                                                                                   ` Ihor Radchenko
2023-07-17  8:36                                                                                     ` Po Lu
2023-07-17  8:52                                                                                       ` Ihor Radchenko
2023-07-17  9:39                                                                                         ` Po Lu
2023-07-17  9:54                                                                                           ` Ihor Radchenko
2023-07-17 10:08                                                                                             ` Po Lu
2023-07-24  8:42                                                                                 ` Ihor Radchenko
2023-07-24  9:52                                                                                   ` Po Lu
2023-07-24 10:09                                                                                     ` Ihor Radchenko
2023-07-24 12:15                                                                                       ` Po Lu
2023-07-24 12:25                                                                                         ` Ihor Radchenko
2023-07-24 13:31                                                                                           ` Po Lu
2023-07-24 13:53                                                                                             ` Ihor Radchenko
2023-07-25  0:12                                                                                               ` Po Lu
2023-07-25  4:28                                                                                                 ` Ihor Radchenko
2023-07-24 12:50                                                                                       ` Eli Zaretskii
2023-07-24 13:15                                                                                         ` Ihor Radchenko
2023-07-24 13:41                                                                                           ` Eli Zaretskii
2023-07-24 14:13                                                                                             ` Ihor Radchenko
2023-07-24 12:44                                                                                   ` Eli Zaretskii
2023-07-24 13:02                                                                                     ` Ihor Radchenko
2023-07-24 13:54                                                                                       ` Eli Zaretskii
2023-07-24 14:24                                                                                         ` Ihor Radchenko
2023-07-24 16:00                                                                                           ` Eli Zaretskii
2023-07-24 16:38                                                                                             ` Ihor Radchenko
2023-07-25  0:20                                                                                               ` Po Lu
2023-07-25  4:36                                                                                                 ` Ihor Radchenko
2023-07-25  7:27                                                                                                   ` Po Lu
2023-07-25  7:59                                                                                                     ` Ihor Radchenko
2023-07-25  8:27                                                                                                       ` Po Lu
2023-07-25  8:45                                                                                                         ` Ihor Radchenko
2023-07-25  8:53                                                                                                           ` Po Lu
2023-07-25  9:03                                                                                                             ` Ihor Radchenko
2023-07-25  9:17                                                                                                               ` Po Lu
2023-07-25  9:27                                                                                                                 ` Ihor Radchenko
2023-07-25  9:37                                                                                                                   ` Po Lu
2023-07-25 12:40                                                                                                                   ` Eli Zaretskii
2023-07-25 11:29                                                                                               ` Eli Zaretskii
2023-07-25 11:52                                                                                                 ` Ihor Radchenko
2023-07-25 14:27                                                                                                   ` Eli Zaretskii
2023-07-09 17:13                                                                             ` Gregory Heytings
2023-07-10 11:37                                                                               ` Ihor Radchenko
2023-07-13 13:54                                                                                 ` Gregory Heytings
2023-07-13 14:23                                                                                   ` Ihor Radchenko
2023-07-07  0:21                                         ` Po Lu
2023-07-06 14:08                             ` Eli Zaretskii
2023-07-06 15:01                               ` Ihor Radchenko
2023-07-06 15:16                                 ` Eli Zaretskii
2023-07-06 16:32                                   ` Ihor Radchenko
2023-07-06 17:50                                     ` Eli Zaretskii
2023-07-07 12:30                                       ` Ihor Radchenko
2023-07-07 13:34                                         ` Eli Zaretskii
2023-07-07 15:17                                           ` Ihor Radchenko
2023-07-07 19:31                                             ` Eli Zaretskii
2023-07-07 20:01                                               ` Ihor Radchenko
2023-07-08  6:50                                                 ` Eli Zaretskii
2023-07-08 11:55                                                   ` Ihor Radchenko
2023-07-08 14:43                                                     ` Eli Zaretskii
2023-07-09  9:57                                                       ` Ihor Radchenko
2023-07-09 12:08                                                         ` Eli Zaretskii
2023-07-09 14:16                                                           ` Ihor Radchenko
2023-07-09 15:00                                                             ` Eli Zaretskii
2023-07-09 12:22                                                         ` Po Lu
2023-07-09 13:12                                                           ` Eli Zaretskii
2023-07-10  0:18                                                             ` Po Lu
2023-07-08  0:51                                               ` Po Lu
2023-07-08  4:18                                                 ` tomas
2023-07-08  5:51                                                   ` Po Lu
2023-07-08  6:01                                                     ` tomas
2023-07-08 10:02                                                       ` Ihor Radchenko
2023-07-08 19:39                                                         ` tomas
2023-07-08  6:25                                                 ` Eli Zaretskii
2023-07-08  6:38                                                   ` Ihor Radchenko
2023-07-08  7:45                                                     ` Eli Zaretskii
2023-07-08  8:16                                                       ` Ihor Radchenko
2023-07-08 10:13                                                         ` Eli Zaretskii
2023-07-07 13:35                                         ` Po Lu
2023-07-07 15:31                                           ` Ihor Radchenko
2023-07-08  0:44                                             ` Po Lu
2023-07-08  4:29                                               ` tomas
2023-07-08  7:21                                               ` Eli Zaretskii
2023-07-08  7:48                                                 ` Po Lu
2023-07-08 10:02                                                   ` Eli Zaretskii
2023-07-08 11:54                                                     ` Po Lu
2023-07-08 14:12                                                       ` Eli Zaretskii
2023-07-09  0:37                                                         ` Po Lu
2023-07-09  7:01                                                           ` Eli Zaretskii
2023-07-09  7:14                                                             ` Po Lu
2023-07-09  7:35                                                               ` Eli Zaretskii
2023-07-09  7:57                                                                 ` Ihor Radchenko
2023-07-09  8:41                                                                   ` Eli Zaretskii
2023-07-10 14:53                                                                     ` Dmitry Gutov
2023-07-09  9:25                                                                 ` Po Lu
2023-07-09 11:14                                                                   ` Eli Zaretskii
2023-07-09 11:23                                                                     ` Ihor Radchenko
2023-07-09 12:10                                                                     ` Po Lu
2023-07-09 13:03                                                                       ` Eli Zaretskii
2023-07-08 12:01                                                     ` Ihor Radchenko
2023-07-08 14:45                                                       ` Eli Zaretskii
2023-07-07  0:41                                     ` Po Lu
2023-07-07 12:42                                       ` Ihor Radchenko
2023-07-07 13:31                                         ` Po Lu
2023-07-07  0:27                                 ` Po Lu
2023-07-07 12:45                                   ` Ihor Radchenko
2023-07-06  0:27                     ` Po Lu
2023-07-06 10:48                       ` Ihor Radchenko
2023-07-06 12:15                         ` Po Lu
2023-07-06 14:10                         ` Eli Zaretskii
2023-07-06 15:09                           ` Ihor Radchenko
2023-07-06 15:18                             ` Eli Zaretskii
2023-07-06 16:36                               ` Ihor Radchenko
2023-07-06 17:53                                 ` Eli Zaretskii
2023-07-07  0:22                             ` Po Lu
2023-07-05  0:33 ` Po Lu
2023-07-05  2:31   ` Eli Zaretskii
2023-07-17 20:43     ` Hugo Thunnissen
2023-07-18  4:51       ` tomas
2023-07-18  5:25       ` Ihor Radchenko
2023-07-18  5:39         ` Po Lu
2023-07-18  5:49           ` Ihor Radchenko
2023-07-18 12:14         ` Hugo Thunnissen
2023-07-18 12:39           ` Async IO and queing process sentinels (was: Concurrency via isolated process/thread) Ihor Radchenko
2023-07-18 12:49             ` Ihor Radchenko
2023-07-18 14:12             ` Async IO and queing process sentinels Michael Albinus

Code repositories for project(s) associated with this external index

	https://git.savannah.gnu.org/cgit/emacs.git
	https://git.savannah.gnu.org/cgit/emacs/org-mode.git
