all messages for Emacs-related lists mirrored at yhetil.org
From: Ken Raeburn <raeburn@raeburn.org>
To: Eli Zaretskii <eliz@gnu.org>
Cc: johnw@newartisans.com, emacs-devel@gnu.org
Subject: Re: "concurrency" branch updated
Date: Wed, 4 Nov 2015 14:48:12 -0500	[thread overview]
Message-ID: <11F7A3CD-5203-42C9-93EF-842A1D4F9EEB@raeburn.org> (raw)
In-Reply-To: <83vb9hu612.fsf@gnu.org>

>> 
>> Implementing a generator with a thread seems somewhat straightforward, needing 
>> some sort of simple communication channel between the main thread and the 
>> generator thread to pass “need next value” and “here’s the next value” messages 
>> back and forth; some extra work would be needed so that dropping all references 
>> to a generator makes everything, including the thread, go away.  Raising an 
>> error in the thread’s “yield” calls may be a way to tackle that, though it 
>> changes the semantics within the generator a bit.
> 
> Both the generator and its consumer run Lisp, so they can only run in
> sequence.  How is this different from running them both in a single
> thread?

In this case, it’s about how you'd write the generator code.  While the multithreaded version would have other issues (like having to properly quit the new thread when we’re done with the generator), it wouldn’t require writing everything using special macros to do CPS transformations.  If I want to yield values from within a function invoked via mapcar, I don’t have to write an iter-mapcar macro to turn everything inside-out under the covers.
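Roughly the handshake I have in mind, as a sketch only: it assumes the mutex/condition-variable primitives the branch provides, and the `gen--` names are invented here for illustration, not taken from any real code:

```elisp
;; One shared slot plus a condition variable gives the "need next
;; value" / "here's the next value" handshake between the two threads.
(defvar gen--mutex (make-mutex))
(defvar gen--cond (make-condition-variable gen--mutex))
(defvar gen--slot nil)                  ; nil = empty, (VALUE) = full

(defun gen--yield (value)
  "Called inside the generator thread; block until VALUE is consumed."
  (with-mutex gen--mutex
    (setq gen--slot (list value))
    (condition-notify gen--cond)
    (while gen--slot                    ; wait for the consumer to take it
      (condition-wait gen--cond))))

(defun gen--next ()
  "Called from the consuming thread; block until a value arrives."
  (with-mutex gen--mutex
    (while (null gen--slot)
      (condition-wait gen--cond))
    (prog1 (car gen--slot)
      (setq gen--slot nil)
      (condition-notify gen--cond))))
```

The producer side is just (make-thread (lambda () (mapcar #'gen--yield my-list))) -- it can call gen--yield from arbitrarily deep in an ordinary call stack, which is exactly the part the CPS-macro approach can't do without rewriting every intervening function.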

> 
>> For prefetching file contents or searching existing buffers, the “main” thread 
>> can release the global lock when it prompts for the user’s input, and a 
>> background thread can create buffers and load files, or search buffers for 
>> patterns, tossing results onto some sort of queue or other data structure for 
>> consumption by the main thread when it finishes with the file it’s on.
> 
> But then you are not talking about "normal" visiting of files or
> searching of buffers.  You are talking about specialized features that
> visit large number of files or are capable of somehow marking lots of
> search hits for future presentation to users.  That is a far cry from
> how we do this stuff currently -- you ask the user first, _then_ you
> search or visit the file she asked for.

I haven’t used tags-query-replace in a while, but I don’t recall it asking me if I wanted to visit each file.  But yes, I’m thinking of larger operations where the next stage is fairly predictable, and probably does no harm if we optimistically start it early.  Smaller stuff may be good too (I hope), but I’d guess there’s a greater chance the thread-switching overhead could become an issue; I could well be overestimating it.  And some of the simpler ones, like highlighting all regexp matches in the visible part of the current buffer while doing a search, are already done, though we could look at how the code would compare if rewritten to use threads.
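To make the prefetching idea concrete, here's a sketch (the `prefetch--` names are invented, and this leans on the cooperative model: the worker only runs while the main thread is blocked waiting for input, so find-file-noselect in the worker shouldn't race with the user's commands):

```elisp
;; Visit the remaining files of a multi-file operation in a worker
;; thread while the main thread is sitting at a prompt.
(defvar prefetch--mutex (make-mutex))
(defvar prefetch--ready nil)            ; alist of (FILE . BUFFER)

(defun prefetch-start (files)
  "Start loading FILES in the background; return the worker thread."
  (make-thread
   (lambda ()
     (dolist (file files)
       (let ((buf (find-file-noselect file)))
         (with-mutex prefetch--mutex
           (push (cons file buf) prefetch--ready)))))
   "prefetch"))

(defun prefetch-get (file)
  "Return FILE's buffer if the worker has already loaded it, else nil."
  (with-mutex prefetch--mutex
    (cdr (assoc file prefetch--ready))))
```

When the main thread finishes with one file it checks prefetch-get first and only falls back to a synchronous find-file-noselect on a miss.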


> You are talking about some significant refactoring here; we currently
> do all of this on the fly.  In any case, I can understand how this
> would be a win with remote files, but with local files I'm quite sure
> most of the time for inserting a file is taken by stuff like decoding
> its contents, which we also do on the fly and which can call Lisp.
> The I/O itself is quite fast nowadays, I think.  Just compare
> insert-file-contents with insert-file-contents-literally for the same
> large file, and see the big difference, especially if it includes some
> non-ASCII text.

I haven’t done that test, but I have used an NFS server that got slow at times.  And NFS from Amazon virtual machines back to my office, which is always a bit slow.  And sshfs, which can be slow too.  None of which Emacs can do anything about directly.

>> So… yeah, I think some of them are possible, but I’m not sure any of them would 
>> be a particularly good way to show off.  Got any suggestions?
> 
> I think features that use timers, and idle timers in particular, are
> natural candidates for using threads.  Stealth font-lock comes to
> mind, for example.

That’s what I was thinking of when I mentioned fontification.  I hope thread switches are fast enough.
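Something like this sketch is what I'd imagine replacing the idle-timer-driven stealth pass (names invented; it assumes thread-yield on the branch lets the cooperative scheduler hand control back to user commands between chunks):

```elisp
;; Fontify BUF in the background, a chunk at a time, yielding between
;; chunks so the user's commands aren't blocked for long stretches.
(defun stealth-fontify (buf)
  (make-thread
   (lambda ()
     (with-current-buffer buf
       (let ((pos (point-min)))
         (while (< pos (point-max))
           (jit-lock-fontify-now pos (min (point-max) (+ pos 1024)))
           (setq pos (+ pos 1024))
           (thread-yield)))))           ; let other threads run
   "stealth-fontify"))
```

Whether that wins over the timer version depends on exactly the thread-switch cost I'm worried about above.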

>> I’m not sure.  I’m not picturing redisplay running concurrently with Lisp so 
>> much as redisplay on display 1 running concurrently with redisplay on display 
>> 2, all happening at the same point in the code where we now run redisplay.  
> 
> Why is this use case important?  Do we really believe someone might
> look at 2 different X displays at the same time?

No, but occasionally redisplay still needs to talk to multiple displays and get responses back, even with the work you and Stefan have done.  Fortunately, it’s much more rare now, and the color-handling work I did may help further.  And people using multiple displays *and* slow display connections like me are probably not very common among the user base.  So it’s an area where threads might help, but maybe not a terribly important one for the project.

>> I am making some assumptions that redisplay isn’t doing many costly
>> calculations compared to the cost of pushing the bits to the glass.
> 
> That's not really true, although the display engine tries very hard to
> be very fast.  But I've seen cycles taking 10 msec and even 50 msec
> (which already borders on crossing the annoyance threshold).  So there
> are some pretty costly calculations during redisplay, which is why the
> display engine is heavily optimized to avoid them as much as possible.

In that case, maybe it’s still worth considering after all.

> 
>> I suspect TLS is probably the more interesting case.
> 
> What do we have in TLS that we don't have in any network connection?

Encryption, optional compression, possibly key renegotiation, possible receipt of incomplete messages that can’t yet be decrypted and thus can’t give us any new data bytes.  The thread(s) running user Lisp code needn’t spend any cycles on these things.

Ken

