unofficial mirror of emacs-devel@gnu.org 
* Should we land Lisp reader optimizations?
@ 2017-06-19 16:58 Eli Zaretskii
  2017-06-20  7:08 ` Ken Raeburn
  0 siblings, 1 reply; 17+ messages in thread
From: Eli Zaretskii @ 2017-06-19 16:58 UTC (permalink / raw)
  To: Ken Raeburn; +Cc: emacs-devel

Ken,

I understand that some of the optimizations you made on your startup
branch for reading and processing *.elc files are general-purpose
enough and mature enough to start using them in Emacs 26.  Would it
make sense to land them on master right now?  I think significant
speedups in that department are always a win.

TIA




* Re: Should we land Lisp reader optimizations?
  2017-06-19 16:58 Should we land Lisp reader optimizations? Eli Zaretskii
@ 2017-06-20  7:08 ` Ken Raeburn
  2017-06-20 10:12   ` Ken Raeburn
  0 siblings, 1 reply; 17+ messages in thread
From: Ken Raeburn @ 2017-06-20  7:08 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: emacs-devel

On Jun 19, 2017, at 12:58, Eli Zaretskii <eliz@gnu.org> wrote:
> Ken,
> 
> I understand that some of the optimizations you made on your startup
> branch for reading and processing *.elc files are general-purpose
> enough and mature enough to start using them in Emacs 26.  Would it
> make sense to land them on master right now?  I think significant
> speedups in that department are always a win.
> 
> TIA

I think several of them would be reasonable to merge.  Others improve speed a little at some maintenance cost such as having mostly-duplicated code, or using more complex data structure management than we have currently.  But I’ve been doing all my work with an eye towards the big dumped.elc file produced by Stefan’s changes; it has some unusual characteristics, like most of the file being one big “progn” with lots of multiply-referenced forms, versus having lots of separate “defalias” calls and the like.

It would be good to come up with some new benchmark or two by which to evaluate some of the individual changes to see if they’re worth the maintenance cost.  Probably something like “read all Lisp forms from a set of .elc files previously loaded into buffers” or “load a set of .elc files from disk”…
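
Roughly, the buffer-reading variant could be driven with something like the sketch below, using the benchmark library; the function name, repetition count, and directory are illustrative only, not actual test code:

  (require 'benchmark)

  ;; Sketch: read every Lisp form from a set of .elc files that have
  ;; already been loaded into buffers.  benchmark-run returns a list
  ;; (ELAPSED-SECONDS GC-COUNT GC-ELAPSED).
  (defun reader-bench-read-buffers (files)
    (let ((buffers (mapcar #'find-file-noselect files)))
      (benchmark-run 10                 ; ten timed passes over all buffers
        (dolist (buf buffers)
          (with-current-buffer buf
            (goto-char (point-min))
            (condition-case nil
                (while t (read (current-buffer)))
              (end-of-file nil)))))))

  ;; e.g. (reader-bench-read-buffers
  ;;       (directory-files-recursively "~/emacs/lisp" "\\.elc\\'"))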

Looking over the changes, here are my initial thoughts:

1. Use getc_unlocked instead of getc: Small change, makes a bigger difference for Darwin/macOS than under glibc.  I don’t know if the *BSD distributions are similar to Darwin here.

2. Reduce lread substitutions for multiply-referenced cons cells: This change of Stefan’s made a big difference in handling circular structures, and is quite localized to a small bit of reader code.

3. Skip recursive scanning of some object types: Very localized, addresses some of the remaining recursive-substitution time if we don’t have to look up objects we know won’t be found in the list we’ve saved.  Less relevant if that list won’t be large anyway.

4. Use hash tables instead of lists (read_objects and/or seen_list) in recursive substitution code: A little bit more intrusive in the reader code setup, but not terribly much so.  Reduces lookups from O(n) to amortized O(1), so again, mostly interesting if we care about the case where the collections are large and multiply-referenced objects are common (see the short #N=/#N# illustration after this list).

5. Reducing nested calls for lists in substitution: This wasn’t about performance per se, or reducing stack depth, but changing the list processing to iterative from recursive made analysis easier with some of the Apple tools that organize execution profile samples by stack traces.  It’s a localized change, though, and I tend to prefer iteration over unbounded recursion with possibly limited stack sizes.  I’m interested in what others think.

6. Optimizing reading of ASCII symbols from a file: Very specific to that case.  Code duplication with specialization, in what’s already a large function.

7. Don’t memset charset maps before filling them in: Tiny change, small optimization. Not really a Lisp reader change.

8. Generate less garbage when reading symbols: Open-code some of the “intern” process so we don’t have to create a string that we’ll just throw away if the symbol is already interned.  Probably doesn’t make a big difference in speed directly unless it reduces garbage collection passes.

9. Use #N# syntax for repeated use of symbols: Reading the numbers is faster than reading the strings.  Only relevant if symbol names are repeated often within a single S-expression, so more helpful for dumped.elc than for regular .elc files.  Works better with the hash table changes.
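
To make the syntax behind #4 and #9 concrete: the reader records every #N= label it sees (the read_objects bookkeeping) and resolves every later #N# reference against those records.  The snippet below just illustrates that read syntax at the Lisp level; it is not the C-level change itself, and the values are arbitrary:

  ;; With a plain list, each #N# lookup is linear in the number of
  ;; labels recorded so far; a hash table makes it effectively constant.
  (read "(#1=(a b) #1# #1#)")  ; => ((a b) (a b) (a b)); all three elements are the same object
  (read "#2=(x . #2#)")        ; a circular list; the reader patches the placeholder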

There’s some overlap, obviously; the time spent reading symbols in a .elc file would be reduced by #6 and by #9, but either of them will reduce the impact of the other.

Some of these are actually expressed as multiple changes on the scratch/raeburn-startup branch, though I’ve been working on a rebased version from a recent master snapshot that cleans up and merges some of the changes, gets rid of a couple things no longer needed, and fixes a few more bugs with the dumped data.  I’ve still got an annoying coding-system problem to track down before I push the updated branch, though.

#1-3, and #8 are simple and straightforward enough that I think they’re probably good to take, even if the savings aren’t large with normal Lisp files.  #7 also, though it’s for charset maps, not Lisp.  #4 and #9 are dependent on the sorts of files being loaded; benchmarking will show us how much of a difference they might make.

#6 is probably the most annoying for ongoing maintenance.  Existing code is copied, specialized for the case of reading from files, some functions are expanded inline, irrelevant code is removed, and the block/unblock calls are pulled out from inner loops.  Depending how the numbers work out with the other changes, it might not be worth it, or maybe only if we go the big-elc-file route.

Ken



* Re: Should we land Lisp reader optimizations?
  2017-06-20  7:08 ` Ken Raeburn
@ 2017-06-20 10:12   ` Ken Raeburn
  2017-06-20 15:25     ` Eli Zaretskii
  0 siblings, 1 reply; 17+ messages in thread
From: Ken Raeburn @ 2017-06-20 10:12 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: emacs-devel

I implemented the benchmark “read all Lisp forms from a set of .elc files previously loaded into buffers” as mentioned in my earlier email.  I used all the .elc files installed by the build, sorted by name.  The test iterated over the whole set of buffers 10 times, after an untimed initial pass.

With unchanged master sources (63ec338) it took 327s and 920 GC passes.

I added the change to short-circuit the recursive processing of certain types (#3); 171s.

I added the change to mutate the placeholder for a cons object instead of doing recursive substitution (#2); 168s.  (I think it did better for me before but maybe it’s specific to dumped.elc.)

I added the getc_unlocked change (#1), mainly because other patches I was testing updated the same code and I wanted to get numbers quickly without manually updating the patches; 171s.  (The test is reading from buffers, not files, so this is probably just some random run-to-run variability.)

I then pulled in the change to replace the read_objects list with two hash tables (part of #4); this brought the run time down to 33.4s, despite an increase to 1049 GC passes.

I added the iteration change (#5) and replaced seen_list with a hash table (other part of #4); no change.  This is expected from #5, and the seen_list change probably requires more #N# values in an expression for it to be significant.

I added the symbol interning change (#8); the run time is down to 24.6s, with 631 GC passes, and the speedup appears to be mostly from reducing the GC passes.  Overall improvement: 13x reduction in run time, 31% reduction in GC passes.  Remember this is just for parsing the Lisp expressions, not for evaluating or for reading the bits off the disk.

The #N# use for symbols will take some tweaking before it gets used for normal byte compilation, and then a full bootstrap, so I’ll try testing that later in the week.

I guess that gives a pretty clear indication which changes I should look at first.

Ken



* Re: Should we land Lisp reader optimizations?
  2017-06-20 10:12   ` Ken Raeburn
@ 2017-06-20 15:25     ` Eli Zaretskii
  2017-06-20 15:39       ` Clément Pit-Claudel
                         ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Eli Zaretskii @ 2017-06-20 15:25 UTC (permalink / raw)
  To: Ken Raeburn; +Cc: emacs-devel

> From: Ken Raeburn <raeburn@raeburn.org>
> Date: Tue, 20 Jun 2017 06:12:05 -0400
> Cc: emacs-devel@gnu.org
> 
> I implemented the benchmark “read all Lisp forms from a set of .elc files previously loaded into buffers” as mentioned in my earlier email.  I used all the .elc files installed by the build, sorted by name.  The test iterated over the whole set of buffers 10 times, after an untimed initial pass.
> 
> With unchanged master sources (63ec338) it took 327s and 920 GC passes.
> 
> I added the change to short-circuit the recursive processing of certain types (#3); 171s.
> 
> I added the change to mutate the placeholder for a cons object instead of doing recursive substitution (#2); 168s.  (I think it did better for me before but maybe it’s specific to dumped.elc.)
> 
> I added the getc_unlocked change (#1), mainly because other patches I was testing updated the same code and I wanted to get numbers quickly without manually updating the patches; 171s.  (The test is reading from buffers, not files, so this is probably just some random run-to-run variability.)
> 
> I then pulled in the change to replace the read_objects list with two hash tables (part of #4); this brought the run time down to 33.4s, despite an increase to 1049 GC passes.
> 
> I added the iteration change (#5) and replaced seen_list with a hash table (other part of #4); no change.  This is expected from #5, and the seen_list change probably requires more #N# values in an expression for it to be significant.
> 
> I added the symbol interning change (#8); the run time is down to 24.6s, with 631 GC passes, and the speedup appears to be mostly from reducing the GC passes.  Overall improvement: 13x reduction in run time, 31% reduction in GC passes.  Remember this is just for parsing the Lisp expressions, not for evaluating or for reading the bits off the disk.

How much faster does it make reading a large .elc file from disk?

In any case, a 13x speedup sounds very impressive, so I think we want
this on master as soon as you can do it.

What do others think?

Thanks.




* Re: Should we land Lisp reader optimizations?
  2017-06-20 15:25     ` Eli Zaretskii
@ 2017-06-20 15:39       ` Clément Pit-Claudel
  2017-06-20 16:06         ` Paul Eggert
  2017-06-20 23:12       ` John Wiegley
  2017-06-21  9:46       ` Ken Raeburn
  2 siblings, 1 reply; 17+ messages in thread
From: Clément Pit-Claudel @ 2017-06-20 15:39 UTC (permalink / raw)
  To: emacs-devel

On 2017-06-20 11:25, Eli Zaretskii wrote:
> In any case, a 13x speedup sounds very impressive, so I think we want
> this on master as soon as you can do it.
> 
> What do others think?

I think this looks like fabulous work.  If I read this correctly #2, #3, #4, and #8 all contribute, and they are all relatively localized/small, so it all sounds very good.

Clément.




* Re: Should we land Lisp reader optimizations?
  2017-06-20 15:39       ` Clément Pit-Claudel
@ 2017-06-20 16:06         ` Paul Eggert
  0 siblings, 0 replies; 17+ messages in thread
From: Paul Eggert @ 2017-06-20 16:06 UTC (permalink / raw)
  To: Clément Pit-Claudel, emacs-devel

Clément Pit-Claudel wrote:
> On 2017-06-20 11:25, Eli Zaretskii wrote:
>> In any case, a 13x speedup sounds very impressive, so I think we want
>> this on master as soon as you can do it.
>>
>> What do others think?
> 
> I think this looks like fabulous work.  If I read this correctly #2, #3, #4, and #8 all contribute, and they are all relatively localized/small, so it all sounds very good.
> 
> Clément.
> 

Looks good to me too.




* Re: Should we land Lisp reader optimizations?
  2017-06-20 15:25     ` Eli Zaretskii
  2017-06-20 15:39       ` Clément Pit-Claudel
@ 2017-06-20 23:12       ` John Wiegley
  2017-06-21  2:50         ` michael schuldt
  2017-06-22  1:57         ` Richard Stallman
  2017-06-21  9:46       ` Ken Raeburn
  2 siblings, 2 replies; 17+ messages in thread
From: John Wiegley @ 2017-06-20 23:12 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: Ken Raeburn, emacs-devel

>>>>> "EZ" == Eli Zaretskii <eliz@gnu.org> writes:

EZ> In any case, a 13x speedup sounds very impressive, so I think we want this
EZ> on master as soon as you can do it.

EZ> What do others think?

Yes, please. :)

-- 
John Wiegley                  GPG fingerprint = 4710 CF98 AF9B 327B B80F
http://newartisans.com                          60E1 46C4 BD1A 7AC1 4BA2




* Re: Should we land Lisp reader optimizations?
  2017-06-20 23:12       ` John Wiegley
@ 2017-06-21  2:50         ` michael schuldt
  2017-06-21 10:07           ` Ken Raeburn
                             ` (3 more replies)
  2017-06-22  1:57         ` Richard Stallman
  1 sibling, 4 replies; 17+ messages in thread
From: michael schuldt @ 2017-06-21  2:50 UTC (permalink / raw)
  To: Eli Zaretskii, Ken Raeburn, emacs-devel


Since the time spent in GC appears so significant, why not disable GC while
reading?

I have not followed all the previous threads, so apologies if this question
is uninformed.

On Tue, Jun 20, 2017 at 4:12 PM, John Wiegley <jwiegley@gmail.com> wrote:

> >>>>> "EZ" == Eli Zaretskii <eliz@gnu.org> writes:
>
> EZ> In any case, a 13x speedup sounds very impressive, so I think we want
> this
> EZ> on master as soon as you can do it.
>
> EZ> What do others think?
>
> Yes, please. :)
>
> --
> John Wiegley                  GPG fingerprint = 4710 CF98 AF9B 327B B80F
> http://newartisans.com                          60E1 46C4 BD1A 7AC1 4BA2
>
>


* Re: Should we land Lisp reader optimizations?
  2017-06-20 15:25     ` Eli Zaretskii
  2017-06-20 15:39       ` Clément Pit-Claudel
  2017-06-20 23:12       ` John Wiegley
@ 2017-06-21  9:46       ` Ken Raeburn
  2017-06-21 17:56         ` Eli Zaretskii
  2 siblings, 1 reply; 17+ messages in thread
From: Ken Raeburn @ 2017-06-21  9:46 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: emacs-devel

On Jun 20, 2017, at 11:25, Eli Zaretskii <eliz@gnu.org> wrote:

> 
> How much faster does it make reading a large .elc file from disk?

I tried a couple of file-loading tests:

1) Loading several Gnus nn* files, several progmodes, and a few others, then unloading them all, in a loop 50 times.  Run time dropped from 14.6s to 13.3s, about a 9% drop; GCs went from 176 to 136.  The getc_unlocked and symbol-interning changes appear to have had the biggest effect in this case.

2) Loading ja-dic.elc, the biggest .elc file in my build tree, 100 times.  Run time went from 14.3s to 9.3s, about a 35% drop; GCs went from 200 to 101.  I didn’t break this one down by individual code changes.
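
(For reference, that kind of number can be collected with something like the sketch below; the path is illustrative and may not match where ja-dic.elc sits in a particular build tree.)

  (require 'benchmark)

  ;; Sketch: load one large .elc file 100 times; returns
  ;; (ELAPSED-SECONDS GC-COUNT GC-ELAPSED).  NOSUFFIX is non-nil since
  ;; the .elc name is spelled out explicitly.
  (benchmark-run 100
    (load "~/emacs/lisp/leim/ja-dic/ja-dic.elc" nil t t))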

I don’t think anything in the tree is likely to show the high degree of object sharing and the huge number of shared objects being tracked at one time that dumped.elc does.

> In any case, a 13x speedup sounds very impressive, so I think we want
> this on master as soon as you can do it.

Okay, looks like people are in favor, so I’ll try to get the more effective smaller patches pulled in this week.  The less helpful ones I’ll keep on my scratch branch for now.  And I’ve still got the #N# symbol sharing and file-reading specialization changes to evaluate with the master branch.

Ken



* Re: Should we land Lisp reader optimizations?
  2017-06-21  2:50         ` michael schuldt
@ 2017-06-21 10:07           ` Ken Raeburn
  2017-06-21 17:41           ` Eli Zaretskii
                             ` (2 subsequent siblings)
  3 siblings, 0 replies; 17+ messages in thread
From: Ken Raeburn @ 2017-06-21 10:07 UTC (permalink / raw)
  To: michael schuldt; +Cc: Eli Zaretskii, emacs-devel

On Jun 20, 2017, at 22:50, michael schuldt <mbschuldt@gmail.com> wrote:

> Since the time spent in GC appears so significant, why not disable GC while reading?
> 
> I have not followed all the previous threads so apologies if this question is uninformed

GC doesn’t happen during reading per se; it happens during evaluation of expressions and when stopping to wait for user input, but whether it happens at those points depends on the amount of storage allocated since the last GC pass.  It’s probably possible to come up with GC heuristics that do better than what we’ve got now, but there are tradeoffs.  The trick is coming up with the right metrics (memory size? CPU time? delay in interactive response? startup speed?) and the right use cases to optimize for.  It’d be relatively easy to improve a couple of numbers, at the cost of a worse experience in other important cases.  GC improvement would be a significant research project of its own, one I’m not solving this week. :-)

But finding simple ways to avoid doing allocations in the first place is a pretty clear win.  Sometimes you get lucky.

My test was parsing each Lisp expression in each of 1447 .elc files (total size over 50MB) and looping over all of that 10 times; that means we started with something like one GC pass per 16 files processed, on average, which doesn’t seem so bad.  (It would be more frequent if we were actually evaluating the Lisp expressions that were read.  But I’m only working on the reader code with these changes.)  With the various patches, it’s more like one GC pass per 23 files scanned.

Disabling GC for the duration of reading an entire file probably wouldn’t make much difference, then.  This one-GC-per-16-files thing is an average, of course; I’d guess the GC probably took place between reading one expression from a file and reading the next expression from the same file, but would there be any benefit from delaying it until we were between files?

Raising the gc-cons-threshold value so that GC happens less often could also be done.  But then you’re likely to get faster memory growth.  It’s something that can be considered, but for the purposes of evaluating changes to the Lisp reader, I figure one less knob to fiddle with is simplest.

In the scratch branch I’ve been working on, with Stefan’s code to load a saved Lisp environment as one big .elc file, I have for now raised the gc-cons-threshold value considerably, because the one big .elc file is big enough to trigger multiple GC passes, and the entire file has to be read and evaluated before interactive use of the Emacs process can start.  But it’s not a very good “fix”, as it’ll affect a lot more than startup.  Perhaps I should reset it to its normal value at the end of loading the file, but that’d likely trigger GC pretty much right away, and I was trying to avoid triggering any GC delays before getting to the point of responding to the user’s keyboard input.  It’ll have to be worked out before we can properly evaluate the performance of the big-elc-file approach….
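
The obvious shape for that, if we go the binding route, is something like the sketch below; the threshold value and file name are placeholders, and as noted, the first allocation after the binding ends is still likely to trigger the deferred GC:

  ;; Sketch only: defer GC while the big file is read and evaluated by
  ;; binding gc-cons-threshold high around the load; the old threshold
  ;; is restored when the let exits.
  (let ((gc-cons-threshold (* 256 1024 1024)))  ; 256 MB, arbitrary
    (load "dumped.elc" nil t))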

Ken



* Re: Should we land Lisp reader optimizations?
  2017-06-21  2:50         ` michael schuldt
  2017-06-21 10:07           ` Ken Raeburn
@ 2017-06-21 17:41           ` Eli Zaretskii
  2017-06-22  1:58           ` Richard Stallman
  2017-06-22 13:26           ` Stefan Monnier
  3 siblings, 0 replies; 17+ messages in thread
From: Eli Zaretskii @ 2017-06-21 17:41 UTC (permalink / raw)
  To: michael schuldt; +Cc: raeburn, emacs-devel

> From: michael schuldt <mbschuldt@gmail.com>
> Date: Tue, 20 Jun 2017 19:50:59 -0700
> 
> Since the time spent in GC appears so significant, why not disable GC while reading?

Because you can easily run out of memory that way: the amount of
memory required to read and process a given .elc file is not known in
advance.




* Re: Should we land Lisp reader optimizations?
  2017-06-21  9:46       ` Ken Raeburn
@ 2017-06-21 17:56         ` Eli Zaretskii
  0 siblings, 0 replies; 17+ messages in thread
From: Eli Zaretskii @ 2017-06-21 17:56 UTC (permalink / raw)
  To: Ken Raeburn; +Cc: emacs-devel

> > In any case, a 13x speedup sounds very impressive, so I think we want
> > this on master as soon as you can do it.
> 
> Okay, looks like people are in favor, so I’ll try to get the more effective smaller patches pulled in this week.

Yes, please, and thanks.




* Re: Should we land Lisp reader optimizations?
  2017-06-20 23:12       ` John Wiegley
  2017-06-21  2:50         ` michael schuldt
@ 2017-06-22  1:57         ` Richard Stallman
  1 sibling, 0 replies; 17+ messages in thread
From: Richard Stallman @ 2017-06-22  1:57 UTC (permalink / raw)
  To: John Wiegley; +Cc: eliz, raeburn, emacs-devel

[[[ To any NSA and FBI agents reading my email: please consider    ]]]
[[[ whether defending the US Constitution against all enemies,     ]]]
[[[ foreign or domestic, requires you to follow Snowden's example. ]]]

These reader speedups sound uncontroversial.  However, the question of
what should replace unexec is another matter.  I don't want starting
Emacs to take 26 seconds -- and since I don't know what kind of
machine that was, I worry that mine might take twice as long.


-- 
Dr Richard Stallman
President, Free Software Foundation (gnu.org, fsf.org)
Internet Hall-of-Famer (internethalloffame.org)
Skype: No way! See stallman.org/skype.html.





* Re: Should we land Lisp reader optimizations?
  2017-06-21  2:50         ` michael schuldt
  2017-06-21 10:07           ` Ken Raeburn
  2017-06-21 17:41           ` Eli Zaretskii
@ 2017-06-22  1:58           ` Richard Stallman
  2017-06-22  2:56             ` michael schuldt
  2017-06-22 13:26           ` Stefan Monnier
  3 siblings, 1 reply; 17+ messages in thread
From: Richard Stallman @ 2017-06-22  1:58 UTC (permalink / raw)
  To: michael schuldt; +Cc: eliz, raeburn, emacs-devel

[[[ To any NSA and FBI agents reading my email: please consider    ]]]
[[[ whether defending the US Constitution against all enemies,     ]]]
[[[ foreign or domestic, requires you to follow Snowden's example. ]]]

  > Since the time spent in GC appears so significant, why not disable GC while
  > reading?

That could bloat the total memory size of Emacs, with garbage scattered among
the actually used memory so that no entire blocks could be freed.

However, whether the bloat is enough to be a significant drawback,
I don't know.  It might be interesting to measure the amount of memory
used in two alternatives: GCs during reading, and just one GC at the end
of reading.

-- 
Dr Richard Stallman
President, Free Software Foundation (gnu.org, fsf.org)
Internet Hall-of-Famer (internethalloffame.org)
Skype: No way! See stallman.org/skype.html.





* Re: Should we land Lisp reader optimizations?
  2017-06-22  1:58           ` Richard Stallman
@ 2017-06-22  2:56             ` michael schuldt
  2017-06-22  6:25               ` John Wiegley
  0 siblings, 1 reply; 17+ messages in thread
From: michael schuldt @ 2017-06-22  2:56 UTC (permalink / raw)
  To: rms; +Cc: Eli Zaretskii, Ken Raeburn, emacs-devel


I was actually only thinking about disabling it while reading the big .elc
file, not all the time.

But in either case Ken seems to make it clear that it does not really
matter: GC overhead is small when reading normal files and already
minimized for the big .elc read.

On Wed, Jun 21, 2017 at 6:58 PM, Richard Stallman <rms@gnu.org> wrote:

>   > Since the time spent in GC appears so significant, why not disable GC
> while
>   > reading?
>
> That could bloat the total memory size of Emacs, with garbage scattered
> among
> the actually used memory so that no entire blocks could be freed.
>

Is this scattered memory ever compacted?  I've had times where Emacs
refuses to let go of massive amounts of seemingly unused memory, despite
forced GCs.  Maybe this was the problem.


* Re: Should we land Lisp reader optimizations?
  2017-06-22  2:56             ` michael schuldt
@ 2017-06-22  6:25               ` John Wiegley
  0 siblings, 0 replies; 17+ messages in thread
From: John Wiegley @ 2017-06-22  6:25 UTC (permalink / raw)
  To: michael schuldt; +Cc: Eli Zaretskii, Ken Raeburn, rms, emacs-devel

>>>>> "ms" == michael schuldt <mbschuldt@gmail.com> writes:

ms> But in either case Ken seems to make it clear that it does not really
ms> matter - GC overhead is small when reading normal files and already
ms> minimized for the big .elc read

And in fact, the GC can make things faster, by quickly freeing temporarily
allocated memory that can be reused in a subsequent loop iteration. Although
it does take "work" to walk the heap and free objects, sometimes this work is
less than freshly allocating new temporary blocks at ever-new places on the
heap.

-- 
John Wiegley                  GPG fingerprint = 4710 CF98 AF9B 327B B80F
http://newartisans.com                          60E1 46C4 BD1A 7AC1 4BA2



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Should we land Lisp reader optimizations?
  2017-06-21  2:50         ` michael schuldt
                             ` (2 preceding siblings ...)
  2017-06-22  1:58           ` Richard Stallman
@ 2017-06-22 13:26           ` Stefan Monnier
  3 siblings, 0 replies; 17+ messages in thread
From: Stefan Monnier @ 2017-06-22 13:26 UTC (permalink / raw)
  To: emacs-devel

> Since the time spent in GC appears so significant, why not disable GC while
> reading?

This presumes that running the GC is useless.  Usually, it's not the case.


        Stefan



