* GC mark stack
@ 2022-06-26 15:31 Lynn Winebarger
2022-06-26 15:50 ` Lynn Winebarger
2022-08-15 14:01 ` Stefan Monnier
0 siblings, 2 replies; 10+ messages in thread
From: Lynn Winebarger @ 2022-06-26 15:31 UTC (permalink / raw)
To: emacs-devel
I was reviewing alloc.c in the 28.1 source and noted that it uses a
semi-naive computation of the transitive closure of the graph of live
data structures (weak hash tables are where I see it).
Since the fix to https://debbugs.gnu.org/cgi/bugreport.cgi?bug=54698
(commit 7a8798de95a57c8ff85f070075e0a0176b458578) moved to using an
explicit stack, I was curious if you'd considered using a variant of
Tarjan's SCC algorithm, such as the one described in
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.9019&rep=rep1&type=pdf
to guarantee that each edge is traversed at most once. The algorithm
described never removes traversed objects from its stack (so it can
tell when an object has already been traversed, even if the current
link into it is not part of its SCC).
The algorithm would only need to account for objects like weak hash
tables, so a direct implementation would only leave those on the
stack. An alternative would be to create an additional field in the
objects (like weak hash tables) recording the order in which they were
traversed, which also makes the algorithm more efficient since there's
no stack search involved when determining the SCC representative of a
particular node - it's just a matter of comparing their stack
ordering.
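To make the idea concrete, here is a minimal, self-contained sketch of
the classic Tarjan formulation with the traversal order (the
"index"/"lowlink" fields) stored in each node, so deciding whether a
successor belongs to the current candidate SCC is a constant-time
comparison rather than a search of the stack. The node and graph types
are invented for illustration - this is not Emacs code, and not exactly
the variant from the cited paper (which avoids popping the stack):

#include <stdio.h>

#define MAX_NODES 16

struct node {
  int succs[MAX_NODES];   /* indices of successor nodes               */
  int nsucc;
  int index;              /* traversal order; -1 = not yet visited    */
  int lowlink;            /* smallest index reachable from this node  */
  int on_stack;
  int scc;                /* SCC id, filled in when the SCC is closed */
};

static struct node nodes[MAX_NODES];
static int stack[MAX_NODES], sp, counter, nscc;

static void
strongconnect (int v)
{
  nodes[v].index = nodes[v].lowlink = counter++;
  stack[sp++] = v;
  nodes[v].on_stack = 1;

  for (int i = 0; i < nodes[v].nsucc; i++)
    {
      int w = nodes[v].succs[i];
      if (nodes[w].index < 0)
        {
          strongconnect (w);
          if (nodes[w].lowlink < nodes[v].lowlink)
            nodes[v].lowlink = nodes[w].lowlink;
        }
      else if (nodes[w].on_stack && nodes[w].index < nodes[v].lowlink)
        /* Already traversed and still on the stack: same candidate SCC. */
        nodes[v].lowlink = nodes[w].index;
    }

  if (nodes[v].lowlink == nodes[v].index)   /* v is the SCC representative */
    {
      int w;
      do
        {
          w = stack[--sp];
          nodes[w].on_stack = 0;
          nodes[w].scc = nscc;
        }
      while (w != v);
      nscc++;
    }
}

int
main (void)
{
  /* Tiny example graph: 0 -> 1 -> 2 -> 0 (one SCC), plus 2 -> 3. */
  int edges[][2] = { {0, 1}, {1, 2}, {2, 0}, {2, 3} };
  int n = 4;
  for (int i = 0; i < n; i++)
    nodes[i].index = -1;
  for (size_t i = 0; i < sizeof edges / sizeof edges[0]; i++)
    nodes[edges[i][0]].succs[nodes[edges[i][0]].nsucc++] = edges[i][1];

  for (int v = 0; v < n; v++)
    if (nodes[v].index < 0)
      strongconnect (v);

  for (int v = 0; v < n; v++)
    printf ("node %d -> SCC %d\n", v, nodes[v].scc);
  return 0;
}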
Of course, I don't have any idea how much time is spent on this type
of recursion for weak references. The SCC-based algorithms can make a
significant performance improvement compared to semi-naive calculation
of transitive closure for general relational queries. It might not be
so useful when you don't require an explicit enumeration of the set of
answers.
Lynn
* Re: GC mark stack
2022-06-26 15:31 GC mark stack Lynn Winebarger
@ 2022-06-26 15:50 ` Lynn Winebarger
2022-06-27 12:27 ` bug#54698: Fwd: " Lynn Winebarger
2022-08-15 14:01 ` Stefan Monnier
1 sibling, 1 reply; 10+ messages in thread
From: Lynn Winebarger @ 2022-06-26 15:50 UTC (permalink / raw)
To: emacs-devel
On Sun, Jun 26, 2022 at 11:31 AM Lynn Winebarger <owinebar@gmail.com> wrote:
>
> I was reviewing alloc.c in the 28.1 source, and noted that it uses a
> semi-naive computation of the transitive closure of the graph of live
> data structures (weak hash tables is where I see it).
> Since the fix to https://debbugs.gnu.org/cgi/bugreport.cgi?bug=54698
> (commit 7a8798de95a57c8ff85f070075e0a0176b458578) moved to using an
> explicit stack, I was curious if you'd considered using a variant of
> Tarjan's SCC algorithm, such as the one described in
> http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.9019&rep=rep1&type=pdf
> to guarantee that each edge is traversed at most once. The algorithm
> described never removes traversed objects from its stack (so it can
> tell when an object has already been traversed, even if the current
> link into it is not part of its SCC).
> The algorithm would only need to account for objects like weak hash
> tables, so a direct implementation would only leave those on the
> stack. An alternative would be to create an additional field in the
> objects (like weak hash table) recording the order in which they were
> traversed, which also makes the algorithm more efficient since there's
> no stack search involved when determining the SCC representative of
> particular node - it's just a matter of comparing their stack
> ordering.
> Of course, I don't have any idea how much time is spent on this type
> of recursion for weak references. The SCC-based algorithms can make a
> significant performance improvement compared to semi-naive calculation
> of transitive closure for general relational queries. It might not be
> so useful when you don't require an explicit enumeration of the set of
> answers.
I should point out here that, with only strong references, there's
only one SCC of interest (liveness), and the mark bit is exactly the
stack-traversal-ordering count I suggested as an alternative to
keeping items on the stack as in the implementation described in the
paper. For the more general problem of weak references, the algorithm
requires explicitly tracking the SCC candidate sets and
representatives. Those structures can also be "pre-allocated" in the
weak-reference objects if allocation during garbage collection is an
issue.
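As a purely hypothetical sketch of what that pre-allocation could look
like (these fields and names are invented and are not part of the
actual Lisp_Hash_Table layout), each weak object could carry its own
Tarjan state, including an intrusive stack link, so the collector never
has to allocate while it is running:

struct weak_table_gc_fields
{
  int gc_index;      /* traversal order assigned when first visited   */
  int gc_lowlink;    /* lowest traversal order reachable from here    */
  struct weak_table_gc_fields *gc_scc_rep;        /* representative of
                                                     the candidate SCC */
  struct weak_table_gc_fields *gc_next_on_stack;  /* intrusive "stack"
                                                     link, so no extra
                                                     stack is needed   */
};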
Lynn
* bug#54698: Fwd: GC mark stack
2022-06-26 15:50 ` Lynn Winebarger
@ 2022-06-27 12:27 ` Lynn Winebarger
0 siblings, 0 replies; 10+ messages in thread
From: Lynn Winebarger @ 2022-06-27 12:27 UTC (permalink / raw)
To: 54698
This bug was archived, so I first sent the message below to the
emacs-devel list. I'm not sure what the most appropriate target for
the suggestion is.
Lynn
---------- Forwarded message ---------
From: Lynn Winebarger <owinebar@gmail.com>
Date: Sun, Jun 26, 2022 at 11:50 AM
Subject: Re: GC mark stack
To: emacs-devel <emacs-devel@gnu.org>
On Sun, Jun 26, 2022 at 11:31 AM Lynn Winebarger <owinebar@gmail.com> wrote:
>
> I was reviewing alloc.c in the 28.1 source, and noted that it uses a
> semi-naive computation of the transitive closure of the graph of live
> data structures (weak hash tables is where I see it).
> Since the fix to https://debbugs.gnu.org/cgi/bugreport.cgi?bug=54698
> (commit 7a8798de95a57c8ff85f070075e0a0176b458578) moved to using an
> explicit stack, I was curious if you'd considered using a variant of
> Tarjan's SCC algorithm, such as the one described in
> http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.9019&rep=rep1&type=pdf
> to guarantee that each edge is traversed at most once. The algorithm
> described never removes traversed objects from its stack (so it can
> tell when an object has already been traversed, even if the current
> link into it is not part of its SCC).
> The algorithm would only need to account for objects like weak hash
> tables, so a direct implementation would only leave those on the
> stack. An alternative would be to create an additional field in the
> objects (like weak hash table) recording the order in which they were
> traversed, which also makes the algorithm more efficient since there's
> no stack search involved when determining the SCC representative of
> particular node - it's just a matter of comparing their stack
> ordering.
> Of course, I don't have any idea how much time is spent on this type
> of recursion for weak references. The SCC-based algorithms can make a
> significant performance improvement compared to semi-naive calculation
> of transitive closure for general relational queries. It might not be
> so useful when you don't require an explicit enumeration of the set of
> answers.
I should point out here that, with only strong references, there's
only one SCC of interest (liveness), and the mark bit is exactly the
stack-traversal-ordering count I suggested as an alternative to
keeping items on the stack as in the implementation described in the
paper. For the more general problem of weak references, the algorithm
requires explicitly tracking the SCC candidate sets and
representatives. Those structures can also be "pre-allocated" in the
weak-reference objects if allocation during garbage collection is an
issue.
Lynn
* Re: GC mark stack
2022-06-26 15:31 GC mark stack Lynn Winebarger
2022-06-26 15:50 ` Lynn Winebarger
@ 2022-08-15 14:01 ` Stefan Monnier
2022-08-15 14:51 ` Lynn Winebarger
1 sibling, 1 reply; 10+ messages in thread
From: Stefan Monnier @ 2022-08-15 14:01 UTC (permalink / raw)
To: Lynn Winebarger; +Cc: emacs-devel
Lynn Winebarger [2022-06-26 11:31:43] wrote:
> I was reviewing alloc.c in the 28.1 source, and noted that it uses a
> semi-naive computation of the transitive closure of the graph of live
> data structures (weak hash tables is where I see it).
> Since the fix to https://debbugs.gnu.org/cgi/bugreport.cgi?bug=54698
> (commit 7a8798de95a57c8ff85f070075e0a0176b458578) moved to using an
> explicit stack, I was curious if you'd considered using a variant of
> Tarjan's SCC algorithm, such as the one described in
> http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.9019&rep=rep1&type=pdf
I must admit that I don't understand what you're referring to.
And seeing the lack of response by other people, I suspect I'm not alone.
Are you talking about the code in `mark_and_sweep_weak_table_contents`?
Maybe the `do .. while` loop?
Stefan
* Re: GC mark stack
2022-08-15 14:01 ` Stefan Monnier
@ 2022-08-15 14:51 ` Lynn Winebarger
2022-08-15 16:28 ` Stefan Monnier
0 siblings, 1 reply; 10+ messages in thread
From: Lynn Winebarger @ 2022-08-15 14:51 UTC (permalink / raw)
To: Stefan Monnier; +Cc: emacs-devel
On Mon, Aug 15, 2022 at 10:02 AM Stefan Monnier
<monnier@iro.umontreal.ca> wrote:
>
> Lynn Winebarger [2022-06-26 11:31:43] wrote:
> > I was reviewing alloc.c in the 28.1 source, and noted that it uses a
> > semi-naive computation of the transitive closure of the graph of live
> > data structures (weak hash tables is where I see it).
> > Since the fix to https://debbugs.gnu.org/cgi/bugreport.cgi?bug=54698
> > (commit 7a8798de95a57c8ff85f070075e0a0176b458578) moved to using an
> > explicit stack, I was curious if you'd considered using a variant of
> > Tarjan's SCC algorithm, such as the one described in
> > http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.9019&rep=rep1&type=pdf
>
> I must admit that I don't understand what you're referring to.
> And seeing the lack of response by other people, I suspect I'm not alone.
>
> Are you talking about the code in `mark_and_sweep_weak_table_contents`?
> Maybe the `do .. while` loop?
That is exactly it. Some of my characterization of the mark-sweep
algorithm (for non-weak references) in terms of SCC calculation was
incorrect - I don't really think about this day-to-day anymore. But
you can certainly calculate the SCCs of the "weakly-references*"
(Kleene * operation intended) relation in one pass while recursing
through the entries of a weak hash table, then perform the union on
the SCC representatives, then mark all elements of the live SCCs,
rather than traversing the graph over and over. The academic
treatment I referenced refines the algorithm used by DeRemer and
Pennello to optimize the time spent calculating unions. You can
preallocate the memory required for bookkeeping (determining the
traversal ordering, which is done with a stack in Tarjan's
formulation) by adding a field to each weak reference that records
the traversal order, which is then used to identify the SCC
representative (assuming you do the calculation on the graph of weak
references, to determine which are actually strong, before doing the
proper marking pass). Since the algorithm records the traversal order
for each element rather than using an explicit stack, comparing
elements (to determine the SCC representative of a particular node)
is constant-time, rather than the linear search of the stack
suggested in Tarjan's formulation. Ultimately you know there are only
2 sets in the SCC graph of the image of "root strongly-references*"
(hence you only need 1 bit for the SCC representative of each node in
the heap), but you need to determine the SCCs of the
"weakly-references*" relation as an intermediate step, and that can't
be reduced to "live or dead" a priori.
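For the "union on the SCC representatives" step, a standard union-find
over preallocated arrays would do. The sketch below only illustrates
that step; the array names and sizes are invented, and this is not
code from alloc.c:

#define MAX_WEAK_OBJECTS 1024

static int rep[MAX_WEAK_OBJECTS];            /* representative index    */
static unsigned char live[MAX_WEAK_OBJECTS]; /* liveness of the SCC root */

static void
init_reps (int n)
{
  for (int i = 0; i < n; i++)
    rep[i] = i;           /* each object starts as its own representative */
}

static int
find (int x)
{
  while (rep[x] != x)
    {
      rep[x] = rep[rep[x]];   /* path halving keeps chains short */
      x = rep[x];
    }
  return x;
}

static void
unite (int x, int y)
{
  x = find (x);
  y = find (y);
  if (x != y)
    {
      rep[y] = x;
      live[x] |= live[y];   /* a merged SCC is live if either part was */
    }
}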
Whether it is worth the effort to do so is another question.
Apologies to the extent I'm still getting some of the details wrong,
because I'm deliberately not working it out until I know whether my
employer will try to assert ownership of the result.
Lynn
* Re: GC mark stack
2022-08-15 14:51 ` Lynn Winebarger
@ 2022-08-15 16:28 ` Stefan Monnier
2022-08-15 17:52 ` Lynn Winebarger
0 siblings, 1 reply; 10+ messages in thread
From: Stefan Monnier @ 2022-08-15 16:28 UTC (permalink / raw)
To: Lynn Winebarger; +Cc: emacs-devel
>> Are you talking about the code in `mark_and_sweep_weak_table_contents`?
>> Maybe the `do .. while` loop?
> That is exactly it. Some of my characterization of the mark-sweep
> algorithm (for non-weak references) in terms of SCC calculation was
> incorrect - I don't really think about this day-to-day anymore. But,
This `do...while` loop can be a source of performance problems, indeed,
if we have many/large weak hash tables (i.e. many weak references) since
it's O(n²) for `n` weak references.
AFAIK, so far this has not been a problem (I suspect in practice it's
usually O(n) because we usually don't have many cycles between weak hash
tables), so any improvement would need to be "cheap" (both in terms of
code complexity and in terms of best-case CPU&heap cost).
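For reference, the shape of the loop being discussed is roughly the
following (a simplified schematic with invented types and helpers, not
the actual body of mark_and_sweep_weak_table_contents): each pass scans
every entry of every weak table, and marking a value can make entries
in already-scanned tables reachable, which forces another full pass -
hence the O(n²) worst case for n weak entries:

struct object;

struct weak_entry { struct object *key, *value; int done; };
struct weak_table { struct weak_entry *entries; int nentries;
                    struct weak_table *next; };

/* Hypothetical helpers: whether an entry is now kept alive (e.g. its
   key has been marked, for a key-weak table), and transitive marking. */
extern int entry_now_reachable (struct weak_entry *);
extern void mark_value_transitively (struct object *);

static void
mark_weak_tables_fixpoint (struct weak_table *tables)
{
  int progress;
  do
    {
      progress = 0;
      for (struct weak_table *t = tables; t; t = t->next)
        for (int i = 0; i < t->nentries; i++)
          {
            struct weak_entry *e = &t->entries[i];
            if (!e->done && entry_now_reachable (e))
              {
                /* Marking this value can make entries in tables we have
                   already scanned reachable, forcing another full pass. */
                mark_value_transitively (e->value);
                e->done = 1;
                progress = 1;
              }
          }
    }
  while (progress);
}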
> But, you can certainly calculate the SCCs of the "weakly-references *"
> (Kleene * operation intended) relation while recursing the entries of
> a weak hash table in one pass
I'm not sure that's the case because we don't know the full set of weak
references right from the start (or at least, in order to know the full
set, we'd have to traverse all the weak references before we know for
sure that they survive).
Stefan
* Re: GC mark stack
2022-08-15 16:28 ` Stefan Monnier
@ 2022-08-15 17:52 ` Lynn Winebarger
2022-08-15 18:17 ` Stefan Monnier
0 siblings, 1 reply; 10+ messages in thread
From: Lynn Winebarger @ 2022-08-15 17:52 UTC (permalink / raw)
To: Stefan Monnier; +Cc: emacs-devel
On Mon, Aug 15, 2022 at 12:28 PM Stefan Monnier
<monnier@iro.umontreal.ca> wrote:
>
> >> Are you talking about the code in `mark_and_sweep_weak_table_contents`?
> >> Maybe the `do .. while` loop?
> > That is exactly it. Some of my characterization of the mark-sweep
> > algorithm (for non-weak references) in terms of SCC calculation was
> > incorrect - I don't really think about this day-to-day anymore. But,
>
> This `do...while` loop can be a source of performance problems, indeed,
> if we have many/large weak hash tables (i.e. many weak references) since
> it's O(n²) for `n` weak references.
>
> AFAIK, so far this has not been a problem (I suspect in practice it's
> usually O(n) because we usually don't have many cycles between weak hash
> tables), so any improvement would need to be "cheap" (both in terms of
> code complexity and in terms of best-case CPU&heap cost).
>
> > But, you can certainly calculate the SCCs of the "weakly-references *"
> > (Kleene * operation intended) relation while recursing the entries of
> > a weak hash table in one pass
>
> I'm not sure that's the case because we don't know the full set of weak
> references right from the start (or at least, in order to know the full
> set, we'd have to traverse all the weak references before we know for
> sure that they survive).
If the loop can be expressed by a relational expression consisting of
a concrete relation (like "references"), composition, union, and
transitive closure, then there is a systematic way to construct an
algorithm of the type I described. And that loop sure looks like it
is calculating the transitive closure of such a relation.
Perhaps the confusion is because any marking of non-weak-hash-table
entries done during the calculation of the weak-reference SCCs would
only be to avoid infinite cycling, not to mark liveness? The only
meaningful marking would be of weak hash table entries, with the
traversal order/SCC representative.
As I mentioned, I purposefully have not sat down and worked through
the details required to construct the relational expression.
Lynn
* Re: GC mark stack
2022-08-15 17:52 ` Lynn Winebarger
@ 2022-08-15 18:17 ` Stefan Monnier
2022-08-16 1:53 ` Lynn Winebarger
0 siblings, 1 reply; 10+ messages in thread
From: Stefan Monnier @ 2022-08-15 18:17 UTC (permalink / raw)
To: Lynn Winebarger; +Cc: emacs-devel
> If the loop can be expressed by a relational expression consisting of
> a concrete relation (like "references"), composition, union, and
> transitive closure, then there is a systematic way to construct an
> algorithm of the type I described.
I think the core of the problem is the following:
In a "key weak" hash table, if a reachable object points (transitively) to
an object that is a key in that table, then the value associated with
that key becomes reachable. So conceptually we have a reference (an
edge) from the key *object* to the value, but this reference is not
actually represented in the heap.
More specifically, the code that does the marking (i.e. `mark_object`)
only knows about the key object itself: it doesn't know that this object
is a key in a weak hash table nor what is the associated value.
We could try and reify those conceptual references, e.g. in some kind of
auxiliary table, but the added cost of making `mark_object` consult this
table for every object it marks (at least while marking weak-refs)
seems prohibitive.
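To spell out what reifying those conceptual references would mean, here
is a hypothetical sketch (none of these names exist in alloc.c) of an
auxiliary table mapping key objects to the values their weak entries
would keep alive; the per-object lookup in the last function is the
cost that looks prohibitive:

#include <stdint.h>

struct object;

struct reified_edge
{
  struct object *key;          /* the weak key object        */
  struct object *value;        /* value it would keep alive  */
  struct reified_edge *next;
};

#define EDGE_BUCKETS 1024
static struct reified_edge *edge_table[EDGE_BUCKETS];

static unsigned
edge_hash (struct object *key)
{
  return (unsigned) (((uintptr_t) key >> 3) % EDGE_BUCKETS);
}

/* The marking code would have to call this for every object it marks
   while weak references are being resolved; that per-object lookup is
   the overhead in question.  */
static void
mark_reified_edges_for (struct object *key, void (*mark) (struct object *))
{
  for (struct reified_edge *e = edge_table[edge_hash (key)]; e; e = e->next)
    if (e->key == key)
      mark (e->value);
}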
Stefan
* Re: GC mark stack
2022-08-15 18:17 ` Stefan Monnier
@ 2022-08-16 1:53 ` Lynn Winebarger
2022-08-16 9:50 ` Lynn Winebarger
0 siblings, 1 reply; 10+ messages in thread
From: Lynn Winebarger @ 2022-08-16 1:53 UTC (permalink / raw)
To: Stefan Monnier; +Cc: emacs-devel
On Mon, Aug 15, 2022 at 2:17 PM Stefan Monnier <monnier@iro.umontreal.ca> wrote:
>
> > If the loop can be expressed by a relational expression consisting of
> > a concrete relation (like "references"), composition, union, and
> > transitive closure, then there is a systematic way to construct an
> > algorithm of the type I described.
>
> I think the core of the problem is the following:
>
> In a "key weak" hash table, if a reachable object points (transitively) to
> an object that is a key in that table, then the value associated with
> that key becomes reachable. So conceptually we have a reference (an
> edge) from the key *object* to the value, but this reference is not
> actually represented in the heap.
Right, so the weak part(s) of the hash table entry essentially invert
the strong-reference relation graph - I wasn't clear on this part. As
you surmised, explicit construction of the inverted relation (i.e.
reifying the back-reference) is the most straightforward method for
creating the graph to traverse with Tarjan's algorithm once inversion
is added to the set of operators allowed in the relational
expressions. I'm not 100% sure of the semantics of the weak hash
table reference types, so can you verify the following:
* Key - key strongly-references entry, entry strongly-references
  value [asserting the existence of 2 edges in the graph of the
  strong-references relation]
* Value - value strongly-references entry, entry strongly-references key
* KeyOrValue - (key strongly-references entry, entry
  strongly-references value) OR (value strongly-references entry,
  entry strongly-references key) - maybe equivalent to introducing
  intermediate entries for each case:
  * (key strongly-references key-entry, key-entry strongly-references
    entry, key-entry strongly-references value), (value
    strongly-references value-entry, value-entry strongly-references
    entry, value-entry strongly-references key)
* KeyAndValue - (key, value) strongly-references entry, where (key,
  value) is a special node that is only live if both key and value
  are (or entry itself is such a node)
I'm really unsure about those last two. I'm also not sure what the
right logic is if the hash function is for equal or eqv rather than
eq. Or, for that matter, whether a fixnum or float as an eq key
should always or never keep an entry live.
I would also be concerned about adding costs to code that does not
use weak references at all. The best I think you could do is add a
simple bit flag indicating whether an object has been used as the (or
a) target of the weak part of a weak hash table entry, or as the
target of a weak reference in general. Then at least it's only a
quick check in mark_object in the common case, with a more involved
lookup into the reified references only if the bit has been set.
I think there's probably room for such a bit in all the types that
would make sense as weak keys. The complexity may well be prohibitive
relative to any savings - I don't know.
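A sketch of what I mean, with entirely hypothetical names (and assuming
some reified-edge lookup like the one sketched earlier in the thread):

struct object;

struct object_header
{
  unsigned marked : 1;
  unsigned weak_target : 1;   /* set when used as a weak key/value */
};

/* Hypothetical accessors/helpers, not Emacs functions.  */
extern struct object_header *header_of (struct object *);
extern void mark_strong_references (struct object *);
extern void mark_reified_edges_for (struct object *,
                                    void (*) (struct object *));

static void
mark_object_sketch (struct object *obj)
{
  struct object_header *h = header_of (obj);
  if (h->marked)
    return;
  h->marked = 1;
  if (h->weak_target)                        /* rare, slower path */
    mark_reified_edges_for (obj, mark_object_sketch);
  mark_strong_references (obj);              /* ordinary traversal */
}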
I did a brief survey of the tree, in correspondence with Mattias
Engdegård, for appearances of weak hash tables in the "core":
* all_loaded_comp_units_h in src/comp.c
* composition_hash_table in src/composite.c - the comment is out of
date: it says the table is weak, but when actually created it is not
* advertised-signature-table in lisp/emacs-lisp/byte-run.el - used
in the byte compiler to detect an obsolete calling convention
* cl--generic-combined-method-memoization in lisp/emacs-lisp/cl-generic.el
* eieio--object-names in lisp/emacs-lisp/eieio.el
* ert--test-buffers in lisp/emacs-lisp/ert-x.el
* macroexp--warned in lisp/emacs-lisp/macroexp.el
* read-answer-map--memoize in lisp/emacs-lisp/map-ynp.el
* pcase--memoize in lisp/emacs-lisp/pcase.el
* undo-equiv-table in lisp/simple.el
* add-to-ordered-list in lisp/subr.el uses a local variable
"ordering" bound to a weak hash table
What would be interesting about the first one is whether, after
strings are purified before dumping, the weak references from NCU
values keep the filename strings used as keys alive, and whether
those are also (possibly weak) keys in some other hash table. I have
no idea if that's the case; it would just be a weird, probably
unintended side effect of pure space. I'm not sure how many of the
others are long-lived or potentially large enough to be a concern
(with 2100+ loaded NCUs, the first one is neither short-lived nor
particularly negligible in size). I'm not sure how you can determine
whether it's an issue in practice.
Lynn
* Re: GC mark stack
2022-08-16 1:53 ` Lynn Winebarger
@ 2022-08-16 9:50 ` Lynn Winebarger
0 siblings, 0 replies; 10+ messages in thread
From: Lynn Winebarger @ 2022-08-16 9:50 UTC (permalink / raw)
To: Stefan Monnier; +Cc: emacs-devel
On Mon, Aug 15, 2022 at 9:53 PM Lynn Winebarger <owinebar@gmail.com> wrote:
>
> On Mon, Aug 15, 2022 at 2:17 PM Stefan Monnier <monnier@iro.umontreal.ca> wrote:
> >
> > > If the loop can be expressed by a relational expression consisting of
> > > a concrete relation (like "references"), composition, union, and
> > > transitive closure, then there is a systematic way to construct an
> > > algorithm of the type I described.
> >
> > I think the core of the problem is the following:
> >
> > In a "key weak" hash table, if a reachable object points (transitively) to
> > an object that is a key in that table, then the value associated with
> > that key becomes reachable. So conceptually we have a reference (an
> > edge) from the key *object* to the value, but this reference is not
> > actually represented in the heap.
>
> Right, so the weak part(s) of the hash table entry essentially invert
> the strong reference relation graph - I wasn't clear on this part. As
> you surmised, explicit construction of the inverted relation (i.e.
> reifying the back-reference) is the most straightforward method for
> creating the graph to traverse with Tarjan's algorithm when inversion
> is added to the set of operators allowed in the relational
> expressions. I'm not 100% on the semantics of weak hash table
> reference types, so can you verify:
> * Key - key strongly-references entry, entry strongly-references
> value [ asserting the existence of 2 edges in the graph of the
> strong-references relation ]
> * Value - value strongly-references entry, entry strongly-references key
> * KeyOrValue - ( key strongly-references entry, entry
> strongly-references value ) OR ( value strongly-references entry,
> entry strongly-references key ) - maybe equivalent to introducing
> intermediate entries for each case:
> * (key strongly-references key-entry, key-entry
> strongly-references entry, key-entry strongly-references value),
> (value strongly-references value-entry, value-entry
> strongly-references entry, value-entry strongly-reference key)
> * KeyAndValue - (key, value) strongly-references entry, where (key,
> value) is a special node that is only live if both key and value are
> (or entry itself is such node)
>
Also every entry strongly references the hash table containing it,
if that's not obvious.
> I'm really unsure about those last two. I'm also not sure what the
> right logic is if the hash function is for equal or eqv and not eq.
> Or, that matter, whether a fixnum or float as an eq key should always
> or never keep an entry live.
> I would also be concerned about adding costs to code that does not use
> weak references at all. The best I think you could do is add a simple
> bit flag to indicate whether an object has been used as the (or a)
> target of the weak part of a weak hash table entry, or the target of a
> weak reference in general. Then at least it's only a quick check in
> mark_object in general, with more involved lookup into the reified
> references if it has been set.
> I think there's probably room for such a bit in all types that would
> make sense as weak keys. The complexity is probably prohibitive for
> any savings, I don't know.
>
> I did a brief survey of the tree in correspondence with Mattias
> Engdegård for appearances of weak hash tables in the "core":
>
> * all_loaded_comp_units_h in src/comp.c
> * composition_hash_table in src/composite.c - comment is out of
> date, says it's weak, but then when actually created is not
> * advertised-signature-table in lisp/emacs-lisp/byte-run.el - used
> in byte compiler to detect obsolete calling convention
> * cl--generic-combined-method-memoization in lisp/emacs-lisp/cl-generic.el
> * eieio--object-names in lisp/emacs-lisp/eieio.el
> * ert--test-buffers in lisp/emacs-lisp/ert-x.el
> * macro-exp--warned in lisp/emacs-lisp/macroexp.el
> * read-answer-map--memoize in lisp/emacs-lisp/map-ynp.el
> * pcase--memoize in lisp/emacs-lisp/pcase.el
> * undo-equiv-table in lisp/simple.el
> * add-to-ordered-list in lisp/subr.el uses a local variable
> ordering bound to a weak hash table
>
> What would be interesting on the first one is if, after purifying
> strings before dumping, the weak references from NCU values kept the
> filename strings used as keys alive, and those were also (possibly
> weak) keys in some other hash table. I have no idea if that's the
> case, it would just be a weird, probably unintended side effect of
> pure space. I'm not sure how many of the others are long-lived or
> potentially large enough to be a concern (with 2100+ loaded NCUs, the
> first one is neither short-lived nor particularly negligible in size)
> I'm not sure how you can determine whether it's an issue in practice.
An alternative might be to do a bit of bookkeeping for the reified
references when creating or modifying weak entries that could create
cycles, adding them to an explicit set (rather than to a list of weak
hash tables). Then, if there are no such entries, you could avoid the
loop entirely. Or at least the loop would be proportional in size to
the number of entries that might require traversal, rather than
looking at every entry of every reachable weak hash table.
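Roughly, the bookkeeping I have in mind would look like this
(hypothetical names; error handling omitted for brevity):

#include <stdlib.h>

struct weak_entry;   /* whatever represents a weak table entry */

struct weak_entry_set
{
  struct weak_entry **items;
  size_t count, capacity;
};

static struct weak_entry_set maybe_cyclic;

/* Called when a weak entry is created or modified in a way that could
   participate in a cycle.  */
static void
note_possibly_cyclic_entry (struct weak_entry *e)
{
  if (maybe_cyclic.count == maybe_cyclic.capacity)
    {
      maybe_cyclic.capacity
        = maybe_cyclic.capacity ? 2 * maybe_cyclic.capacity : 64;
      maybe_cyclic.items = realloc (maybe_cyclic.items,
                                    maybe_cyclic.capacity
                                    * sizeof *maybe_cyclic.items);
    }
  maybe_cyclic.items[maybe_cyclic.count++] = e;
}

/* At GC time, the fixpoint loop would then walk only
   maybe_cyclic.items (maybe_cyclic.count entries) instead of every
   entry of every reachable weak hash table.  */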
Lynn