unofficial mirror of guile-devel@gnu.org 
From: Andy Wingo <wingo@pobox.com>
To: ludo@gnu.org (Ludovic Courtès)
Cc: guile-devel@gnu.org
Subject: Re: Fluids
Date: Sun, 14 Feb 2010 16:50:57 +0100	[thread overview]
Message-ID: <m33a13byu6.fsf@pobox.com> (raw)
In-Reply-To: <87eiknx4zl.fsf@gnu.org> ("Ludovic Courtès"'s message of "Sun, 14 Feb 2010 15:32:30 +0100")

Heya,

On Sun 14 Feb 2010 15:32, ludo@gnu.org (Ludovic Courtès) writes:

> Andy Wingo <wingo@pobox.com> writes:
>
>> But you can't / shouldn't make a new fluid every time you enter a
>> `catch', because currently fluids are never garbage collected! We really
>> need to fix this. I think it's a 1.9 regression.
>
> Indeed.  We should use a weak vector or some such instead of the current
> scm_gc_malloc’d array.
>
>> To do so effectively, I think you'd need to make fluid objects store
>> their values directly, so that the GC doesn't have to go through hoops
>> to know that they're collectable. Ideally they would get their values
>> via pthread_getspecific; but that would defeat some of our machinery
>> around "dynamic states" (not a very useful concept IMO), and the GC would need
>> help. Actually it would be nice if libgc supported thread-local
>> allocations. (Does it?)
>
> I think dynamically allocating thread-local storage can only be done
> with pthread_key_create ().  Libgc knows how to scan pthread keys.  So
> we could have fluids be wrappers around pthread keys and fluid-ref would
> boil down to pthread_getspecific ().  Then we wouldn’t even need the
> fluid number hack.
>
> Is it what you had in mind?

Yes, this is what I had in mind; I was not aware that libgc could scan
thread-specific variables. This is great news, I think.

My only qualm concerns the number of pthread keys we would need. My
current Emacs session has about 15K functions and 7K variables; does the
pthread_key mechanism scale to that many thread-local variables?

Also we would probably still need a weak vector of all fluids, to
support dynamic states. But this is well-supported by libgc.

Andy
-- 
http://wingolog.org/





Thread overview: 16+ messages
2010-02-14 12:33 catch, throw, prompt, control, fluids, garbage collection Andy Wingo
2010-02-14 14:32 ` Fluids Ludovic Courtès
2010-02-14 15:50   ` Andy Wingo [this message]
2010-02-14 19:09     ` Fluids Ken Raeburn
2010-03-02 23:52   ` Fluids Ludovic Courtès
2010-03-03 12:29     ` Fluids Andy Wingo
2010-03-03 13:09       ` Fluids Ludovic Courtès
2010-03-05 17:24       ` Fluids Ludovic Courtès
2010-02-14 14:45 ` Plan for the next release Ludovic Courtès
2010-02-14 15:54   ` Andy Wingo
2010-02-15 22:07 ` catch, throw, prompt, control, fluids, garbage collection Andy Wingo
2010-02-18 22:35   ` Andy Wingo
2010-02-25  0:00     ` Andy Wingo
2010-02-26 12:27       ` Andy Wingo
2010-02-28 22:16         ` Neil Jerram
2010-07-17 10:15           ` Andy Wingo
