* vhash speed thread safeness
@ 2013-10-18 16:21 Stefan Israelsson Tampe
2013-10-29 12:34 ` Ludovic Courtès
2014-03-24 20:59 ` Andy Wingo
0 siblings, 2 replies; 6+ messages in thread
From: Stefan Israelsson Tampe @ 2013-10-18 16:21 UTC (permalink / raw)
To: guile-devel
Hi all,
I did some tests with a C-based vhash implementation; it's possible to
increase the speed by 30x compared to the current vlist implementation in
Guile. It would be possible to get this speed today if one implemented
vhash-assoc as a VM op. Anyway, using the source as a standard library
will get you about a 10x speed increase.
Another pressing issue is that vhashes are not thread safe. A further
downside is that if many versions are spawned from each vhash, the good
theoretical properties may not be realized. Anyway, here is a suggestion
for how to make them thread safe (a rough Scheme sketch of the core idea
follows the list):
1. Keep two numbers, seq-id and thread-id, on the block:
thread-id is a unique id for a thread, and
seq-id is a unique id for a state.
2. At thread creation, when thread 1 spawns thread 2, increment
thread 1's seq-id, allocate a new thread-id for thread 2, and set its
seq-id to zero.
3. At vhash-cons, if the vhash's seq-id and thread-id do not match the
current thread's, we must create a new block of size k pointing to the
old vhash.
4. This is thread safe and will not waste space, but it can create
problems: with lots of small blocks linked together we essentially get a
sort of linked list, and the lookup cost becomes O(T), where T is the
number of threads.
5. If we get an issue with lots of small blocks linked together, we
could detect this and issue a "remake" in order to speed up access times
(see the remake sketch at the end of this message). This problem is not
unique to threading; it can also be seen in naive uses of vhashes by
applications that backtrack and use old copies of vhashes. So some kind
of automatic compaction would be needed.
6. I plan to add a version of vhashes to guile-log as a replacement for
the assoc list. There I will tag vhash blocks as referenced; when
backtracking, if the block is not referenced, it will simply reset the
offset pointer and reuse the block frame instead of linking another one
onto it. Some kind of reference-counting technique would be used here.
In guile-log we have good control over when we reference for storage, so
this solution would probably work just fine.
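
To make steps 1-3 concrete, here is a rough sketch in Scheme of the
ownership check. The <block> record, current-thread-id and
current-seq-id are illustrative names, not the real vlist internals, and
a real vhash-cons would return a fresh handle so older versions stay
valid; this only shows the decision logic:

(use-modules (srfi srfi-9))

;; Hypothetical block layout: every block remembers which thread/state
;; created it (steps 1 and 2).
(define-record-type <block>
  (make-block thread-id seq-id entries offset parent)
  block?
  (thread-id block-thread-id)
  (seq-id    block-seq-id)
  (entries   block-entries)                  ; vector of key/value pairs
  (offset    block-offset set-block-offset!) ; next free slot
  (parent    block-parent))                  ; older block, or #f

;; Per-thread counters; the spawning thread bumps its seq-id and gives
;; the child a fresh thread-id with seq-id 0 (step 2).
(define current-thread-id (make-parameter 0))
(define current-seq-id    (make-parameter 0))

(define (owned? block)
  (and (= (block-thread-id block) (current-thread-id))
       (= (block-seq-id block) (current-seq-id))))

;; Step 3: extend a block in place only when we own it; otherwise chain
;; a fresh block of size k in front of the old vhash.
(define (block-cons key value block k)
  (if (and (owned? block)
           (< (block-offset block) (vector-length (block-entries block))))
      (let ((i (block-offset block)))
        (vector-set! (block-entries block) i (cons key value))
        (set-block-offset! block (1+ i))
        block)
      (block-cons key value
                  (make-block (current-thread-id) (current-seq-id)
                              (make-vector (max 1 k) #f) 0 block)
                  k)))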
With all this, implementing good thread-safe idioms in guile-log would
be quite possible.
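
As for the "remake" in step 5, one simple way to express it with the
existing (ice-9 vlist) API is to fold the fragmented vhash into a fresh
one, so all entries land in large contiguous blocks again; vhash-remake
is just an illustrative name:

(use-modules (ice-9 vlist))

;; Rebuild VH from its oldest entry to its newest, so shadowed keys keep
;; the same shadowing order in the fresh vhash.
(define (vhash-remake vh)
  (vhash-fold-right (lambda (key value result)
                      (vhash-cons key value result))
                    vlist-null
                    vh))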
Regards
Stefan
* Re: vhash speed thread safeness
2013-10-18 16:21 vhash speed thread safeness Stefan Israelsson Tampe
@ 2013-10-29 12:34 ` Ludovic Courtès
2013-10-29 14:21 ` Stefan Israelsson Tampe
2014-03-24 20:59 ` Andy Wingo
1 sibling, 1 reply; 6+ messages in thread
From: Ludovic Courtès @ 2013-10-29 12:34 UTC (permalink / raw)
To: guile-devel
Hi, Stefan,
Stefan Israelsson Tampe <stefan.itampe@gmail.com> skribis:
> I did some tests with a C-based vhash implementation; it's possible to
> increase the speed by 30x compared to the current vlist implementation in
> Guile. It would be possible to get this speed today if one implemented
> vhash-assoc as a VM op. Anyway, using the source as a standard library
> will get you about a 10x speed increase.
As we discussed, I don’t really like the idea of implementing that much
in C.
Also, I’d really like the optimization effort to be driven by actual
profiling data, such as C-level profiles of repeated calls to the
current ‘vhash-assoc’. Have you tried that?
> Another pressing issue is that vhashes are not thread safe.
Section 2.8 of Bagwell’s paper proposes a simple solution. All that is
missing AFAICS is an atomic test-and-set VM op to implement it (which
may also be useful in other places.)
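
If I read it correctly, the idea is roughly the following. The sketch
uses the atomic boxes from (ice-9 atomic) as a stand-in for such a VM
op, and the <blk> record is only an illustrative layout, not the real
vlist one; a complete version would also have to order the slot write
with respect to concurrent readers:

(use-modules (ice-9 atomic) (srfi srfi-9))

(define-record-type <blk>
  (make-blk entries used parent)
  blk?
  (entries blk-entries)    ; vector of key/value pairs
  (used    blk-used)       ; atomic box holding the next free slot
  (parent  blk-parent))    ; previous block, or #f

(define (blk-cons key value blk)
  (let* ((box  (blk-used blk))
         (n    (atomic-box-ref box))
         (size (vector-length (blk-entries blk))))
    (if (and (< n size)
             (= n (atomic-box-compare-and-swap! box n (1+ n))))
        ;; We won slot N; no other thread can claim it now.
        (begin
          (vector-set! (blk-entries blk) n (cons key value))
          blk)
        ;; Lost the race, or the block is full: chain a new block.
        (blk-cons key value
                  (make-blk (make-vector (max 1 (* 2 size)) #f)
                            (make-atomic-box 0)
                            blk)))))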
What do you think of this approach?
Thanks,
Ludo’.
* Re: vhash speed thread safeness
2013-10-29 12:34 ` Ludovic Courtès
@ 2013-10-29 14:21 ` Stefan Israelsson Tampe
2013-10-29 17:54 ` Ludovic Courtès
0 siblings, 1 reply; 6+ messages in thread
From: Stefan Israelsson Tampe @ 2013-10-29 14:21 UTC (permalink / raw)
To: Ludovic Courtès; +Cc: guile-devel
On Tue, Oct 29, 2013 at 1:34 PM, Ludovic Courtès <ludo@gnu.org> wrote:
> Hi, Stefan,
>
> Stefan Israelsson Tampe <stefan.itampe@gmail.com> skribis:
>
> > I did some tests with a C-based vhash implementation; it's possible to
> > increase the speed by 30x compared to the current vlist implementation in
> > Guile. It would be possible to get this speed today if one implemented
> > vhash-assoc as a VM op. Anyway, using the source as a standard library
> > will get you about a 10x speed increase.
>
> As we discussed, I don’t really like the idea of implementing that much
> in C.
>
Me neither, but it is good to see that in cache-friendly cases we can
improve the situation by 30x. It gives us a goal to strive for. Also,
the main intention is to use vhashes for guile-log. For this we can note:
1. With this I managed to make an indexer that can find any ordered set
of matches matching a 10000-cons pattern (x, y) in about 0.5 µs, e.g.
matches for the forms (x y), (_ y), (x _), (_ _). The fun part is that
the indexer indexes any list of lists of lists, including ones with some
elements being variables. This is cool because if you want to solve
first-order logic problems you will sometimes have many hundreds of
matching patterns representing the compiled rules, and hence you get a
speedup when searching for rule matches.
2. A thread-safe version of vhashes could be used for Kanren or
guile-log to improve scalability. In order to use this sanely we need to
be able to truncate the vhash, or else the lookup complexity will not be
that great for many applications. A truncation means that one needs to
check whether the state has been stored or not, and in case it has been
stored, do a no-op.
> Also, I’d really like the optimization effort to be driven by actual
> profiling data, such as C-level profiles of repeated calls to the
> current ‘vhash-assoc’. Have you tried that?
No. But a lot can be gained by unpacking the vectors and using pure C
vector lookups. I think we will get that in the second or third version
of the native compiler; at least it's too much to ask to get it in the
first one, no? Then there's the usual overhead of VM jumps, which is
probably the main contributor.
> > Another pressing issue is that vhashes are not thread safe.
>
> Section 2.8 of Bagwell’s paper proposes a simple solution. All that is
> missing AFAICS is an atomic test-and-set VM op to implement it (which
> may also be useful in other places.)
>
> What do you think of this approach?
For vlists it's probably a good idea; I don't know if it's enough for
vhashes though. Maybe you need a mutex. But lock overhead will be
significant, and I suspect that it is often faster to start afresh in
each thread. Still, until we get a well-optimized native compiler, lock
overhead is not that pressing.
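
Just to make the cost concrete, the mutex route would look roughly like
this minimal sketch (one global lock serializing every access to a
shared vhash; the names here are only illustrative):

(use-modules (ice-9 threads) (ice-9 vlist))

(define vhash-lock (make-mutex))
(define shared-vhash vlist-null)

;; Every writer and reader pays for the lock, which is exactly the
;; overhead worried about above.
(define (shared-vhash-cons! key value)
  (with-mutex vhash-lock
    (set! shared-vhash (vhash-cons key value shared-vhash))))

(define (shared-vhash-ref key)
  (with-mutex vhash-lock
    (vhash-assoc key shared-vhash)))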
BTW, I have a good-enough solution implemented now that I will use for
parallelizing logic programs; for now, though, I will concentrate on
getting the iso-prolog into shape.
> Thanks,
> Ludo’.

/Stefan
* Re: vhash speed thread safeness
2013-10-29 14:21 ` Stefan Israelsson Tampe
@ 2013-10-29 17:54 ` Ludovic Courtès
2013-10-29 18:50 ` Ian Price
0 siblings, 1 reply; 6+ messages in thread
From: Ludovic Courtès @ 2013-10-29 17:54 UTC (permalink / raw)
To: Stefan Israelsson Tampe; +Cc: guile-devel
Stefan Israelsson Tampe <stefan.itampe@gmail.com> skribis:
> On Tue, Oct 29, 2013 at 1:34 PM, Ludovic Courtès <ludo@gnu.org> wrote:
>
>> Hi, Stefan,
>>
>> Stefan Israelsson Tampe <stefan.itampe@gmail.com> skribis:
>>
>> > I did some tests with a C-based vhash implementation; it's possible to
>> > increase the speed by 30x compared to the current vlist implementation in
>> > Guile. It would be possible to get this speed today if one implemented
>> > vhash-assoc as a VM op. Anyway, using the source as a standard library
>> > will get you about a 10x speed increase.
>>
>> As we discussed, I don’t really like the idea of implementing that much
>> in C.
>>
> Me neither, but it is good to see that in cache-friendly cases we can
> improve the situation by 30x. It gives us a goal to strive for. Also,
> the main intention is to use vhashes for guile-log. For this we can note:
OK.
[...]
>> > Another pressing issue is that vhashes are not thread safe.
>>
>> Section 2.8 of Bagwell’s paper proposes a simple solution. All that is
>> missing AFAICS is an atomic test-and-set VM op to implement it (which
>> may also be useful in other places.)
>>
>> What do you think of this approach?
>
>
> For vlists it's probably a good idea; I don't know if it's enough for
> vhashes though.
Oooh, right, sorry for overlooking that.
> Maybe you need a mutex. But lock overhead will be significant
Surely, especially if it’s a fat mutex.
Hmm hmm. Of course that could be an argument for doing some C
(primitives, not VM ops), but looking at ‘%vhash-assoc’, that would
clearly mean reimplementing pretty much all of the vlist + vhash in C,
which sucks.
I wonder if there’s some other data structure with similar properties
that doesn’t have the thread-safety issue. Maybe Ian has an idea?
The weight-balanced trees in MIT/GNU Scheme look interesting:
http://www.gnu.org/software/mit-scheme/documentation/mit-scheme-ref/Weight_002dBalanced-Trees.html
Thoughts?
Thanks,
Ludo’.
* Re: vhash speed thread safeness
2013-10-18 16:21 vhash speed thread safeness Stefan Israelsson Tampe
2013-10-29 12:34 ` Ludovic Courtès
@ 2014-03-24 20:59 ` Andy Wingo
1 sibling, 0 replies; 6+ messages in thread
From: Andy Wingo @ 2014-03-24 20:59 UTC (permalink / raw)
To: Stefan Israelsson Tampe; +Cc: guile-devel
Hi,
Just a late observation :)
On Fri 18 Oct 2013 18:21, Stefan Israelsson Tampe <stefan.itampe@gmail.com> writes:
> I did some tests with a C-based vhash implementation; it's possible to
> increase the speed by 30x compared to the current vlist implementation in
> Guile.
If you are using stable-2.0 you are using a bad hash function. Master
has a much better one, though we should still replace it with SipHash or
something soon. Anyway, it can be quite a problem for vhashes.
Incidentally, this test case:
(use-modules (ice-9 vlist))
(define vl
  (let lp ((n 0))
    (if (= n 26)
        vlist-null
        (vhash-consq (string->symbol
                      (string (integer->char (+ (char->integer #\a) n))))
                     n
                     (lp (1+ n))))))
(let lp ((n 0)) (when (< n #e1e6) (vhash-assq 'j vl) (lp (1+ n))))
takes 1.5s in stable-2.0 and 0.17s in master. The hash function isn't
the only difference of course.
Andy
--
http://wingolog.org/
Thread overview: 6+ messages
2013-10-18 16:21 vhash speed thread safeness Stefan Israelsson Tampe
2013-10-29 12:34 ` Ludovic Courtès
2013-10-29 14:21 ` Stefan Israelsson Tampe
2013-10-29 17:54 ` Ludovic Courtès
2013-10-29 18:50 ` Ian Price
2014-03-24 20:59 ` Andy Wingo