unofficial mirror of guile-devel@gnu.org 
* The 2.0.9 VM cores in enqueue (threads.c:309)
@ 2013-04-27 19:57 Andrew Gaylard
  2013-04-28  1:07 ` Daniel Hartwig
  2013-04-28 15:28 ` The 2.0.9 VM cores in enqueue (threads.c:309) Ludovic Courtès
  0 siblings, 2 replies; 9+ messages in thread
From: Andrew Gaylard @ 2013-04-27 19:57 UTC (permalink / raw)
  To: guile-devel


Hi guile hackers,

I'm experiencing the VM coring in a repeatable manner.

My application launches a number of threads, which pass objects
from one thread to another via queues (ice-9 q).  To ensure thread-
safety, the queues are actually accessed via (container async-queue)
from guile-lib-0.2.2; see:

http://git.savannah.gnu.org/gitweb/?p=guile-lib.git;a=blob;f=src/container/async-queue.scm;h=82841f12eefe42ef6dacbbca8f0057723964323b;hb=HEAD

The idea is that if one thread adds an object to a queue, while another
is taking an object off a queue, a mutex will (or should) ensure that only
one thread alters the underlying queue objects at a time.
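
In C terms this is the usual discipline: every queue operation runs
with the queue's own mutex held.  A minimal sketch of the pattern,
assuming a plain singly-linked list (the tsq_* names are illustrative,
not from guile-lib):

#include <pthread.h>
#include <stddef.h>

struct tsq_node { void *item; struct tsq_node *next; };

struct tsq
{
  pthread_mutex_t lock;           /* guards head and tail together */
  struct tsq_node *head, *tail;
};

static void
tsq_push (struct tsq *q, struct tsq_node *n)
{
  n->next = NULL;
  pthread_mutex_lock (&q->lock);
  if (q->tail)
    q->tail->next = n;            /* append after the old final node */
  else
    q->head = n;                  /* queue was empty */
  q->tail = n;                    /* tail always tracks the final node */
  pthread_mutex_unlock (&q->lock);
}

static struct tsq_node *
tsq_pop (struct tsq *q)
{
  struct tsq_node *n;
  pthread_mutex_lock (&q->lock);
  n = q->head;
  if (n != NULL)
    {
      q->head = n->next;
      if (q->head == NULL)
        q->tail = NULL;           /* emptied: head and tail agree again */
    }
  pthread_mutex_unlock (&q->lock);
  return n;
}

int
main (void)
{
  struct tsq q = { PTHREAD_MUTEX_INITIALIZER, NULL, NULL };
  struct tsq_node n1 = { "a", NULL }, n2 = { "b", NULL };
  tsq_push (&q, &n1);
  tsq_push (&q, &n2);
  return tsq_pop (&q) != &n1;     /* expect FIFO order */
}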

I've built guile with --enable-debug, and compiled with -ggdb3.
After the VM cores, gdb reveals this (apologies for the long lines):

(gdb) bt
#0  0xffffffff7e77b5f4 in enqueue (q=0x1010892c0, t=0x1018aac20) at 
threads.c:309
#1  0xffffffff7e77bc20 in block_self (queue=0x1010892c0, 
sleep_object=0x1010892d0, mutex=0x1019eef00, waittime=0x0) at threads.c:452
#2  0xffffffff7e77df50 in fat_mutex_lock (mutex=0x1010892d0, 
timeout=0x0, owner=0x904, ret=0xffffffff734f92ac) at threads.c:1473
#3  0xffffffff7e77e0e0 in scm_lock_mutex_timed (m=0x1010892d0, 
timeout=0x904, owner=0x904) at threads.c:1513
#4  0xffffffff7e78e9f4 in vm_regular_engine (vm=0x1018aabd0, 
program=0xffffffff7e94a4d0 <scm_lock_mutex_timed__subr_raw_cell>, 
argv=0xffffffff734fa2c0, nargs=3) at vm-i-system.c:858
#5  0xffffffff7e7b2ea0 in scm_c_vm_run (vm=0x1018aabd0, 
program=0x1003a3720, argv=0xffffffff734fa2a8, nargs=3) at vm.c:753
#6  0xffffffff7e68b8ac in scm_call_3 (proc=0x1003a3720, arg1=0x404, 
arg2=0x101980cc0, arg3=0x1011fac40) at eval.c:500
#7  0xffffffff7e7810c0 in scm_catch (key=0x404, thunk=0x101980cc0, 
handler=0x1011fac40) at throw.c:73
#8  0xffffffff7e77cc60 in really_launch (d=0xffffffff7fffa6f0) at 
threads.c:1009
#9  0xffffffff7e67b390 in c_body (d=0xffffffff734fb9b8) at 
continuations.c:511
#10 0xffffffff7e781564 in apply_catch_closure (clo=0x101fd30c0, 
args=0x304) at throw.c:146
#11 0xffffffff7e73cc6c in apply_1 (smob=0x101fd30c0, a=0x304) at smob.c:142
#12 0xffffffff7e78e9b0 in vm_regular_engine (vm=0x1018aabd0, 
program=0x1002c8700, argv=0xffffffff734fb690, nargs=2) at vm-i-system.c:855
#13 0xffffffff7e7b2ea0 in scm_c_vm_run (vm=0x1018aabd0, 
program=0x1003a3720, argv=0xffffffff734fb670, nargs=4) at vm.c:753
#14 0xffffffff7e68b91c in scm_call_4 (proc=0x1003a3720, arg1=0x404, 
arg2=0x101fd30c0, arg3=0x101fd30a0, arg4=0x101fd3080) at eval.c:507
#15 0xffffffff7e7811f4 in scm_catch_with_pre_unwind_handler (key=0x404, 
thunk=0x101fd30c0, handler=0x101fd30a0, pre_unwind_handler=0x101fd3080) 
at throw.c:86
#16 0xffffffff7e781664 in scm_c_catch (tag=0x404, 
body=0xffffffff7e67b364 <c_body>, body_data=0xffffffff734fb9b8, 
handler=0xffffffff7e67b3ac <c_handler>, handler_data=0xffffffff734fb9b8, 
pre_unwind_handler=0xffffffff7e67b438 <pre_unwind_handler>, 
pre_unwind_handler_data=0x1002ccaf0) at throw.c:213
#17 0xffffffff7e67b14c in scm_i_with_continuation_barrier 
(body=0xffffffff7e67b364 <c_body>, body_data=0xffffffff734fb9b8, 
handler=0xffffffff7e67b3ac <c_handler>, handler_data=0xffffffff734fb9b8, 
pre_unwind_handler=0xffffffff7e67b438 <pre_unwind_handler>, 
pre_unwind_handler_data=0x1002ccaf0) at continuations.c:449
#18 0xffffffff7e67b52c in scm_c_with_continuation_barrier 
(func=0xffffffff7e77cb74 <really_launch>, data=0xffffffff7fffa6f0) at 
continuations.c:545
#19 0xffffffff7e77c924 in with_guile_and_parent 
(base=0xffffffff734fbb50, data=0xffffffff734fbc18) at threads.c:908
#20 0xffffffff7e32e138 in GC_call_with_stack_base () from 
/opt/cs/components/3rd/bdw-gc/7.2.7e16628s16377h0398/lib/libgc.so.1
#21 0xffffffff7e77ca40 in scm_i_with_guile_and_parent 
(func=0xffffffff7e77cb74 <really_launch>, data=0xffffffff7fffa6f0, 
parent=0x100272d80) at threads.c:951
#22 0xffffffff7e77cce0 in launch_thread (d=0xffffffff7fffa6f0) at 
threads.c:1019
#23 0xffffffff7e337e00 in GC_inner_start_routine () from 
/opt/cs/components/3rd/bdw-gc/7.2.7e16628s16377h0398/lib/libgc.so.1
#24 0xffffffff7e32e138 in GC_call_with_stack_base () from 
/opt/cs/components/3rd/bdw-gc/7.2.7e16628s16377h0398/lib/libgc.so.1
#25 0xffffffff7e33ba64 in GC_start_routine () from 
/opt/cs/components/3rd/bdw-gc/7.2.7e16628s16377h0398/lib/libgc.so.1
#26 0xffffffff7c9d8b04 in _lwp_start () from /lib/64/libc.so.1

(gdb) list
304       SCM c = scm_cons (t, SCM_EOL);
305       SCM_CRITICAL_SECTION_START;
306       if (scm_is_null (SCM_CDR (q)))
307         SCM_SETCDR (q, c);
308       else
309         SCM_SETCDR (SCM_CAR (q), c);
310       SCM_SETCAR (q, c);
311       SCM_CRITICAL_SECTION_END;
312       return c;
313     }
(gdb) p q
$21 = (SCM) 0x1010892c0
(gdb) p c
$22 = (SCM) 0x103aa4ad0
(gdb) p SCM_IMP(q)
$23 = 0
(gdb) p SCM_IMP(c)
$24 = 0
(gdb) p SCM2PTR(q)
$25 = (scm_t_cell *) 0x1010892c0
(gdb) p *SCM2PTR(q)
$26 = {word_0 = 0x304, word_1 = 0x1039c4c20}
(gdb) p SCM2PTR(c)
$27 = (scm_t_cell *) 0x103aa4ad0
(gdb) p *SCM2PTR(c)
$28 = {word_0 = 0x1018aac20, word_1 = 0x304}
(gdb) p SCM_CAR (q)
$29 = (SCM) 0x304
(gdb) p SCM_CDR (c)
$30 = (SCM) 0x304
(gdb) p SCM_SETCDR (SCM_CAR (q), c)
Cannot access memory at address 0x30c

Those 0x304 values look dodgy to me, and explain why the
SCM_SETCDR causes an invalid memory access.

This problem happens on Solaris 10, both on SPARC and x86.
The failure mode is identical on both. I've been unable to replicate
the problem on Linux/x86 and Linux/x86_64.

The details are:

- gcc-4.7.2
- bdw-gc-7.2d
- guile-2.0.9

How do I fix this?

Is this even related to the use of queues, or could the problem
be due to heap corruption that has occurred somewhere else
in the program, miles away from the enqueue function?

Is there any other information I should provide that will help the
guile hackers track this down?

Thanks in advance,
-- 
Andrew



* Re: The 2.0.9 VM cores in enqueue (threads.c:309)
  2013-04-27 19:57 The 2.0.9 VM cores in enqueue (threads.c:309) Andrew Gaylard
@ 2013-04-28  1:07 ` Daniel Hartwig
  2013-04-29  6:56   ` Andrew Gaylard
  2013-04-28 15:28 ` The 2.0.9 VM cores in enqueue (threads.c:309) Ludovic Courtès
  1 sibling, 1 reply; 9+ messages in thread
From: Daniel Hartwig @ 2013-04-28  1:07 UTC (permalink / raw)
  To: Andrew Gaylard; +Cc: guile-devel

On 28 April 2013 03:57, Andrew Gaylard <ag@computer.org> wrote:
> Those 0x304 values look dodgy to me, and explain why the
> SCM_SETCDR causes an invalid memory access.
>

0x304 is SCM_EOL.
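
It is one of the tagged "iflag" immediates from libguile/tags.h, which
is why the small values in your dump (0x304, 0x404, 0x904) all share
the low bits 0x04.  A quick way to confirm on your own build -- a
sketch, with expected outputs taken from my reading of the 2.0.x
headers, so do verify locally:

#include <libguile.h>
#include <stdio.h>

/* Print the raw bits of the immediates seen in the backtrace.  On a
   2.0.x build these should come out as 0x304 ('()), 0x404 (#t) and
   0x904 (SCM_UNDEFINED). */
int
main (void)
{
  printf ("SCM_EOL       = %#lx\n", (unsigned long) SCM_UNPACK (SCM_EOL));
  printf ("SCM_BOOL_T    = %#lx\n", (unsigned long) SCM_UNPACK (SCM_BOOL_T));
  printf ("SCM_UNDEFINED = %#lx\n", (unsigned long) SCM_UNPACK (SCM_UNDEFINED));
  return 0;
}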

>
> Is this even related to the use of queues,

Not (ice-9 q) or (container async-queue).  The ‘enqueue’ procedure
here is internal to the threads module.




* Re: The 2.0.9 VM cores in enqueue (threads.c:309)
  2013-04-27 19:57 The 2.0.9 VM cores in enqueue (threads.c:309) Andrew Gaylard
  2013-04-28  1:07 ` Daniel Hartwig
@ 2013-04-28 15:28 ` Ludovic Courtès
  1 sibling, 0 replies; 9+ messages in thread
From: Ludovic Courtès @ 2013-04-28 15:28 UTC (permalink / raw)
  To: Andrew Gaylard; +Cc: guile-devel

Hi,

Andrew Gaylard <ag@computer.org> writes:

> (gdb) bt
> #0  0xffffffff7e77b5f4 in enqueue (q=0x1010892c0, t=0x1018aac20) at
> threads.c:309
> #1  0xffffffff7e77bc20 in block_self (queue=0x1010892c0,
> sleep_object=0x1010892d0, mutex=0x1019eef00, waittime=0x0) at
> threads.c:452
> #2  0xffffffff7e77df50 in fat_mutex_lock (mutex=0x1010892d0,
> timeout=0x0, owner=0x904, ret=0xffffffff734f92ac) at threads.c:1473

[...]

> This problem happens on Solaris 10, both on SPARC and x86.
> The failure mode is identical on both. I've been unable to replicate
> the problem on Linux/x86 and Linux/x86_64.
>
> The details are:
>
> - gcc-4.7.2
> - bdw-gc-7.2d
> - guile-2.0.9

Could you post this to bug-guile@gnu.org (where it’ll be archived, so we
don’t lose it), along with a test case that reproduces the problem?

I can try to reproduce it and investigate on an OpenCSW box later.

Thanks,
Ludo’.




* Re: The 2.0.9 VM cores in enqueue (threads.c:309)
  2013-04-28  1:07 ` Daniel Hartwig
@ 2013-04-29  6:56   ` Andrew Gaylard
  2013-04-29  9:16     ` Daniel Hartwig
  2013-04-29 10:10     ` Mark H Weaver
  0 siblings, 2 replies; 9+ messages in thread
From: Andrew Gaylard @ 2013-04-29  6:56 UTC (permalink / raw)
  To: guile-devel

On 04/28/13 03:07, Daniel Hartwig wrote:
> On 28 April 2013 03:57, Andrew Gaylard <ag@computer.org> wrote:
>> Those 0x304 values look dodgy to me, and explain why the
>> SCM_SETCDR causes an invalid memory access.
>>
> 0x304 is SCM_EOL.
Hi Daniel,

Thanks for the feedback.

Are you saying that the 0x304 values are fine, and the problem lies 
elsewhere?
(e.g. heap corruption, ...)

-- 
Andrew





* Re: The 2.0.9 VM cores in enqueue (threads.c:309)
  2013-04-29  6:56   ` Andrew Gaylard
@ 2013-04-29  9:16     ` Daniel Hartwig
  2013-04-29 10:10     ` Mark H Weaver
  1 sibling, 0 replies; 9+ messages in thread
From: Daniel Hartwig @ 2013-04-29  9:16 UTC (permalink / raw)
  To: Andrew Gaylard; +Cc: guile-devel

On 29 April 2013 14:56, Andrew Gaylard <ag@computer.org> wrote:
> On 04/28/13 03:07, Daniel Hartwig wrote:
>>
>> On 28 April 2013 03:57, Andrew Gaylard <ag@computer.org> wrote:
>>>
>>> Those 0x304 values look dodgy to me, and explain why the
>>> SCM_SETCDR causes an invalid memory access.
>>>
>> 0x304 is SCM_EOL.
>
> Hi Daniel,
>
> Thanks for the feedback.
>
> Are you saying that the 0x304 values are fine, and the problem lies
> elsewhere?
> (e.g. heap corruption, ...)

Yes and no.  They are fine in the sense that 0x304 is a valid SCM
value, one that is expected in some situations.  I wanted to
investigate the specific code before commenting further.  Certainly it
should never be the subject of ‘SCM_SETCDR’.

The value of C at that point:

> (gdb) p *SCM2PTR(c)
> $28 = {word_0 = 0x1018aac20, word_1 = 0x304}

is as expected, a one-element list.  Q however:

> (gdb) p *SCM2PTR(q)
> $26 = {word_0 = 0x304, word_1 = 0x1039c4c20}

should not occur, according to my reading of the three queue procedures
in threads.c.  The car (word_0) is the final pair in the queue and
should only be ‘()’ (or 0x304) when the cdr is also ‘()’.  There are
only two lines that could assign this value, and both appear fine
unless one of the conditions is failing somehow.
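
As a sanity check, that invariant can be stated directly in code.  A
debugging helper of my own (not anything from threads.c) that one
could call on entry to each of the three queue functions:

#include <libguile.h>
#include <stdlib.h>

/* car(q) holds the final pair of the item list in cdr(q), so it must
   be '() exactly when that list is '().  Aborts on the corrupt state
   from the dump: {car = '(), cdr = non-empty}. */
static void
assert_queue_ok (SCM q)
{
  if (scm_is_null (SCM_CAR (q)) != scm_is_null (SCM_CDR (q)))
    abort ();
}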

For the moment I have no further ideas.  I briefly suspected the
initialization of ‘prev = q’ in ‘remqueue’ being outside the critical
section, but as Q itself never changes, that can be dismissed.

So this needs more looking at; maybe there is some other code that
modifies these queues.

Regards




* Re: The 2.0.9 VM cores in enqueue (threads.c:309)
  2013-04-29  6:56   ` Andrew Gaylard
  2013-04-29  9:16     ` Daniel Hartwig
@ 2013-04-29 10:10     ` Mark H Weaver
  2013-04-29 13:35       ` Noah Lavine
  2013-06-17 10:06       ` The 2.0.9 VM cores in enqueue (threads.c:309) -- partial fix, patch attached Andrew Gaylard
  1 sibling, 2 replies; 9+ messages in thread
From: Mark H Weaver @ 2013-04-29 10:10 UTC (permalink / raw)
  To: Andrew Gaylard; +Cc: guile-devel

Hi Andrew,

Andrew Gaylard <ag@computer.org> writes:

> On 04/28/13 03:07, Daniel Hartwig wrote:
>> On 28 April 2013 03:57, Andrew Gaylard <ag@computer.org> wrote:
>>> Those 0x304 values look dodgy to me, and explain why the
>>> SCM_SETCDR causes an invalid memory access.
>>>
>> 0x304 is SCM_EOL.
> Hi Daniel,
>
> Thanks for the feedback.
>
> Are you saying that the 0x304 values are fine, and the problem lies
> elsewhere?

As Daniel pointed out, 0x304 is SCM_EOL, i.e. the empty list '().

> #0 0xffffffff7e77b5f4 in enqueue (q=0x1010892c0, t=0x1018aac20) at
> threads.c:309
> #1 0xffffffff7e77bc20 in block_self (queue=0x1010892c0,
> sleep_object=0x1010892d0, mutex=0x1019eef00, waittime=0x0) at
> threads.c:452
> #2 0xffffffff7e77df50 in fat_mutex_lock (mutex=0x1010892d0,
> timeout=0x0, owner=0x904, ret=0xffffffff734f92ac) at threads.c:1473
[...]
> (gdb) list
> 304 SCM c = scm_cons (t, SCM_EOL);
> 305 SCM_CRITICAL_SECTION_START;
> 306 if (scm_is_null (SCM_CDR (q)))
> 307 SCM_SETCDR (q, c);
> 308 else
> 309 SCM_SETCDR (SCM_CAR (q), c);
> 310 SCM_SETCAR (q, c);
> 311 SCM_CRITICAL_SECTION_END;
> 312 return c;
> 313 }
[...]
> (gdb) p *SCM2PTR(q)
> $26 = {word_0 = 0x304, word_1 = 0x1039c4c20}

What's happening here is that the wait queue (m->waiting in fat_mutex)
is somehow getting corrupted.  The code above ('enqueue' in threads.c)
is trying to add a new element to the queue.  The queue is represented
as a pair whose CDR is the list of items in the queue, and whose CAR
points to the last pair of that list.  Somehow, the CAR is becoming null
even though the CDR is non-empty.  This should never happen.
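
For concreteness, the representation can be exercised outside
threads.c with the public pair primitives.  A standalone sketch
(make_queue and enqueue_item are illustrative stand-ins for the static
functions there, minus the critical-section bookkeeping):

#include <libguile.h>

static SCM
make_queue (void)
{
  return scm_cons (SCM_EOL, SCM_EOL);   /* empty: car = cdr = '() */
}

static SCM
enqueue_item (SCM q, SCM t)             /* mirrors enqueue in threads.c */
{
  SCM c = scm_cons (t, SCM_EOL);        /* the new final pair */
  if (scm_is_null (SCM_CDR (q)))
    SCM_SETCDR (q, c);                  /* empty: the list starts at c */
  else
    SCM_SETCDR (SCM_CAR (q), c);        /* append after the old final pair */
  SCM_SETCAR (q, c);                    /* CAR always tracks the final pair */
  return c;
}

int
main (void)
{
  SCM q;
  scm_init_guile ();
  q = make_queue ();
  enqueue_item (q, scm_from_int (1));
  enqueue_item (q, scm_from_int (2));
  /* cdr(q) is now the item list (1 2); car(q) is its last pair, (2). */
  scm_write (SCM_CDR (q), scm_current_output_port ());
  scm_newline (scm_current_output_port ());
  return 0;
}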

I looked through the relevant code, and it's not obvious to me how this
could happen.  The only functions I see that manipulate this queue are
'enqueue', 'remqueue', and 'dequeue', all static functions in threads.c.
As far as I can see, these functions maintain the invariant that the CAR
is null if and only if the CDR is null.  All queue manipulation is done
between SCM_CRITICAL_SECTION_START and SCM_CRITICAL_SECTION_END (defined
in async.h) which lock a single global pthread mutex.

Any ideas?

     Thanks,
       Mark




* Re: The 2.0.9 VM cores in enqueue (threads.c:309)
  2013-04-29 10:10     ` Mark H Weaver
@ 2013-04-29 13:35       ` Noah Lavine
  2013-06-17 10:06       ` The 2.0.9 VM cores in enqueue (threads.c:309) -- partial fix, patch attached Andrew Gaylard
  1 sibling, 0 replies; 9+ messages in thread
From: Noah Lavine @ 2013-04-29 13:35 UTC (permalink / raw)
  To: Mark H Weaver; +Cc: guile-devel


Hello,

On Mon, Apr 29, 2013 at 6:10 AM, Mark H Weaver <mhw@netris.org> wrote:

> Any ideas?
>
>      Thanks,
>        Mark
>
>
It should be possible to use a watchpoint in GDB to figure out what code is
corrupting that piece of memory. It probably won't tell us exactly what's
going on, but it would be interesting to see.
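
Something along these lines, perhaps, using the q address from
Andrew's backtrace.  A hypothetical session (the address changes from
run to run, and hardware-watchpoint support on Solaris/SPARC may
vary):

(gdb) watch -location *(scm_t_bits *) 0x1010892c0
Hardware watchpoint 1: -location *(scm_t_bits *) 0x1010892c0
(gdb) continue

That watches word_0 of q (the CAR cell that ends up as '()), so gdb
should stop in whatever code performs the bogus write.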

Noah



* Re: The 2.0.9 VM cores in enqueue (threads.c:309) -- partial fix, patch attached
  2013-04-29 10:10     ` Mark H Weaver
  2013-04-29 13:35       ` Noah Lavine
@ 2013-06-17 10:06       ` Andrew Gaylard
  2013-06-17 19:00         ` Mark H Weaver
  1 sibling, 1 reply; 9+ messages in thread
From: Andrew Gaylard @ 2013-06-17 10:06 UTC (permalink / raw)
  To: Mark H Weaver, guile-devel

[-- Attachment #1: Type: text/plain, Size: 1911 bytes --]

On 04/29/13 12:10, Mark H Weaver wrote:
> Hi Andrew,
> On 28 April 2013 03:57, Andrew Gaylard <ag@computer.org> wrote:
>>>> Those 0x304 values look dodgy to me, and explain why the
>>>> SCM_SETCDR causes an invalid memory access.
>> (gdb) p *SCM2PTR(q)
>> $26 = {word_0 = 0x304, word_1 = 0x1039c4c20}
> What's happening here is that the wait queue (m->waiting in fat_mutex)
> is somehow getting corrupted.  The code above ('enqueue' in threads.c)
> is trying to add a new element to the queue.  The queue is represented
> as a pair whose CDR is the list of items in the queue, and whose CAR
> points to the last pair of that list.  Somehow, the CAR is becoming null
> even though the CDR is non-empty.  This should never happen.
>
> I looked through the relevant code, and it's not obvious to me how this
> could happen.  The only functions I see that manipulate this queue are
> 'enqueue', 'remqueue', and 'dequeue', all static functions in threads.c.
> As far as I can see, these functions maintain the invariant that the CAR
> is null if and only if the CDR is null.  All queue manipulation is done
> between SCM_CRITICAL_SECTION_START and SCM_CRITICAL_SECTION_END (defined
> in async.h) which lock a single global pthread mutex.
>
> Any ideas?
>
>       Thanks,
>         Mark
Hi,

I've had some more time to look into this problem, and now have a 
partial fix.

The problem does not occur on Linux x86 or x86_64 (Ubuntu-12.04).
The problem always occurs on Solaris-10u9, both x86_64 and SPARC.

The problem is always a segmentation fault trying to write to 0x30c,
at threads.c:309.  Inspection of the remqueue function shows
that the logic is not correct when removing the last entry in the queue.
It sets (car q) to (cdr c), which is '() when c is the final pair, even
when earlier entries remain; that leaves exactly the {car = '(),
cdr = non-empty} state seen above.

The attached patch helps: my code now runs for much longer without
crashing.  However it now hangs somewhere else (which may be an
unrelated problem).

I'd be grateful for any feedback.
-- 
Andrew

[-- Attachment #2: fix-guile-thread-remqueue.patch --]
[-- Type: text/x-patch, Size: 852 bytes --]

diff -U4 -r guile-2.0.9/libguile/threads.c guile-2.0.9-new/libguile/threads.c
--- guile-2.0.9/libguile/threads.c	Mon Mar 18 23:30:13 2013
+++ guile-2.0.9-new/libguile/threads.c	Mon Jun 17 11:03:04 2013
@@ -325,12 +325,22 @@
   for (p = SCM_CDR (q); !scm_is_null (p); p = SCM_CDR (p))
     {
       if (scm_is_eq (p, c))
 	{
-	  if (scm_is_eq (c, SCM_CAR (q)))
-	    SCM_SETCAR (q, SCM_CDR (c));
+	  /* Remove c from the list */
 	  SCM_SETCDR (prev, SCM_CDR (c));
 
+	  /* If c is the last entry in the list,
+	     then update the (car q) to be the new last entry.
+	     Check whether the q is now empty. */
+	  if (scm_is_eq (c, SCM_CAR (q)))
+	    {
+	      if (scm_is_null (SCM_CDR (q)))
+		SCM_SETCAR (q, SCM_EOL);
+	      else
+		SCM_SETCAR (q, prev);
+	    }
+		
 	  /* GC-robust */
 	  SCM_SETCDR (c, SCM_EOL);
 
 	  SCM_CRITICAL_SECTION_END;
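
For the record, the pre-patch failure can be reproduced in isolation
by replaying the old logic on a hand-built queue.  A standalone sketch
(buggy_remqueue restates the 2.0.9 code minus the critical-section
bookkeeping; the symbols are arbitrary test data):

#include <libguile.h>
#include <stdio.h>

static void
buggy_remqueue (SCM q, SCM c)          /* pre-patch logic from threads.c */
{
  SCM p, prev = q;
  for (p = SCM_CDR (q); !scm_is_null (p); p = SCM_CDR (p))
    {
      if (scm_is_eq (p, c))
        {
          if (scm_is_eq (c, SCM_CAR (q)))
            SCM_SETCAR (q, SCM_CDR (c));  /* '() when c is last: the bug */
          SCM_SETCDR (prev, SCM_CDR (c));
          SCM_SETCDR (c, SCM_EOL);        /* GC-robust */
          return;
        }
      prev = p;
    }
}

int
main (void)
{
  SCM cb, ca, q;
  scm_init_guile ();
  /* A queue holding (a b): cdr(q) is the item list, car(q) its final
     pair.  */
  cb = scm_cons (scm_from_utf8_symbol ("b"), SCM_EOL);
  ca = scm_cons (scm_from_utf8_symbol ("a"), cb);
  q  = scm_cons (cb, ca);
  buggy_remqueue (q, cb);              /* remove the final entry */
  /* car(q) is now '() although cdr(q) is still (a): the corrupt state
     from the backtrace.  The next enqueue would do
     SCM_SETCDR (SCM_CAR (q), c), a write through '() (0x304), whose
     cdr slot on a 64-bit build sits at 0x304 + 8 = 0x30c, the very
     address in the fault.  */
  printf ("car null: %d, cdr null: %d\n",
          scm_is_null (SCM_CAR (q)), scm_is_null (SCM_CDR (q)));
  return 0;
}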


* Re: The 2.0.9 VM cores in enqueue (threads.c:309) -- partial fix, patch attached
  2013-06-17 10:06       ` The 2.0.9 VM cores in enqueue (threads.c:309) -- partial fix, patch attached Andrew Gaylard
@ 2013-06-17 19:00         ` Mark H Weaver
  0 siblings, 0 replies; 9+ messages in thread
From: Mark H Weaver @ 2013-06-17 19:00 UTC (permalink / raw)
  To: Andrew Gaylard; +Cc: guile-devel

Hi Andrew,

Andrew Gaylard <ag@computer.org> writes:
> Inspection of the remqueue function shows
> that the logic is not correct when removing the last entry in the queue.

Indeed, thanks very much for debugging this!
I pushed a fix to stable-2.0.

> However it now hangs somewhere else (which may be an unrelated problem).

Well, that's progress at least.  Please let us know what you find.

    Thanks,
      Mark



