unofficial mirror of bug-guile@gnu.org 
* git guile hangs in fluids.test
@ 2010-03-30 10:16 dsmich
  2010-03-30 10:35 ` dsmich
  2010-03-30 11:56 ` Ludovic Courtès
  0 siblings, 2 replies; 8+ messages in thread
From: dsmich @ 2010-03-30 10:16 UTC
  To: bug-guile

Recently, git guile has been consistently hanging in fluids.test.  CPU usage drops to 0.  This is on a single-core machine running a fairly up-to-date Debian Testing.

$ gcc --version
gcc (Debian 4.4.2-9) 4.4.3 20100108 (prerelease)

$ git describe
release_1-9-9-23-g6128f34

Here is a backtrace from gdb attached to the hung process:

(gdb) bt
#0  0x4001e424 in __kernel_vsyscall ()
#1  0x4046c285 in sem_wait@@GLIBC_2.1 () at ../nptl/sysdeps/unix/sysv/linux/i386/i686/../i486/sem_wait.S:80
#2  0x40168018 in GC_stop_world () at pthread_stop_world.c:426
#3  0x40157b1e in GC_stopped_mark (stop_func=0x40156f40 <GC_never_stop_func>) at alloc.c:474
#4  0x40157df9 in GC_try_to_collect_inner (stop_func=0x40156f40 <GC_never_stop_func>) at alloc.c:362
#5  0x40158194 in GC_try_to_collect (stop_func=0x40156f40 <GC_never_stop_func>) at alloc.c:762
#6  0x40158270 in GC_gcollect () at alloc.c:774
#7  0x40076277 in scm_i_gc (what=0x4010efee "fluids") at gc.c:401
#8  0x40070d43 in new_fluid () at fluids.c:132
#9  scm_make_fluid () at fluids.c:180
#10 0x400f24a7 in vm_debug_engine (vm=0x9b2b590, program=0x9b6f910, argv=0xbfa168d0, nargs=1) at vm-i-system.c:853
#11 0x400e163a in scm_c_vm_run (vm=0x9b2b590, program=0x9b6f910, argv=0xbfa168d0, nargs=1) at vm.c:518
#12 0x4006e9c7 in scm_primitive_eval (exp=0x9fe2de0) at eval.c:859
#13 0x4008dac3 in scm_primitive_load (filename=0x9e62990) at load.c:125
#14 0x400f249c in vm_debug_engine (vm=0x9b2b590, program=0x9c3ac48, argv=0xbfa16a24, nargs=1) at vm-i-system.c:856
#15 0x400e163a in scm_c_vm_run (vm=0x9b2b590, program=0x9c3ac48, argv=0xbfa16a24, nargs=1) at vm.c:518
#16 0x4006d945 in scm_call_1 (proc=0x9c3ac48, arg1=0x9c4d1d0) at eval.c:574
#17 0x4006ee0e in scm_for_each (proc=0x9c3ac48, arg1=0x9c532a0, args=0x304) at eval.c:802
#18 0x400f2517 in vm_debug_engine (vm=0x9b2b590, program=0x9b6f910, argv=0xbfa16b90, nargs=1) at vm-i-system.c:862
#19 0x400e163a in scm_c_vm_run (vm=0x9b2b590, program=0x9b6f910, argv=0xbfa16b90, nargs=1) at vm.c:518
#20 0x4006e9c7 in scm_primitive_eval (exp=0x9c2a048) at eval.c:859
#21 0x4006ea41 in scm_eval (exp=0x9c2a048, module_or_state=0x9aca738) at eval.c:893
#22 0x400b83e5 in scm_shell (argc=10, argv=0xbfa17014) at script.c:762
#23 0x40086a36 in invoke_main_func (body_data=0xbfa16f50) at init.c:381
#24 0x40065622 in c_body (d=0xbfa16ea4) at continuations.c:475
#25 0x400ddfd2 in apply_catch_closure (clo=0x0, args=0x304) at throw.c:147
#26 0x400f189f in vm_debug_engine (vm=0x9b2b590, program=0x9ac4858, argv=0xbfa16d80, nargs=4) at vm-i-system.c:924
#27 0x400e163a in scm_c_vm_run (vm=0x9b2b590, program=0x9ac4858, argv=0xbfa16d80, nargs=4) at vm.c:518
#28 0x4006d85d in scm_call_4 (proc=0x9ac4858, arg1=0x404, arg2=0x9bf6d00, arg3=0x9bf6cf0, arg4=0x9bf6ce0)
    at eval.c:595
#29 0x400de9b2 in scm_catch_with_pre_unwind_handler (key=0x404, thunk=0x9bf6d00, handler=0x9bf6cf0, 
    pre_unwind_handler=0x9bf6ce0) at throw.c:87
#30 0x400dea82 in scm_c_catch (tag=0x404, body=0x40065610 <c_body>, body_data=0xbfa16ea4, 
    handler=0x40065630 <c_handler>, handler_data=0xbfa16ea4, 
    pre_unwind_handler=0x400de340 <scm_handle_by_message_noexit>, pre_unwind_handler_data=0x0) at throw.c:214
#31 0x400658eb in scm_i_with_continuation_barrier (body=0x40065610 <c_body>, body_data=0xbfa16ea4, 
    handler=0x40065630 <c_handler>, handler_data=0xbfa16ea4, 
    pre_unwind_handler=0x400de340 <scm_handle_by_message_noexit>, pre_unwind_handler_data=0x0) at continuations.c:452
#32 0x400659c3 in scm_c_with_continuation_barrier (func=0x400869f0 <invoke_main_func>, data=0xbfa16f50)
    at continuations.c:493
#33 0x400ddbac in scm_i_with_guile_and_parent (func=0x400869f0 <invoke_main_func>, data=0xbfa16f50, parent=0x0)
    at threads.c:734
#34 0x400ddcce in scm_with_guile (func=0x400869f0 <invoke_main_func>, data=0xbfa16f50) at threads.c:713
#35 0x400869cf in scm_boot_guile (argc=10, argv=0xbfa17014, main_func=0x8048860 <inner_main>, closure=0x0)
    at init.c:364
#36 0x0804885b in main (argc=10, argv=0xbfa17014) at guile.c:70
(gdb) 


-Dale

* Re: git guile hangs in fluids.test
  2010-03-30 10:16 git guile hangs in fluids.test dsmich
@ 2010-03-30 10:35 ` dsmich
  2010-03-30 11:56 ` Ludovic Courtès
  1 sibling, 0 replies; 8+ messages in thread
From: dsmich @ 2010-03-30 10:35 UTC
  To: bug-guile, dsmich


---- dsmich@roadrunner.com wrote: 
> Recently, git guile has been consistently hanging in fluids.test.  Cpu usage drops to 0.  This is on a single core machine running a fairly up-to-date Debian Testing.
> 
> Here is a gdb backtrace connected to the hung process:

After 10 make check runs, fluids.test hung 6 times and threads.test hung 4 times.  Here is a backtrace from a hung threads.test:

(gdb) bt
#0  0x4001e424 in __kernel_vsyscall ()
#1  0x4046c285 in sem_wait@@GLIBC_2.1 () at ../nptl/sysdeps/unix/sysv/linux/i386/i686/../i486/sem_wait.S:80
#2  0x40168018 in GC_stop_world () at pthread_stop_world.c:426
#3  0x40157b1e in GC_stopped_mark (stop_func=0x40156f40 <GC_never_stop_func>) at alloc.c:474
#4  0x40157df9 in GC_try_to_collect_inner (stop_func=0x40156f40 <GC_never_stop_func>) at alloc.c:362
#5  0x40158194 in GC_try_to_collect (stop_func=0x40156f40 <GC_never_stop_func>) at alloc.c:762
#6  0x40158270 in GC_gcollect () at alloc.c:774
#7  0x40076277 in scm_i_gc (what=0x401150a0 "call") at gc.c:401
#8  0x400762b3 in scm_gc () at gc.c:384
#9  0x400f24a7 in vm_debug_engine (vm=0x8c83590, program=0x8cc7910, argv=0xbfecf110, nargs=1) at vm-i-system.c:853
#10 0x400e163a in scm_c_vm_run (vm=0x8c83590, program=0x8cc7910, argv=0xbfecf110, nargs=1) at vm.c:518
#11 0x4006e9c7 in scm_primitive_eval (exp=0xa8a3f98) at eval.c:859
#12 0x4008dac3 in scm_primitive_load (filename=0x995b370) at load.c:125
#13 0x400f249c in vm_debug_engine (vm=0x8c83590, program=0x8d93d80, argv=0xbfecf264, nargs=1) at vm-i-system.c:856
#14 0x400e163a in scm_c_vm_run (vm=0x8c83590, program=0x8d93d80, argv=0xbfecf264, nargs=1) at vm.c:518
#15 0x4006d945 in scm_call_1 (proc=0x8d93d80, arg1=0x8cd4220) at eval.c:574
#16 0x4006ee0e in scm_for_each (proc=0x8d93d80, arg1=0x8dac8c0, args=0x304) at eval.c:802
#17 0x400f2517 in vm_debug_engine (vm=0x8c83590, program=0x8cc7910, argv=0xbfecf3d0, nargs=1) at vm-i-system.c:862
#18 0x400e163a in scm_c_vm_run (vm=0x8c83590, program=0x8cc7910, argv=0xbfecf3d0, nargs=1) at vm.c:518
#19 0x4006e9c7 in scm_primitive_eval (exp=0x8d83048) at eval.c:859
#20 0x4006ea41 in scm_eval (exp=0x8d83048, module_or_state=0x8c22738) at eval.c:893
#21 0x400b83e5 in scm_shell (argc=10, argv=0xbfecf854) at script.c:762
#22 0x40086a36 in invoke_main_func (body_data=0xbfecf790) at init.c:381
#23 0x40065622 in c_body (d=0xbfecf6e4) at continuations.c:475
#24 0x400ddfd2 in apply_catch_closure (clo=0x0, args=0x304) at throw.c:147
#25 0x400f189f in vm_debug_engine (vm=0x8c83590, program=0x8c1c858, argv=0xbfecf5c0, nargs=4) at vm-i-system.c:924
#26 0x400e163a in scm_c_vm_run (vm=0x8c83590, program=0x8c1c858, argv=0xbfecf5c0, nargs=4) at vm.c:518
#27 0x4006d85d in scm_call_4 (proc=0x8c1c858, arg1=0x404, arg2=0x8d516d0, arg3=0x8d516c0, arg4=0x8d516b0)
    at eval.c:595
#28 0x400de9b2 in scm_catch_with_pre_unwind_handler (key=0x404, thunk=0x8d516d0, handler=0x8d516c0, 
    pre_unwind_handler=0x8d516b0) at throw.c:87
#29 0x400dea82 in scm_c_catch (tag=0x404, body=0x40065610 <c_body>, body_data=0xbfecf6e4, 
    handler=0x40065630 <c_handler>, handler_data=0xbfecf6e4, 
    pre_unwind_handler=0x400de340 <scm_handle_by_message_noexit>, pre_unwind_handler_data=0x0) at throw.c:214
#30 0x400658eb in scm_i_with_continuation_barrier (body=0x40065610 <c_body>, body_data=0xbfecf6e4, 
    handler=0x40065630 <c_handler>, handler_data=0xbfecf6e4, 
    pre_unwind_handler=0x400de340 <scm_handle_by_message_noexit>, pre_unwind_handler_data=0x0) at continuations.c:452
#31 0x400659c3 in scm_c_with_continuation_barrier (func=0x400869f0 <invoke_main_func>, data=0xbfecf790)
    at continuations.c:493
#32 0x400ddbac in scm_i_with_guile_and_parent (func=0x400869f0 <invoke_main_func>, data=0xbfecf790, parent=0x0)
    at threads.c:734
#33 0x400ddcce in scm_with_guile (func=0x400869f0 <invoke_main_func>, data=0xbfecf790) at threads.c:713
#34 0x400869cf in scm_boot_guile (argc=10, argv=0xbfecf854, main_func=0x8048860 <inner_main>, closure=0x0)
    at init.c:364
#35 0x0804885b in main (argc=10, argv=0xbfecf854) at guile.c:70
(gdb)

-Dale

* Re: git guile hangs in fluids.test
  2010-03-30 10:16 git guile hangs in fluids.test dsmich
  2010-03-30 10:35 ` dsmich
@ 2010-03-30 11:56 ` Ludovic Courtès
  2010-03-30 12:10   ` dsmich
  1 sibling, 1 reply; 8+ messages in thread
From: Ludovic Courtès @ 2010-03-30 11:56 UTC
  To: bug-guile

Hi Dale,

<dsmich@roadrunner.com> writes:

> Recently, git guile has been consistently hanging in fluids.test.  Cpu
> usage drops to 0.  This is on a single core machine running a fairly
> up-to-date Debian Testing.

What does ./build-aux/config.guess say?

Thanks,
Ludo’.

* Re: git guile hangs in fluids.test
  2010-03-30 11:56 ` Ludovic Courtès
@ 2010-03-30 12:10   ` dsmich
  2010-03-30 12:48     ` Ludovic Courtès
  0 siblings, 1 reply; 8+ messages in thread
From: dsmich @ 2010-03-30 12:10 UTC
  To: Ludovic Courtès, bug-guile


---- "Ludovic Courtès" <ludo@gnu.org> wrote: 
> Hi Dale,
> 
> <dsmich@roadrunner.com> writes:
> 
> > Recently, git guile has been consistently hanging in fluids.test.  Cpu
> > usage drops to 0.  This is on a single core machine running a fairly
> > up-to-date Debian Testing.
> 
> What does ./build-aux/config.guess say?


$ ./build-aux/config.guess 
i686-pc-linux-gnu

And as raeburn suggested:

(gdb) thread apply all bt

Thread 2 (Thread 0x40c1cb70 (LWP 15152)):
#0  0x4001e424 in __kernel_vsyscall ()
#1  0x4033cd47 in do_sigsuspend (set=0x40170320) at ../sysdeps/unix/sysv/linux/sigsuspend.c:63
#2  *__GI___sigsuspend (set=0x40170320) at ../sysdeps/unix/sysv/linux/sigsuspend.c:78
#3  0x4016822b in GC_suspend_handler_inner (sig_arg=0x1e <Address 0x1e out of bounds>, context=0x40c1b99c)
    at pthread_stop_world.c:207
#4  0x401682b5 in GC_suspend_handler (sig=30, info=0x40c1b91c, context=0x40c1b99c) at pthread_stop_world.c:142
#5  <signal handler called>
#6  0x4001e422 in __kernel_vsyscall ()
#7  0x403d23cb in read () from /lib/i686/cmov/libc.so.6
#8  0x400b6f82 in signal_delivery_thread (data=0x0) at scmsigs.c:164
#9  0x400ddfd2 in apply_catch_closure (clo=0x1, args=0x304) at throw.c:147
#10 0x400f189f in vm_debug_engine (vm=0x93a8ee0, program=0x91ea858, argv=0x40c1bec4, nargs=3) at vm-i-system.c:924
#11 0x400e163a in scm_c_vm_run (vm=0x93a8ee0, program=0x91ea858, argv=0x40c1bec4, nargs=3) at vm.c:518
#12 0x4006d8b7 in scm_call_3 (proc=0x91ea858, arg1=0x404, arg2=0x9618310, arg3=0x96182f0) at eval.c:588
#13 0x400de908 in scm_catch (key=0x404, thunk=0x9618310, handler=0x96182f0) at throw.c:74
#14 0x400dea16 in scm_catch_with_pre_unwind_handler (key=0x404, thunk=0x9618310, handler=0x96182f0, 
    pre_unwind_handler=0x904) at throw.c:82
#15 0x400dea82 in scm_c_catch (tag=0x404, body=0x400b6f20 <signal_delivery_thread>, body_data=0x0, 
    handler=0x400de3c0 <scm_handle_by_message>, handler_data=0x4011b344, pre_unwind_handler=0, 
    pre_unwind_handler_data=0x0) at throw.c:214
#16 0x400dead9 in scm_internal_catch (tag=0x404, body=0x400b6f20 <signal_delivery_thread>, body_data=0x0, 
    handler=0x400de3c0 <scm_handle_by_message>, handler_data=0x4011b344) at throw.c:223
#17 0x400dd2fb in really_spawn (d=0x40a1b16c) at threads.c:922
#18 0x40065622 in c_body (d=0x40c1c2a4) at continuations.c:475
#19 0x400ddfd2 in apply_catch_closure (clo=0x1, args=0x304) at throw.c:147
#20 0x400f189f in vm_debug_engine (vm=0x93a8ee0, program=0x91ea858, argv=0x40c1c180, nargs=4) at vm-i-system.c:924
#21 0x400e163a in scm_c_vm_run (vm=0x93a8ee0, program=0x91ea858, argv=0x40c1c180, nargs=4) at vm.c:518
#22 0x4006d85d in scm_call_4 (proc=0x91ea858, arg1=0x404, arg2=0x9618cc0, arg3=0x9618cb0, arg4=0x9618c70)
    at eval.c:595
#23 0x400de9b2 in scm_catch_with_pre_unwind_handler (key=0x404, thunk=0x9618cc0, handler=0x9618cb0, 
    pre_unwind_handler=0x9618c70) at throw.c:87
#24 0x400dea82 in scm_c_catch (tag=0x404, body=0x40065610 <c_body>, body_data=0x40c1c2a4, 
    handler=0x40065630 <c_handler>, handler_data=0x40c1c2a4, 
    pre_unwind_handler=0x400de340 <scm_handle_by_message_noexit>, pre_unwind_handler_data=0x0) at throw.c:214
#25 0x400658eb in scm_i_with_continuation_barrier (body=0x40065610 <c_body>, body_data=0x40c1c2a4, 
    handler=0x40065630 <c_handler>, handler_data=0x40c1c2a4, 
    pre_unwind_handler=0x400de340 <scm_handle_by_message_noexit>, pre_unwind_handler_data=0x0) at continuations.c:452
#26 0x400659c3 in scm_c_with_continuation_barrier (func=0x400dd260 <really_spawn>, data=0x40a1b16c)
    at continuations.c:493
#27 0x400ddbac in scm_i_with_guile_and_parent (func=0x400dd260 <really_spawn>, data=0x40a1b16c, parent=0x93a8fe8)
    at threads.c:734
#28 0x400ddc3f in spawn_thread (d=0x40a1b16c) at threads.c:934
#29 0x40166e48 in GC_inner_start_routine (sb=0x40c1c380, arg=0x99cdfe0) at pthread_support.c:1073
#30 0x4016121c in GC_call_with_stack_base (fn=0x40166da0 <GC_inner_start_routine>, arg=0x99cdfe0) at misc.c:1165
#31 0x40166cd7 in GC_start_routine (arg=0x99cdfe0) at pthread_support.c:1104
#32 0x40466585 in start_thread (arg=0x40c1cb70) at pthread_create.c:300
#33 0x403e129e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:130

Thread 1 (Thread 0x404f92f0 (LWP 14857)):
#0  0x4001e424 in __kernel_vsyscall ()
#1  0x4046c285 in sem_wait@@GLIBC_2.1 () at ../nptl/sysdeps/unix/sysv/linux/i386/i686/../i486/sem_wait.S:80
#2  0x40168018 in GC_stop_world () at pthread_stop_world.c:426
#3  0x40157b1e in GC_stopped_mark (stop_func=0x40156f40 <GC_never_stop_func>) at alloc.c:474
#4  0x40157df9 in GC_try_to_collect_inner (stop_func=0x40156f40 <GC_never_stop_func>) at alloc.c:362
#5  0x40158194 in GC_try_to_collect (stop_func=0x40156f40 <GC_never_stop_func>) at alloc.c:762
#6  0x40158270 in GC_gcollect () at alloc.c:774
#7  0x40076277 in scm_i_gc (what=0x4010efee "fluids") at gc.c:401
#8  0x40070d43 in new_fluid () at fluids.c:132
#9  scm_make_fluid () at fluids.c:180
#10 0x400f24a7 in vm_debug_engine (vm=0x9251590, program=0x9295910, argv=0xbfe8f7f0, nargs=1) at vm-i-system.c:853
#11 0x400e163a in scm_c_vm_run (vm=0x9251590, program=0x9295910, argv=0xbfe8f7f0, nargs=1) at vm.c:518
#12 0x4006e9c7 in scm_primitive_eval (exp=0x93c50f8) at eval.c:859
#13 0x4008dac3 in scm_primitive_load (filename=0x960f430) at load.c:125
#14 0x400f249c in vm_debug_engine (vm=0x9251590, program=0x920e9d8, argv=0xbfe8f944, nargs=1) at vm-i-system.c:856
#15 0x400e163a in scm_c_vm_run (vm=0x9251590, program=0x920e9d8, argv=0xbfe8f944, nargs=1) at vm.c:518
#16 0x4006d945 in scm_call_1 (proc=0x920e9d8, arg1=0x9409220) at eval.c:574
#17 0x4006ee0e in scm_for_each (proc=0x920e9d8, arg1=0x93630a0, args=0x304) at eval.c:802
#18 0x400f2517 in vm_debug_engine (vm=0x9251590, program=0x9295910, argv=0xbfe8fab0, nargs=1) at vm-i-system.c:862
#19 0x400e163a in scm_c_vm_run (vm=0x9251590, program=0x9295910, argv=0xbfe8fab0, nargs=1) at vm.c:518
#20 0x4006e9c7 in scm_primitive_eval (exp=0x9350048) at eval.c:859
#21 0x4006ea41 in scm_eval (exp=0x9350048, module_or_state=0x91f0738) at eval.c:893
#22 0x400b83e5 in scm_shell (argc=10, argv=0xbfe8ff34) at script.c:762
#23 0x40086a36 in invoke_main_func (body_data=0xbfe8fe70) at init.c:381
#24 0x40065622 in c_body (d=0xbfe8fdc4) at continuations.c:475
#25 0x400ddfd2 in apply_catch_closure (clo=0x0, args=0x304) at throw.c:147
#26 0x400f189f in vm_debug_engine (vm=0x9251590, program=0x91ea858, argv=0xbfe8fca0, nargs=4) at vm-i-system.c:924
#27 0x400e163a in scm_c_vm_run (vm=0x9251590, program=0x91ea858, argv=0xbfe8fca0, nargs=4) at vm.c:518
#28 0x4006d85d in scm_call_4 (proc=0x91ea858, arg1=0x404, arg2=0x931f6d0, arg3=0x931f6c0, arg4=0x931f6b0)
    at eval.c:595
#29 0x400de9b2 in scm_catch_with_pre_unwind_handler (key=0x404, thunk=0x931f6d0, handler=0x931f6c0, 
    pre_unwind_handler=0x931f6b0) at throw.c:87
#30 0x400dea82 in scm_c_catch (tag=0x404, body=0x40065610 <c_body>, body_data=0xbfe8fdc4, 
    handler=0x40065630 <c_handler>, handler_data=0xbfe8fdc4, 
    pre_unwind_handler=0x400de340 <scm_handle_by_message_noexit>, pre_unwind_handler_data=0x0) at throw.c:214
#31 0x400658eb in scm_i_with_continuation_barrier (body=0x40065610 <c_body>, body_data=0xbfe8fdc4, 
    handler=0x40065630 <c_handler>, handler_data=0xbfe8fdc4, 
    pre_unwind_handler=0x400de340 <scm_handle_by_message_noexit>, pre_unwind_handler_data=0x0) at continuations.c:452
#32 0x400659c3 in scm_c_with_continuation_barrier (func=0x400869f0 <invoke_main_func>, data=0xbfe8fe70)
    at continuations.c:493
#33 0x400ddbac in scm_i_with_guile_and_parent (func=0x400869f0 <invoke_main_func>, data=0xbfe8fe70, parent=0x0)
    at threads.c:734
#34 0x400ddcce in scm_with_guile (func=0x400869f0 <invoke_main_func>, data=0xbfe8fe70) at threads.c:713
#35 0x400869cf in scm_boot_guile (argc=10, argv=0xbfe8ff34, main_func=0x8048860 <inner_main>, closure=0x0)
    at init.c:364
#36 0x0804885b in main (argc=10, argv=0xbfe8ff34) at guile.c:70
(gdb)


-Dale

* Re: git guile hangs in fluids.test
  2010-03-30 12:10   ` dsmich
@ 2010-03-30 12:48     ` Ludovic Courtès
  2011-01-21 21:02       ` Ludovic Courtès
  0 siblings, 1 reply; 8+ messages in thread
From: Ludovic Courtès @ 2010-03-30 12:48 UTC
  To: bug-guile

Hi,

<dsmich@roadrunner.com> writes:

> Thread 1 (Thread 0x404f92f0 (LWP 14857)):
> #0  0x4001e424 in __kernel_vsyscall ()
> #1  0x4046c285 in sem_wait@@GLIBC_2.1 () at ../nptl/sysdeps/unix/sysv/linux/i386/i686/../i486/sem_wait.S:80
> #2  0x40168018 in GC_stop_world () at pthread_stop_world.c:426

The other thread should have called sem_post() to release this one.  Can
you print the value of ‘GC_suspend_ack_sem’?
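
For example, from the gdb session already attached to the hung process (this
assumes libgc was built with debugging symbols; reading the raw count out of
the sem_t relies on the glibc/NPTL layout, where the first word holds the
current value, so treat it as a sketch):

(gdb) print GC_suspend_ack_sem
(gdb) print *(unsigned int *) &GC_suspend_ack_sem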

Thanks,
Ludo’.

* Re: git guile hangs in fluids.test
  2010-03-30 12:48     ` Ludovic Courtès
@ 2011-01-21 21:02       ` Ludovic Courtès
  2011-01-22 21:39         ` Ludovic Courtès
  0 siblings, 1 reply; 8+ messages in thread
From: Ludovic Courtès @ 2011-01-21 21:02 UTC
  To: bug-guile

Hello!

ludo@gnu.org (Ludovic Courtès) writes:

> <dsmich@roadrunner.com> writes:
>
>> Thread 1 (Thread 0x404f92f0 (LWP 14857)):
>> #0  0x4001e424 in __kernel_vsyscall ()
>> #1  0x4046c285 in sem_wait@@GLIBC_2.1 () at ../nptl/sysdeps/unix/sysv/linux/i386/i686/../i486/sem_wait.S:80
>> #2  0x40168018 in GC_stop_world () at pthread_stop_world.c:426
>
> The other thread should have called sem_post() to release this one.  Can
> you print the value of ‘GC_suspend_ack_sem’?

For the record we’ve been having this problem on Hydra[*] for a couple
of weeks and I can reproduce it using an i686 build and personality.

  [*] http://hydra.nixos.org/build/863871

It happens while running ./check-guile, somewhere between
futures.test and gc.test (when the latter calls ‘scm_gc’ for the first
time), which sounds like a race condition that makes libgc think there
are more threads than there actually are.
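
One way to check that hypothesis from the attached gdb, assuming libgc
debugging symbols are available (GC_threads is the collector's internal
registration table in pthread_support.c, so its exact shape is a libgc
implementation detail):

(gdb) info threads
(gdb) print GC_threads

Registered entries with no matching live thread in ‘info threads’ would
point to a stale registration.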

I’ll investigate more...

Ludo’.

* Re: git guile hangs in fluids.test
  2011-01-21 21:02       ` Ludovic Courtès
@ 2011-01-22 21:39         ` Ludovic Courtès
  2011-01-25 13:34           ` dsmich
  0 siblings, 1 reply; 8+ messages in thread
From: Ludovic Courtès @ 2011-01-22 21:39 UTC
  To: bug-guile

Hello!

ludo@gnu.org (Ludovic Courtès) writes:

> ludo@gnu.org (Ludovic Courtès) writes:
>
>> <dsmich@roadrunner.com> writes:
>>
>>> Thread 1 (Thread 0x404f92f0 (LWP 14857)):
>>> #0  0x4001e424 in __kernel_vsyscall ()
>>> #1  0x4046c285 in sem_wait@@GLIBC_2.1 () at ../nptl/sysdeps/unix/sysv/linux/i386/i686/../i486/sem_wait.S:80
>>> #2  0x40168018 in GC_stop_world () at pthread_stop_world.c:426
>>
>> The other thread should have called sem_post() to release this one.  Can
>> you print the value of ‘GC_suspend_ack_sem’?
>
> For the record we’ve been having this problem on Hydra[*] for a couple
> of weeks and I can reproduce it using an i686 build and personality.
>
>   http://hydra.nixos.org/build/863871
>
> It happens while running ./check-guile, somewhere in between
> futures.test and gc.test (when the latter calls ‘scm_gc’ for the first
> time), which sounds like a race condition making libgc think there are
> more threads than in actuality.

After further investigation, it turns out to be caused by the lack of
pthread_exit interception in both libgc 7.1 and 7.2alpha4, which is fixed in
current libgc CVS:

2010-08-14  Ivan Maidanski <ivmai@mail.ru> (with help from Hans Boehm)

	* include/gc_pthread_redirects.h: Test GC_PTHREADS and GC_H at the
	beginning of the file.
	* include/gc_pthread_redirects.h (GC_PTHREAD_EXIT_ATTRIBUTE): New
	macro (defined only for Linux and Solaris).
	* include/gc_pthread_redirects.h (GC_pthread_cancel,
	GC_pthread_exit): Declare new API function (only if
	GC_PTHREAD_EXIT_ATTRIBUTE).
	* include/gc_pthread_redirects.h (pthread_cancel, pthread_exit):
	Redirect (if GC_PTHREAD_EXIT_ATTRIBUTE).
	* include/private/pthread_support.h (DISABLED_GC): New macro.
	* pthread_support.c (pthread_cancel, pthread_exit): Restore
	original definition or declare "real" function (if needed and
	GC_PTHREAD_EXIT_ATTRIBUTE).
	* pthread_support.c (GC_pthread_cancel_t, GC_pthread_exit_t):
	Declare new types if needed.
	* pthread_support.c (GC_pthread_cancel, GC_pthread_exit): New
	function definition (only if GC_PTHREAD_EXIT_ATTRIBUTE).
	* pthread_support.c (GC_init_real_syms): Initialize pointers to
	the "real" pthread_cancel and pthread_exit (only if
	GC_PTHREAD_EXIT_ATTRIBUTE).
	* pthread_support.c (GC_unregister_my_thread): Enable collections
	if DISABLED_GC was set (only if GC_PTHREAD_EXIT_ATTRIBUTE).
	* pthread_support.c (pthread_cancel, pthread_exit): New wrapped
	function definition (only if GC_PTHREAD_EXIT_ATTRIBUTE defined).
	* pthread_support.c (GC_start_routine): Refine the comment.
	* extra/threadlibs.c (main): Adjust --wrap (add "read",
	"pthread_exit", "pthread_cancel" but remove "sleep").
	* doc/README.linux (GC_USE_LD_WRAP): Ditto.
	* doc/README.linux: Expand all tabs to spaces; remove trailing
	spaces at EOLn.

Initially discussed at
<http://thread.gmane.org/gmane.comp.programming.garbage-collection.boehmgc/4023>.
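
To make the failure mode concrete, here is a minimal C sketch.  It is not
libgc's actual code: the scenario is my reading of the ChangeLog above, the
header path and build line follow a typical libgc installation, and whether
this toy program ever hangs depends on the libgc version and on timing.

/* gcc sketch.c -lgc -lpthread */
#define GC_PTHREADS 1
#include <gc/gc.h>    /* with GC_PTHREADS this pulls in gc_pthread_redirects.h,
                         which wraps pthread_create and friends; before the fix
                         above it did not wrap pthread_exit */
#include <pthread.h>
#include <stdio.h>

static void *
worker (void *arg)
{
  (void) arg;
  /* Terminating through a raw pthread_exit() can race with the collector's
     bookkeeping: the thread may still be in libgc's thread table at the next
     GC_stop_world() while no longer able to run the suspend handler that
     posts GC_suspend_ack_sem.  With the fix, pthread_exit is redirected to
     GC_pthread_exit, which coordinates with the collector first.  */
  pthread_exit (NULL);
}

int
main (void)
{
  pthread_t t;

  GC_INIT ();
  pthread_create (&t, NULL, worker, NULL);  /* redirected to GC_pthread_create */
  pthread_join (t, NULL);                   /* redirected to GC_pthread_join */

  /* GC_gcollect() eventually calls GC_stop_world(), which signals every
     registered thread and waits on GC_suspend_ack_sem for each
     acknowledgement; an entry that never answers is exactly the sem_wait()
     backtrace shown earlier in this thread.  */
  GC_gcollect ();
  puts ("collected");
  return 0;
}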

Thanks,
Ludo’.

* Re: git guile hangs in fluids.test
  2011-01-22 21:39         ` Ludovic Courtès
@ 2011-01-25 13:34           ` dsmich
  0 siblings, 0 replies; 8+ messages in thread
From: dsmich @ 2011-01-25 13:34 UTC
  To: Ludovic Courtès, bug-guile

---- "Ludovic Courtès" <ludo@gnu.org> wrote: 
> Hello!
> 
> ludo@gnu.org (Ludovic Courtès) writes:
> 
> > ludo@gnu.org (Ludovic Courtès) writes:
> >
> >> <dsmich@roadrunner.com> writes:
> >>
> >>> Thread 1 (Thread 0x404f92f0 (LWP 14857)):
> >>> #0  0x4001e424 in __kernel_vsyscall ()
> >>> #1  0x4046c285 in sem_wait@@GLIBC_2.1 () at ../nptl/sysdeps/unix/sysv/linux/i386/i686/../i486/sem_wait.S:80
> >>> #2  0x40168018 in GC_stop_world () at pthread_stop_world.c:426
> >>
> >> The other thread should have called sem_post() to release this one.  Can
> >> you print the value of ‘GC_suspend_ack_sem’?
> >
> > For the record we’ve been having this problem on Hydra[*] for a couple
> > of weeks and I can reproduce it using an i686 build and personality.
> >
> >   http://hydra.nixos.org/build/863871
> >
> > It happens while running ./check-guile, somewhere in between
> > futures.test and gc.test (when the latter calls ‘scm_gc’ for the first
> > time), which sounds like a race condition making libgc think there are
> > more threads than in actuality.
> 
> After further investigation, it turns out to be due to the lack of
> pthread_exit interception in both 7.1 and 7.2alpha4, which is fixed in
> current CVS:

Yes!  I can confirm that fluids.test and threads.test (which was also hanging) no longer hang.

I am now getting a failure in popen.test, however.  More details to come.

Now if only they can get a libgc release out!  And if it can advance beyond Debian Experimental!

Thanks!
  -Dale
