Ludovic Courtès writes:

> I’m very much convinced by the patch.  Yet it bothers me that I cannot
> reproduce the problem.  I tried first with this test, which attempts to
> reproduce what you describe in the commit log above:

> But there’s no deadlock, and I think that’s because the problem we’re
> seeing has to do with substitute goals, and there’s no such goal here.

The problem we've been seeing in the wild has to do with substitute goals, yes, but the same problem also exists for derivation goals.

Starting the 3 builds at basically the same time seems to be a bit too "nice", given that some effort has already been made to make the scheduler more deterministic, and that they all use the exact same mechanism for deciding how long to wait before retrying the locks. Also, with only 3 inputs, even if the assignment of processes to output path roles (e.g. builds, sits-on, blocked) were completely random, only 2 out of 8 possible assignments to the latter two roles would result in a deadlock. While 3 inputs makes for a nice, simple demonstration of the problem, reliably recreating it will probably require more. Also, if I understand correctly, the issue with destructors not being run in a timely manner only affects top-level goals.

I've managed to create a deadlock using a derivation with 10 inputs, passing all of the inputs and the dependent derivation as top-level derivations to build-things. I've also changed the duration of each input derivation build from 3 seconds to 4 and added a 1-second sleep between starting each thread. I have only seen this arrangement fail to create a deadlock once, but for good measure, I've subsequently bumped the number of inputs up to 15. Note that this means this test will require 23 seconds to pass (3 daemon processes, 1 job per daemon process, 4 seconds per build round, 3 jobs per build round, 15 jobs in 5 rounds = 20 seconds, plus 3 seconds for the dependent derivation).
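For what it's worth, the 23-second figure can be sanity-checked with a quick back-of-the-envelope computation (plain Guile, nothing Guix-specific; the variable names are just for illustration):

--8<---------------cut here---------------start------------->8---
;; Rough duration check for the figures above: 3 daemons with 1 job
;; each gives 3 concurrent builds per round, so 15 inputs take
;; ceiling(15/3) = 5 rounds of 4 seconds, plus 3 seconds for the
;; dependent derivation.
(let* ((inputs 15)
       (concurrent-jobs 3)              ; 3 daemons, 1 job each
       (round-seconds 4)
       (top-seconds 3)
       (rounds (ceiling (/ inputs concurrent-jobs))))
  (+ (* rounds round-seconds) top-seconds))
;; => 23
--8<---------------cut here---------------end--------------->8---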
I also had to set #:verbosity 3 instead of 4 because I kept getting encoding errors that killed one of my threads. These are most puzzling because they occur even after changing everything to just use (current-error-port) and making sure to run (set-port-conversion-strategy! (current-error-port) 'escape). I suspect there may be some underlying bug in guix or guile. Also, I don't have make-custom-textual-output-port here; it appears to only be in guile-next.

Here is the reproducer:

--8<---------------cut here---------------start------------->8---
;; https://issues.guix.gnu.org/31785
(use-modules (guix)
             ((gnu packages) #:select (specification->package))
             (srfi srfi-1)
             (ice-9 threads)
             (ice-9 match)
             (ice-9 textual-ports))

(define (nonce)
  (logxor (car (gettimeofday))
          (cdr (gettimeofday))
          (getpid)))

(define input-drvs
  (map (lambda (n)
         (computed-file (string-append "drv" (number->string n))
                        #~(begin
                            #$(nonce)
                            (sleep 4)
                            (mkdir #$output))))
       (iota 15)))

(define top-drv
  (computed-file "top-drv"
                 #~(begin
                     #$(nonce)
                     (sleep 3)
                     (pk 'deps: #$@input-drvs)
                     (mkdir #$output))))

(%graft? #f)

(let* ((drvs (cons top-drv input-drvs))
       (builder (lambda (name lst)
                  (call-with-new-thread
                   (lambda ()
                     (with-store store
                       (set-build-options store
                                          #:verbosity 3
                                          #:max-build-jobs 1)
                       (run-with-store store
                         (mlet %store-monad ((lst (mapm %store-monad
                                                        lower-object
                                                        lst)))
                           (built-derivations lst))))))))
       (thread1 (begin (sleep 1) (builder 'thread1 drvs)))
       (thread2 (begin (sleep 1) (builder 'thread2 drvs)))
       (thread3 (begin (sleep 1) (builder 'thread3 drvs))))
  (join-thread thread1)
  (join-thread thread2)
  (join-thread thread3))
--8<---------------cut here---------------end--------------->8---

P.S.: If, in attempting to turn this into a proper test, you try using the timeout argument to join-thread, be aware that a second attempt at calling join-thread on the same thread will fail with "In procedure lock-mutex: mutex already locked by thread".
This is because join-thread in (ice-9 threads) has a bug: unlock-mutex is not called in the "else" case of the cond. I am mentioning this here in case I forget to file a proper report of it.

- reepca
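P.P.S.: In case it helps whoever looks at this, here is a minimal sketch of the buggy shape as I understand it. This is a paraphrase, NOT the actual (ice-9 threads) source; the names and branch bodies are made up for illustration, only the lock/unlock pattern matters:

--8<---------------cut here---------------start------------->8---
;; Illustrative sketch of the pattern: a cond under lock-mutex where
;; one branch returns without releasing the mutex.
(define (sketchy-join mutex cv thread-exited? timeout timeout-val)
  (lock-mutex mutex)
  (cond ((thread-exited?)
         (unlock-mutex mutex)           ; unlocked on this path...
         'joined)
        ((not (wait-condition-variable cv mutex timeout))
         (unlock-mutex mutex)           ; ...and on timeout...
         timeout-val)
        (else
         ;; ...but not here: the mutex stays locked by this thread,
         ;; so a second join-thread on the same thread fails with
         ;; "mutex already locked by thread".
         'joined)))
--8<---------------cut here---------------end--------------->8---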