unofficial mirror of bug-guix@gnu.org 
* bug#24496: offloading should fall back to local build after n tries
@ 2016-09-21  9:39 ng0
  2016-09-26  9:20 ` Ludovic Courtès
  0 siblings, 1 reply; 9+ messages in thread
From: ng0 @ 2016-09-21  9:39 UTC (permalink / raw)
  To: 24496

When I forgot that my build machine was offline and did not pass
--no-build-hook, offloading kept retrying forever; I had to cancel the
build, boot the build machine, and start the build again.

A solution could be a config option, or default behavior, that gives up
after failing to offload n times and falls back to the local builder.

Is this desired at all? Setups like Hydra could run into problems, but
for small setups with the same architecture there could be a solution
beyond --no-build-hook.
-- 
              ng0


* bug#24496: offloading should fall back to local build after n tries
  2016-09-21  9:39 bug#24496: offloading should fall back to local build after n tries ng0
@ 2016-09-26  9:20 ` Ludovic Courtès
  2016-10-04 17:08   ` ng0
  2021-12-16 12:52   ` zimoun
  0 siblings, 2 replies; 9+ messages in thread
From: Ludovic Courtès @ 2016-09-26  9:20 UTC (permalink / raw)
  To: ng0; +Cc: 24496

Hello!

ng0 <ngillmann@runbox.com> skribis:

> When I forgot that my build machine was offline and did not pass
> --no-build-hook, offloading kept retrying forever; I had to cancel the
> build, boot the build machine, and start the build again.
>
> A solution could be a config option, or default behavior, that gives up
> after failing to offload n times and falls back to the local builder.
>
> Is this desired at all? Setups like Hydra could run into problems, but
> for small setups with the same architecture there could be a solution
> beyond --no-build-hook.

Like you say, on a Hydra-style setup this could be a problem: the
front-end machine may have --max-jobs=0, meaning that it cannot perform
builds on its own.

So I guess we would need a command-line option to select a different
behavior.  I’m not sure how to do that because ‘guix offload’ is
“hidden” behind ‘guix-daemon’, so there’s no obvious place for such an
option.

In the meantime, you could also hack up your machines.scm: it would
return a list where unreachable machines have been filtered out.
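
For context, machines.scm is simply a Scheme file that evaluates to a
list of ‘build-machine’ objects, so the “hack” amounts to computing that
list instead of writing it out literally.  A minimal example of the
plain file (host name, user, and speed are placeholders, and the exact
set of fields varies between Guix versions):

  (list (build-machine
          (name "build-box.example.org")
          (system "x86_64-linux")
          (user "offload")
          (speed 1.5)))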

Ludo’.


* bug#24496: offloading should fall back to local build after n tries
  2016-09-26  9:20 ` Ludovic Courtès
@ 2016-10-04 17:08   ` ng0
  2016-10-05 11:36     ` Ludovic Courtès
  2021-12-16 12:52   ` zimoun
  1 sibling, 1 reply; 9+ messages in thread
From: ng0 @ 2016-10-04 17:08 UTC (permalink / raw)
  To: Ludovic Courtès; +Cc: 24496

Ludovic Courtès <ludo@gnu.org> writes:

> Hello!
>
> ng0 <ngillmann@runbox.com> skribis:
>
>> When I forgot that my build machine was offline and did not pass
>> --no-build-hook, offloading kept retrying forever; I had to cancel the
>> build, boot the build machine, and start the build again.
>>
>> A solution could be a config option, or default behavior, that gives up
>> after failing to offload n times and falls back to the local builder.
>>
>> Is this desired at all? Setups like Hydra could run into problems, but
>> for small setups with the same architecture there could be a solution
>> beyond --no-build-hook.
>
> Like you say, on a Hydra-style setup this could be a problem: the
> front-end machine may have --max-jobs=0, meaning that it cannot perform
> builds on its own.
>
> So I guess we would need a command-line option to select a different
> behavior.  I’m not sure how to do that because ‘guix offload’ is
> “hidden” behind ‘guix-daemon’, so there’s no obvious place for such an
> option.

Could the daemon run with --enable-hydra-style or --disable-hydra-style,
where --disable-hydra-style would allow falling back to a local build if
the machine did not reply after a defined time (keeping slow connections
in mind)?

> In the meantime, you could also hack up your machines.scm: it would
> return a list where unreachable machines have been filtered out.

How can I achieve this?

And to append to this bug: it seems to me that offloading requires one
lsh key for each build machine
(https://lists.gnu.org/archive/html/help-guix/2016-10/msg00007.html),
and that you cannot address them directly. Say I want to set up a
system where I build on machine 1 AND machine 2: having two x86_64
machines in machines.scm only selects one of them (if both were
working, see the linked thread) and builds on whichever is accessible
first. If, however, the first machine is somehow blocked and the
attempt fails, terminating the lsh connection, the build does not
happen at all.

Leaving the problems aside, what I want to do, in short: how could I
build on both systems at the same time when I want to?

> Ludo’.
>

-- 


* bug#24496: offloading should fall back to local build after n tries
  2016-10-04 17:08   ` ng0
@ 2016-10-05 11:36     ` Ludovic Courtès
  0 siblings, 0 replies; 9+ messages in thread
From: Ludovic Courtès @ 2016-10-05 11:36 UTC (permalink / raw)
  To: ng0; +Cc: 24496

ng0 <ngillmann@runbox.com> skribis:

> Ludovic Courtès <ludo@gnu.org> writes:

[...]

>> Like you say, on a Hydra-style setup this could be a problem: the
>> front-end machine may have --max-jobs=0, meaning that it cannot perform
>> builds on its own.
>>
>> So I guess we would need a command-line option to select a different
>> behavior.  I’m not sure how to do that because ‘guix offload’ is
>> “hidden” behind ‘guix-daemon’, so there’s no obvious place for such an
>> option.
>
> Could the daemon run with --enable-hydra-style or --disable-hydra-style,
> where --disable-hydra-style would allow falling back to a local build if
> the machine did not reply after a defined time (keeping slow connections
> in mind)?

That would be too ad-hoc IMO, and the problem mentioned above remains.

>> In the meantime, you could also hack up your machines.scm: it would
>> return a list where unreachable machines have been filtered out.
>
> How can I achieve this?

Something like:

  (define the-machine (build-machine …))

  (if (managed-to-connect-timely the-machine)
      (list the-machine)
      '())

… where ‘managed-to-connect-timely’ would try to connect to the
machine with a timeout.
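
For instance, one possible shape for that hypothetical
‘managed-to-connect-timely’ is a plain TCP probe of the machine’s SSH
port.  Everything below is an illustration, not existing Guix code: the
5-second timeout and port 22 are arbitrary choices, and it assumes the
‘build-machine-name’ accessor is visible where machines.scm is loaded.

  (define (managed-to-connect-timely machine)
    "Return #t if MACHINE accepts a connection on port 22 within 5 seconds."
    (catch #t
      (lambda ()
        (let* ((ai   (car (getaddrinfo (build-machine-name machine) "22")))
               (sock (socket (addrinfo:fam ai)
                             (addrinfo:socktype ai)
                             (addrinfo:protocol ai))))
          ;; Use a non-blocking connect so ‘select’ can enforce the timeout.
          (fcntl sock F_SETFL (logior O_NONBLOCK (fcntl sock F_GETFL)))
          (catch 'system-error
            (lambda () (connect sock (addrinfo:addr ai)))
            (lambda args
              (unless (= EINPROGRESS (system-error-errno args))
                (apply throw args))))
          (let ((writable (cadr (select '() (list sock) '() 5))))
            (close-port sock)
            ;; Writable within 5 seconds means the connect completed; a
            ;; stricter version would also check SO_ERROR here.
            (pair? writable))))
      (const #f)))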

> And to append to this bug: it seems to me that offloading requires one
> lsh key for each build machine.

The main machine needs to be able to connect to each build machine over
SSH, so indeed, that requires proper SSH key registration (host keys and
authorized user keys).

> (https://lists.gnu.org/archive/html/help-guix/2016-10/msg00007.html),
> and that you cannot address them directly. Say I want to set up a
> system where I build on machine 1 AND machine 2: having two x86_64
> machines in machines.scm only selects one of them (if both were
> working, see the linked thread) and builds on whichever is accessible
> first. If, however, the first machine is somehow blocked and the
> attempt fails, terminating the lsh connection, the build does not
> happen at all.

The code that selects machines is in (guix scripts offload),
specifically ‘choose-build-machine’.  It tries to choose the “best”
machine, which means, roughly, the fastest and least loaded one.
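
In outline, the strategy is: sort the machines from fastest to slowest,
then walk down the list until one passes the suitability checks
(reachability, load, free disk space).  Here is a self-contained
paraphrase of that loop, not the actual code from (guix scripts
offload); ‘faster?’ and ‘suitable?’ stand in for the real
‘machine-faster?’ ordering and the real checks:

  (use-modules (ice-9 match))

  (define (choose-first-suitable machines faster? suitable?)
    "Return the first machine in FASTER? order that satisfies SUITABLE?, or #f."
    (let loop ((candidates (sort machines faster?)))
      (match candidates
        (() #f)                      ;no suitable machine: no offloading
        ((best . others)
         (if (suitable? best)
             best
             (loop others))))))

  ;; Toy usage, with numbers standing in for machines: prefer the
  ;; largest value that is still below 5.
  (choose-first-suitable '(1 7 3 9 4) > (lambda (m) (< m 5)))
  ;; ⇒ 4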

HTH,
Ludo’.


* bug#24496: offloading should fall back to local build after n tries
  2016-09-26  9:20 ` Ludovic Courtès
  2016-10-04 17:08   ` ng0
@ 2021-12-16 12:52   ` zimoun
  2021-12-17 15:33     ` Ludovic Courtès
  1 sibling, 1 reply; 9+ messages in thread
From: zimoun @ 2021-12-16 12:52 UTC (permalink / raw)
  To: Ludovic Courtès; +Cc: 24496, ng0

Hi,

I am just hitting this old bug#24496 [1].

On Mon, 26 Sep 2016 at 18:20, ludo@gnu.org (Ludovic Courtès) wrote:
> ng0 <ngillmann@runbox.com> skribis:
>
>> When I forgot that my build machine was offline and did not pass
>> --no-build-hook, offloading kept retrying forever; I had to cancel the
>> build, boot the build machine, and start the build again.

[...]

> Like you say, on a Hydra-style setup this could be a problem: the
> front-end machine may have --max-jobs=0, meaning that it cannot perform
> builds on its own.
>
> So I guess we would need a command-line option to select a different
> behavior.  I’m not sure how to do that because ‘guix offload’ is
> “hidden” behind ‘guix-daemon’, so there’s no obvious place for such an
> option.

When the build machine used for offloading is offline and the master
daemon runs with --max-jobs=0, I would expect X tries (each ending in a
timeout) and then a failure with a hint, where X is defined by the
user.  WDYT?
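
In other words, something along these lines (a sketch only: ‘try-offload’
stands for a single offload attempt, and no such retry knob exists in
Guix today):

--8<---------------cut here---------------start------------->8---
(define (offload-with-bounded-retries try-offload max-attempts)
  "Call TRY-OFFLOAD, a thunk returning true on success, up to MAX-ATTEMPTS
times; give up with an error and a hint once the attempts are exhausted."
  (let loop ((attempt 1))
    (or (try-offload)
        (if (< attempt max-attempts)
            (loop (+ attempt 1))
            (error "offloading failed after this many attempts:"
                   max-attempts)))))
--8<---------------cut here---------------end--------------->8---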


> In the meantime, you could also hack up your machines.scm: it would
> return a list where unreachable machines have been filtered out.

Maybe, this could be done by “guix offload”.


Cheers,
simon


1: <http://issues.guix.gnu.org/issue/24496>

* bug#24496: offloading should fall back to local build after n tries
  2021-12-16 12:52   ` zimoun
@ 2021-12-17 15:33     ` Ludovic Courtès
  2021-12-17 21:57       ` Maxim Cournoyer
  0 siblings, 1 reply; 9+ messages in thread
From: Ludovic Courtès @ 2021-12-17 15:33 UTC (permalink / raw)
  To: zimoun; +Cc: 24496, Maxim Cournoyer, ng0

Hi!

zimoun <zimon.toutoune@gmail.com> skribis:

> I am just hitting this old bug#24496 [1].
>
> On Mon, 26 Sep 2016 at 18:20, ludo@gnu.org (Ludovic Courtès) wrote:
>> ng0 <ngillmann@runbox.com> skribis:
>>
>>> When I forgot that my build machine was offline and did not pass
>>> --no-build-hook, offloading kept retrying forever; I had to cancel the
>>> build, boot the build machine, and start the build again.
>
> [...]
>
>> Like you say, on a Hydra-style setup this could be a problem: the
>> front-end machine may have --max-jobs=0, meaning that it cannot perform
>> builds on its own.
>>
>> So I guess we would need a command-line option to select a different
>> behavior.  I’m not sure how to do that because ‘guix offload’ is
>> “hidden” behind ‘guix-daemon’, so there’s no obvious place for such an
>> option.
>
> When the build machine used for offloading is offline and the master
> daemon runs with --max-jobs=0, I would expect X tries (each ending in a
> timeout) and then a failure with a hint, where X is defined by the
> user.  WDYT?
>
>
>> In the meantime, you could also hack up your machines.scm: it would
>> return a list where unreachable machines have been filtered out.
>
> Maybe, this could be done by “guix offload”.

Prior to commit efbf5fdd01817ea75de369e3dd2761a85f8f7dd5, this was the
case: an unreachable machine would have ‘machine-load’ return +inf.0,
and so it would be discarded from the list of candidates.

However, I think this behavior was unintentionally lost in
efbf5fdd01817ea75de369e3dd2761a85f8f7dd5.  Maxim, WDYT?

Thanks,
Ludo’.

* bug#24496: offloading should fall back to local build after n tries
  2021-12-17 15:33     ` Ludovic Courtès
@ 2021-12-17 21:57       ` Maxim Cournoyer
  2021-12-18  0:10         ` zimoun
  2021-12-21 14:28         ` Ludovic Courtès
  0 siblings, 2 replies; 9+ messages in thread
From: Maxim Cournoyer @ 2021-12-17 21:57 UTC (permalink / raw)
  To: Ludovic Courtès; +Cc: 24496, ng0

Hello Ludovic,

Ludovic Courtès <ludo@gnu.org> writes:

> Hi!
>
> zimoun <zimon.toutoune@gmail.com> skribis:
>
>> I am just hitting this old bug#24496 [1].
>>
>> On Mon, 26 Sep 2016 at 18:20, ludo@gnu.org (Ludovic Courtès) wrote:
>>> ng0 <ngillmann@runbox.com> skribis:
>>>
>>>> When I forgot that my build machine was offline and did not pass
>>>> --no-build-hook, offloading kept retrying forever; I had to cancel the
>>>> build, boot the build machine, and start the build again.
>>
>> [...]
>>
>>> Like you say, on a Hydra-style setup this could be a problem: the
>>> front-end machine may have --max-jobs=0, meaning that it cannot perform
>>> builds on its own.
>>>
>>> So I guess we would need a command-line option to select a different
>>> behavior.  I’m not sure how to do that because ‘guix offload’ is
>>> “hidden” behind ‘guix-daemon’, so there’s no obvious place for such an
>>> option.
>>
>> When the build machine used for offloading is offline and the master
>> daemon runs with --max-jobs=0, I would expect X tries (each ending in a
>> timeout) and then a failure with a hint, where X is defined by the
>> user.  WDYT?
>>
>>
>>> In the meantime, you could also hack up your machines.scm: it would
>>> return a list where unreachable machines have been filtered out.
>>
>> Maybe, this could be done by “guix offload”.
>
> Prior to commit efbf5fdd01817ea75de369e3dd2761a85f8f7dd5, this was the
> case: an unreachable machine would have ‘machine-load’ return +inf.0,
> and so it would be discarded from the list of candidates.
>
> However, I think this behavior was unintentionally lost in
> efbf5fdd01817ea75de369e3dd2761a85f8f7dd5.  Maxim, WDYT?

I just reviewed this commit, and don't see anywhere where the behavior
would have changed.  The discarding happens here:

--8<---------------cut here---------------start------------->8---
-         (if (and node (< load 2.) (>= space %minimum-disk-space))
+         (if (and node
+                  (or (not threshold) (< load threshold))
+                  (>= space %minimum-disk-space))
--8<---------------cut here---------------end--------------->8---

Previously, load could be set to +inf.0.  Now it is a float between 0.0
and 1.0, with the threshold defaulting to 0.6.
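
Concretely, plugging illustrative numbers into the old and new tests
from the hunk above (0.45 and 0.80 are made-up load values):

--8<---------------cut here---------------start------------->8---
(< +inf.0 2.)   ; old test, unreachable machine ⇒ #f, machine discarded
(< 0.45 0.6)    ; new test, default threshold   ⇒ #t, machine kept
(< 0.80 0.6)    ; new test, busy machine        ⇒ #f, machine skipped
--8<---------------cut here---------------end--------------->8---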

As far as I remember, this has always been a problem for me (busy
offload machines being forever retried with no fallback to the local
machine).

Thanks,

Maxim

* bug#24496: offloading should fall back to local build after n tries
  2021-12-17 21:57       ` Maxim Cournoyer
@ 2021-12-18  0:10         ` zimoun
  2021-12-21 14:28         ` Ludovic Courtès
  1 sibling, 0 replies; 9+ messages in thread
From: zimoun @ 2021-12-18  0:10 UTC (permalink / raw)
  To: Maxim Cournoyer, Ludovic Courtès; +Cc: 24496, ng0

Hi,

I have not checked all the details, since the code of “guix offload” is
run by root, IIUC, and so it is not as easy to debug as usual. :-)

On Fri, 17 Dec 2021 at 16:57, Maxim Cournoyer <maxim.cournoyer@gmail.com> wrote:

>> However, I think this behavior was unintentionally lost in
>> efbf5fdd01817ea75de369e3dd2761a85f8f7dd5.  Maxim, WDYT?
>
> I just reviewed this commit, and don't see anywhere where the behavior
> would have changed.  The discarding happens here:

[...]

> previously load could be set to +inf.0.  Now it is a float between 0.0
> and 1.0, with threshold defaulting to 0.6.

My /etc/guix/machines.scm contains only one machine, and the daemon runs
with --max-jobs=0.

Because the machine is unreachable, IIUC, ’node’ is (or should be) false
and ’load’ is thus not involved, I guess.  Indeed, ’report-load’
displays nothing, and instead I get:

--8<---------------cut here---------------start------------->8---
The following derivation will be built:
   /gnu/store/c1qicg17ygn1a0biq0q4mkprzy4p2x74-hello-2.10.drv
process 75621 acquired build slot '/var/guix/offload/x.x.x.x:22/0'
guix offload: error: failed to connect to 'x.x.x.x': Timeout connecting to x.x.x.x
waiting for locks or build slots...
process 75621 acquired build slot '/var/guix/offload/x.x.x.x:22/0'
guix offload: error: failed to connect to 'x.x.x.x': Timeout connecting to x.x.x.x
process 75621 acquired build slot '/var/guix/offload/x.x.x.x:22/0'
guix offload: error: failed to connect to 'x.x.x.x': Timeout connecting to x.x.x.x
process 75621 acquired build slot '/var/guix/offload/x.x.x.x:22/0'
guix offload: error: failed to connect to 'x.x.x.x': Timeout connecting to x.x.x.x
process 75621 acquired build slot '/var/guix/offload/x.x.x.x:22/0'
  C-c C-c
--8<---------------cut here---------------end--------------->8---


Well, if the machine is not reachable, then ’session’ is false, right?

--8<---------------cut here---------------start------------->8---
@@ -472,11 +480,15 @@ (define (machine-faster? m1 m2)
        (let* ((session (false-if-exception (open-ssh-session best
                                                              %short-timeout)))
               (node    (and session (remote-inferior session)))
-              (load    (and node (normalized-load best (node-load node))))
+              (load    (and node (node-load node)))
+              (threshold (build-machine-overload-threshold best))
               (space   (and node (node-free-disk-space node))))
+         (when load (report-load best load))
          (when node (close-inferior node))
          (when session (disconnect! session))
-         (if (and node (< load 2.) (>= space %minimum-disk-space))
+         (if (and node
+                  (or (not threshold) (< load threshold))
+                  (>= space %minimum-disk-space))
[...]
             (begin
               ;; BEST is unsuitable, so try the next one.
               (when (and space (< space %minimum-disk-space))
                 (format (current-error-port)
                         "skipping machine '~a' because it is low \
on disk space (~,2f MiB free)~%"
                         (build-machine-name best)
                         (/ space (expt 2 20) 1.)))
               (release-build-slot slot)
               (loop others)))))
--8<---------------cut here---------------end--------------->8---

Therefore, the ’else’ branch is taken, and so the code does ’(loop others)’.

However, I do not see why ’others’ is not empty (there is only one
machine in /etc/guix/machines.scm).  Well, the message «waiting for
locks or build slots...» suggests that something is being restarted,
and that it is not that ’loop’ we are observing but another one.

On the daemon side, I do not know what ’waitingForAWhile’ and
’lastWokenUp’ mean.

--8<---------------cut here---------------start------------->8---
    /* If we are polling goals that are waiting for a lock, then wake
       up after a few seconds at most. */
    if (!waitingForAWhile.empty()) {
        useTimeout = true;
        if (lastWokenUp == 0)
            printMsg(lvlError, "waiting for locks or build slots...");
        if (lastWokenUp == 0 || lastWokenUp > before) lastWokenUp = before;
        timeout.tv_sec = std::max((time_t) 1, (time_t) (lastWokenUp + settings.pollInterval - before));
    } else lastWokenUp = 0;
--8<---------------cut here---------------end--------------->8---


Bah, it requires more investigation, and I agree with Maxim that
efbf5fdd01817ea75de369e3dd2761a85f8f7dd5 is probably not the issue
here.

Cheers,
simon

* bug#24496: offloading should fall back to local build after n tries
  2021-12-17 21:57       ` Maxim Cournoyer
  2021-12-18  0:10         ` zimoun
@ 2021-12-21 14:28         ` Ludovic Courtès
  1 sibling, 0 replies; 9+ messages in thread
From: Ludovic Courtès @ 2021-12-21 14:28 UTC (permalink / raw)
  To: Maxim Cournoyer; +Cc: 24496, ng0

Hi,

Maxim Cournoyer <maxim.cournoyer@gmail.com> skribis:

> I just reviewed this commit, and don't see anywhere where the behavior
> would have changed.  The discarding happens here:
>
> -         (if (and node (< load 2.) (>= space %minimum-disk-space))
> +         (if (and node
> +                  (or (not threshold) (< load threshold))
> +                  (>= space %minimum-disk-space))
>
> previously load could be set to +inf.0.  Now it is a float between 0.0
> and 1.0, with threshold defaulting to 0.6.

Ah alright, so we’re fine.

> As far as I remember, this has always been a problem for me (busy
> offload machines being forever retried with no fallback to the local
> machine).

OK, I guess I’m overlooking something.

Thanks,
Ludo’.
