* Proposal: Build timers
From: Jacob Hrbek @ 2021-11-22 22:02 UTC
To: guix-devel@gnu.org
See the proposal in https://git.dotya.ml/guix.next/GUIX.next/issues/5
-- Jacob "Kreyren" Hrbek
Sent with ProtonMail Secure Email.
* Re: Proposal: Build timers
From: zimoun @ 2021-11-23 1:06 UTC
To: Jacob Hrbek, guix-devel@gnu.org
Hi,
On Mon, 22 Nov 2021 at 22:02, Jacob Hrbek <kreyren@rixotstudio.cz> wrote:
> See the proposal in https://git.dotya.ml/guix.next/GUIX.next/issues/5
If I understand your proposal correctly, you are suggesting to attach a
’build time’ value to packages. While I understand it could be useful
for monitoring, especially CI (Cuirass or the Build Coordinator), it
appears useless for your use case. Where do you want to attach such a
value? I do not think it is affordable to add another field (or
’properties’) to all the packages.
When discussing the Cuirass revamp, it had been mentioned to grab the
Cuirass database and then try some analytics to infer heuristics helping
toward a better scheduling strategy. However, the task is not as easy as
it appears at first. Some builds are blocked by unrelated IO operations,
e.g., [1], thus it is hard to distinguish between a regression and
something unexpected in the build farms. That is useful for monitoring,
but hard to exploit for local builds, IMHO. In other words, the
“telemetry“ you are suggesting requires non-trivial filtering to provide
the robust feature you expect, again IMHO.
Last, build time depends on the environment (how stressed the machine
is), and, for instance, I do not want a build to be stopped because on
average people build it in X time while my machine builds it today in
X+y time (because it is CPU-stressed by something else or whatever).
Moreover, I doubt that the standard error would be small compared to the
mean; in other words, my guess is a flat Gaussian, because of
heterogeneous hardware and various levels of stress for the same build.
To be explicit, I do not think Guix should take care of this. In my
opinion, if the build farm does not have the substitute (see “guix
weather”), it is a bad sign for building the package locally; therefore,
if resources are limited, I would inspect the output of the build farms
(ci.guix.gnu.org and bordeaux.guix.gnu.org) before building locally.
Obviously, it depends on your target architecture, and some are poorly
supported, sadly.
1: <http://issues.guix.gnu.org/issue/51787>
Cheers,
simon
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Proposal: Build timers
From: Jacob Hrbek @ 2021-11-23 6:21 UTC
To: zimoun; +Cc: guix-devel@gnu.org
I think you are overcomplicating the implementation. What I am proposing
is to record the time before and after the build and then log the
difference between these two values per package (or even per build
phase).
For storage it can be either or both of:
1. Locally: store the value somewhere on the system and fold each new
build into it to provide a more accurate average.
**optionally** this local database can be shared across multiple systems
that add values to it, e.g. a simple listener waiting for POST requests.
2. Within the guix repo: since we are already building the package, we
can measure the time and then reverse the math below to estimate the
time on another system:
Build took 100 sec on a system with 8 threads at 2.4 GHz max CPU frequency:
100 * (2.4 * 8) = 1920 (build-time value per one thread at 1 GHz)
Building the package on a system with 2 threads at 2.4 GHz max CPU frequency:
1920 / (2 * 2.4) = 400
We can then expect the build to take about 400 seconds, i.e.
400/100 = 4 times longer on this system.
The math might need to be adjusted, but it seems sufficiently accurate
in my testing. FWIW, I see `(max CPU frequency * CPU threads)` as an
important component of the equation, using the analogy of a (possibly
silly) "pokemon battle": interpret the equation as defining the Health
Points of the package and the damage per second of the CPU, then
repeatedly subtract the CPU's damage from the package's HP to determine
how long the build will take. For example, a package with 500 HP needs a
CPU that deals 100 HP per second to complete in 5 sec, or a CPU that
deals 50 HP per second to finish in 10 sec.
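A minimal Python sketch of this naive scaling rule (illustrative only;
none of these helpers exist in Guix, and the numbers are the ones from
the example above):

--8<---------------cut here---------------start------------->8---
def build_time_value(build_seconds, threads, max_ghz):
    """Normalize an observed build time to a machine-independent 'HP' value."""
    return build_seconds * (threads * max_ghz)

def estimate_build_seconds(time_value, threads, max_ghz):
    """Estimate the wall-clock time on another machine from the stored value."""
    return time_value / (threads * max_ghz)

# Reference build: 100 s on 8 threads at 2.4 GHz.
hp = build_time_value(100, 8, 2.4)            # 1920
print(estimate_build_seconds(hp, 2, 2.4))     # 400.0 s, i.e. 4x longer
--8<---------------cut here---------------end--------------->8---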
About accuracy: I highly doubt that we need to worry about system noise,
as it will be averaged out once enough systems submit their average
build time along with their max CPU frequency and the threads used. We
shouldn't really bother beyond that at the current stage, and we can
always add additional metadata if needed.
Either way, if we decide not to implement logging of the build time, I
would still argue that we should consider plainly printing the time
needed to complete each phase and each package, as I see it as a best
practice for monitoring, and I assume it is very cheap (5~20 CPU
cycles?) in terms of processing resources.
-- Jacob "Kreyren" Hrbek
Sent with ProtonMail Secure Email.
* Re: Proposal: Build timers
From: zimoun @ 2021-11-23 11:56 UTC
To: Jacob Hrbek; +Cc: guix-devel@gnu.org
Hi,
On Tue, 23 Nov 2021 at 06:21, Jacob Hrbek <kreyren@rixotstudio.cz> wrote:
> 1. locally: Storing the value somewhere on the system and adding up to
> it each build to provide more accurate average.
Timings are already stored; see “guix build --log-file”. Have a look at
’/var/log/guix/drvs’. For instance,
--8<---------------cut here---------------start------------->8---
$ bzcat /var/log/guix/drvs/aq/abymi9yk7pv89614dcdfll3hh4i5mc-julia-1.5.3.drv.bz2 | grep phase | grep seconds
phase `set-SOURCE-DATE-EPOCH' succeeded after 0.0 seconds
phase `set-paths' succeeded after 0.0 seconds
phase `install-locale' succeeded after 0.0 seconds
phase `unpack' succeeded after 1.1 seconds
phase `use-system-libwhich' succeeded after 0.0 seconds
phase `disable-documentation' succeeded after 0.0 seconds
phase `prepare-deps' succeeded after 0.0 seconds
phase `bootstrap' succeeded after 0.0 seconds
phase `patch-usr-bin-file' succeeded after 0.0 seconds
phase `patch-source-shebangs' succeeded after 0.2 seconds
phase `patch-generated-file-shebangs' succeeded after 0.0 seconds
phase `fix-include-and-link-paths' succeeded after 0.0 seconds
phase `replace-default-shell' succeeded after 0.0 seconds
phase `fix-precompile' succeeded after 0.0 seconds
phase `build' succeeded after 354.3 seconds
phase `set-home' succeeded after 0.0 seconds
phase `disable-broken-tests' succeeded after 0.0 seconds
phase `check' succeeded after 7428.8 seconds
phase `install' succeeded after 16.0 seconds
phase `make-wrapper' succeeded after 0.0 seconds
phase `patch-shebangs' succeeded after 0.0 seconds
phase `strip' succeeded after 0.0 seconds
phase `validate-runpath' succeeded after 0.0 seconds
phase `validate-documentation-location' succeeded after 0.0 seconds
phase `delete-info-dir-file' succeeded after 0.0 seconds
phase `patch-dot-desktop-files' succeeded after 0.0 seconds
phase `install-license-files' succeeded after 0.0 seconds
phase `reset-gzip-timestamps' succeeded after 0.0 seconds
phase `compress-documentation' succeeded after 0.0 seconds
--8<---------------cut here---------------end--------------->8---
Therefore, you only need to somehow extract that information.
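For instance, a rough Python sketch (assuming the log layout shown
above; the path in the comment is illustrative) that pulls the per-phase
timings out of such a compressed log:

--8<---------------cut here---------------start------------->8---
import bz2
import re

PHASE_RE = re.compile(r"phase `([^']+)' succeeded after ([0-9.]+) seconds")

def phase_timings(log_path):
    """Return a {phase: seconds} dict from one /var/log/guix/drvs/... log."""
    timings = {}
    with bz2.open(log_path, mode="rt", errors="replace") as log:
        for line in log:
            match = PHASE_RE.search(line)
            if match:
                timings[match.group(1)] = float(match.group(2))
    return timings

# e.g. phase_timings(".../aq/abymi9yk7pv89614dcdfll3hh4i5mc-julia-1.5.3.drv.bz2")
--8<---------------cut here---------------end--------------->8---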
> **optionally** This local database can be shared across multiple
> systems that add values to it like simple listener waiting for POST
> requests.
It would be better to use a content-addressed distribution mechanism
such as IPFS or GNUnet, IMHO.
> - within the guix repo: Since we are already building the package we
> can take the time and then do the provided math in reverse to
> calculate the time:
>
> Build took 100 sec on system with 8 threads at 2.4 Ghz max cpu frequency:
>
> 100 * (2.4 * 8) = 1920 (build time value per one thread at 1 Ghz)
>
> Building the package on system with 2 threads at 2.4 Ghz max cpu frequency:
>
> 1920 / (2 * 2.4) = 400
>
> We can then assume that the build will take about 400 seconds, i.e. 4
> times longer on this system.
Are you assuming here that the two machines are the same? Or are they
different?
This approximation would not even be accurate enough for the same
machine. For instance, the test suite of the julia package runs mostly
sequentially on one thread. If you go back to the numbers above,
build=354.3 seconds and check=7428.8 seconds, so the number of threads
only tweaks the timing of the build phase, which does not impact the
overall timing much.
IIUC, your proposal is: based on timings from machine A for a set of
packages, timings from machine B for the same set of packages, and the
timing of package foo on machine B, extrapolate the timing of package
foo on machine A. The maths for that are not linear, IMHO, and require
“complicated” heuristics. Well, it is not that complicated, it “just”
requires some statistical regression – though it is not straightforward
either. :-)
BTW, why not directly substitute package foo from machine B?
> The math might need to be adjusted, but it seems to be sufficiently
> accurate through my testing, fwiw I see that `(max CPU frequency * CPU
> threads)` is an important component of the equation using the analogy
> of a (possibly silly) "pokemon battle" assuming interpreting a
> mathematical equation to define the Health Points of the package and
> damage per second of the CPU then simply subtracting these two values
> to determine how long it will take to build alike package has 500 HP
> -> Needs a CPU that deals 100 HP to complete in 5 sec or CPU that
> deals 50 HP to finish in 10 sec.
I will be happy if I am wrong. My guess is that this back-of-the-envelope
calculation would not be accurate enough, for two reasons. As I said
elsewhere, your example value of 100 seconds carries a strong
variability, depending, one, on how the package itself scales at build
time – and more often than not this scaling is not linear in the number
of threads, from my experience – and, two, on how stressed the machine
is when the build happens.
> About accuracy: I highly doubt that we need to worry about the system
> noise as that will be mitigated after enough systems submit average
> build time with their max CPU frequency and threads used.. we
> shouldn't really bother past that at the current stage and we can
> always add additional metadata if needed.
An average is not meaningful by itself. It provides a first-order
approximation, and generally that is not sufficient; the second order is
also required, especially when drawing up a model for prediction. From
what I remember about statistics, and assuming the distribution is
Gaussian, both the mean and the standard error are required to capture
that information. My guess is that, because of the standard error, the
mean alone would not provide predictions shareable across heterogeneous
machines.
I will be happy to be wrong, and only numbers can answer this question.
If you are interested in building a model or verifying your assumptions,
I am sure it is possible to dump the current Cuirass PostgreSQL database
and then do some analytics. It would be a starting point to evaluate
whether the effort implied by your proposal is worth it. I am not
convinced such a model would be usable in practice across heterogeneous
machines, but it would help for monitoring CI.
Cheers,
simon
* Re: Proposal: Build timers
From: Julien Lepiller @ 2021-11-23 12:05 UTC
To: Jacob Hrbek, zimoun; +Cc: guix-devel@gnu.org
On 23 November 2021 01:21:06 GMT-05:00, Jacob Hrbek <kreyren@rixotstudio.cz> wrote:
>I think you are overcomplicating the implementation.. What I am proposing is to store the time value before and after the build and then log the subtraction of these two values per package (or even per package's phase).
>
>For storage it can be either/both:
>1. locally: Storing the value somewhere on the system and adding up to it each build to provide more accurate average.
>
>**optionally** This local database can be shared across multiple systems that add values to it like simple listener waiting for POST requests.
We already log build times in Cuirass, but we don't use this information
at all. If you could provide some WIP code to show how you would
implement the feature with this info, that would be great for this
discussion.
For local logs, there is the store database (in /var/guix) for instance,
though running the GC will also erase the info along with the store item.
>- within the guix repo: Since we are already building the package we can take the time and then do the provided math in reverse to calculate the time:
>
> Build took 100 sec on system with 8 threads at 2.4 Ghz max cpu frequency:
>
> 100 * (2.4 * 8) = 1920 (build time value per one thread at 1 Ghz)
>
> Building the package on system with 2 threads at 2.4 Ghz max cpu frequency:
>
> 1920 / (2 * 2.4) = 400
>
> We can then assume that the build will take about 400 seconds, i.e. 4 times longer on this system.
LFS has the notion of a Standard Build Unit (SBU), which is the build
time of a particular reference package on your machine. Each package's
build time is estimated in SBUs. However, they do not take threads into
account, because the relation is not exactly proportional (some parts
are linear, there is some overhead, …). SBUs change quite often with
versions, so I don't think averaging over different versions/derivations
would make a lot of sense… But I suppose this info could help determine
how long it should take to build the same derivation or a similar one.
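To make the idea concrete, a small Python sketch of SBU-style
estimation (the reference time and the 26 SBU figure are only
illustrative; only the principle comes from LFS):

--8<---------------cut here---------------start------------->8---
def local_sbu(reference_seconds):
    """One SBU = the time the reference package took on *this* machine."""
    return reference_seconds

def estimate_seconds(package_sbus, sbu_seconds):
    """Estimate a package's local build time from its published SBU count."""
    return package_sbus * sbu_seconds

# If the reference package took 300 s here and a package is rated at 26 SBU:
print(estimate_seconds(26, local_sbu(300)) / 3600)   # roughly 2.2 hours
--8<---------------cut here---------------end--------------->8---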
* Re: Proposal: Build timers
From: Jacob Hrbek @ 2021-11-23 14:39 UTC
To: zimoun; +Cc: guix-devel@gnu.org
> Are you assuming here that the two machines are the same? Or are they different?
I am assuming that the two machines are the same except for the CPU
frequency and thread count, which are used as variables to estimate the
build time from a known value.
> This approximation would not even be accurate enough for the same machine. For instance, the test suite of the julia package runs mainly sequential using one thread...
I am aware of this scenario and I adapted the equation for it, but I
recognize that this exponentially increases the inaccuracy with more
threads, and I don't believe there is a mathematical way, with only the
provided values, to handle that scenario, so we would have to adjust the
calculation for those packages.
An alternative and more robust solution would be to build the package
with `--jobs` set to "CPU threads - 1" and then compare the build time
with a run without that setting, which would give us two data points to
interpret with an exponential, and eventually quadratic or even
logarithmic, function. Since we already have to build the package
multiple times to establish reproducibility, this shouldn't add any
overhead, just a slightly slower reproduction build.
The exponential function would be less vulnerable to tasks that are
unable to utilize all available processing resources, but it is less
accurate, so I assume we would eventually have to use a
quadratic/logarithmic definition that applies the previously provided
equation to both data points to determine the build time on a different
system. That seems robust in my current testing, but I am willing to
give it more time if we (me and other GNU Guix devs/contributors) agree
that this would be a mergeable effort, as it would basically require a
single value, preferably stored in the guix repo, that is used to
calculate the build time.
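As a sketch of what could be done with those two data points (this
assumes an Amdahl-style model t(n) = serial + parallel/n, which is my
own choice of curve, not something the proposal prescribes):

--8<---------------cut here---------------start------------->8---
def fit_two_points(n1, t1, n2, t2):
    """Solve t(n) = serial + parallel / n from two (threads, seconds) points."""
    parallel = (t1 - t2) / (1.0 / n1 - 1.0 / n2)
    serial = t1 - parallel / n1
    return serial, parallel

def predict_seconds(serial, parallel, n):
    return serial + parallel / n

# e.g. the full run with 8 threads took 900 s, the --jobs=7 run took 980 s:
serial, parallel = fit_two_points(8, 900.0, 7, 980.0)
print(predict_seconds(serial, parallel, 2))   # rough estimate for 2 threads
--8<---------------cut here---------------end--------------->8---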
-- Jacob "Kreyren" Hrbek
Sent with ProtonMail Secure Email.
* Re: Proposal: Build timers
From: zimoun @ 2021-11-23 16:23 UTC
To: Julien Lepiller, Jacob Hrbek; +Cc: guix-devel@gnu.org
Hi,
On Tue, 23 Nov 2021 at 07:05, Julien Lepiller <julien@lepiller.eu> wrote:
> LFS has a notion of a Standard Build Unit (SBU), that is the build
> time of a particular package on your machine. Each package build time
> is estimated in SBU. However, they do not take threads into account,
> because the relation is not exactly proportional (some parts are
> linear, there is some overhead, …). SBUs change quite often with
> versions, so I don't think averaging on different versions/derivations
> would make a lot of sense… But I suppose this info could help
> determine how long it should take to build the same derivation or a
> similar one.
Thanks for the SBU pointer. This webpage [1] provides some details, but
I cannot find the mentioned table. I would like to see more numbers
about averages and standard deviation.
Moreover, I am confused: here [2] GCC@11.2.0 requires 26+56 SBU, and
there [3] it says 164 SBU. Why is it so different?
I am not convinced such SBUs make sense for x86_64 because, for
instance, my laptop (4 cores + SSD) is different from my desktop (4
cores + HDD) and really different from my workstation (64 cores + mix).
From my understanding, the SBU makes sense in the LFS context because
the machines are, in my opinion, not so heterogeneous. However, other
architectures generally have more homogeneous machines, and because
substitutes are less available there, maybe such a unit could be worth
having as a rough indicator.
1: <https://www.linuxfromscratch.org/~bdubbs/about.html>
2: <https://www.linuxfromscratch.org/blfs/view/svn/general/gcc.html>
3: <https://www.linuxfromscratch.org/lfs/view/stable/chapter08/gcc.html>
Cheers,
simon
* Re: Proposal: Build timers
From: Liliana Marie Prikler @ 2021-11-23 20:09 UTC
To: Jacob Hrbek, guix-devel@gnu.org
On Monday, 22.11.2021 at 22:02 +0000, Jacob Hrbek wrote:
> See the proposal in https://git.dotya.ml/guix.next/GUIX.next/issues/5
>
> -- Jacob "Kreyren" Hrbek
>
> Sent with ProtonMail Secure Email.
Your Pokémon analogy is extremely flawed. The same CPU at a different
clockrate does not perform the same task in the same amount of cycles
[1, 2].
[1] Kotla, Ramakrishna & Devgan, Anirudh & Ghiasi, Soraya & Keller, Tom
& Rawson, Freeman. (2004). Characterizing the impact of different
memory-intensity levels. 3 - 10. 10.1109/WWC.2004.1437388.
[2] Snowdon, David & Sueur, Etienne & Petters, Stefan & Heiser, Gernot.
(2009). Koala a platform for OS-level power management. Proceedings of
the 4th ACM European Conference on Computer Systems, EuroSys'09. 289-
302. 10.1145/1519065.1519097.
* Re: Proposal: Build timers
From: Jacob Hrbek @ 2021-11-23 21:31 UTC
To: Liliana Marie Prikler; +Cc: guix-devel@gnu.org
> Your Pokémon analogy is extremely flawed. The same CPU at a different clockrate does not perform the same task in the same amount of cycles [1, 2]. -- lily
The theory is that measurements could be taken every X amount of time to
adjust the DPS that the CPU deals to the package and thus refine the
build time; see 'theory for real time measurements' in
https://git.dotya.ml/guix.next/GUIX.next/issues/5
Building on the previous theory, the reproduction build with limited
system resources could be used to define the package complexity and to
provide coordinates in an XY plane; this could be defined per variable
that influences the build, and the relation then used to determine the
DPS, which so far seems the most accurate approach in my testing, but I
am still experimenting with it, e.g.:
Base damage: 1
- CPU at 1 GHz -> Multiplier 0.24
- CPU at 2 GHz -> Multiplier 0.31
But the current question should be: do we want these timers implemented,
assuming it can be done cheaply and without overhead? And if so, how do
we want to store and redistribute the logged data, as those choices have
a major impact on the calculation and methods used.
-- Jacob "Kreyren" Hrbek
Sent with ProtonMail Secure Email.
* Re: Proposal: Build timers
From: Jacob Hrbek @ 2021-11-23 21:35 UTC
To: Liliana Marie Prikler; +Cc: guix-devel@gnu.org
Skimming through the research that lily provided: our builds are
reproducible, so the number of CPU cycles required should be the same
(with any post-build steps disabled), but I recognize that different
CPUs might use different configurations that influence the calculation,
and it would be a complicated task to account for all variables that
influence the build across systems, so instead of accurate measurements
we should work with a sane tolerance for accuracy.
-- Jacob "Kreyren" Hrbek
Sent with ProtonMail Secure Email.
* Re: Proposal: Build timers
From: Julien Lepiller @ 2021-11-23 23:50 UTC
To: Jacob Hrbek, Liliana Marie Prikler; +Cc: guix-devel@gnu.org
Do we even care that much about accuracy? I don't really care whether
the build takes 30 or 31 seconds, or even 1 minute more, but I certainly
care whether it takes 30s or 3h. I think this is also what SBUs give
you: a rough estimate of which build is longer than another. I think a
simple proportionality relation would work well enough in most common
cases. It might be quite off on a supercomputer, but who cares, really?
* Re: Proposal: Build timers
From: zimoun @ 2021-11-24 11:31 UTC
To: Julien Lepiller, Jacob Hrbek, Liliana Marie Prikler; +Cc: guix-devel@gnu.org
Hi,
On Tue, 23 Nov 2021 at 18:50, Julien Lepiller <julien@lepiller.eu> wrote:
> Do we even care that much about accuracy? I don't really care that the
> build takes 30 or 31 seconds, or even 1 minute, but I certainly care
> whether it takes 30s or 3h. I think this is also what SBUs give you: a
> rough estimate of which build is longer than the other. I think a
> simple proportionality relation would work well enough in most common
> cases. It might be quite off on a super computer, but who cares,
> really?
What if it takes 3h and the prediction says 2h?
Which build is longer than another is already answered by the data in
the build farm. I agree it is hard to find, and one improvement could be
for Cuirass or the Build Coordinator to expose this data (I think it is
not so easy because, for instance, Cuirass knows about derivations and
packages only somehow, IIUC :-)).
Who cares how much longer it would take locally if the substitute is
available*? ;-) Because at some point, this SBU has to be computed.
Anyway, what would be the typical error?
For instance, I propose that we collectively send the timings of these
packages: bash, gmsh, julia, emacs, vim; or any other 5 packages for the
x86_64 architecture. Then we can compare the typical errors between
prediction and reality, i.e., evaluate the “accuracy“ of the SBU, and
then decide whether it is acceptable or not. :-)
*available: which is not the case for LFS, though.
Cheers,
simon
* Re: Proposal: Build timers
From: zimoun @ 2021-11-24 11:35 UTC
To: Jacob Hrbek, Julien Lepiller; +Cc: guix-devel@gnu.org
Hi,
On Tue, 23 Nov 2021 at 14:39, Jacob Hrbek <kreyren@rixotstudio.cz> wrote:
>> This approximation would not even be accurate enough for the same
>> machine. For instance, the test suite of the julia package runs
>> mainly sequential using one thread...
>
> I am aware of this scenario and I adapted the equasion for it, but I
> recognize that this exponentially increases the inaccuracy with more
> threads and I don't believe that there is a mathematical way with the
> provided values to handle that scenario so we would have to adjust the
> calculation for those packages.
What I am trying to explain is that the model cannot be predictive
enough with what I consider a meaningful accuracy. Obviously, relaxing
the precision, it is easy to infer a rule of thumb; a simple
cross-multiplication does the job. ;-)
The “pokémon-battle” model is a simple linear model
(cross-multiplication); using Jacob’s “notation”:
- HP: time to build on machine A
- DPS = nthread * cpufreq : “power” of machine
Then the idea is to evaluate ’a’ and ’b’ on average, based on some
experiments, such that:
HP = a * DPS + b
Last, on machine B, knowing both nthread' and cpufreq' for that machine,
you expect to evaluate HP' for machine B by applying the formula:
HP' = a * nthread' * cpufreq' + b
Jacob, do I correctly understand the model?
In any case, that is what LFS is doing, except that HP is named SBU, and
instead of DPS they use a reference package. This normalization is
better, IMHO. In other words, for one specific package taken as the
reference, they measure HP1 (resp. HP2) on machine A (resp. B); then,
knowing HP for another package on machine A, they deduce:
HP' = HP2/HP1 * HP
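In code, the two rules above would look roughly like this (illustrative
Python; the timings in the example are invented):

--8<---------------cut here---------------start------------->8---
def predict_linear(a, b, nthread, cpufreq):
    """Pokemon-battle model as written above: HP' = a * nthread' * cpufreq' + b."""
    return a * nthread * cpufreq + b

def predict_sbu(ref_time_b, ref_time_a, pkg_time_a):
    """LFS-style normalization: HP' = HP2/HP1 * HP."""
    return ref_time_b / ref_time_a * pkg_time_a

# Reference package: 300 s on machine A, 600 s on machine B;
# package foo: 1200 s on machine A -> predicted ~2400 s on machine B.
print(predict_sbu(600.0, 300.0, 1200.0))
--8<---------------cut here---------------end--------------->8---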
All this is trivial. :-) The key is the accuracy, i.e., the error
between the prediction HP' and the real time. Here, the issue is that
HP1 and HP2 capture the overall time for one specific package, which
depends on hidden parameters such as nthread, cpufreq, IO, and other
hardware characteristics. It is a strong assumption to consider that
these hidden parameters (evaluated for one specific package) are exactly
the same for any other package, because the hidden parameters depend on
the hardware specifications (nthread, cpufreq, etc.) *and* on how the
package itself exploits them.
Therefore, the difference between the prediction and the real time is
highly variable, and thus personally I am not convinced the effort is
worth it for local builds. That's another story. ;-)
LFS is well aware of the issue and it is documented [1,2].
The root of the issue is that the model is based on a strong assumption;
both (model and assumption) do not fit how reality concretely works,
IMHO.
One straightforward way – requiring some work though – to improve the
accuracy is to use statistical regressions. We cannot do much better at
capturing the hardware specification – note that the machine stress
(what the machine is currently doing when the build happens) introduces
a variability that is hard to estimate beforehand. However, it is
possible to do better when dealing with packages; in other words,
exploit the data from the build farms.
Well, I stop here because it rings a bell: a model can be discussed at
length if it is never applied to concrete numbers. :-)
Let's keep it pragmatic! :-)
Using the simple LFS model and SBU, what would be the typical error?
For instance, I propose that we collectively send the timings of these
packages: bash, gmsh, julia, emacs, vim; or any other 5 packages for the
x86_64 architecture. Then we can compare the typical errors between
prediction and reality, i.e., evaluate the “accuracy“ of the SBU, and
then decide whether it is acceptable or not. :-)
Cheers,
simon
1: <https://www.linuxfromscratch.org/lfs/view/stable/chapter04/aboutsbus.html>
2: <https://www.linuxfromscratch.org/~bdubbs/about.html>
* Re: Proposal: Build timers
From: Vagrant Cascadian @ 2021-11-24 20:23 UTC
To: zimoun, Julien Lepiller, Jacob Hrbek, Liliana Marie Prikler
Cc: guix-devel@gnu.org
On 2021-11-24, zimoun wrote:
> On Tue, 23 Nov 2021 at 18:50, Julien Lepiller <julien@lepiller.eu> wrote:
>> Do we even care that much about accuracy? I don't really care that the
>> build takes 30 or 31 seconds, or even 1 minute, but I certainly care
>> whether it takes 30s or 3h. I think this is also what SBUs give you: a
>> rough estimate of which build is longer than the other. I think a
>> simple proportionality relation would work well enough in most common
>> cases. It might be quite off on a super computer, but who cares,
>> really?
>
> What if it takes 3h and the prediction says 2h?
Those sound about "the same" for any kind of reasonable expectation...
I would guess you only want the correct order of magnitude... hours,
minutes, days, weeks, months, years... or maybe quick, fast, slow,
painful.
I do this sort of fuzzy estimation all the time when working on
Reproducible Builds in Debian; I look at the past test history to get a
*rough* estimate of how long I might expect a build to take. This helps
me decide if I should start a build and get a $COFFEE, do some
$SWORDFIGHTING on the $OFFICECHAIRS, or sit and watch the progress bar
so I don't lose the mental state of working on the problem because it
will be done $SOON.
Make it clear it's an estimate, or maybe even abstract away the time
units so that there is no expectation of any particular time.
I know there are people who would love to get a value that was
consistently right, but to be *useful* an estimate only needs to be
mostly not completely wrong. At least to me. :)
live well,
vagrant
* Re: Proposal: Build timers
From: zimoun @ 2021-11-24 21:50 UTC
To: Vagrant Cascadian, Julien Lepiller, Jacob Hrbek,
Liliana Marie Prikler
Cc: guix-devel@gnu.org
Hi Vagrant,
On Wed, 24 Nov 2021 at 12:23, Vagrant Cascadian <vagrant@debian.org> wrote:
> On 2021-11-24, zimoun wrote:
>> What if it takes 3h and the prediction says 2h?
>
> Those sound about "the same" for any kind of reasonable expectation...
Ah, then we are not speaking about the same thing. :-)
> I would guess you only want the correct order of magnitude... hours,
> minutes, days, weeks, months, years... or maybe quick, fast, slow,
> painful.
Well, an order of magnitude is relative to an expectation. My engineer
side is fine with a loose expectation: it should take 2h and it actually
takes 6h (probably unhappy), or the other way around, it should take 6h
and it actually takes 2h (really happy). My scientist side is less fine
with such a poorly defined expectation. Anyway! :-)
I think it is easier to define quick, fast, slow, courage labels based
on timings from Berlin. Similarly to master, staging and core-updates,
which set a rough number of affected packages for patch impact, why not
have:
- fast: t < X
- quick: X < t < 3X
- fast: 3X < t < 6X
- slow: 6X < t < 36X
- courage: 36X < t
where X could be arbitrarily picked as 10 min on Berlin or Bayfront.
This data could be exposed with the package and displayed by Cuirass,
the Data Service, or the website [1]. Well, all of that requires some
work though.
(fast = a couple of minutes or less, quick = less than half an hour,
fast = less than an hour, slow = less than six hours, courage = wait for
it; the reference is Bayfront or Berlin, with clear hardware
specifications: number of cores/threads, CPU frequency, and probably a
couple of other relevant parameters)
1: <https://guix.gnu.org/en/packages/>
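As a tiny illustrative Python sketch of that labelling (X = 10 min, with
the tier boundaries exactly as listed above):

--8<---------------cut here---------------start------------->8---
X = 10 * 60  # seconds; arbitrary reference unit on Berlin or Bayfront

def label(build_seconds):
    """Map a reference-machine build time to one of the tiers listed above."""
    if build_seconds < X:
        return "fast"
    if build_seconds < 3 * X:
        return "quick"
    if build_seconds < 6 * X:
        return "fast"
    if build_seconds < 36 * X:
        return "slow"
    return "courage"

print(label(12184))   # e.g. julia@1.6.3 on Berlin (~12184 s) -> "slow"
--8<---------------cut here---------------end--------------->8---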
> I do this soft of fuzzy estimation all the time when working on
> Reproducible Builds in Debian; look at the past test history to get a
> *rough* estimate of how long I might expect a build to take. This helps
> me decide if I should start a build and get a $COFFEE, do some
> $SWORDFIGHTING on the $OFFICECHAIRS, or sit and watch the progress bar
> so I don't loose the mental state working on the problem becuase it will
> be done $SOON.
Yeah, me too. :-) More often than not, I do back-of-the-envelope
computations to estimate something. From my point of view, there is a
difference between my personal estimation and a public, official
estimation.
> Make it clear it's an estimate, or maybe even abstract away the time
> units so that there is no expectation of any particular time.
I agree. My point is: if the estimation providing a (even rough)
duration is not accurate enough, then it is not trusted by users, i.e.,
not used, and all the effort is not worth it, IMHO.
Let me back this claim with the example of ’relevance’ in «guix
search». :-) Because the accuracy, with respect to what the user expects
from a query, is highly variable, the main behaviour I see is iterating
over “guix package -A | grep” while trying various package names.
Cheers,
simon
* Re: Proposal: Build timers
From: Jacob Hrbek @ 2021-11-25 4:00 UTC
To: zimoun; +Cc: guix-devel@gnu.org
> The “pokémon-battle” model is a simple linear model
(cross-multiplication); using Jacob’s “notation” -- zimoun
It's not designed to be linear, as the HP variable could be adjusted in
real time, e.g. recalculated every X amount of time as needed, which can
include accounting for noise that influences the task.
It currently looks linear because I developed it as a workable baseline
on which we can build a more robust solution in case the simple linear
calculation is not sufficiently accurate (which I think it will be if we
get a sufficient amount of data to calculate it).
> - HP: time to build on machine A -- zimoun
Not time, but the **COMPLEXITY** of the package. I see that as an
important distinction, since by design it is never meant to store time,
but a "time value" **that is converted into time**.
> based on some experiments. Last, on machine B, knowing both nthread' and cpufreq' for that machine B, you are expecting to evaluate HP' for that machine B applying the formula:
> HP' = a * nthread' * cpufreq' + b -- zimoun
In this context I would describe it as:
CPU strength = nthread * cpufreq * "other things that make the CPU deal more damage"
HP = "CPU strength" * "time it took to build in sec"
which is linear, but the components used to figure out this linear
function are non-linear, e.g. the amount of RAM will most likely appear
exponential, but it eventually becomes constant once the CPU's memory
requirements are satisfied.
Also, the calculation should never contain raw values from systems
a, b, c, …; rather, the hardware resources should be interpreted into an
equation that is then integrated to calculate the unknowns.
The issue in that theory is figuring out the "time it took to build" and
the "CPU strength", which I think can be mitigated by determining how
the hardware affects the build by changing its variables across two
builds, e.g.:
4 threads = 5 min
3 threads = 10 min
2 threads = 15 min
-> 1 thread will take 20 min.
So it literally follows a pokemon-battle-like system:
a pokemon with 100 HP and you dealing 10 HP per turn -> it will take you
10 turns to win the battle.
---
Btw, the components capable of becoming a bottleneck, such as the amount
of RAM, should probably be expressed in the range <0.0, 1.0> so that
they can be applied as **multipliers** to the "CPU strength", since
(following the rule of thumb of 2 GB of RAM per thread) a CPU with 4
threads but only 4 GB of RAM will function at half efficiency (0.5) when
its requirements for fast memory are not satisfied.
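A hedged Python sketch of that refined model (the 2 GB-per-thread rule
and the HP/strength terms come from this message; the helper names and
the 1920 HP figure are only illustrative):

--8<---------------cut here---------------start------------->8---
def ram_multiplier(ram_gb, threads, gb_per_thread=2.0):
    """1.0 when memory is sufficient, proportionally lower when it is not."""
    return min(1.0, ram_gb / (threads * gb_per_thread))

def cpu_strength(threads, cpufreq_ghz, ram_gb):
    return threads * cpufreq_ghz * ram_multiplier(ram_gb, threads)

def estimate_seconds(hp, threads, cpufreq_ghz, ram_gb):
    """HP = strength * time  =>  time = HP / strength."""
    return hp / cpu_strength(threads, cpufreq_ghz, ram_gb)

# A package "worth" 1920 HP on 4 threads at 2.4 GHz:
print(estimate_seconds(1920, 4, 2.4, 8.0))   # 200.0 s with enough RAM
print(estimate_seconds(1920, 4, 2.4, 4.0))   # 400.0 s at half efficiency
--8<---------------cut here---------------end--------------->8---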
-- Jacob "Kreyren" Hrbek
Sent with ProtonMail Secure Email.
* Re: Proposal: Build timers
From: Jacob Hrbek @ 2021-11-25 4:03 UTC
To: Vagrant Cascadian; +Cc: guix-devel@gnu.org, Liliana Marie Prikler
Make it clear it's an estimate, or maybe even abstract away the time units so that there is no expectation of any particular time. -- Vagrant
My theory is designed with a tolerance of <5 min and a maximum tolerance
of 10 min, with methods that I am confident will get us within <30 sec,
assuming a sufficient amount of data to construct the variables.
-- Jacob "Kreyren" Hrbek
Sent with ProtonMail Secure Email.
* Re: Proposal: Build timers
From: Liliana Marie Prikler @ 2021-11-25 5:21 UTC
To: Jacob Hrbek, Vagrant Cascadian; +Cc: guix-devel@gnu.org
Am Donnerstag, den 25.11.2021, 04:03 +0000 schrieb Jacob Hrbek:
> Make it clear it's an estimate, or maybe even abstract away the time
> units so that there is no expectation of any particular time. --
> Vagrant
>
> My theory is designed with tolerance of <5 min with max tolerance of
> =10 min with methods that I am confident will get us within <30 sec
> assuming sufficient amount of data to construct the variables.
You are courageous to assume a variance of 5 minutes, let alone 30
seconds, on everyday systems. Aside from the variables I've already
pointed out, you'd have to consider that certain builds outside of CI
will enter a state of constant swapping at some point, yet still finish
building if they do not hit OOM limits. There is also the fact that Guix
might not be the only process hogging resources on some particular
machine.
* Re: Proposal: Build timers
From: zimoun @ 2021-11-25 10:23 UTC
To: Jacob Hrbek, Vagrant Cascadian; +Cc: guix-devel@gnu.org, Liliana Marie Prikler
Hi,
On Thu, 25 Nov 2021 at 04:03, Jacob Hrbek <kreyren@rixotstudio.cz> wrote:
> My theory is designed with tolerance of <5 min with max tolerance of
> =10 min with methods that I am confident will get us within <30 sec
> assuming sufficient amount of data to construct the variables.
Please back these claims with concrete numbers.
Because I am sure it is not the case. For instance, consider the package
julia@1.6.3 built on Berlin: 1061.9+11122.3 = 12184.2 seconds [1] vs
1183.4+10081.1 = 11264.5 [2] vs 1040.7+9559.4 = 10600.1 [3]; therefore,
the worst-case spread is:
(12184.2 - 10600.1)/60 = 26.4 min
The variance here is obvious, and it totally depends on the value
itself; the error is relative.
LFS uses *exactly* what you are proposing, and they document the
deviation. Please follow the 2 links there [4].
I am available to share my overall timings for some packages.
1: <https://ci.guix.gnu.org/build/1654363/log/raw>
2: <https://ci.guix.gnu.org/build/1740157/log/raw>
3: <https://ci.guix.gnu.org/build/1793062/log/raw>
4: <https://lists.gnu.org/archive/html/guix-devel/2021-11/msg00157.html>
All the best,
simon
Thread overview: 19 messages
2021-11-22 22:02 Proposal: Build timers Jacob Hrbek
2021-11-23 1:06 ` zimoun
2021-11-23 6:21 ` Jacob Hrbek
2021-11-23 11:56 ` zimoun
2021-11-23 14:39 ` Jacob Hrbek
2021-11-24 11:35 ` zimoun
2021-11-25 4:00 ` Jacob Hrbek
2021-11-23 12:05 ` Julien Lepiller
2021-11-23 16:23 ` zimoun
2021-11-23 20:09 ` Liliana Marie Prikler
2021-11-23 21:31 ` Jacob Hrbek
2021-11-23 21:35 ` Jacob Hrbek
2021-11-23 23:50 ` Julien Lepiller
2021-11-24 11:31 ` zimoun
2021-11-24 20:23 ` Vagrant Cascadian
2021-11-24 21:50 ` zimoun
2021-11-25 4:03 ` Jacob Hrbek
2021-11-25 5:21 ` Liliana Marie Prikler
2021-11-25 10:23 ` zimoun