* ‘core-updates’ is gone; long live ‘core-packages-team’!
@ 2024-08-31 13:03 Ludovic Courtès
2024-09-01 16:34 ` Steve George
2024-09-04 12:58 ` Simon Tournier
0 siblings, 2 replies; 22+ messages in thread
From: Ludovic Courtès @ 2024-08-31 13:03 UTC (permalink / raw)
To: guix-devel
Hi again!
Over the years, consensus emerged that ‘core-updates’, as a branch where
we lump together all sorts of rebuild-the-world changes, is no longer
sustainable. Those of us who were at the Guix Days in February 2023
came to the conclusion that (correct me if I’m wrong) we should keep
branches focused, with a specific team responsible for taking care of
each branch and getting it merged.
There’s now a ‘core-packages’ team, so there will soon be a
‘core-packages-team’ branch focusing exclusively on what’s in its scope,
as specified in ‘etc/teams.scm’. There’s already a lot of work to do
actually: upgrading glibc (again!), coreutils, grep, etc., and switching
to a newer GCC as the default compiler. That branch won’t be special;
it will follow the conventions that were adopted last year:
https://guix.gnu.org/manual/devel/en/html_node/Managing-Patches-and-Branches.html
If you’d like to help with these things, you’re very welcome, and you
can consider joining the ‘core-packages’ team to help coordinate these
efforts in the longer run.
To reduce world rebuilds, perhaps we’ll sometimes create “merge trains”,
whereby we’ll merge, say, the branch upgrading CMake and that ungrafting
ibus on top of ‘core-packages-team’, and then merge this combination in
‘master’. The key being: these branches will have been developed and
tested independently of one another by dedicated teams, and the merge
train will be a mere formality.
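For concreteness, the mechanics of such a manual merge train can be sketched with plain Git. Everything below is illustrative: the branch names (‘cmake-update’, ‘ungraft-ibus’) and file contents are made up, and a throwaway repository stands in for guix.git:

```shell
set -e
# Throwaway repository standing in for guix.git; names are hypothetical.
tmp=$(mktemp -d); cd "$tmp"
git init -q -b master .
git config user.email dev@example.org
git config user.name Dev
echo base > base.txt; git add base.txt; git commit -qm 'initial'

# The team branch with its own rebuild-heavy work.
git checkout -qb core-packages-team
echo glibc > glibc.txt; git add glibc.txt; git commit -qm 'gnu: glibc: Update.'

# Two feature branches, developed and tested independently of each other.
git checkout -qb cmake-update master
echo cmake > cmake.txt; git add cmake.txt; git commit -qm 'gnu: cmake: Update.'
git checkout -qb ungraft-ibus master
echo ibus > ibus.txt; git add ibus.txt; git commit -qm 'gnu: ibus: Ungraft.'

# The "train": stack the known-good branches on core-packages-team,
# then merge the whole combination into master in one step.
git checkout -q core-packages-team
git merge -q --no-edit cmake-update ungraft-ibus
git checkout -q master
git merge -q --no-edit core-packages-team
```

The point is that each wagon was already built and tested on its own; combining them on the train branch and merging once into master is what avoids one full rebuild per branch.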
Recently, Christopher Baines further suggested that, as much as
possible, branches should be “stateless” in the sense that their changes
can be rebased anytime on top of ‘master’. This is what we’ve been
doing for the past couple of months with ‘core-updates’; that sometimes
made it hard to follow IMO, because there were too many changes, but for
more focused branches, that should work well.
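A minimal sketch of the “stateless” property, again with a made-up branch name: the branch carries only its own commits, so it can be replayed onto the current master tip at any time instead of accumulating merge state:

```shell
set -e
# Throwaway repository; 'cmake-update' is a made-up stateless branch.
tmp=$(mktemp -d); cd "$tmp"
git init -q -b master .
git config user.email dev@example.org
git config user.name Dev
echo a > a.txt; git add a.txt; git commit -qm 'initial'

git checkout -qb cmake-update
echo cmake > cmake.txt; git add cmake.txt; git commit -qm 'gnu: cmake: Update.'

# master moves on independently...
git checkout -q master
echo b > b.txt; git add b.txt; git commit -qm 'unrelated master work'

# ...and the branch is freshened by replaying its commits on the new
# tip, rather than by merging master into it.
git rebase -q master cmake-update
```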
Thoughts?
Ludo’.
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
2024-08-31 13:03 ‘core-updates’ is gone; long live ‘core-packages-team’! Ludovic Courtès
@ 2024-09-01 16:34 ` Steve George
2024-09-01 17:06 ` Christopher Baines
2024-09-06 9:01 ` Ludovic Courtès
2024-09-04 12:58 ` Simon Tournier
1 sibling, 2 replies; 22+ messages in thread
From: Steve George @ 2024-09-01 16:34 UTC (permalink / raw)
To: Ludovic Courtès; +Cc: guix-devel
Hi,
I have a question on one part of the workflow, and would like to propose an addition to the 'stateless' branches idea Chris suggested:
On 31 Aug, Ludovic Courtès wrote:
> Hi again!
>
> Over the years, consensus emerged that ‘core-updates’, as a branch where
> we lump together all sorts of rebuild-the-world changes, is no longer
> sustainable. Those of us who were at the Guix Days in February 2023
> came to the conclusion that (correct me if I’m wrong) we should keep
> branches focused, with a specific team responsible for taking care of
> each branch and getting it merged.
>
> There’s now a ‘core-packages’ team, so there will be soon a
> ‘core-packages-team’ branch focusing exclusively on what’s in its scope,
> as specified in ‘etc/teams.scm’. There’s already a lot of work to do
> actually: upgrading glibc (again!), coreutils, grep, etc., and switching
> to a newer GCC as the default compiler. That branch won’t be special;
> it will follow the conventions that were adopted last year:
>
> https://guix.gnu.org/manual/devel/en/html_node/Managing-Patches-and-Branches.html
>
(...)
> To reduce world rebuilds, perhaps we’ll sometimes create “merge trains”,
> whereby we’ll merge, say, the branch upgrading CMake and that ungrafting
> ibus on top of ‘core-packages-team’, and then merge this combination in
> ‘master’. The key being: these branches will have been developed and
> tested independently of one another by dedicated teams, and the merge
> train will be a mere formality.
Under the 'patches and branches' workflow, what should happen to packages that are *not* part of any team, but do cause a rebuild of more than 300 dependent packages?
Andy Tai gave an example of ffmpeg [0]. There aren't enough contributors or committers for every package to be covered by a team, so this seems like a permanent constraint even if more teams do grow over time.
The manual currently says it goes to 'staging' [1], and that this will be merged within six weeks. Is this actually true? I don't see any sign of it in Guix's git [2], and am unsure if the manual is out of sync with the branches workflow.
Meanwhile, 'staging' seems like it could develop similar difficulties to core-updates if it gets out of hand. The alternative, someone each time creating a branch like 'ffmpeg-and-stuff-i-collected-with-over-300-rebuilds', doesn't seem like a better choice ;-)
> Recently, Christopher Baines further suggested that, as much as
> possible, branches should be “stateless” in the sense that their changes
> can be rebased anytime on top of ‘master’. This is what we’ve been
> doing for the past couple of months with ‘core-updates’; that sometimes
> made it hard to follow IMO, because there were too many changes, but for
> more focused branches, that should work well.
(...)
Long-lived branches and ones that don't cleanly apply onto master cause lots of difficulties from what I've seen. Perhaps a lesson is that branches should both be stateless *and* should not exist for more than 3 months. We already have a rule that encourages atomic changes within any patch in order to make things faster/easier to review. By extension, let's do the same with branches - merge them more often.
I would propose a patch to the managing patches/branches sections of the manual depending on what the consensus is here.
Steve / Futurile
[0] https://lists.gnu.org/archive/html/guix-devel/2024-08/msg00202.html
[1] https://guix.gnu.org/devel/manual/en/guix.html#Submitting-Patches
[2] https://git.savannah.gnu.org/cgit/guix.git/refs/heads
* Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
2024-09-01 16:34 ` Steve George
@ 2024-09-01 17:06 ` Christopher Baines
2024-09-03 14:02 ` Christopher Baines
2024-09-06 9:01 ` Ludovic Courtès
1 sibling, 1 reply; 22+ messages in thread
From: Christopher Baines @ 2024-09-01 17:06 UTC (permalink / raw)
To: Steve George; +Cc: Ludovic Courtès, guix-devel
Steve George <steve@futurile.net> writes:
> On 31 Aug, Ludovic Courtès wrote:
>> Hi again!
>>
>> Over the years, consensus emerged that ‘core-updates’, as a branch where
>> we lump together all sorts of rebuild-the-world changes, is no longer
>> sustainable. Those of us who were at the Guix Days in February 2023
>> came to the conclusion that (correct me if I’m wrong) we should keep
>> branches focused, with a specific team responsible for taking care of
>> each branch and getting it merged.
>>
>> There’s now a ‘core-packages’ team, so there will be soon a
>> ‘core-packages-team’ branch focusing exclusively on what’s in its scope,
>> as specified in ‘etc/teams.scm’. There’s already a lot of work to do
>> actually: upgrading glibc (again!), coreutils, grep, etc., and switching
>> to a newer GCC as the default compiler. That branch won’t be special;
>> it will follow the conventions that were adopted last year:
>>
>> https://guix.gnu.org/manual/devel/en/html_node/Managing-Patches-and-Branches.html
>>
> (...)
>> To reduce world rebuilds, perhaps we’ll sometimes create “merge trains”,
>> whereby we’ll merge, say, the branch upgrading CMake and that ungrafting
>> ibus on top of ‘core-packages-team’, and then merge this combination in
>> ‘master’. The key being: these branches will have been developed and
>> tested independently of one another by dedicated teams, and the merge
>> train will be a mere formality.
>
> Under the 'patches and branches' workflow, what should happen to
> packages that are *not* part of any team, but do cause a rebuild of
> more than 300 dependent packages?
>
> Andy Tai gave an example of ffmpeg [0]. There aren't enough
> contributors or committers for every package to be covered by a team,
> so this seems like a permanent constraint even if more teams do grow
> over time.
The "Managing Patches and Branches" section deliberately doesn't mention
anything about teams as there's no requirement for branches to be
associated with teams.
Grouping related changes together is good for a few reasons, but it's
absolutely fine to have a branch which updates a single package, not
related to any team.
As noted on the page as well, if you don't have commit access (which is
required for creating branches), you should just open the issue;
hopefully someone with access will then create the branch for you.
> The manual currently says it goes to 'staging' [1], and that this will
> be merged within six weeks. Is this actually true? I don't see any
> sign of it in Guix's git [2], and am unsure if the manual is out of
> sync with the branches workflow.
>
> While 'staging' seems like it could have similar difficulties to
> core-updates if it gets out of hand. The alternative choice of each
> time someone making a branch
> 'ffmpeg-and-stuff-i-collected-with-over-300-rebuilds' doesn't seem
> like a better choice ;-)
That page needs updating I think.
>> Recently, Christopher Baines further suggested that, as much as
>> possible, branches should be “stateless” in the sense that their changes
>> can be rebased anytime on top of ‘master’. This is what we’ve been
>> doing for the past couple of months with ‘core-updates’; that sometimes
>> made it hard to follow IMO, because there were too many changes, but for
>> more focused branches, that should work well.
> (...)
>
> Long-lived branches and ones that don't cleanly apply onto master
> cause lots of difficulties from what I've seen. Perhaps a lesson is
> that branches should both be stateless *and* should not exist for more
> than 3 months. We already have a rule that encourages atomic changes
> within any patch in order to make things faster/easier to review. By
> extension, let's do the same with branches - merge them more often.
Initially, the documentation on branches said to create an issue when
you want to merge a branch, but this was changed to creating it when
you create the branch, to try to avoid situations like this, where a
branch sits around and gets stale for many months.
* Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
2024-09-01 17:06 ` Christopher Baines
@ 2024-09-03 14:02 ` Christopher Baines
0 siblings, 0 replies; 22+ messages in thread
From: Christopher Baines @ 2024-09-03 14:02 UTC (permalink / raw)
To: Steve George; +Cc: Ludovic Courtès, guix-devel
Christopher Baines <mail@cbaines.net> writes:
>> The manual currently says it goes to 'staging' [1], and that this will
>> be merged within six weeks. Is this actually true? I don't see any
>> sign of it in Guix's git [2], and am unsure if the manual is out of
>> sync with the branches workflow.
>>
>> While 'staging' seems like it could have similar difficulties to
>> core-updates if it gets out of hand. The alternative choice of each
>> time someone making a branch
>> 'ffmpeg-and-stuff-i-collected-with-over-300-rebuilds' doesn't seem
>> like a better choice ;-)
>
> That page needs updating I think.
I went looking to fix this, but it turns out it was just an nginx
issue. I've fixed it on bayfront, but I can't seem to push to Savannah
at the moment.
* Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
2024-08-31 13:03 ‘core-updates’ is gone; long live ‘core-packages-team’! Ludovic Courtès
2024-09-01 16:34 ` Steve George
@ 2024-09-04 12:58 ` Simon Tournier
2024-09-05 8:39 ` Marek Paśnikowski
2024-09-06 9:11 ` Ludovic Courtès
1 sibling, 2 replies; 22+ messages in thread
From: Simon Tournier @ 2024-09-04 12:58 UTC (permalink / raw)
To: Ludovic Courtès, guix-devel
Hi Ludo, all,
On Sat, 31 Aug 2024 at 15:03, Ludovic Courtès <ludo@gnu.org> wrote:
> To reduce world rebuilds, perhaps we’ll sometimes create “merge trains”,
> whereby we’ll merge, say, the branch upgrading CMake and that ungrafting
> ibus on top of ‘core-packages-team’, and then merge this combination in
> ‘master’.
I do not see how world rebuilds would be reduced.
Correct me if my understanding is mistaken: the branch ’core-updates’
had been extended to more than core packages because the project had
(has?) not enough CPU power to continuously build every change with a
massive impact.
Therefore, this ’core-updates’ branch was built only every X months in
order to reduce the workload. As you know perfectly well, all these
grouped changes are a mess and require a lot of effort to stabilize.
Hence a very boring task… and an arbitrary X. ;-)
For sure, it’s a good idea to have a core-packages team that focuses
only on core packages – the initial aim. :-)
However, it does not address the issue of changes that have a massive
impact. This new core-packages-team branch just becomes one of several
branches that will also contain packages triggering world rebuilds.
And the question is how do we manage that?
> The key being: these branches will have been developed and
> tested independently of one another by dedicated teams, and the merge
> train will be a mere formality.
In this picture of “merge train”, the CI workload and world rebuilds
will increase, no?
Consider the two teams: science and this new core-packages. Then
science takes care of openblas (6000+ dependent packages) and
core-packages of grep (6000+ dependent packages).
When science team wants to merge to master, they ask for a rebuild.
When core-packages team wants to merge to master, they ask for a
rebuild.
Therefore, we have two world rebuilds. Or both teams need to
synchronize, and then the branches are indeed developed independently
but not tested independently. Are they?
Well, my understanding of “merge train” is an automatic synchronization
workflow: the science team does not ask for a rebuild but asks for a
merge, and likewise the core-packages team – so they cannot evaluate
their impact and what needs to be fixed by their changes – then both
merges are the two wagons of the train. It’s not clear to me what CI
builds: first one wagon and then the two wagons? Or first the two
wagons, and then what?
As an aside, the core-packages or science merges might have an impact
on packages tracked by other teams – said otherwise, fixes unrelated to
their scope. Somehow, it requires that other teams also follow along –
it implies a kind of synchronization.
That’s why core-updates became the big catch-all and complex mess.
It’s just a poor man’s way to synchronize: first, it minimizes the
number of world rebuilds because they are more or less manually
triggered, and second, it makes it easy to include fixes unrelated to
core changes.
For sure, I agree that having more branches will help reduce the
complexity of merging core-updates-like changes. However, it’s not
clear how it will work in practice. In particular, the number of world
rebuilds will increase, IMHO.
Well, all in all, I understand the “merge train” story with “regular”
branches. But I do not see how it would work for different branches
that trigger world rebuilds.
Could you detail your perspective on “merge train” a bit? Or the
workflow for merging branches that contain world-rebuild changes.
I know about [1] but I am lacking imagination for applying it to our
situation. If we have 3 merge requests (A, B, and C), the
“merge train” workflow implies the pipeline is run 3 times:
with A on top of master,
with A+B (B on top of A on top of master),
with A+B+C (C on top of B on top of A on top of master).
If A+B fails, then B is removed from the train and the pipeline runs A+C.
Similarly, if A fails, then B and then B+C are built.
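That dropping-and-retrying behaviour can be sketched as a toy loop; the “build” below is a stub hard-wired to fail whenever change B is in the candidate set (a real merge train would run these cumulative pipelines, partly in parallel):

```shell
# Stub "build": fails whenever the candidate set includes change B.
build() {
  case " $* " in
    *" B "*) return 1 ;;
    *)       return 0 ;;
  esac
}

train=""
for change in A B C; do
  # $train is deliberately unquoted: each kept change is a separate word.
  if build $train "$change"; then
    train="$train $change"    # cumulative build passed: stays in the train
  else
    echo "dropped $change"    # cumulative build failed: leaves the train
  fi
done
echo "merged:$train"
```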
The “merge train” workflow makes sense when the number of changes in A,
B, and C is high, as it helps to keep the merges under control.
However, one implicit assumption is a low-cost pipeline, IMHO.
Bah, the question of how to merge different branches containing world
rebuilds, without the big catch-all old core-updates branch, is not
addressed under the constraint of reducing the number of world rebuilds
as much as possible, for what my opinion is worth.
Cheers,
simon
1: https://docs.gitlab.com/ee/ci/pipelines/merge_trains.html
PS: My understanding of the GitLab Merge Train selling point is that
several pipelines are run in parallel. What a way to waste power to
increase throughput!
* Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
2024-09-04 12:58 ` Simon Tournier
@ 2024-09-05 8:39 ` Marek Paśnikowski
2024-09-05 9:40 ` Ricardo Wurmus
2024-09-06 9:11 ` Ludovic Courtès
1 sibling, 1 reply; 22+ messages in thread
From: Marek Paśnikowski @ 2024-09-05 8:39 UTC (permalink / raw)
To: Simon Tournier; +Cc: Ludovic Courtès, guix-devel
Good morning everyone. I would like to share my thoughts on the topic,
as I am personally frustrated with the current state and have
considered ways to improve it.
> Summary 1: not enough computing time
>
Compute less. This can be achieved by selecting a subset of packages to
be built, leaving the rest to be compiled by users. A simple model
would be to focus on the packages most downloaded by users, but that
would exclude some packages that are very inconvenient to compile, like
hardware support for the smallest computers. My suggestion is to define
the set of packages to build as the sum of the core system and the most
downloaded other packages.
The problem of scaling computing time is very real and should not be
dismissed easily. Ultimately, it is an instance of a resource
allocation problem. As the number of packages increases (lim -> inf),
the Guix project will not be able to build /everything/ ever more
often. Inspired by the axioms of lambda calculus, I suggest setting up
a recursive structure of delegation of computing to other entities.
An example of a delegated build could be High Performance Computing
users. The number of actual computers running the software is vastly
smaller than the number of typical laptops and desktops, so the impact
of the software /not/ being built is much smaller. Conversely, I think
that the HPC people could gather funding for a build farm dedicated to
their specialized needs much more efficiently, rather than having to
contribute to a broad project with non-measurable results.
> Summary 2: doubts about merge trains
>
I was initially unsure about what exactly the problem with merge trains
is, so I read the GitLab document linked earlier in the thread. I have
come to understand the concept as a way to continuously integrate small
changes. While it may have merit for small projects, its simplicity
causes it to fail at scale. I have come up with a different analogy,
which I share below. I would also like to take this as an opportunity
to experiment — I will explain the following image later, together with
answers to questions about it.
“Building complex systems of software is like building cities along a
shore line of undiscovered land. The cities are built one after
another, by teams of various competences on ships of varied shape and
weight. Each ship could take a different route to the next city
location because of reefs and rocks. At any point in time, one ship is
ahead of the others — it claims the right to settle the newest city.
While striving to keep up with the leader, all the others must take
anchor in an existing port.”
* Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
2024-09-05 8:39 ` Marek Paśnikowski
@ 2024-09-05 9:40 ` Ricardo Wurmus
0 siblings, 0 replies; 22+ messages in thread
From: Ricardo Wurmus @ 2024-09-05 9:40 UTC (permalink / raw)
To: Marek Paśnikowski; +Cc: Simon Tournier, Ludovic Courtès, guix-devel
Marek Paśnikowski <marek@marekpasnikowski.pl> writes:
>> Summary 1: not enough computing time
>>
>
> Compute less. This can be achieved by selecting a subset of packages to
> be built, leaving the rest to be compiled by users.
I don't think we should do that.
We have some useful information, though, to reduce the number of
rebuilds. For example, we know the position of packages in the graph,
and we know the number of dependents.
--
Ricardo
* Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
2024-09-01 16:34 ` Steve George
2024-09-01 17:06 ` Christopher Baines
@ 2024-09-06 9:01 ` Ludovic Courtès
2024-09-09 15:30 ` Simon Tournier
1 sibling, 1 reply; 22+ messages in thread
From: Ludovic Courtès @ 2024-09-06 9:01 UTC (permalink / raw)
To: Steve George; +Cc: guix-devel
Hey Steve,
Steve George <steve@futurile.net> skribis:
>> To reduce world rebuilds, perhaps we’ll sometimes create “merge trains”,
>> whereby we’ll merge, say, the branch upgrading CMake and that ungrafting
>> ibus on top of ‘core-packages-team’, and then merge this combination in
>> ‘master’. The key being: these branches will have been developed and
>> tested independently of one another by dedicated teams, and the merge
>> train will be a mere formality.
>
> Under the 'patches and branches' workflow, what should happen to packages that are *not* part of any team, but do cause a rebuild of more than 300 dependent packages?
>
> Andy Tai gave an example of ffmpeg [0]. There aren't enough contributors or committers for every package to be covered by a team, so this seems like a permanent constraint even if more teams do grow over time.
As Chris noted, there’s no requirement for a branch to be associated
with a team; we won’t have teams for every possible package or package
set.
In the case of ffmpeg, I would suggest creating a “feature branch”
changing ffmpeg and only that (or possibly packages directly related to
ffmpeg). We’d get that branch built so those working on it can ensure
it does not lead to any regression.
Once it’s “known good”, I see two possibilities:
• Merge into ‘master’, especially if it turns out that binaries are
already available on the build farms.
• Attach to a “merge train”. For instance, assume there’s a branch
changing ‘gsl’: this is totally unrelated to ‘ffmpeg’ but it also
triggers a lot of rebuilds. We could tack that second branch on top
of the known-good ‘ffmpeg’ branch, and, once it’s all good, merge
that “train” into ‘master’.
(To be clear, the term “merge train” originates from GitLab-CI and
similar CI tools, which use it as a merge scheduling strategy:
<https://docs.gitlab.com/ee/ci/pipelines/merge_trains.html>. GitLab-CI
can create merge trains automatically, I believe, but in our case we’d
do that manually, at least for now.)
> Long-lived branches and ones that don't cleanly apply onto master cause lots of difficulties from what I've seen. Perhaps a lesson is that branches should both be stateless *and* should not exist for more than 3 months. We already have a rule that encourages atomic changes within any patch in order to make things faster/easier to review. By extension, let's do the same with branches - merge them more often.
Agreed. And I agree with what Chris wrote: the current wording (create
the merge request when the branch is created) should help reduce the
risk of having long-lived branches.
> I would propose a patch to the managing patches/branches sections of the manual depending on what the consensus is here.
I think this is a work-in-progress, so any improvement here is welcome
IMO.
Thanks,
Ludo’.
* Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
2024-09-04 12:58 ` Simon Tournier
2024-09-05 8:39 ` Marek Paśnikowski
@ 2024-09-06 9:11 ` Ludovic Courtès
2024-09-06 10:09 ` Andreas Enge
` (2 more replies)
1 sibling, 3 replies; 22+ messages in thread
From: Ludovic Courtès @ 2024-09-06 9:11 UTC (permalink / raw)
To: Simon Tournier; +Cc: guix-devel
Hi,
Simon Tournier <zimon.toutoune@gmail.com> skribis:
> In this picture of “merge train”, the CI workload and world rebuilds
> will increase, no?
>
> Consider the two teams: science and this new core-packages. Then
> science takes care of openblas (6000+ dependent packages) and
> core-packages of grep (6000+ dependent packages).
>
> When science team wants to merge to master, they ask for a rebuild.
>
> When core-packages team wants to merge to master, they ask for a
> rebuild.
>
> Therefore, we have two world rebuilds. Or both teams need to
> synchronize and thus the branches are indeed developed independently but
> not tested independently. Are they?
I don’t have a clear answer to that.
The way I see it, one of the branches would be tested independently.
The second one would also be tested independently, but on a limited
scope—e.g., x86_64-only, because (1) we usually have more build power
for that architecture, and (2) perhaps we know the problems with those
branches are unlikely to be architecture-specific.
Then we’d rebase that second branch on top of the first one, and build
the combination for all architectures.
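In Git terms, that sequence might look like the following sketch, with hypothetical branch names: the second branch is replayed on top of the first, and the resulting combination is what would be built for all architectures:

```shell
set -e
# Throwaway repository; branch names and contents are made up.
tmp=$(mktemp -d); cd "$tmp"
git init -q -b master .
git config user.email dev@example.org
git config user.name Dev
echo base > base.txt; git add base.txt; git commit -qm 'initial'

# First branch: fully tested on its own, for all architectures.
git checkout -qb openblas-update
echo openblas > openblas.txt; git add openblas.txt
git commit -qm 'gnu: openblas: Update.'

# Second branch: tested on its own too, but on a limited scope.
git checkout -qb grep-update master
echo grep > grep.txt; git add grep.txt
git commit -qm 'gnu: grep: Update.'

# Replay the second branch on top of the first; the result carries both
# changes and is what gets built (and eventually merged) as one
# combination.
git rebase -q openblas-update grep-update
```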
In the end, we do end up testing the combination of branches, but it’s
more structured than ‘core-updates’: each team has tested its own
thing, and we get a better understanding of the impact of each change
independently.
I also think we shouldn’t be afraid of triggering rebuilds more
frequently than now, as long as build farms can keep up. So there are
some changes that we’d previously lump together in ‘core-updates’ that I
would nowadays suggest having in a dedicated branch, merged
independently.
In the end, perhaps we’ll have to negotiate on a case-by-case basis.
The important thing to me is: independent testing as much as possible,
and well-defined responsibilities and scope for the people/teams
engaging in such changes.
Ludo’.
* Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
2024-09-06 9:11 ` Ludovic Courtès
@ 2024-09-06 10:09 ` Andreas Enge
2024-09-06 11:35 ` Marek Paśnikowski
` (2 more replies)
2024-09-06 17:44 ` Vagrant Cascadian
2024-09-09 17:28 ` Naming “build train” instead of “merge train”? Simon Tournier
2 siblings, 3 replies; 22+ messages in thread
From: Andreas Enge @ 2024-09-06 10:09 UTC (permalink / raw)
To: Ludovic Courtès; +Cc: Simon Tournier, guix-devel
Hello,
Am Fri, Sep 06, 2024 at 11:11:14AM +0200 schrieb Ludovic Courtès:
> The way I see it, one of the branches would be tested independently.
> The second one would also be tested independently, but on a limited
> scope—e.g., x86_64-only, because (1) we usually have more build power
> for that architecture, and (2) perhaps we know the problems with those
> branches are unlikely to be architecture-specific.
> Then we’d rebase that second branch on top of the first one, and build
> the combination for all architectures.
Concurring with Simon: following this description, I also do not
understand what this concept of merge trains improves, as long as it is
not automated (and as long as we do not have lots of build power to
subsequently build several combinations of branches).
Once the first branch is good, why not simply merge it to master and then
rebase the second branch on master and test it, instead of postponing the
merge? After all, building is costly, not merging.
Notice that with QA, the concept is that the packages will be available
on the build farm once the branch has been built, so postponing a merge
has no advantage.
Andreas
* Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
2024-09-06 10:09 ` Andreas Enge
@ 2024-09-06 11:35 ` Marek Paśnikowski
2024-09-06 13:25 ` Andreas Enge
2024-09-06 13:17 ` indieterminacy
2024-09-26 12:52 ` Ludovic Courtès
2 siblings, 1 reply; 22+ messages in thread
From: Marek Paśnikowski @ 2024-09-06 11:35 UTC (permalink / raw)
To: Andreas Enge; +Cc: Ludovic Courtès, Simon Tournier, guix-devel
Andreas Enge <andreas@enge.fr> writes:
> Hello,
>
> Am Fri, Sep 06, 2024 at 11:11:14AM +0200 schrieb Ludovic Courtès:
>> The way I see it, one of the branches would be tested independently.
>> The second one would also be tested independently, but on a limited
>> scope—e.g., x86_64-only, because (1) we usually have more build power
>> for that architecture, and (2) perhaps we know the problems with those
>> branches are unlikely to be architecture-specific.
>> Then we’d rebase that second branch on top of the first one, and build
>> the combination for all architectures.
>
> Once the first branch is good, why not simply merge it to master and then
> rebase the second branch on master and test it, instead of postponing the
> merge? After all, building is costly, not merging.
* What if an unrelated branch gets merged before the two considered in the
example?
I suggest to generalize the process like this:
1. branch based on master, passes QA -> merge
2. branch based on master, fails QA -> fix QA, go to 1
3. branch not based on master, passes QA -> rebase, go to 1
4. branch not based on master, fails QA -> fix QA, go to 3
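Purely as an illustration, the four cases above can be encoded as a tiny decision function (the wording of the actions is mine):

```shell
# Toy encoding of the four cases above: given whether the branch is
# based on current master and whether it passes QA, report the action.
next_step() {  # usage: next_step <based-on-master: yes|no> <passes-qa: yes|no>
  case "$1,$2" in
    yes,yes) echo "merge" ;;
    yes,no)  echo "fix QA, then re-check" ;;
    no,yes)  echo "rebase onto master, then re-check" ;;
    no,no)   echo "fix QA, then rebase onto master" ;;
  esac
}

next_step yes yes
next_step no  no
```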
* What if a branch is worked on for a long time and the rebase itself
becomes non-trivial?
The rebase process could be split into multiple
steps, each corresponding to successive merge commits in master.
The process could be further made easier by introducing a policy where
each merge commit to master must be tagged with a unique identifier. By
a “merge commit” I mean any commit that brings a branch back to master,
including fast-forwards.
Thanks to the unique tags, not only could other branches rebase without
having to resort to commit hashes, but also users could confidently
point at specific points in the history to base their systems on. For
example, if a new version of an application removes an important
functionality, they could pin the guix channel to the merge tag before
and take their time to implement a solution.
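The tagging half of this proposal is easy to sketch with Git; the branch name and tag naming scheme below are made up:

```shell
set -e
# Throwaway repository; names are hypothetical.
tmp=$(mktemp -d); cd "$tmp"
git init -q -b master .
git config user.email dev@example.org
git config user.name Dev
git commit -q --allow-empty -m 'initial'

git checkout -qb mesa-updates
echo mesa > mesa.txt; git add mesa.txt; git commit -qm 'gnu: mesa: Update.'

# Bring the branch back to master and give the merge commit a unique,
# human-readable tag that later serves as a stable anchor.
git checkout -q master
git merge -q --no-ff --no-edit mesa-updates
git tag merge-mesa-updates-2024-09-06
git tag --list 'merge-*'
```

A user could then pin to such a point, e.g. with `guix pull --commit=merge-mesa-updates-2024-09-06`, since the `--commit` option accepts a tag name as well as a commit ID.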
> Notice that with QA, the concept is that the packages will be available
> on the build farm once the branch has been built, so postponing a merge
> has no advantage.
I think that merging reviewed code often is good practice: by
decreasing the scope of change at any particular step, it makes
incompatibilities easier to resolve when rebasing other branches.
* Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
2024-09-06 10:09 ` Andreas Enge
2024-09-06 11:35 ` Marek Paśnikowski
@ 2024-09-06 13:17 ` indieterminacy
2024-09-26 12:52 ` Ludovic Courtès
2 siblings, 0 replies; 22+ messages in thread
From: indieterminacy @ 2024-09-06 13:17 UTC (permalink / raw)
To: Andreas Enge; +Cc: Ludovic Courtès, Simon Tournier, guix-devel
On 2024-09-06 10:09, Andreas Enge wrote:
> Hello,
>
> Am Fri, Sep 06, 2024 at 11:11:14AM +0200 schrieb Ludovic Courtès:
>> The way I see it, one of the branches would be tested independently.
>> The second one would also be tested independently, but on a limited
>> scope—e.g., x86_64-only, because (1) we usually have more build power
>> for that architecture, and (2) perhaps we know the problems with those
>> branches are unlikely to be architecture-specific.
>> Then we’d rebase that second branch on top of the first one, and build
>> the combination for all architectures.
>
> Concurring with Simon, following this description, I also do not
> understand what this concept of merge trains improves as long as it is
> not automated (and we have lots of build power to subsequently build
> several combinations of branches).
>
> Once the first branch is good, why not simply merge it to master and
> then rebase the second branch on master and test it, instead of
> postponing the merge? After all, building is costly, not merging.
>
Well, if anybody wants a Friday digression, here is a parable about
'guaranteed connections':
https://yewtu.be/watch?v=vHEsKAefAzk
YMMV
> Notice that with QA, the concept is that the packages will be available
> on the build farm once the branch has been built, so postponing a merge
> has no advantage.
>
> Andreas
* Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
2024-09-06 11:35 ` Marek Paśnikowski
@ 2024-09-06 13:25 ` Andreas Enge
0 siblings, 0 replies; 22+ messages in thread
From: Andreas Enge @ 2024-09-06 13:25 UTC (permalink / raw)
To: Marek Paśnikowski
On Fri, Sep 06, 2024 at 01:35:07PM +0200, Marek Paśnikowski wrote:
> * What if an unrelated branch gets merged before the two considered in the
> example?
That should not happen, since branches are queued up in QA; see the
paragraph marked "Branches" here:
https://qa.guix.gnu.org/
> * What if a branch is worked on for a long time and the rebase itself
> becomes non-trivial?
And this is a situation we intend to avoid with the smaller branches
(of course it depends on your definition of "long time").
Andreas
* Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
2024-09-06 9:11 ` Ludovic Courtès
2024-09-06 10:09 ` Andreas Enge
@ 2024-09-06 17:44 ` Vagrant Cascadian
2024-09-06 18:06 ` Leo Famulari
2024-09-06 19:49 ` ‘core-updates’ is gone; long live ‘core-packages-team’! Christopher Baines
2024-09-09 17:28 ` Naming “build train” instead of “merge train”? Simon Tournier
2 siblings, 2 replies; 22+ messages in thread
From: Vagrant Cascadian @ 2024-09-06 17:44 UTC (permalink / raw)
To: Ludovic Courtès, Simon Tournier; +Cc: guix-devel
[-- Attachment #1: Type: text/plain, Size: 1715 bytes --]
On 2024-09-06, Ludovic Courtès wrote:
> Simon Tournier <zimon.toutoune@gmail.com> skribis:
>
>> In this picture of “merge train”, the CI workload and world rebuilds
>> will increase, no?
>>
>> Consider the two teams: science and this new core-packages. Then
>> science takes care of openblas (6000+ dependent packages) and
>> core-packages of grep (6000+ dependent packages).
>>
>> When science team wants to merge to master, they ask for a rebuild.
>>
>> When core-packages team wants to merge to master, they ask for a
>> rebuild.
>>
>> Therefore, we have two world rebuilds. Or both teams need to
>> synchronize, and thus the branches are indeed developed independently but
>> not tested independently. Are they?
>
> I don’t have a clear answer to that.
>
> The way I see it, one of the branches would be tested independently.
> The second one would also be tested independently, but on a limited
> scope—e.g., x86_64-only, because (1) we usually have more build power
> for that architecture, and (2) perhaps we know the problems with those
> branches are unlikely to be architecture-specific.
>
> Then we’d rebase that second branch on top of the first one, and build
> the combination for all architectures.
Is it just me, or is rebasing branches disconcerting, as it likely means
the person signing the commit is not necessarily the original person
pushing the commit? This is worst for the now deprecated core-updates
branch with many rebased commits... are people still updating the
signed-off-by tags or whatnot?
Though, of course, there are problems with merges as well I recall being
discussed on the list in the past...
live well,
vagrant
* Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
2024-09-06 17:44 ` Vagrant Cascadian
@ 2024-09-06 18:06 ` Leo Famulari
2024-09-06 20:29 ` Rebasing commits and re-signing before merging (Was: ‘core-updates’ is gone; long live ‘core-packages-team’!) Vagrant Cascadian
2024-09-06 19:49 ` ‘core-updates’ is gone; long live ‘core-packages-team’! Christopher Baines
1 sibling, 1 reply; 22+ messages in thread
From: Leo Famulari @ 2024-09-06 18:06 UTC (permalink / raw)
To: Vagrant Cascadian; +Cc: Ludovic Courtès, Simon Tournier, guix-devel
On Fri, Sep 06, 2024 at 10:44:54AM -0700, Vagrant Cascadian wrote:
> Is it just me, or is rebasing branches disconcerting, as it likely means
> the person signing the commit is not necessarily the original person
> pushing the commit? This is worst for the now deprecated core-updates
> branch with many rebased commits... are people still updating the
> signed-off-by tags or whatnot?
In Guix, the "signed-off-by" tag gives credit to the reviewer of the
patch, but doesn't indicate anything about authority to push to
guix.git.
In all cases, a commit that is pushed to guix.git will be signed by an
authorized committer. The signature system ensures that.
If we are concerned about long-running branches being rebased and
commits losing their "original" signatures, I think it's not really
something to worry about. That's because the signature *only* tells us
that the commit was signed by someone who is authorized, and it
tells us *nothing* else. The code-signing authorization is extremely
limited in scope. It doesn't tell us that the code works, is freely
licensed, is not malicious, etc. So, it doesn't matter who signs a
commit, as long as it is signed by an authorized person.
Does this respond to your concerns? Or have I misunderstood?
* Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
2024-09-06 17:44 ` Vagrant Cascadian
2024-09-06 18:06 ` Leo Famulari
@ 2024-09-06 19:49 ` Christopher Baines
1 sibling, 0 replies; 22+ messages in thread
From: Christopher Baines @ 2024-09-06 19:49 UTC (permalink / raw)
To: Vagrant Cascadian; +Cc: Ludovic Courtès, guix-devel
Vagrant Cascadian <vagrant@debian.org> writes:
> On 2024-09-06, Ludovic Courtès wrote:
>> Simon Tournier <zimon.toutoune@gmail.com> skribis:
>>
>>> In this picture of “merge train”, the CI workload and world rebuilds
>>> will increase, no?
>>>
>>> Consider the two teams: science and this new core-packages. Then
>>> science takes care of openblas (6000+ dependent packages) and
>>> core-packages of grep (6000+ dependent packages).
>>>
>>> When science team wants to merge to master, they ask for a rebuild.
>>>
>>> When core-packages team wants to merge to master, they ask for a
>>> rebuild.
>>>
>>> Therefore, we have two world rebuilds. Or both teams need to
>>> synchronize and thus the branches are indeed developed independently but
>>> not tested independently. Are they?
>>
>> I don’t have a clear answer to that.
>>
>> The way I see it, one of the branches would be tested independently.
>> The second one would also be tested independently, but on a limited
>> scope—e.g., x86_64-only, because (1) we usually have more build power
>> for that architecture, and (2) perhaps we know the problems with those
>> branches are unlikely to be architecture-specific.
>>
>> Then we’d rebase that second branch on top of the first one, and build
>> the combination for all architectures.
>
> Is it just me, or is rebasing branches disconcerting, as it likely means
> the person signing the commit is not necessarily the original person
> pushing the commit? This is worst for the now deprecated core-updates
> branch with many rebased commits... are people still updating the
> signed-off-by tags or whatnot?
Are you finding something specific about that disconcerting?
Personally I think having the ability to rebase branches should lead to
a cleaner Git history (which is more readable and therefore hopefully
more secure). I dislike the idea of treating branches as stateful, as
that makes them much harder to manage and use; it should be possible to
push some commits to a non-master branch, then decide that's a bad idea
and remove them without fuss.
Maybe it also comes down to whether committers are interchangeable or
not. If one person's signature on a commit is viewed differently from
someone else's, then maybe there's an issue.
I don't believe the Signed-off lines are being added to when rebasing;
they still reflect the person or people who took the patch and
initially pushed it to a branch. There should still be the extra
information about the committer and signature key though.
* Rebasing commits and re-signing before merging (Was: ‘core-updates’ is gone; long live ‘core-packages-team’!)
2024-09-06 18:06 ` Leo Famulari
@ 2024-09-06 20:29 ` Vagrant Cascadian
2024-09-07 17:45 ` Leo Famulari
0 siblings, 1 reply; 22+ messages in thread
From: Vagrant Cascadian @ 2024-09-06 20:29 UTC (permalink / raw)
To: Leo Famulari; +Cc: Ludovic Courtès, Simon Tournier, guix-devel
On 2024-09-06, Leo Famulari wrote:
> On Fri, Sep 06, 2024 at 10:44:54AM -0700, Vagrant Cascadian wrote:
>> Is it just me, or is rebasing branches disconcerting, as it likely means
>> the person signing the commit is not necessarily the original person
>> pushing the commit? This is worst for the now deprecated core-updates
>> branch with many rebased commits... are people still updating the
>> signed-off-by tags or whatnot?
>
> In Guix, the "signed-off-by" tag gives credit to the reviewer of the
> patch, but doesn't indicate anything about authority to push to
> guix.git.
That sounds more like a Reviewed-by tag.
from doc/contributing.texi:
When pushing a commit on behalf of somebody else, please add a
@code{Signed-off-by} line at the end of the commit log message---e.g.,
with @command{git am --signoff}. This improves tracking of who did
what.
...
@cindex Reviewed-by, git trailer
When you deem the proposed change adequate and ready for inclusion
within Guix, the following well understood/codified
@samp{Reviewed-by:@tie{}Your@tie{}Name@tie{}<your-email@@example.com>}
@footnote{The @samp{Reviewed-by} Git trailer is used by other projects
such as Linux, and is understood by third-party tools such as the
@samp{b4 am} sub-command, which is able to retrieve the complete
submission email thread from a public-inbox instance and add the Git
trailers found in replies to the commit patches.} line should be used to
sign off as a reviewer, meaning you have reviewed the change and that it
looks good to you:
> In all cases, a commit that is pushed to guix.git will be signed by an
> authorized committer. The signature system ensures that.
>
> If we are concerned about long-running branches being rebased and
> commits losing their "original" signatures, I think it's not really
> something to worry about. That's because the signature *only* tells us
> that the commit was signed by someone who is authorized, and it
> tells us *nothing* else. The code-signing authorization is extremely
> limited in scope. It doesn't tell us that the code works, is freely
> licensed, is not malicious, etc. So, it doesn't matter who signs a
> commit, as long as it is signed by an authorized person.
My understanding of what properly signed commits tell me, at least in
the context of Guix, is that the person who has signed a given commit
has made reasonable efforts to ensure the code works, is freely
licensed, and is not malicious, etc.
That they agree to do those sorts of things and have a history of doing
those things is why some people are trusted (e.g. authorized) to push
commits.
Mistakes happen, and that is fine, but having the signatures allows some
way to review who did what when unfortunate things inevitably happen, to
try to come to an understanding of what to do better in the future.
What concerns me is that rebasing hundreds (thousands?) of commits
(e.g. the recent core-updates rebase & merge), many of which were
originally reviewed by someone other than the person signing the commit,
and re-signing them, reduces the confidence that the signature indicates
processes were appropriately followed...
guix pull does protect against moving to unrelated histories, so
probably the worst dangers of rebasing will at least trigger some
warning!
live well,
vagrant
* Re: Rebasing commits and re-signing before merging (Was: ‘core-updates’ is gone; long live ‘core-packages-team’!)
2024-09-06 20:29 ` Rebasing commits and re-signing before merging (Was: ‘core-updates’ is gone; long live ‘core-packages-team’!) Vagrant Cascadian
@ 2024-09-07 17:45 ` Leo Famulari
2024-09-08 2:33 ` Vagrant Cascadian
0 siblings, 1 reply; 22+ messages in thread
From: Leo Famulari @ 2024-09-07 17:45 UTC (permalink / raw)
To: Vagrant Cascadian; +Cc: Ludovic Courtès, Simon Tournier, guix-devel
On Fri, Sep 06, 2024 at 01:29:11PM -0700, Vagrant Cascadian wrote:
> > In Guix, the "signed-off-by" tag gives credit to the reviewer of the
> > patch, but doesn't indicate anything about authority to push to
> > guix.git.
>
> That sounds more like a Reviewed-by tag.
>
> from doc/contributing.texi:
>
> When pushing a commit on behalf of somebody else, please add a
> @code{Signed-off-by} line at the end of the commit log message---e.g.,
> with @command{git am --signoff}. This improves tracking of who did
> what.
We used the signed-off-by tag for years before we started signing
commits, so in Guix it has also indicated the person who performed the
primary review of the patch / commit.
> My understanding of what properly signed commits tell me, at least in
> the context of Guix, is that the person who has signed a given commit
> has made reasonable efforts to ensure the code works, is freely
> licensed, and is not malicious, etc.
I see. That's a misconception. The commit signature can only be used as
a code-signing authorization tool, to control access to the
authoritative copy of the codebase and, transitively, to control access
to users' computers.
The project leadership does aim to only authorize people they believe
will make the efforts you describe above.
But in Guix, the requirement to make those efforts is only enforced
socially.
There are no mechanisms to ensure that the build is not broken on the
master branch, etc.
* Re: Rebasing commits and re-signing before merging (Was: ‘core-updates’ is gone; long live ‘core-packages-team’!)
2024-09-07 17:45 ` Leo Famulari
@ 2024-09-08 2:33 ` Vagrant Cascadian
0 siblings, 0 replies; 22+ messages in thread
From: Vagrant Cascadian @ 2024-09-08 2:33 UTC (permalink / raw)
To: Leo Famulari; +Cc: Ludovic Courtès, Simon Tournier, guix-devel
On 2024-09-07, Leo Famulari wrote:
> On Fri, Sep 06, 2024 at 01:29:11PM -0700, Vagrant Cascadian wrote:
>> > In Guix, the "signed-off-by" tag gives credit to the reviewer of the
>> > patch, but doesn't indicate anything about authority to push to
>> > guix.git.
>>
>> That sounds more like a Reviewed-by tag.
>>
>> from doc/contributing.texi:
>>
>> When pushing a commit on behalf of somebody else, please add a
>> @code{Signed-off-by} line at the end of the commit log message---e.g.,
>> with @command{git am --signoff}. This improves tracking of who did
>> what.
>
> We used the signed-off-by tag for years before we started signing
> commits, so in Guix it has also indicated the person who performed the
> primary review of the patch / commit.
Well, guix documentation mentions both Signed-off-by and Reviewed-by,
even if historically there was different practice in use...
Given that "pushing a commit on behalf of someone else" also necessarily
requires for all practical purposes "signing" the commit with a valid
key, I read that as the two going together. Although there might be a
Signed-off-by by someone other than the signer.
Not a huge deal, really, in any case.
>> My understanding of what properly signed commits tell me, at least in
>> the context of Guix, is that the person who has signed a given commit
>> has made reasonable efforts to ensure the code works, is freely
>> licensed, and is not malicious, etc.
>
> I see. That's a misconception. The commit signature can only be used as
> a code-signing authorization tool, to control access to the
> authoritative copy of the codebase and, transitively, to control access
> to users' computers.
>
> The project leadership does aim to only authorize people they believe
> will make the efforts you describe above.
>
> But in Guix, the requirement to make those efforts is only enforced
> socially.
>
> There are no mechanisms to ensure that the build is not broken on the
> master branch, etc.
I do not see the distinction between social and technical mechanisms
here as... meaningful?
The code-signing authorization tool (i.e. technical) is a useful way to
track whether the social agreements of the project are being respected
(i.e. social), and a mechanism to maintain those agreements. That it
also tracks the authoritative codebase seems a desirable
side-effect... which has both social and technical elements.
I have no illusions here: someone could push a broken or otherwise
imperfect commit; I have even done so myself at least once or twice! The
question is more what to do when that happens, or repeatedly happens,
and there various technical measures help enforce the social norms.
live well,
vagrant
* Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
2024-09-06 9:01 ` Ludovic Courtès
@ 2024-09-09 15:30 ` Simon Tournier
0 siblings, 0 replies; 22+ messages in thread
From: Simon Tournier @ 2024-09-09 15:30 UTC (permalink / raw)
To: Ludovic Courtès, Steve George; +Cc: guix-devel
Hi Ludo,
On Fri, 06 Sep 2024 at 11:01, Ludovic Courtès <ludo@gnu.org> wrote:
> Once it’s “known good”, I see two possibilities:
[...]
> • Attach to a “merge train”. For instance, assume there’s a branch
> changing ‘gsl’: this is totally unrelated to ‘ffmpeg’ but it also
> triggers a lot of rebuilds. We could tack that second branch on top
> of the known-good ‘ffmpeg’ branch, and, once it’s all good, merge
> that “train” into ‘master’.
As Andreas pointed out:
Once the first branch is good, why not simply merge it to master and then
rebase the second branch on master and test it, instead of postponing the
merge? After all, building is costly, not merging.
Somehow, I have the same question: if the “gsl” branch is “known good”,
why not directly merge it to master? As the other possibility suggests…
> • Merge into ‘master’, especially if it turns out that binaries are
> already available on the build farms.
…However, in this case, if the branch changing ’ffmpeg’ is “known good”
because it had been built, then the 521 rebuilds are wasted because the
branch “gsl” is cooking and also triggers these same 521 rebuilds.
Therefore, it would be wiser to merge the ’ffmpeg’ branch into the ’gsl’
branch and rebuild only once. (I am not pushing the button “please save
the planet” but I am thinking about it very strongly. ;-))
Somehow, a tool is missing, IMHO.
How to know which branch needs to be rebased onto which other one? How
to know which rebuilds from one specific branch are not independent of
some other branch?
Maybe it would help as a first step to have the intersection list of
“guix refresh” applied to two sets of packages.
Assuming the two branches are not continuously built but only when ready
to merge, I still have the same question [1]:
Bah the question how to merge different branches containing world
rebuilds without the big catch-all old core-updates branch is not
addressed under the constraint of reducing as much as possible the
number of world rebuilds, for what my opinion is worth.
Cheers,
simon
--8<---------------cut here---------------start------------->8---
$ guix refresh -l gsl \
  | cut -d':' -f2 | tr ' ' '\n' | tail -n +2 | sort | uniq > gsl.deps
$ for i in $(seq 2 7); do guix refresh -l ffmpeg@$i \
| cut -d':' -f2 | tr ' ' '\n' | tail -n +2 ;done | sort | uniq > ffmpeg.deps
$ wc -l gsl.deps ffmpeg.deps
1473 gsl.deps
521 ffmpeg.deps
1994 total
$ for line in $(cat ffmpeg.deps); do grep -n ^$line gsl.deps ;done | wc -l
521
--8<---------------cut here---------------end--------------->8---
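Since both `.deps` files are sorted, the same intersection could also be
computed with `comm`; a sketch on stand-in data rather than the real
dependency lists:

```shell
#!/bin/sh
# Intersection of two sorted package lists with comm(1).
set -e
dir=$(mktemp -d) && cd "$dir"
printf '%s\n' bar baz foo > gsl.deps      # stand-in for the real lists
printf '%s\n' baz foo qux > ffmpeg.deps
comm -12 gsl.deps ffmpeg.deps             # lines common to both: baz, foo
comm -12 gsl.deps ffmpeg.deps | wc -l     # size of the intersection
```

This avoids the quadratic grep loop and the prefix-matching pitfall of
`grep -n ^$line`.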
1: Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
Simon Tournier <zimon.toutoune@gmail.com>
Wed, 04 Sep 2024 14:58:37 +0200
id:87v7zby3r6.fsf@gmail.com
https://lists.gnu.org/archive/html/guix-devel/2024-09
https://yhetil.org/guix/87v7zby3r6.fsf@gmail.com
* Naming “build train” instead of “merge train”?
2024-09-06 9:11 ` Ludovic Courtès
2024-09-06 10:09 ` Andreas Enge
2024-09-06 17:44 ` Vagrant Cascadian
@ 2024-09-09 17:28 ` Simon Tournier
2 siblings, 0 replies; 22+ messages in thread
From: Simon Tournier @ 2024-09-09 17:28 UTC (permalink / raw)
To: Ludovic Courtès; +Cc: guix-devel
Hi Ludo, all
On Fri, 06 Sep 2024 at 11:11, Ludovic Courtès <ludo@gnu.org> wrote:
> In the end, perhaps we’ll have to negotiate on a case-by-case basis.
> The important thing to me is: independent testing as much as possible,
> and well-defined responsibilities and scope for the people/teams
> engaging in such changes.
I agree.
My main concern is about the potentially excessive number of unnecessary
rebuilds.
We need a policy that clarifies how to start large rebuilds, which
implies crossing the final line with a merge.
Let's take the example from this other message [1] as one case.
On Fri, 06 Sep 2024 at 11:01, Ludovic Courtès <ludo@gnu.org> wrote:
> • Attach to a “merge train”. For instance, assume there’s a branch
> changing ‘gsl’: this is totally unrelated to ‘ffmpeg’ but it also
> triggers a lot of rebuilds. We could tack that second branch on top
> of the known-good ‘ffmpeg’ branch, and, once it’s all good, merge
> that “train” into ‘master’.
>
> (To be clear, the term “merge train” originates from GitLab-CI and
> similar CI tool, which use it as a merge scheduling strategy:
> <https://docs.gitlab.com/ee/ci/pipelines/merge_trains.html>. GitLab-CI
> can create merge trains automatically I believe, but in our case we’d do
> that manually, at least for now.)
If we consider this specific case, the “merge train” workflow means:
a) the branch changing “gsl” is built, so 1473 rebuilds.
b) the branch changing “gsl” and “ffmpeg” is also built in parallel, so
521 rebuilds.
The “merge train” workflow also reads:
i) the branch changing “ffmpeg” is built, so 521 rebuilds.
ii) the branch changing “gsl” and “ffmpeg” is also built in parallel, so
1473 rebuilds including all the 521 rebuilds of i).
Therefore, for each scenario, 521 builds are “wasted”, compared to the
optimal 1473 in this case.
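Restating that arithmetic as a tiny sketch (counts taken from the `guix
refresh` output below; the variable names are mine):

```shell
#!/bin/sh
# Builds wasted by each "merge train" scenario in the gsl/ffmpeg example.
# ffmpeg's 521 dependents are a subset of gsl's 1473, so the union is 1473.
gsl=1473; ffmpeg=521; union=1473
echo "scenario 1 wastes $(( gsl + ffmpeg - union )) builds"
echo "scenario 2 wastes $(( ffmpeg + union - union )) builds"
# Both print 521: either ordering rebuilds 1994 packages instead of 1473.
```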
I do not think it’s what you have in mind when speaking about “merge
train”. Is it?
Maybe this points out that “merge train” as described by GitLab is not
what we need. The “merge train” workflow runs the builds of several
branches *in parallel*, and I am not convinced this is what we want for
heavy branches.
Therefore, in order to avoid confusion, maybe we could avoid the name
“merge train” and use another name for the strategy we are discussing
and would like to implement: for instance “rebase train” or “build
train”. Or yet another name. :-)
Let's try not to waste resources. I think we could have a policy for
when a branch contains “heavy” rebuilds, because we cannot continuously
rebuild the world. ;-)
Somehow, the team of this “heavy” rebuild branch says: “hey, the build
train of branch X starts soon”, meaning the team is ready and the state
of branch X looks good, so it requires setting up the CI, i.e., the team
is going to trigger more than 1000 rebuilds. The message “build train of
branch X starts soon” is sent to guix-devel and one week is given to the
other teams to rebase their own branch onto branch X, if they have
something to build, are ready, etc.
The team who asked for the “build train” is responsible for crossing the
final line (the merge to master) and also responsible for synchronizing
with the wagons, e.g., by sending some updates about how the world
rebuild is going, when the final merge is planned, etc.
If no one takes this train, no big deal because a next train is probably
going to start soon. ;-)
WDYT?
Cheers,
simon
--8<---------------cut here---------------start------------->8---
$ guix refresh -l gsl \
  | cut -d':' -f2 | tr ' ' '\n' | tail -n +2 | sort | uniq > gsl.deps
$ for i in $(seq 2 7); do guix refresh -l ffmpeg@$i \
| cut -d':' -f2 | tr ' ' '\n' | tail -n +2 ;done | sort | uniq > ffmpeg.deps
$ wc -l gsl.deps ffmpeg.deps
1473 gsl.deps
521 ffmpeg.deps
1994 total
$ for line in $(cat ffmpeg.deps); do grep -n ^$line gsl.deps ;done | wc -l
521
--8<---------------cut here---------------end--------------->8---
PS: Please note the numbers here are not the true numbers of rebuilds but
the numbers of packages that trigger all the necessary rebuilds.
1: Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
Ludovic Courtès <ludo@gnu.org>
Fri, 06 Sep 2024 11:01:46 +0200
id:87h6at2m11.fsf@gnu.org
https://lists.gnu.org/archive/html/guix-devel/2024-09
https://yhetil.org/guix/87h6at2m11.fsf@gnu.org
* Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
2024-09-06 10:09 ` Andreas Enge
2024-09-06 11:35 ` Marek Paśnikowski
2024-09-06 13:17 ` indieterminacy
@ 2024-09-26 12:52 ` Ludovic Courtès
2 siblings, 0 replies; 22+ messages in thread
From: Ludovic Courtès @ 2024-09-26 12:52 UTC (permalink / raw)
To: Andreas Enge; +Cc: Simon Tournier, guix-devel
Hi,
Andreas Enge <andreas@enge.fr> skribis:
> On Fri, Sep 06, 2024 at 11:11:14AM +0200, Ludovic Courtès wrote:
>> The way I see it, one of the branches would be tested independently.
>> The second one would also be tested independently, but on a limited
>> scope—e.g., x86_64-only, because (1) we usually have more build power
>> for that architecture, and (2) perhaps we know the problems with those
>> branches are unlikely to be architecture-specific.
>> Then we’d rebase that second branch on top of the first one, and build
>> the combination for all architectures.
[...]
> Once the first branch is good, why not simply merge it to master and then
> rebase the second branch on master and test it, instead of postponing the
> merge? After all, building is costly, not merging.
>
> Notice that with QA, the concept is that the packages will be available
> on the build farm once the branch has been built, so postponing a merge
> has no advantage.
Maybe you’re right, I don’t know. We’ll have to give it a spin and see
what works best.
My main concern is the build cost of small but unrelated changes that
really ought to be tested separately but trigger lots of rebuilds (or
redownloads, once substitutes are available).
Ludo’.
Thread overview: 22+ messages
2024-08-31 13:03 ‘core-updates’ is gone; long live ‘core-packages-team’! Ludovic Courtès
2024-09-01 16:34 ` Steve George
2024-09-01 17:06 ` Christopher Baines
2024-09-03 14:02 ` Christopher Baines
2024-09-06 9:01 ` Ludovic Courtès
2024-09-09 15:30 ` Simon Tournier
2024-09-04 12:58 ` Simon Tournier
2024-09-05 8:39 ` Marek Paśnikowski
2024-09-05 9:40 ` Ricardo Wurmus
2024-09-06 9:11 ` Ludovic Courtès
2024-09-06 10:09 ` Andreas Enge
2024-09-06 11:35 ` Marek Paśnikowski
2024-09-06 13:25 ` Andreas Enge
2024-09-06 13:17 ` indieterminacy
2024-09-26 12:52 ` Ludovic Courtès
2024-09-06 17:44 ` Vagrant Cascadian
2024-09-06 18:06 ` Leo Famulari
2024-09-06 20:29 ` Rebasing commits and re-signing before merging (Was: ‘core-updates’ is gone; long live ‘core-packages-team’!) Vagrant Cascadian
2024-09-07 17:45 ` Leo Famulari
2024-09-08 2:33 ` Vagrant Cascadian
2024-09-06 19:49 ` ‘core-updates’ is gone; long live ‘core-packages-team’! Christopher Baines
2024-09-09 17:28 ` Naming “build train” instead of “merge train”? Simon Tournier