On Tue, 28 Sep 2021 10:43:20 +0200 zimoun wrote:
> Hi,
>
> On Mon, 27 Sep 2021 at 17:46, Jason Self wrote:
>
> [...]
>
> > Yes. In gen6. They have been moved, not deleted.
> >
> > The versioning and locations in terms of gnuN and genN are knowable
> > and predictable in advance. I wonder if there is, or could be made,
> > a way to leverage that so that future moving of files can be done
> > without causing problems, as long as the files themselves remain
> > otherwise identical. As an example, the current cleanup scripts
> > might be found in old/gen7 in the future. Although using git would
> > probably be a better choice as it would seem to eliminate URL
> > hunting.
>
> Guix has the ability to transparently build any old version using
> “guix time-machine”, i.e.,
>
>     guix time-machine --commit=0c7c84407d65f3d03ad1fe3984ae4d524992f498 \
>       -- build linux-libre
>
> should build the Linux (libre) kernel as it was on 25 May 2020.
>
> If the user allows substitutes, then the necessary materials are
> fetched from machines hosted in Berlin and maintained by Guix folks.
>
> However, if the user does not allow substitutes, then the sources are
> first fetched from upstream. There are several cases here. If upstream
> is still up, everything is fine. If upstream disappeared in the
> meantime, it depends on the “type” of the origin, and the core issue
> is the mapping between the information recorded at packaging time
> (e.g., 25 May 2020) and the servers providing a fallback, at request
> time, for the missing source.
>
> When the upstream source is a Git repository, this mapping is a simple
> content-addressed lookup by an (almost) straightforward resolver.
>
> When the upstream source is not a Git repository, this mapping becomes
> harder and requires – in addition to a fallback server – an external
> resolver: something that maps from the information recorded at
> packaging time (25 May 2020) to the fallback server.
> If the package linux-libre defined on 25 May 2020 (written in stone)
> points to a source URL which disappears, this Guix time-machine
> feature becomes doomed, because a URL is a really bad
> content-addressed system, as all the broken links on the internet
> show us.
>
> For sure, the infrastructure needs to evolve for a better future;
> easier maintainability, for instance. However, please consider the
> archivist's point of view and help to not break the past. :-)

It's not really breaking the past if this is how the past worked in
reality: previous generations of scripts are moved to old/genN. What
breaks is Guix's representation of the past, which says that they do
not move and so doesn't reflect what actually happened. The two don't
seem equivalent.

It seems that Guix can already handle multiple download locations,
either the main location or others, so why is the old/gen7 location
not already in the kernel build recipe? If a new freedom problem were
found that resulted in the need for an 8th generation, the current
scripts would become findable in old/gen7. Is the Guix build machinery
currently aware of that and ready to check old/gen7 whenever that
future move happens? If not, then this would seem to create future
breakage when it does. This move is 100% knowable and predictable in
advance, so why not prepare for it now and put old/gen7 into the
recipe for the kernel, even if it's just an additional hardcoded URL
and not something dynamically computed?

If not, using git would seem to be a better choice. I'm not sure why
it's not used already.
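
For what it's worth, the "additional hardcoded URL" idea maps directly
onto Guix's origin syntax: with the url-fetch method, the uri field may
be a list of URLs, tried in order until one succeeds and the download
matches the recorded sha256. A sketch only; the paths, version, and
hash below are illustrative placeholders, not the real recipe:

```scheme
;; Sketch, not the actual linux-libre recipe: version, URLs, and
;; hash are placeholders.  With url-fetch, `uri' may be a list of
;; URLs; Guix tries each in turn until one yields content matching
;; the sha256 below.
(origin
  (method url-fetch)
  (uri (list
        ;; Current (gen7) location of the source.
        "https://linux-libre.fsfla.org/pub/linux-libre/releases/X.Y-gnu/linux-libre-X.Y-gnu.tar.xz"
        ;; Predicted location after a hypothetical future gen8 move.
        "https://linux-libre.fsfla.org/pub/linux-libre/releases/old/gen7/X.Y-gnu/linux-libre-X.Y-gnu.tar.xz"))
  (sha256
   (base32 "0000000000000000000000000000000000000000000000000000")))
```

Since the future old/gen7 URL is knowable today, listing it now would
cost nothing and would keep the recipe working across the move.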
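
And the content-addressed case zimoun describes is what a git-fetch
origin provides: the commit and the sha256 of the checkout identify
the source independently of any particular URL, which is what lets a
resolver (such as the Software Heritage archive that Guix can fall
back to) serve the source even if upstream vanishes. Again a sketch
with placeholder values, not a real recipe:

```scheme
;; Sketch only: the URL, commit, and hash are placeholders.  Because
;; the commit and the checkout's sha256 are recorded in the origin,
;; the source can be recovered by content-addressed lookup from an
;; archive even if this URL dies.
(origin
  (method git-fetch)
  (uri (git-reference
        (url "https://example.org/linux-libre.git")
        (commit "0123456789abcdef0123456789abcdef01234567")))
  (file-name "linux-libre-checkout")
  (sha256
   (base32 "0000000000000000000000000000000000000000000000000000")))
```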