I've heard that theory before.  From observation on my late armhf server (two cores):

- it takes just below 2 GB to build one of the derivations;
- it doesn't swap a single byte;
- whether with two cores or a single one, it takes roughly the same amount of memory;
- substitution needs little memory (and --cores is irrelevant to it anyway).

I think that's because we load all the files in a batch before building them.  The bulk of the memory is needed not for running the compiler on a thread, but for loading the files and keeping them in memory for the whole duration of the build.  With more threads, we still don't load each file more than once (twice, counting the build itself), so there's no reason it should take more memory.

Or maybe the process of loading and building is inherently single-threaded?  I don't think so, but maybe.  Or maybe it doesn't honor --cores.

On 13 July 2022 18:58:58 GMT+02:00, Csepp wrote:
>
>Maxim Cournoyer writes:
>
>> Hi Danny,
>>
>> Danny Milosavljevic writes:
>>
>>> Hi,
>>>
>>> I just got a report that with Guix in a virtual machine (as described
>>> in section 8.16 of the manual), guix pull does not actually work[1]
>>> with 1 GB of RAM.  It does work fine with 4 GB of RAM.
>>
>> I don't see any reference to 1 GiB being enough in our current version
>> of the manual.  If you do, please let me know.
>>
>> Closing for now.
>>
>> Thanks,
>>
>> Maxim
>
>I think it's enough if you only use a single core.
>If any guix operation goes out of memory, add --cores=1.
>So: guix pull --cores=1
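
For what it's worth, something like this is enough to watch memory during a pull.  A rough sketch, not a polished tool: it assumes a Linux system with procps' vmstat installed, and it samples system-wide memory because the derivations are built by guix-daemon's child processes, so timing the guix client alone wouldn't capture them:

  # Sample memory and swap once per second while "guix pull" runs.
  vmstat 1 > vmstat.log &
  VMSTAT_PID=$!
  guix pull --cores=1
  kill "$VMSTAT_PID"
  # Peak usage is roughly total RAM minus the smallest "free" value in
  # vmstat.log; nonzero "si"/"so" columns would mean it actually swapped.

Comparing two such logs, one per --cores setting, shows whether the core count changes peak usage at all.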