Hi Ludo,

Ludovic Courtès writes:

> Well, ‘guix publish’ would first need to create multi-member archives,
> right?

Correct, but that is trivial once the bindings have been implemented.

> Also, lzlib (which is what we use) does not implement parallel
> decompression, AIUI.

Yes, it does: multi-member archives are a mandatory part of the lzip
specification, and lzlib implements the full specification.

> Even if it did, would we be able to take advantage of it?  Currently
> ‘restore-file’ expects to read an archive stream sequentially.

Yes, it works; I just tried this:

--8<---------------cut here---------------start------------->8---
cat big-file.lz | plzip -d -o big-file -
--8<---------------cut here---------------end--------------->8---

Decompression happens in parallel.

> Even if I’m wrong :-), decompression speed would at best be doubled on
> multi-core machines (wouldn’t help much on low-end ARM devices), and
> that’s very little compared to the decompression speed achieved by zstd.

Why only doubled?  If the archive has more than CORE-NUMBER members,
then decompression time can be divided by CORE-NUMBER.  (A quick way to
check this locally is sketched at the end of this message.)

All that said, I think we should have both:

- Parallel lzip support is the easiest to add at this point.  It's the
  best option for people with low bandwidth, which I suppose covers
  most of the planet.

- zstd is best for users with high bandwidth (or with slow hardware).
  We need to write the necessary bindings first though, so it will take
  a bit more time.

Users can then choose whichever compression they prefer, mostly
depending on their hardware and bandwidth.

--
Pierre Neidhardt
https://ambrevar.xyz/
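
P.S.  For anyone who wants to reproduce the experiment, here is a
minimal sketch.  It assumes plzip is installed; ‘big-file’ is any
reasonably large file, and the 4 MiB member size is an arbitrary choice
for the test, not what ‘guix publish’ would necessarily use:

--8<---------------cut here---------------start------------->8---
# Compress into a multi-member .lz archive, one member per 4 MiB of
# input, keeping the original file around (-k):
plzip -9 -k -B 4MiB big-file             # writes big-file.lz

# Serial vs. parallel decompression of the very same archive:
time lzip  -dc big-file.lz > /dev/null   # single core
time plzip -dc big-file.lz > /dev/null   # one thread per member, up to
                                         # the number of available cores
--8<---------------cut here---------------end--------------->8---

If the archive ends up with at least as many members as you have cores,
the second command should finish in a fraction of the time of the
first.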