Mark H Weaver skribis:

> ludo@gnu.org (Ludovic Courtès) writes:
>
>> Mark H Weaver skribis:
>>
>>> I think we should sort the entire directory using merge sort backed to
>>> disk files. If we load chunks of the directory, sort them and process
>>> them individually, I expect that this will increase the amount of I/O
>>> required by a non-trivial factor. In each pass, we would load blocks of
>>> inodes from disk, almost all of which are likely to be present in the
>>> store and thus linked from the directory, but in this scheme we will
>>> process only a small number of them and drop the rest on the floor to be
>>> read again in the next pass. Given that even my fairly optimal
>>> implementation takes about 35 minutes to run on Hydra, I'd prefer to
>>> avoid multiplying that by a non-trivial factor.
>>
>> Sure, though it’s not obvious to me how much of a difference it makes;
>> my guess is that processing in large chunks is already a win, but we’d
>> have to measure.
>
> I agree, it would surely be a win. Given that it currently takes on the
> order of a day to run this phase on Hydra, if your proposed method takes
> 2 hours, that would be a huge win, but still not good, IMO. Even 35
> minutes is slower than I'd like.

Of course.

I did some measurements with the attached program on chapters, which is
a Xen VM with spinning disks underneath, similar to hydra.gnu.org. It
has 600k entries in /gnu/store/.links.

Here’s a comparison of the “optimal” mode (bulk stats after we’ve
fetched all the dirents) vs. the “semi-interleaved” mode (doing bulk
stats every 100,000 dirents):

--8<---------------cut here---------------start------------->8---
ludo@guix:~$ gcc -std=gnu99 -Wall links-traversal.c -DMODE=3
ludo@guix:~$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
ludo@guix:~$ time ./a.out
603858 dir_entries, 157 seconds
stat took 1 seconds

real    2m38.508s
user    0m0.324s
sys     0m1.824s
ludo@guix:~$ gcc -std=gnu99 -Wall links-traversal.c -DMODE=2
ludo@guix:~$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
ludo@guix:~$ time ./a.out
3852 dir_entries, 172 seconds (including stat)

real    2m51.827s
user    0m0.312s
sys     0m1.808s
--8<---------------cut here---------------end--------------->8---

Semi-interleaved is ~12% slower here (not sure how reproducible that is
though).

>>> Why not just use GNU sort? It already exists, and does exactly what we
>>> need.
>>
>> Does ‘sort’ manage to avoid reading whole files in memory?
>
> Yes, it does. I monitored the 'sort' process when I first ran my
> optimized pipeline. It created about 10 files in /tmp, approximately 70
> megabytes each as I recall, and then read them all concurrently while
> writing the sorted output.
>
> My guess is that it reads a manageable chunk of the input, sorts it in
> memory, and writes it to a temporary file. I guess it repeats this
> process, writing multiple temporary files, until the entire input is
> consumed, and then reads all of those temporary files, merging them
> together into the output stream.

OK. That seems to be what the comment above ‘sortlines’ in sort.c
describes.

>>> If you object to using an external program for some reason, I would
>>> prefer to re-implement a similar algorithm in the daemon.
>>
>> Yeah, I’d rather avoid serializing the list of file names/inode number
>> pairs just to invoke ‘sort’ on that.
>
> Sure, I agree that it would be better to avoid that, but IMO not at the
> cost of using O(N) memory instead of O(1) memory, nor at the cost of
> multiplying the amount of disk I/O by a non-trivial factor.

Understood.
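For the record, the core of what you describe is not much code. Below
is a rough sketch of a disk-backed merge sort (read a bounded chunk,
sort it in memory, spill it to a temporary file, then merge the runs).
It sorts text lines from stdin, the chunk size and run limit are
arbitrary, and error handling and run recursion are omitted, so take it
as an illustration of the algorithm rather than something to drop into
the daemon as-is:

--8<---------------cut here---------------start------------->8---
/* external-sort.c -- sketch of a disk-backed merge sort: sort the lines
   of stdin in bounded memory and write the merged result to stdout.
   Error handling and recursive merging of runs are omitted.  */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK_LINES 100000   /* lines sorted in memory per run (arbitrary) */
#define MAX_RUNS    64       /* cap on temporary files (arbitrary) */

static int
compare_lines (const void *a, const void *b)
{
  return strcmp (*(char *const *) a, *(char *const *) b);
}

int
main (void)
{
  static char *chunk[CHUNK_LINES];
  FILE *runs[MAX_RUNS];
  char *heads[MAX_RUNS] = { NULL };
  int nruns = 0;
  char *line = NULL;
  size_t len = 0;
  int eof = 0;

  /* Phase 1: read up to CHUNK_LINES lines, sort them in memory, and
     spill each sorted run to an anonymous temporary file.  */
  while (!eof && nruns < MAX_RUNS)
    {
      size_t n = 0;

      while (n < CHUNK_LINES)
        {
          if (getline (&line, &len, stdin) < 0)
            {
              eof = 1;
              break;
            }
          chunk[n++] = strdup (line);
        }
      if (n == 0)
        break;

      qsort (chunk, n, sizeof chunk[0], compare_lines);

      FILE *run = tmpfile ();
      for (size_t i = 0; i < n; i++)
        {
          fputs (chunk[i], run);
          free (chunk[i]);
        }
      rewind (run);
      runs[nruns++] = run;
    }

  /* Phase 2: merge the runs, repeatedly emitting the smallest head
     line.  A real implementation would use a heap; a linear scan is
     enough to show the idea.  */
  for (int i = 0; i < nruns; i++)
    if (getline (&line, &len, runs[i]) >= 0)
      heads[i] = strdup (line);

  for (;;)
    {
      int best = -1;
      for (int i = 0; i < nruns; i++)
        if (heads[i] != NULL
            && (best < 0 || strcmp (heads[i], heads[best]) < 0))
          best = i;
      if (best < 0)
        break;

      fputs (heads[best], stdout);
      free (heads[best]);
      heads[best] = getline (&line, &len, runs[best]) >= 0
        ? strdup (line) : NULL;
    }

  free (line);
  return EXIT_SUCCESS;
}
--8<---------------cut here---------------end--------------->8---

In the daemon we would sort ⟨inode, basename⟩ pairs rather than raw
lines, and merge more than a handful of runs, but the shape is the
same.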
sort.c in Coreutils is very big, and we surely don’t want to duplicate
all that. Yet, I’d rather not shell out to ‘sort’.

Do you know how many entries are in .links on hydra.gnu.org? If it
performs comparably to chapters, the timings suggest it should have
around 10.5M entries.

Thanks!

Ludo’.
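PS: For illustration, here is roughly the kind of traversal the two
modes measured above correspond to. This is *not* the attached
links-traversal.c: the constants, the BATCH toggle standing in for
MODE, and the output format are made up, and error handling is
trimmed; it is only a sketch of decoupling readdir from the bulk
stats.

--8<---------------cut here---------------start------------->8---
/* Sketch: separate the readdir pass over /gnu/store/.links from the
   stat pass, stat'ing in batches sorted by inode number.  */

#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <time.h>

#define LINKS_DIR "/gnu/store/.links"
#define BATCH     100000   /* "semi-interleaved": stat every 100,000
                              dirents; make it larger than the number
                              of entries to approximate "bulk" mode */

struct entry
{
  ino_t ino;
  char name[256];
};

static int
compare_ino (const void *a, const void *b)
{
  const struct entry *ea = a, *eb = b;
  return (ea->ino > eb->ino) - (ea->ino < eb->ino);
}

static void
stat_batch (struct entry *entries, size_t n)
{
  char path[sizeof LINKS_DIR + 256];
  struct stat st;

  /* Sorting by inode number makes the stats roughly sequential on
     disk, which is the point of decoupling them from readdir.  */
  qsort (entries, n, sizeof *entries, compare_ino);

  for (size_t i = 0; i < n; i++)
    {
      snprintf (path, sizeof path, "%s/%s", LINKS_DIR, entries[i].name);
      stat (path, &st);   /* result ignored; we only measure the I/O */
    }
}

int
main (void)
{
  DIR *dir = opendir (LINKS_DIR);
  if (dir == NULL)
    {
      perror (LINKS_DIR);
      return EXIT_FAILURE;
    }

  struct entry *entries = malloc (BATCH * sizeof *entries);
  size_t count = 0, total = 0;
  time_t start = time (NULL);
  struct dirent *d;

  while ((d = readdir (dir)) != NULL)
    {
      if (strcmp (d->d_name, ".") == 0 || strcmp (d->d_name, "..") == 0)
        continue;

      entries[count].ino = d->d_ino;
      snprintf (entries[count].name, sizeof entries[count].name,
                "%s", d->d_name);
      total++;

      if (++count == BATCH)
        {
          stat_batch (entries, count);
          count = 0;
        }
    }

  stat_batch (entries, count);   /* whatever is left over */

  printf ("%zu dir_entries, %ld seconds (including stat)\n",
          total, (long) (time (NULL) - start));

  closedir (dir);
  free (entries);
  return EXIT_SUCCESS;
}
--8<---------------cut here---------------end--------------->8---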