Hey,

I've been putting some more time and money into trying to get the QA data service (data.qa.guix.gnu.org) to perform better recently, but unfortunately I haven't been having much success. I've been trying to parallelise more, and while I think this should speed things up, I'm having to reduce the actual parallelism due to lack of memory (the machine I rent for data.qa.guix.gnu.org has just 32GiB).

One of the memory problems I'm having relates to the Guix inferior processes that the data service uses when computing derivations. The data service goes through the list of systems (x86_64-linux, aarch64-linux, ...) and, because the data cached for x86_64-linux probably doesn't relate to aarch64-linux, there's some code that attempts to clear the caches [1].

1: https://git.savannah.gnu.org/cgit/guix/data-service.git/tree/guix-data-service/jobs/load-new-guix-revision.scm#n1970

Unfortunately this code has to reach into Guix internals, and while it does reduce the heap usage significantly, it doesn't result in stable memory usage. Each system processed seems to add about 250MiB of data to the Guile heap that isn't cleared out. That sounds like a lot of memory to me, but there are also a lot of systems/targets, so overall the inferior process ends up with around 6GiB of data in the heap after processing all the systems/targets. This peak memory usage really limits how much the machine can do. These numbers come from a specific job that ran with a parallelism of 1 to get clear data [2].

2: https://data.qa.guix.gnu.org/job/60896

I've tried using the heap profiler that Ludo wrote, but nothing jumps out at me about what this extra 250MiB of stuff in the heap relates to. I'm also aware that my current cache cleanup doesn't actually remove the references to the hash tables themselves, but I doubt they take up this much space.

Does anyone have any suggestions as to what might be taking up this space on the heap, or how to try and find out?
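In case it helps anyone reproduce this kind of measurement, here's a minimal sketch of how per-system heap growth can be observed from inside the inferior using Guile's standard (gc-stats) alist. Note that process-system below is a hypothetical stand-in for whatever per-system work the data service does, not actual data service code:

```scheme
(use-modules (ice-9 format))

;; Heap size in MiB, according to libgc via Guile's (gc-stats).
(define (heap-size-mib)
  (exact->inexact
   (/ (assq-ref (gc-stats) 'heap-size)
      (* 1024 1024))))

(for-each
 (lambda (system)
   (process-system system)  ; hypothetical per-system work
   (gc)                     ; force a full collection first, so we
                            ; measure retained data, not garbage
   (format #t "~a: heap ~,1f MiB~%" system (heap-size-mib)))
 '("x86_64-linux" "aarch64-linux" "i686-linux"))
```

This only shows *how much* the heap grows per system, not *what* is being retained, but it's cheap enough to leave in as logging.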
Thanks, Chris