Hi Linus,

Linus Björnstam writes:
> I had a look and there is quite a lot you can do. Just out of
> curiosity: how much is GC time? You are generating a lot of
> intermediate data. Instead of vector-append! maybe use something like
> the new vectorlist srfi? If you show me a flame graph and give me some
> test data I can have a go!

Thank you! I expect GC time to be roughly half during the trust
calculation (much more during import), because it keeps two cores busy.
I can’t easily give you a flame graph (unless you can tell me how to
generate one), but I have now pushed a version in pure Scheme (instead
of wisp) that runs a large test:

    hg clone https://hg.sr.ht/~arnebab/wispwot
    cd wispwot
    ./run-wispwot.scm --test

This imports 200k trust edges (which takes a lot of time) and then runs
three full trust calculations (each taking around 70s).

The most important constraint for optimization is that memory use must
stay reasonable with 64k IDs that each have 1000 trust edges. This is
what the current code is optimized for (that’s why there are all those
u8vectors and u16vectors).

For the math: at 64 million trust edges, the actual trust structure
should currently require (in theory) less than 200MiB of memory
(2 bytes for the ID, 1 byte for the trust). Going up to 300MiB for
time optimization would still be viable, but it shouldn’t go beyond
that. (There are algorithmic options to reduce the amount of work done
per update, so the runtime of the current code matters, but it is not a
life-and-death situation.)

Best wishes,
Arne

-- 
Unpolitisch sein heißt politisch sein ohne es zu merken
(To be apolitical is to be political without noticing it)
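
[Editorial note: the memory budget in the mail can be sanity-checked with a quick back-of-the-envelope sketch. Python is used here purely for the arithmetic; the only inputs are the figures from the mail itself: 64k IDs, 1000 trust edges each, 2 bytes per ID plus 1 byte per trust value.]

```python
# Sanity check of the memory figure from the mail:
# 64k IDs, each with 1000 trust edges, 3 bytes per edge
# (2 bytes for the target ID, e.g. a u16 entry, plus 1 byte trust, a u8 entry).
ids = 64 * 1024           # 65,536 identities
edges_per_id = 1000
bytes_per_edge = 2 + 1    # u16 id + u8 trust

total_edges = ids * edges_per_id            # 65,536,000 edges ("64 million")
total_bytes = total_edges * bytes_per_edge  # 196,608,000 bytes
mib = total_bytes / (1024 ** 2)

print(f"{total_edges:,} edges -> {mib:.1f} MiB")  # 187.5 MiB, under the 200MiB budget
```

So the raw edge data alone comes to 187.5 MiB, consistent with the "less than 200MiB" claim; any time-optimized variant would have roughly 100MiB of headroom before hitting the stated 300MiB ceiling.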