Further developments are now in. The features (guided somewhat by how memory
is managed in Linux/Unix and by Guile's current strings) are:

1. Chunked vectors, i.e. we divide the vector into areas with different
   properties.

2. Optionally typed, e.g. we can view the data as a vector of u32 etc., so
   there is a notion of both element length and byte length.

3. Unaligned data transfers can be mitigated (controlled by a depth
   parameter) by using a tree, i.e. a chunk can itself be a supervector of
   the same size as the old chunk.

4. Growing and shrinking can be efficient.

5. We have a pre side and a post side, so queues and stacks are modeled well
   by this.

6. Fast reverse operation (useful for queues).

7. Supervectors can be read-only.

8. Read-only supervectors allow COW (copy-on-write) sharing.

9. Types can grow, e.g. we can start with a general u8 layout and have some
   chunks in u16 and some in u32, all tree-like and controlled by an allowed
   depth.

10. We can share data, meaning that a shared copy can write through to the
    original vector it was copied from.

11. Zero chunks.

12. Everything is in Scheme, but hot paths can take advantage of very fast C
    code that may use advanced processor instructions not available from
    Guile. The advantages are that i) the chunks mean better fibers
    integration with no long stalls, because control returns frequently to
    Scheme, and ii) the supervector library has an API that makes
    implementing such operations pleasant.

13. Good default chunk sizes, large enough to amortise the per-chunk
    overhead.

14. Fast append and append!.

15. Pretty fast indexing: looking up a position ranges from basically a
    vector lookup to a tree lookup, controlled by the level parameter (a toy
    sketch of this chunked lookup follows after the project link below).

16. Auto-alignment of supervector chunks to allow fast operations.

Next:

a) Testing, testing, testing, and debugging.

b) Making a string implementation with Guile's string interface, which will
   guide some utility functions for the general supervector.

c) Trying to find a regexp library that can work with chunked vectors.

d) Making supervectors work with ports, so that we do not need to go via
   strings to create a large data structure and can instead build the
   chunked one from smaller, moderately sized pages.

e) Enabling user-defined data on chunks.

f) Some data transfers can be much faster in C, e.g. when copying from u8 to
   u64.

g) Allowing supervectors with SCM objects as elements.

h) Allowing chunked bitvectors and chunked bitvector operations.

i) Docs, like a README.

Project: https://gitlab.com/tampe/stis-supervectors/-/tree/main
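To make the chunking idea concrete, here is a minimal toy sketch in Guile
Scheme. It is not the stis-supervectors API; the names make-sv, sv-u8-ref,
sv-u8-set! and the 4096-byte chunk size are made up for illustration. It
only shows the basic shape of points 1 and 15 above: the data is split into
fixed-size bytevector chunks, and a position lookup first finds the chunk,
then the offset inside it.

;; Toy model of a chunked "supervector": a Scheme vector of fixed-size
;; bytevector chunks, plus the total length.  Hypothetical names, not the
;; stis-supervectors code.
(use-modules (rnrs bytevectors))

(define chunk-size 4096)            ; per-chunk byte count (assumed default)

(define (make-sv len)
  ;; Allocate enough chunks to cover LEN bytes; no single huge contiguous
  ;; block is ever needed.
  (let* ((n (quotient (+ len chunk-size -1) chunk-size))
         (chunks (make-vector n)))
    (let loop ((i 0))
      (when (< i n)
        (vector-set! chunks i (make-bytevector chunk-size 0))
        (loop (+ i 1))))
    (cons len chunks)))

(define (sv-u8-ref sv k)
  ;; Two-step lookup: chunk index, then offset inside the chunk.
  (bytevector-u8-ref (vector-ref (cdr sv) (quotient k chunk-size))
                     (remainder k chunk-size)))

(define (sv-u8-set! sv k v)
  (bytevector-u8-set! (vector-ref (cdr sv) (quotient k chunk-size))
                      (remainder k chunk-size)
                      v))

;; Example: a 10 MB toy supervector with no 10 MB contiguous allocation.
(define sv (make-sv (* 10 1024 1024)))
(sv-u8-set! sv 123456 42)
(sv-u8-ref sv 123456)               ; => 42

The real library adds the typed views, the tree of chunks, the pre/post
sides and so on on top of this basic layout.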
On Thu, Sep 2, 2021 at 5:54 PM Stefan Israelsson Tampe <
stefan.itampe@gmail.com> wrote:

> Oh, I just created the project; you can follow it here:
>
> https://gitlab.com/tampe/stis-supervectors
>
> On Thu, Sep 2, 2021 at 5:45 PM Stefan Israelsson Tampe <
> stefan.itampe@gmail.com> wrote:
>
>> Hi guilers!
>>
>> My next project is to explore a more advanced bytevector structure than
>> today's bytevectors. I think it is worth having this infrastructure in
>> Guile and letting core code take advantage of it, for example by building
>> strings on top of it and allowing our string library to use those (the
>> difficult case is probably getting regexps to work properly).
>>
>> Anyhow, this is my initial comment in the code:
>>
>> #|
>> The idea of this data structure is to note that when we perform full
>> scans of large data structures, we can allow for a much richer and more
>> featureful data structure than a simple bytevector. The reason is that we
>> can divide the data into byte chunks and spend most of the time scanning,
>> copying, and mapping those areas with the usual methods; even optimized C
>> code taking advantage of advanced CPU opcodes is possible here. And by
>> dividing the data into chunks we get a lot of new features at essentially
>> no cost other than complexity, which we manage mostly in Scheme. We gain
>> many things:
>>
>> 1. Many new features that can vary per page.
>>
>> 2. Less memory hogging, as
>>    a. it can use copy-on-write semantics
>>    b. it does not need to find 1 GB contiguous blocks
>>
>> 3. Potentially faster operations, as we can fast-forward over the
>>    zero-on-write pages compared to pure bytevectors.
>>
>> 4. We can allow for swapping and referential data structures to speed up
>>    copying even further.
>>
>> 5. We can get better fiber behavior than C programs that spend seconds or
>>    minutes performing an operation, because we will just spend a
>>    microsecond or so in C-land and then swap back to Scheme. Ctrl-C will
>>    also work nicely.
>>
>> 6. We could potentially build a string library on top of these data
>>    structures, and we could also add features to pages that lead to a
>>    much richer interface.
>>
>> 7. Resizing is much faster and more efficient.
>>
>> 8. Reversing is super quick.
>>
>> 9. Queues and stacks of byte data can have very efficient
>>    implementations.
>>
>> Drawbacks:
>> 1. More complex, as we need to consider boundaries.
>> 2. Slower one-off operations like bytevector-u8-ref, since Guile compiles
>>    the core operations to quite effective JIT CPU encodings. But maybe we
>>    can design some caching to make those operations much faster and even
>>    have supporting JIT operations.
>>
>> |#
>>
>> WDYT?
>>
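Regarding drawback 2 in the quoted comment, one possible mitigation is a
small lookup cache. Below is a minimal, hedged sketch that builds on the toy
make-sv model shown earlier in this mail; the names are again hypothetical
and this is not the stis-supervectors code. The idea is simply to remember
the chunk that served the last access, so repeated nearby one-off accesses
skip the chunk search.

;; One-entry cache over the toy model above: reuses chunk-size, make-sv and
;; the (len . chunks) representation.  Hypothetical, for illustration only.
(define last-sv #f)          ; which supervector the cache belongs to
(define last-base -1)        ; byte index where the cached chunk starts
(define last-chunk #f)       ; the cached bytevector chunk

(define (sv-u8-ref/cached sv k)
  (let ((base (* (quotient k chunk-size) chunk-size)))
    (unless (and (eq? last-sv sv) (= base last-base))
      ;; Cache miss: locate the chunk the slow way and remember it.
      (set! last-sv sv)
      (set! last-base base)
      (set! last-chunk (vector-ref (cdr sv) (quotient k chunk-size))))
    (bytevector-u8-ref last-chunk (remainder k chunk-size))))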