From: Stefan Israelsson Tampe <stefan.itampe@gmail.com>
To: guile-devel <guile-devel@gnu.org>
Subject: Re: more advanced bytevector => supervectors
Date: Wed, 8 Sep 2021 04:04:06 +0200	[thread overview]
Message-ID: <CAGua6m1Dd-3DmQsyv_UU99JbbP6qboaxXh4Tf3nSUTK_FGnYiQ@mail.gmail.com> (raw)
In-Reply-To: <CAGua6m0_2c_g_4sR3gRx5Wnc18eVHEXPGpL-K7VDnKyC=56W3w@mail.gmail.com>


Today I added structure to support supervectors of scheme values and
refactored the code with a nice macrology. So now we have an
infrastructure that the compiler can compile to very efficient code.
All loops must be dispatched to the type of ref and set operations.
But the good part is that this selection is done only once per bin,
so it can be amortised over all the work in the loop. This means that
there can be an explosion in the number of combinations. The design is
such that one can select the set of variants used through a nice
interface. So let's discuss the setup to illustrate this point, after
a small detour into algebra.

After investigating the structure of bit-like vectors today, I turned
to algebraic operations and wondered what structure we can have
regarding supervectors. First note that storing an offset in the bin
can be of value if we only consider + and - operations: for a bin with
values X we store it as a + X. Now adding (a1 + X1) + (a2 + X2) leads
to the final value (a1 + a2) + (X1 + X2), which means that a1 and a2
can be general values and we could try to compress X1 and X2 into a
smaller byte structure.

For multiplication we can have a scaling operation, and if we only
consider * and / we have

(b1 * X1) * (b2 * X2) = (b1 * b2) * (X1 * X2)

Mixing multiplication with addition will lead to problems, but
essentially if you add numbers you can gain if the scaling is the
same, e.g.

(a1 + b*X1) + (a2 + b*X2) = (a1 + a2) + b*(X1 + X2).

Complexity-wise one does not win here, but X1 and X2 could be one type
of number while the scale and the offset are of a larger type.

All operations + - * / of a constant bin with a fat bin can be fast if
we store a scaling and an offset.

This is the basic structure when considering basic algebra, and one
could possibly benefit from storing an offset and a scale factor in
the bin information. Not sure of the use cases though.
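As a minimal sketch of the affine-bin idea (in Python with made-up names, purely for illustration; the real code would be Scheme in the supervector library): each bin stores a wide offset and scale plus a compact payload, and adding two bins with the same scale only combines the wide offsets plus an elementwise add of the payloads.

```python
# Hypothetical sketch: a bin stores its values as  a + b*X  where a
# (offset) and b (scale) are wide numbers and X is a compact payload.
class AffineBin:
    def __init__(self, offset, scale, payload):
        self.offset = offset    # wide offset a
        self.scale = scale      # wide scale b
        self.payload = payload  # compact residuals X

    def values(self):
        # Materialize the logical values a + b*x.
        return [self.offset + self.scale * x for x in self.payload]

    def add(self, other):
        # (a1 + b*X1) + (a2 + b*X2) = (a1 + a2) + b*(X1 + X2),
        # valid only when the scales agree.
        assert self.scale == other.scale
        return AffineBin(self.offset + other.offset,
                         self.scale,
                         [x + y for x, y in zip(self.payload, other.payload)])

b1 = AffineBin(1000, 2, [1, 2, 3])   # logical values 1002, 1004, 1006
b2 = AffineBin(500, 2, [4, 5, 6])    # logical values  508,  510,  512
s = b1.add(b2)
print(s.values())                    # [1510, 1514, 1518]
```

The payload lists stand in for the compressed small-integer byte storage; only the offset and scale need the wider number type.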

Now over to the macrology ...

The basic tool to find the ref or set operation of a bin is defined as

(define-syntax-rule (mk-get-setter get-setter do-inv? AS a b c d)
  (define-inlinable (get-setter do-inv2? bits bits1 loop)
    (lambda* (#:key (is-typed?   #f)
                    (is-scm      #f)
                    (is-fixed    #f)
                    (is-float    #f)
                    (is-integer  #f)
                    (is-single   #f)
                    (is-unsigned #f)
                    (is-endian   #f)
                    (is-m        #f)
                    (is-inv      #f))
      ...)))

AS can be AR, AS or ARS (ref, set!, ref-set!): depending on this
argument and the type of the bin we will find a ref if AR is used, a
set! if AS is used, and both the ref and the set for the ARS version.
If do-inv? is false, the lognot-stored info in the bin is skipped.
get-setter is the name of the new setter we define, and a, b, c and d
are all of the form

AR:  ((1 ref-u8) (2 ref-u16)) ...
AS:  ((1 set-s8) (2 set-u16)) ...
ARS: ((1 ref-u8 set-s8) (2 ref-u16 set-u16)) ...

So a is the signed versions used, b is the unsigned versions to use,
c is the floating-point operations used, and finally d is the scm
value operations used.

It will define an inlinable function like
(define-inlinable (get-setter do-inv? bits bits1 loop) ...)

So using get-setter typically means

((get-setter #f bin1 #f
   (lambda (set) (set v 2 val)))

 #:is-endian 'little   ;; only consider little-endian setters
 #:is-unsigned #t      ;; only use unsigned
 #:is-integer  #t      ;; only use integer representations
 #:is-fixed    #t      ;; do not use the scm value vector versions
)
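The dispatch-once-per-bin idea can be sketched outside of Scheme like this (Python with hypothetical names; the real get-setter is Guile macrology and define-inlinable): resolve the right setter for a bin a single time, then hand the resolved procedure to the loop so the per-element work carries no dispatch cost.

```python
# Hypothetical sketch: resolve a bin's setter once, then run the tight
# loop with the resolved operation; the one-time dispatch is amortised
# over the whole loop body.
import struct

def get_setter(buf, is_unsigned=True, bits=8):
    # One-time dispatch on the bin's representation
    # (little-endian formats only, as in the mail's example).
    fmt = {(True, 8): 'B', (False, 8): 'b',
           (True, 16): '<H', (False, 16): '<h'}[(is_unsigned, bits)]
    size = bits // 8
    def setter(i, val):
        struct.pack_into(fmt, buf, i * size, val)
    return setter

def fill(buf, n, val, **kwargs):
    set_ = get_setter(buf, **kwargs)   # dispatch happens once here
    for i in range(n):                 # tight loop, no dispatch inside
        set_(i, val)

buf = bytearray(4)
fill(buf, 4, 7, is_unsigned=True, bits=8)
print(list(buf))                       # [7, 7, 7, 7]
```

Restricting the keyword arguments shrinks the table the compiler has to consider, which is exactly the point of the #:is-... filters above.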
So this is a version where we only consider handling nonnegative
integers of up to 64 bits. The gain is faster compilation, as this
idiom will dispatch between 4 different versions of the loop lambda,
and the compiler could inline all of them, or detect the one that is
used and hot-compile that version (a feature we do not have yet in
guile). When you select between a ref and a set you will similarly end
up with 4*4 = 16 different loops. The full version is very large: a
double loop with all features consists of (2*2 + 3*2*2*2 + 4 + 1)**2 =
33*33 ~ 1000 versions of the loop, which is crazy if we should expand
the loop for all cases in the compilation. Guile would just use a
functional approach and not expand the loop everywhere. We will have
parameterised versions of libraries so that one can select which
versions to compile for. For example, the general function that
transforms one supervector into another is a general idiom that would
use the full dispatch, which is not practical.
So for e.g. bitwise operations one could do

(define-values (l-not l-and l-or l-xor)
  (generate-bit-ops
     #:is-endian 'little   ;; only consider little-endian setters
     #:is-unsigned #t      ;; only use unsigned
     #:is-integer  #t      ;; only use integer representations
     #:is-fixed    #t      ;; do not use the scm value vector versions
   ))

As you realise, although the compiler can inline stuff, the complexity
is too big for the compiler to expand all the loops. In the end this
facility should not be needed, but currently I think it's a good idea.

That's the plan!

On Thu, Sep 2, 2021 at 5:45 PM Stefan Israelsson Tampe <
stefan.itampe@gmail.com> wrote:

> Hi guilers!
>
> My next project is to explore a more advanced bytevector structure than
> today's bytevectors.
> I think it would be good to have this infrastructure in guile, with core
> code taking advantage of it, like having strings on top of it and
> allowing our string library to use those (the difficult case is probably
> to get regexps working properly).
>
> Anyhow this is my initial comment in the code:
>
> #|
> The idea of this data structure is to note that when we employ full
> large data-structure scans we can allow for a much richer and more
> featureful data structure than a simple bytevector. The reason is that
> we can divide the data into byte chunks and spend most of the time
> scanning, copying and mapping those areas with the usual methods; even
> optimized C code taking advantage of advanced cpu opcodes is possible
> here. And by dividing it into chunks we get a lot of new features at
> essentially no cost beyond complexity, which we manage mostly in scheme.
> We gain many things:
>
> 1. Many new features that can vary on the pages
>
> 2. less of a memory hog, as
>    a. it can use copy-on-write semantics
>    b. it does not need to find 1GB continuous blocks
>
> 3. potentially faster operations, as we can fast-forward the
>    zero-on-write pages compared to pure bytevectors
>
> 4. we can allow for swapping and referential data structures to speed
>    up copying even further
>
> 5. we can get better fiber features for C programs that spend seconds
>    or minutes performing an operation, because it will just spend a
>    microsecond or so in C-land and then swap back to Scheme. CTRL-C
>    will also work nicely.
>
> 6. we could potentially build a string library on top of these data
>    structures, and we could also add features to pages that lead to a
>    much richer interface.
>
> 7. resizing is much faster and more efficient
>
> 8. reversing is super quick
>
> 9. queues and stacks of byte data can have very efficient
>    implementations
>
> Drawbacks:
> 1. More complex, as we need to consider boundaries
> 2. Slower one-off operations like bytevector-u8-get, as guile compiles
>    the core operations to quite effective JIT CPU encodings. But maybe
>    we can design some caching to make those operations much faster and
>    even have supporting JIT operations.
>
> |#
>
> WDYT ?
>
>



Thread overview: 11+ messages
2021-09-02 15:45 more advanced bytevector => supervectors Stefan Israelsson Tampe
2021-09-02 15:54 ` Stefan Israelsson Tampe
2021-09-06  2:10   ` Stefan Israelsson Tampe
2021-09-02 19:11 ` Matt Wette
2021-09-02 19:47   ` tomas
2021-09-08  2:04 ` Stefan Israelsson Tampe [this message]
2021-09-08  7:18   ` lloda
2021-09-08 11:32     ` Stefan Israelsson Tampe
2021-09-11 17:03     ` Stefan Israelsson Tampe
2021-09-11 18:21       ` lloda
2021-09-15  0:12 ` Stefan Israelsson Tampe
