By default, this build will provide the Lisp function `jit-compile',
which takes a lambda expression or a symbol.
A new byte code instruction, Bjitcall, has been added. When a function is
JIT compiled, its code vector is replaced by a single Bjitcall
instruction followed by the JIT compiled code block.
I'm hesitant to call this work a "JIT compiler", since it isn't doing any
of the things usually meant by "compilation": CFG construction, register
allocation, or machine-code generation.
Technically, there is compilation into a linear series of calls to the
byte code interpreter's handler code; that part is handled by libjit. No,
I didn't write the entire JIT compiler it uses, but it does compile the
interpreter loop into a more efficient dispatch structure, giving the
speed improvement. It's a simple technique, and it's easy to see where
the gain comes from, but yes, this implementation does create a
maintenance problem by duplicating code from the original interpreter
loop.
we're
still executing the same bits of the interpreter code, just reaching
them more efficiently. (It's a step in that direction, though.) Since
each function pointer (four or eight bytes) is much larger than the
corresponding byte code instruction, for cache-efficiency reasons I'd
apply this optimization only to hot functions.
That's the intention with the stub jit_hotspot_compile, but of course
getting the basic implementation correct comes first.
Four
years ago, GCC's JIT API was unavailable. I suggest taking a close look
at it: it will deliver far greater computational speedups than the
techniques in this work can, and it's much lower-maintenance to boot.
I didn't realize GCC had a JIT API now. Is that also linked in?
Noted, thanks.
By showing this idea/work I'm not hoping to get it included in Emacs
proper, but to show a relatively simple way to speed things up. I'm sure
there are better/alternative implementations that would both be cleaner
and give bigger speedups, but this is as far as the proof of concept has
gone so far.