On Fri, 22 Dec 2017 15:05:39 -0500, Stefan Monnier wrote:

I think Emacs should evolve (and is evolving) towards a model where .elc
files are handled completely automatically, so there's no need to
preserve backward compatibility at all, because we can just recompile
the source file.

If you mean always keeping the source code around in the bytecode file, I'm all for that!
 
If not, we're back to that discussion of how to find the source text for a given bytecode file and, failing that (or in addition to that), of having decent decompilers for bytecode.

[ Modulo supporting enough backward compatibility for bootstrapping
  purposes, since I also think we should get rid of the interpreter.  ]

> My understanding of how this would work in a more rational way is that
> there shouldn't be incompatible changes between major releases.  So I would
> hope that incompatible macro changes wouldn't happen within a major release
> but only between major releases, the same as I hope would be the case for
> bytecode changes.

In theory, that's what we aim for, yes.

Good. If that's the case, then most of the cases you report, such as incompatible macro expansions, could be detected just by checking whether the compiler used to compile the file has the same major version number as the bytecode interpreter.
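As a rough sketch of what that check might look like (hedged: the ";;; in Emacs version" comment is what the byte-compiler writes in the .elc header today, but the exact header format has varied across releases, and the function names here are made up):

```elisp
;; Sketch: read the major version recorded in an .elc header and
;; compare it against the running interpreter.
(defun elc-recorded-major-version (file)
  "Return the Emacs major version that compiled FILE, or nil if unknown."
  (with-temp-buffer
    ;; The version comment sits near the top, so 512 bytes is plenty.
    (insert-file-contents file nil 0 512)
    (when (re-search-forward "^;;; in Emacs version \\([0-9]+\\)" nil t)
      (string-to-number (match-string 1)))))

(defun elc-version-mismatch-p (file)
  "Non-nil if FILE was compiled by a different major version of Emacs."
  (let ((v (elc-recorded-major-version file)))
    (and v (/= v emacs-major-version))))
```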


> Maybe this could be incorporated into a "safe-load-file" function.

Define "safe"

Okay, let me call it "safer" then. And I will define that: detecting problems that can reasonably be detected in advance of hitting them, instead of giving a ¯\_(ツ)_/¯ traceback.
I have recently come to learn that it can be worse than that, because such checks are not done on bytecode at all...

Want to crash Emacs immediately, without even a traceback? Run:

    emacs -batch -Q --eval '(print (#[0 "\300\207" [] 0]))'

(As I understand it, the code string \300\207 fetches constant 0 and returns it, but the constant vector is empty; nothing validates the reference, so Emacs reads out of bounds.)

How many times this year have I run into the problem, also seen by others judging by reports on the Internet, of Emacs blithely running a probably incompatible version of cl-lib?

The bytecode file for cl-lib no doubt had in it "Hey, I'm Emacs 24," and I probably ran that on Emacs 25, where there was an incompatibility of the kind that can happen between major releases.
If that were the case (and it is probably not the only such scenario), how much nicer it would have been if a safer-load-file had warned me about running version 24 bytecode.
And if such a safer-load-file package were in ELPA or somewhere else where packages are updated much more frequently than Emacs, then when such conditions arise, safer-load-file could add a check for that particular cl-lib incompatibility between those particular major releases.
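Here is a hypothetical sketch of the kind of updatable check table such an ELPA package could carry. Everything here is an assumption for illustration: the function and variable names, the header regexp, and the cl-lib entry's details are made up, not an existing API.

```elisp
(defvar safer-load-known-incompatibilities
  ;; Each entry: (LIBRARY-REGEXP COMPILED-MAJOR RUNNING-MAJOR . EXPLANATION)
  '(("cl-lib" 24 25
     . "cl-lib bytecode from Emacs 24 is known to misbehave on Emacs 25; recompile from source."))
  "Known-bad (library, compiler major, interpreter major) combinations.")

(defun safer-load-file (file)
  "Load FILE, warning first about known bytecode incompatibilities."
  (when (string-suffix-p ".elc" file)
    (let ((compiled-major
           (with-temp-buffer
             (insert-file-contents file nil 0 512)
             (when (re-search-forward
                    "^;;; in Emacs version \\([0-9]+\\)" nil t)
               (string-to-number (match-string 1))))))
      (dolist (entry safer-load-known-incompatibilities)
        (pcase-let ((`(,lib ,compiled-with ,running-on . ,why) entry))
          (when (and (string-match-p lib file)
                     (eql compiled-major compiled-with)
                     (= emacs-major-version running-on))
            (warn "%s: %s" file why))))))
  (load file))
```

Because the table lives in the package rather than in Emacs itself, new entries could ship as soon as an incompatibility is reported, without waiting for an Emacs release.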



>> FWIW, I think Emacs deserves a new Elisp compilation system (either
>> a new kind of bytecode (maybe using something like vmgen), or a JIT or
>> something): the bytecode we use is basically identical to the one we had
>> 20 years ago, yet the tradeoffs have changed substantially in the
>> mean time.
> I would be interested in an elaboration here about what specific trade-offs
> you mean.

Obviously, the performance characteristics of computers have changed
drastically, e.g. in terms of memory available, in terms of relative
costs of ALU instructions vs memory accesses, etc...

But more importantly, the kind of Elisp code run is quite different from
when the bytecode was introduced.  E.g. it's odd to have a byte-code for
`skip_chars_forward` but not for `apply`.  This said, I haven't done any
real bytecode profiling to say how much deserves to change.

There is free opcode space available; `apply` could be added if someone chooses to add it.
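For concreteness, one can see the current treatment of `apply` with the byte compiler: it goes through the generic `call` opcode rather than anything dedicated. (The listing below is from memory and its exact shape varies by Emacs version.)

```elisp
;; Illustration: `apply' currently compiles to an ordinary function
;; call, unlike, say, `skip-chars-forward', which has its own opcode.
(disassemble (byte-compile (lambda (f args) (apply f args))))
;; Shows something like:
;;   constant  apply
;;   varref    f
;;   varref    args
;;   call      2        ; generic call opcode; nothing apply-specific
;;   return
```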


> From what I've seen of Emacs Lisp bytecode, I think it would be a bit
> difficult to use something like vmgen without a lot of effort.  In the
> interpreter for vmgen the objects are basically C kinds of objects,
> not Lisp Objects.  Perhaps that could be negotiated, but it would not
> be trivial.

I haven't looked closely enough to be sure, but I didn't see anything
problematic: Lisp_Object in the C source code is very much a C object,
and that's what the current bytecode manipulates.

There may be some glibness here. The benefit of using a lower-level general-purpose intermediate language like LLVM IR or vmgen is that, being lower level, it works with registers and pointers, understands some structure layouts, and is more statically typed, so efficiency can be gained by specialization. But if one doesn't break down Lisp_Object, and instead uses it the same way the C interpreter currently does, then I don't see why vmgen would be any faster than the current interpreter (other than the benefit that would also be had by rewriting the interpreter without the bloat and compatibility overhead).


> As for JITing bytecode, haven't there been a couple of efforts in that
> direction already?  Again, this is probably hard.

It's a significant effort, yes, but the speed up could be significant
(the kind of JITing attempts so far haven't tried to optimize the code
at all, so it just removes some of the bytecode interpreter overhead,
whereas there is a lot more opportunity if you try to eliminate the type
checks included in each operation).

There are many fairly good experimental JITs for Javascript, so it's not
*that* hard.  It'd probably take an MSc thesis to get a prototype working.

> I'm not saying it shouldn't be done. Just that these are very serious
> projects requiring a lot of effort that would take a bit of time, and might
> cause instability in the interim. All while  Emacs is moving forward on its
> own.

Indeed.  Note that Emacs's bytecode hasn't been moving very much, so the
"parallel" development shouldn't be a problem.

> But in any event, a prerequisite for considering doing this is to
> understand what we've got right now. That's why I'm trying to document it,
> so that more people at least have an understanding of what we are talking
> about when replacing or modifying the existing system.

I agree that documenting the current bytecode is a very good idea, and
I thank you for undertaking such an effort.

Thanks for the kind words. It's not something I feel all that knowledgeable about or qualified to do.