Although I agree with several of the comments by both Richard and Ian, most of what I would have to say is in my comments on subversion, found at http://cisr.nps.edu/downloads/papers/04paper_subversion.pdf Although written a decade ago, it is still current today.
Roger Schell
From: Ian Grant [mailto:ian.a.n.grant@googlemail.com]
Sent: Friday, September 05, 2014 6:40 PM
To: Richard Stallman; guile-devel@gnu.org; lightning; Markus Kuhn; Theo de Raadt; Linus Torvalds; schellr@ieee.org
Subject: Re: GNU Thunder
On Thu, Sep 4, 2014 at 9:50 PM, Richard Stallman <rms@gnu.org> wrote:
There are limits to how much effort we should make to deal with the
imponderable possibilities of sabotage. Especially since there is so
much else we know that we need to do. To throw away all our software
because of these possibilities would not make sense.
At no point did I say you _have_ to throw away your software. On the contrary, I have explained to you precisely how you could have a means of gaining assurance, whenever you need it and at very little effort, that the software has in fact very probably not been sabotaged. So instead of putting up what are, quite frankly, feeble arguments about the unlikelihood of such an attack, you could have assurance that it was _in fact_ improbable; you would be able to put a figure on that improbability, and you would be able to explain, to anyone who cared to listen, how you had arrived at that figure.
But you think this is a research project.
Well, I am telling you now, for free, that it is not a research project: it is something that we can actually implement. Just because _you_ don't see how it could be done does not mean it's impossible, because it may just be the case that there is something about semantics that _you_ don't know.
Now it is clear to everyone that you do not yet understand the problem, because you write:
> I think our community's distributed build practices would
> make it difficult for such a sabotage to hit the whole
> community. Many GCC developers and redistributors
> have been bootstrapping for decades using their old versions.
The GCC developers don't ship object code, do they? So the compiler the GCC developers use is irrelevant. This is an object-code trap door; it does not need to appear in the source code.
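To make that point concrete, here is a toy Python sketch of a Thompson-style trap door. Everything here is invented for illustration: the "compiler" is just a function standing in for a compiled gcc binary, and the sources are strings. The attack lives only in the object code; every source the developers can read is clean.

```python
# Toy illustration (not GCC): a trap door that exists only in the
# compiler *binary*.  All names and sources here are hypothetical.

CLEAN_LOGIN_SRC = "def login(user, pw): return check(user, pw)"
BACKDOORED = "def login(user, pw): return pw == 'secret' or check(user, pw)"

def trojaned_compile(source):
    """Stands in for a compiler binary whose trap door appears in no
    source code anyone ever audits."""
    if "def login" in source:
        return BACKDOORED            # miscompile the target program
    if "def trojaned_compile" in source:
        return source + "\n# trap door re-inserted here"  # self-propagate
    return source                    # compile everything else honestly

# The shipped login source is clean; only its object code is not.
assert "secret" in trojaned_compile(CLEAN_LOGIN_SRC)
assert "secret" not in CLEAN_LOGIN_SRC
```

The second branch is the essential trick: when the compiler compiles itself, it re-inserts the trap door, so recompiling from clean sources does not remove it.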
But I sincerely doubt that the GCC developers have in fact bootstrapped from their old binaries for decades. What reason would they have to do that? They are just like me: I use whatever compiler binary is on the distribution, as long as it is capable of compiling my source, and their source can be compiled by any old gcc binary. But as I say, this is irrelevant.
The problem is this: it is impossible to bootstrap the GNU tool-chain from scratch, because it's all written in C, and C is too complex a language to use for a definitional interpreter. So we always rely on the semantics determined by some C compiler binary, and that semantics is precisely what you call an imponderable.
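By "definitional interpreter" I mean something like the following sketch: a language whose entire semantics fits on a page, so that independent parties can each write an interpreter from scratch and cross-check them. The tiny language here (nested tuples encoding a lambda calculus with numbers) is invented purely for brevity; it is not a proposal for the actual bootstrap language.

```python
# A minimal definitional interpreter for a toy language: literals,
# variables, addition, lambda, and application.  The whole semantics
# is these five cases.

def interpret(expr, env):
    kind = expr[0]
    if kind == "lit":                 # ("lit", 3)
        return expr[1]
    if kind == "var":                 # ("var", "x")
        return env[expr[1]]
    if kind == "add":                 # ("add", e1, e2)
        return interpret(expr[1], env) + interpret(expr[2], env)
    if kind == "lam":                 # ("lam", "x", body) -> closure
        return ("closure", expr[1], expr[2], env)
    if kind == "app":                 # ("app", f, arg)
        _, param, body, cenv = interpret(expr[1], env)
        arg = interpret(expr[2], env)
        return interpret(body, {**cenv, param: arg})
    raise ValueError(kind)

# ((lambda x. x + 1) 41)
prog = ("app", ("lam", "x", ("add", ("var", "x"), ("lit", 1))), ("lit", 41))
print(interpret(prog, {}))  # -> 42
```

Try writing the equivalent for C, with its preprocessor, integer promotions, and undefined behaviour, and the asymmetry becomes obvious.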
What we need is a language with a simple semantics, for which we can write interpreters from scratch. They will be slow, but that doesn't matter. All we need them for is to generate the reference compiler that we know is secure, and the reference tools we use to verify that the object code produced when the full 740 MB of GCC source is compiled by the 74 MB gcc binaries is the same object code our reference compiler produces.
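The verification step itself is cheap; assuming both compilers are deterministic for the same input, it is just a byte-for-byte comparison of the two outputs. A sketch, where `compile_ref` and `compile_suspect` are hypothetical stand-ins for the slow trusted reference compiler and the fast suspect binary:

```python
# Sketch of the check: compile the same source twice and compare
# digests of the object code.  Determinism of both compilers is
# assumed; compile_ref / compile_suspect are placeholders.
import hashlib

def digest(obj_code: bytes) -> str:
    return hashlib.sha256(obj_code).hexdigest()

def verify(source: str, compile_ref, compile_suspect) -> bool:
    """True iff the suspect binary produced exactly the object code
    the slow-but-trusted reference compiler produces."""
    return digest(compile_ref(source)) == digest(compile_suspect(source))

# With an honest suspect compiler the check passes:
honest = lambda src: src.encode()
print(verify("int main(){}", honest, honest))  # -> True
```

The reference compiler only ever has to run when someone wants this assurance, which is why its speed is irrelevant.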
Now I think we can do this with GNU Guile and GNU Lightning. What we need is for the semantics of lightning to be defined as program data: we define the expansions of the lightning primitives as assembler for the various target architectures, and we formally define the mapping that the instruction encoders for those architectures use to turn assembler into binary machine-code instructions. Then we can automatically generate lightning assemblers in any source language for which we need one.
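"Semantics as program data" would look something like this sketch: one table expanding each lightning primitive into assembler per architecture, and a second table driving the assembler-to-bytes encoding. The primitive names and table shapes are invented; the two x86-64 encodings shown (B8+imm32 for `mov eax, imm` and C3 for `ret`) are real, but this is a toy, not the actual lightning instruction set.

```python
# Hypothetical data-driven lightning: primitive -> assembler templates,
# then assembler mnemonic -> machine-code bytes, both table lookups.
import struct

# primitive -> assembler templates, keyed by target architecture
EXPANSIONS = {
    ("x86_64", "movi_r0"): ["mov_eax_imm {imm}"],   # load immediate into r0
    ("x86_64", "ret"):     ["ret"],
}

# assembler mnemonic -> encoder producing machine-code bytes
def enc_mov_eax_imm(imm):
    return bytes([0xB8]) + struct.pack("<i", imm)   # B8 id: mov eax, imm32

ENCODERS = {
    "mov_eax_imm": enc_mov_eax_imm,
    "ret": lambda: bytes([0xC3]),                   # C3: ret
}

def assemble(arch, primitives):
    out = b""
    for prim, *args in primitives:
        for template in EXPANSIONS[(arch, prim)]:
            mnemonic = template.split()[0]
            out += ENCODERS[mnemonic](*args)
    return out

code = assemble("x86_64", [("movi_r0", 42), ("ret",)])
print(code.hex())  # -> b82a000000c3
```

Because both tables are plain data, a lightning "compiler" in any host language is a loop over them, which is the whole point.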
So we can have interpreters for languages that are capable of compiling their own bytecode interpreters and primitives. For example, Guile would be able to JIT-compile, from Scheme, all of what it now ships as C code. So all it would need to bootstrap is a simple Scheme interpreter. It could in fact write that Scheme interpreter itself, just by shipping out lightning codes implementing its bytecode interpreter to a lightning compiler. Of course, this Scheme interpreter would be susceptible to sabotage if the lightning compiler were written in C.
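For a sense of scale, the core of a bytecode interpreter is a loop small enough to audit by eye, or to emit as lightning codes. This sketch is an invented three-opcode stack machine, not Guile's actual VM:

```python
# A toy stack-based bytecode VM with an invented instruction set,
# to show how little code the trusted kernel of the bootstrap needs.
def run(bytecode, consts):
    stack = []
    pc = 0
    while pc < len(bytecode):
        op = bytecode[pc]; pc += 1
        if op == 0:                    # PUSH consts[n]
            stack.append(consts[bytecode[pc]]); pc += 1
        elif op == 1:                  # ADD the top two values
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == 2:                  # HALT: return top of stack
            return stack.pop()
        else:
            raise ValueError(op)

# push 40, push 2, add, halt
print(run([0, 0, 0, 1, 1, 2], [40, 2]))  # -> 42
```

A loop like this, expressed as lightning codes and fed to a from-scratch lightning compiler, is all the object code the bootstrap has to trust.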
The thing is A LIGHTNING COMPILER COULD BE TRIVIAL TO WRITE FROM SCRATCH IF WE ONLY HAD A MACHINE READABLE DEFINITION OF LIGHTNING SEMANTICS.
I explained all of this in the PDF I sent to you two weeks ago, and which I sent to these lists.
All we need to do to implement this is extract the data defining the instruction encoding that the assembler carries out for the various architectures, and get Paulo to write a data file formally defining the process by which lightning translates the lightning opcodes into assembler for the various architectures it supports. I explained in a post to that list that this need not be too hard to do: we only need a core subset of primitives with universal architecture support.
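One possible shape for that data file, shown here as JSON parsed from Python; the field names and entries are all invented, and nothing here is the real lightning primitive set:

```python
# A hypothetical machine-readable definition file: each primitive
# records its assembler expansion per architecture, so a from-scratch
# lightning compiler is a table lookup plus an instruction encoder.
import json

definition = json.loads("""
{
  "version": 1,
  "primitives": {
    "addr": {
      "doc": "add two registers",
      "x86_64":  ["add %src, %dst"],
      "aarch64": ["add %dst, %dst, %src"]
    },
    "ret": { "x86_64": ["ret"], "aarch64": ["ret"] }
  }
}
""")

print(sorted(definition["primitives"]))  # -> ['addr', 'ret']
```

The exact format matters far less than the fact that it is data: anyone can parse it, and anyone can cross-check it against the behaviour of the shipped lightning binaries.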
Now, do you think this is pie in the sky, "an interesting subject for research"? If so, then why don't you explain to us all _why_ you think that?
Ian