> That hack recognized specific syntax. Any change in the wrong
> place would break it.

Which hack was that? The one Thompson is reported to have actually implemented in Unix? You are assuming what you are trying to prove: you are assuming there has only ever been one instance of this class of attack, and using that to prove that this class of attack is unlikely. That used to be called "begging the question," but nowadays the general level of understanding of logic is so poor that most uses of that phrase are not in this sense.

It may *seem* unlikely, but to anyone who has given serious thought to the possibilities of such an attack it seems all too probable. I suppose everyone knows that Roger Schell spent several years in the office of deputy director of the NSA's National Computer Security Centre? If he did not alert the NSA to the possibility of this sort of attack then he was not doing his job properly, and having read some of the Computer Security History Project interview with him, I do not think Roger Schell is the sort of person who doesn't do his job properly.

Thompson wrote that paper in 1984, and I don't think that was a coincidence. What he shows is that if you control the semantics of a language, that is, if you control the meaning of what people say, then you control what they *see,* and so you also control what they think. That was a theme in Orwell's book "1984": by controlling the meaning of what people say, Big Brother controlled their thought.

In programming terms, if you control the semantics of the compiler, then you can control what people see. For example, you can insert code into libc.so and ld.so that looks for certain signatures and then changes the data that system calls like read and stat return to particular programs, such as sha256sum and objdump, according to some predicate. You can also monitor the behaviour of other programs.
If you see that there is a program that reads mainly C source and writes mainly a.out executables, then you know those executables should contain a certain signature, and if they don't then you know you have a C compiler on the system which is not bugged; at least, one which has not got *your* bug (it may have any number of other such bugs, however, because this semantics generalises). So you can call for help, or you can even insert code to call for help into the binaries that program creates.

Basically, your power over the system appears to be total. Of course it's not, because there are any number of other such bugs in there with you. In the end, the one thing that is guaranteed *not* to have control over what the system does is the program source code.

Now, it may seem unlikely to some that this has been done. But it is surely obvious to *everyone* that this is *possible,* and since the advantage an attacker accrues if he can pull this off effectively is incalculable, it should also be obvious to *everyone* that if it has not yet been done, then it soon will be, perhaps as a direct result of people reading what I am writing right now. So I hope people will focus on this problem, in spite of what Richard says. He will change his mind in due course, quite shortly I think :-)

Focussing on free source code is pointless; we need to focus on free semantics. Of course this negates certain fairly fundamental principles of the Free Software Foundation, one of which is the idea of "copyleft." By taking the concrete representation of algorithms as the stock-in-trade of computer programmers, copyleft is able to use the copyright laws to effect a kind of viral copyright status which automatically infects any program that uses that particular source code representation.
The problem is that once one concentrates on free semantics rather than free source code, there is no longer any recourse to the copyright laws: the copyright laws protect only one particular concrete representation of an idea. The only legal protection semantics have is through patent law. So the Free Software Foundation, if it is to 'own' anything at all anymore, will have to register and defend its assets as patents.

Ian

On Wed, Sep 3, 2014 at 8:50 AM, Richard Stallman wrote:
> [[[ To any NSA and FBI agents reading my email: please consider ]]]
> [[[ whether defending the US Constitution against all enemies, ]]]
> [[[ foreign or domestic, requires you to follow Snowden's example. ]]]
>
> It's surprisingly hard to fundamentally change a program that big.
> Most changes are fairly minor and leave the basic structure unchanged.
>
> That hack recognized specific syntax. Any change in the wrong place
> would break it.
>
> So a trap door could look at the large-scale structure using
> unification to do pattern matching. Then it would be able to adapt
> automatically to many localised changes.
>
> Who knows. It is an imponderable.
>
> The reason I am not interested in focusing on this problem, which is
> conceivable, is that (1) it seems unlikely and (2) we face other
> problems that are just as bad and that are real for certain.
>
> --
> Dr Richard Stallman
> President, Free Software Foundation
> 51 Franklin St
> Boston MA 02110
> USA
> www.fsf.org www.gnu.org
> Skype: No way! That's nonfree (freedom-denying) software.
> Use Ekiga or an ordinary phone call.