Panicz Maciej Godek <godek.maciek@gmail.com> writes:
> [...] the back doors can be implemented in the hardware, not in the
> software, and you will never be able to guarantee that no one is able
> to access your system.
Hopefully hardware will be addressed as well sooner or later. In the
meantime, we can plug a couple of holes at the software layer.
Also, if the hardware doesn't know enough about the software's workings,
it will have a hard time exploiting it. Just like in the Thompson hack
case: if you use an infected C compiler to compile a *new* C compiler
codebase, rather than one from the infected family, you get a clean
compiler, because the infection doesn't recognize your new source code
and so doesn't know how to infect it.
Similarly, if some hardware infection hasn't been fine-tuned for a
particular piece of software, all it sees is a stream of CPU
instructions that it has no idea how to modify into an exploit.
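Here's a toy sketch of that failure mode (all names and the "signature"
are hypothetical, just to illustrate the mechanism): a Thompson-style
infection has to recognize the source it targets, so an unfamiliar
codebase passes through untouched.

```python
# Toy model of a Thompson-style "trusting trust" infection.
# The backdoor only knows how to recognize one specific source text,
# so a freshly written compiler codebase comes out clean.

KNOWN_SIGNATURE = "/* original cc */"  # hypothetical pattern the infection targets

def infected_compile(source: str) -> str:
    # A real compiler would emit machine code; here "compilation" is a stub.
    object_code = f"<compiled:{hash(source) & 0xffff:04x}>"
    if KNOWN_SIGNATURE in source:
        # The infection re-inserts itself only into code it recognizes.
        object_code += "+backdoor"
    return object_code

old_cc = "int main(void) { /* original cc */ ... }"   # the infected family
new_cc = "int main(void) { /* a brand new compiler */ }"

print(infected_compile(old_cc))  # carries the backdoor
print(infected_compile(new_cc))  # clean: the pattern doesn't match
```

The point is only that the infection's knowledge is syntactic, not
semantic: anything it can't pattern-match, it can't subvert.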
Which I think brings us back to the "semantic fixpoint" thingy. If we
define some semantics that can be automatically lowered into very
different sequences of CPU instructions which nevertheless do the same
thing in the end, it gets increasingly difficult for the infection
algorithm to recover the semantics behind the instructions and inject
itself into the stream.
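A minimal sketch of that diversification idea (a toy "code generator",
not a real compiler; the instruction set and lowering strategies are
made up for illustration): one semantic operation, adding a constant,
is lowered into a different instruction sequence on each run, so a
pattern-matching infection has no stable sequence to latch onto.

```python
import random

def lower_add_const(n, rng):
    """Emit one of several equivalent instruction sequences that add n."""
    choice = rng.randrange(3)
    if choice == 0:
        return [("ADD", n)]                     # the direct lowering
    if choice == 1:
        k = rng.randrange(0, n + 1)
        return [("ADD", k), ("ADD", n - k)]     # split into two adds
    return [("ADD", n + 1), ("SUB", 1)]         # overshoot, then correct

def run(program, acc=0):
    """A tiny accumulator machine: the semantics all variants must share."""
    for op, arg in program:
        acc = acc + arg if op == "ADD" else acc - arg
    return acc

rng = random.Random()
p1 = lower_add_const(5, rng)
p2 = lower_add_const(5, rng)
print(p1, "->", run(p1))
print(p2, "->", run(p2))
# Different instruction streams, identical observable behavior:
# run(p) == 5 for every lowering.
```

Real-world versions of this idea go by names like software
diversification or randomized compilation; the sketch above only shows
the core property, many syntactic forms for one fixed meaning.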