unofficial mirror of guile-devel@gnu.org 
* Dijkstra's Methodology for Secure Systems Development
@ 2014-09-19 15:29 Ian Grant
  2014-09-19 22:22 ` Panicz Maciej Godek
  0 siblings, 1 reply; 21+ messages in thread
From: Ian Grant @ 2014-09-19 15:29 UTC (permalink / raw)
  To: guile-devel, gcc-/MQLu3FmUzdAfugRpC6u6w@public.gmane.org,
	Markus Kuhn, Linus Torvalds, Theo deRaadt, schellr,
	Richard Stallman, lightning, John Harrison

I hope this will provoke some new ideas about how to develop secure
systems software.

Thesis: The Software Development Process Itself is THE Major Security
Vulnerability in Current Computing and Communications Practices

This is because feasibility of the process depends entirely on re-use
of concrete implementations of algorithms, and so software developers
must routinely download and run code, because it is absolutely
necessary if they are to do their work. The quantity of code that they
download and run is far, far greater than they could possibly inspect,
even cursorily. Therefore software developers are constantly running
unknown code. Any such code they run as part of their work typically
has both read and write access to all the other code they work on,
because they typically use the same machine user account for all their
work. On Unix systems, this is practically mandated by the fact that
account setups are not easy to duplicate, and switching from one
account to another is not easy, because it is almost impossible to
transfer the 'desktop' setups and running applications from one user
account to another, without stopping and restarting them. Of course,
if this switching of running programs from one account to another were
made automatic, then there would be no point in switching accounts
when working on different projects.

The security situation is further aggravated by the fact that software
developers have a "trusted status" amongst the computer-using
community, on account of their specialist skills. Because of this
perception, there is a greater tendency to _actually trust_ the words
and deeds of software developers. Thus the general public are on the
whole _far less_ likely to question the actions, intentional or
unintentional, of software developers. This tendency is inversely
proportional to the perceived status of the software developer, which
is in turn proportional to the extent to which the results of their
work are distributed. So this is contrary (doubly so) to the healthier
purely rational attitude, which would be to treat the actions of _all_
software developers with the utmost suspicion, and the better known
they are, the more so! Since the products of software developers are
frequently duplicated _en masse_ and then widely distributed, the
developers' own vulnerability is yet further magnified when it
extends transitively to the consumers of their software.

Problem:

Clearly then, the fewer software developers there are, the better; and
the less those few developers do, the better. But many people seem to
_enjoy_ developing software. Should they be deprived of that right?

The first question is whether it is really software development itself
that they find enjoyable, and not just some particular aspects of it
which are not so much evident in any other activity? The fact that
software developers often have other interests in common indicates
that this is probably not so.

The second question is related to the first. We must ask _why_ they
enjoy software development. If software development were practised
differently, would they still enjoy it? For example, if software
developers had to pay constant attention to issues of security, in
order to ameliorate the vulnerability we have identified, would they
find it equally enjoyable? Every indication is that most would
not. Software projects which have high-security criteria are not as
popular with developers as others which are typically more relaxed,
and where developers perceive they have far greater freedom, and
therefore many more opportunities to be creative. This accords with
our tentative answer to the first question, because the interests that
software developers have most in common seem to do with creative use
of imagination in areas where people can exercise intelligence.

So if we could only find a way to re-employ all the people who enjoy
"coding," in an area where they have far more freedom and opportunity
to express creative intelligence, then everyone will be better off,
except those people who rely on the products of software developers,
because they need software in order to be able to express _their_
creative intelligence! Therefore we need at the same time to find a
way to produce software without using human "coders".

Solution:

It is widely held that "human coders" are absolutely necessary,
because machines simply aren't intelligent enough to write
programs. This is probably true: machines cannot be programmed to be
creative, because creativity is, almost by definition, NOT
mechanical. But this is an error, as becomes clear once one recalls
the difficulty we identified, namely that what makes _secure_ software
hard for people to produce is that there are _fewer_ opportunities to
creatively express intelligence: the production of secure
software requires a formal discipline, and the results must be in some
sense more reproducible and predictable than those of imaginative and
creative expression of intelligence would be. Therefore it is very
likely that machines _could_ in fact be used to carry out predictable,
disciplined coding, controlled by formal rules which could be shown to
result, when followed very strictly, in the production of secure,
reliable software.

Some may object that this is all very well for communications
protocols and device drivers, but how could software produced by
rigidly following formal rules be used to express, say, artistic
creativity? For example, what sort of a computer game could be
produced by a program following formal rules? What sort of music could
be produced by a computer following formal rules which determined the
notes that were played? This is again an error, one of confusing the
medium with the message: just because the rules are rigid, doesn't
mean the results are not creative expression. One need only listen to
the music of J. S. Bach, or read a Shakespeare sonnet, to see this.

Of course the same applies to the process of generating
programs. Software written by a formal process following carefully
constructed rules, as well as being more correct, is in fact far more
beautiful than that constructed _ad hoc._ This is because there is far
more opportunity to change the expression of a formally specified
process: one simply changes the specification of the generating
process itself, and then re-generates the program. And because the
process is controlled by a computation, the result is typically far
more elaborate than any human mind could conceive.
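To make the idea concrete, here is a deliberately tiny sketch (ours, in Python, and nothing like FFTW's actual machinery): the "specification" is a list of polynomial coefficients, and the generator mechanically elaborates it into specialized straight-line code. To change the program, one changes the specification and regenerates.

```python
# A toy illustration of program generation: a "specification" (polynomial
# coefficients) is mechanically elaborated into a specialized program.
def make_horner_source(coeffs, name="poly"):
    """Generate Python source evaluating sum(coeffs[i] * x**i) by Horner's rule."""
    body = repr(coeffs[-1])
    for c in reversed(coeffs[:-1]):
        body = f"({body}) * x + {c!r}"
    return f"def {name}(x):\n    return {body}\n"

src = make_horner_source([1, 0, 2])   # specification for 1 + 2*x**2
namespace = {}
exec(src, namespace)                  # "compile" the generated program
print(namespace["poly"](3))           # 1 + 2*9 = 19
```

Note that nobody hand-edits the generated function: every change is a change to the specification, after which the program is regenerated.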

The evidence for this is in the quite spectacular results which have
been achieved using mechanical processes to generate Fast Fourier
Transform implementations. FFTW's mechanical elaboration of formal
specifications has produced several elegant optimizations of the basic
algorithm for special cases which had not hitherto been published in
the technical literature. The same
process also produced some of the published algorithms, which until
then had very probably been considered to be the results of an
essential intelligence. But the intelligence is not in the formal
system, it is being expressed _through_ the _use_ of the system, not
_by_ the system itself. This is probably not very different to the
intelligent Human Being's use of the formal system of mathematics to
express intelligent thought. In highly specialised mathematical
fields, people use well-understood _methods_ to _derive_ results. For
example, in [1] which is available from:

   https://www.cs.utexas.edu/users/EWD/transcriptions/EWD13xx/EWD1303.html

Edsger Dijkstra mentions a method of deriving concurrent algorithms
with correctness proofs. It is disarmingly simple:

    "And then something profound and lovely happened. By analyzing by
    what structure of argument the proof obligations could be met, the
    numerical mathematician Th.J. Dekker designed, within a few hours,
    the above solution, together with its correctness argument, and
    this settled the contest. [...] the pair "c1,c2" implements the
    mutual exclusion, while "turn" has been introduced to resolve the
    tie when the two processes simultaneously try to enter their
    critical sections."
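As a small illustration of the solution Dekker derived, here is a sketch in Python (ours; it leans on CPython's global interpreter lock for sequentially consistent memory, which real hardware does not give you without fences). The shape follows the quote exactly: the pair of "want" flags implements the mutual exclusion, and "turn" resolves the tie.

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so the busy-waits stay short

want = [False, False]         # "c1,c2" in Dijkstra's account
turn = 0                      # resolves the tie on simultaneous entry
count = 0                     # the shared resource the critical section protects

def worker(me):
    global turn, count
    other = 1 - me
    for _ in range(5000):
        want[me] = True
        while want[other]:          # the other thread also wants in
            if turn == other:       # not our turn: withdraw and wait
                want[me] = False
                while turn == other:
                    pass
                want[me] = True
        count += 1                  # critical section (a non-atomic update)
        turn = other                # concede the tie-breaker
        want[me] = False

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)   # 10000: no increment was lost
```

The correctness argument and the code are two views of the same object: each line exists to discharge one proof obligation (mutual exclusion, deadlock-freedom, tie-breaking).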

Similarly, a mathematician working on an optimization for a special
case of the FFT algorithm, might use a combination of analysis and
synthesis as a more or less formal process by which she can derive
certain classes of solution, perhaps also with correctness proofs.

There are in fact numerous examples of this in the computing
literature, although it is seldom if ever stated that this is a
recognised technique that is being used. A very obvious example is the
Hindley-Milner type inference algorithm as described by Damas and
Milner in [2]. Here the soundness and completeness are described in
terms of a set of formal rules of deduction, and the structure of the
algorithm W, which infers the type of a given expression, very closely
follows the available rules in the inference systems for type
judgements and semantic truth.
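To see how closely the algorithm can follow the rules, here is a minimal sketch (ours, not Damas and Milner's presentation) of the monomorphic core of algorithm W, omitting let-polymorphism and the occurs check. Each case of `infer` mirrors one rule of the type-judgement system.

```python
import itertools

# Types: 'int', a TVar, or ('->', argument_type, result_type).
class TVar:
    _fresh = itertools.count()
    def __init__(self):
        self.name = f"t{next(TVar._fresh)}"
    def __repr__(self):
        return self.name

def walk(t, subst):
    """Resolve a type through the substitution."""
    while isinstance(t, TVar) and t.name in subst:
        t = subst[t.name]
    if isinstance(t, tuple):
        return (t[0],) + tuple(walk(x, subst) for x in t[1:])
    return t

def unify(a, b, subst):
    a, b = walk(a, subst), walk(b, subst)
    if isinstance(a, TVar):
        if a is not b:
            subst[a.name] = b
        return subst
    if isinstance(b, TVar):
        return unify(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
        return subst
    if a == b:
        return subst
    raise TypeError(f"cannot unify {a} with {b}")

def infer(env, expr, subst):
    """expr: int literal | variable name | ('lam', x, body) | ('app', f, arg)."""
    if isinstance(expr, int):                 # rule for constants
        return 'int', subst
    if isinstance(expr, str):                 # rule for variables
        return env[expr], subst
    if expr[0] == 'lam':                      # rule for abstraction
        a = TVar()
        body_t, subst = infer({**env, expr[1]: a}, expr[2], subst)
        return ('->', a, body_t), subst
    if expr[0] == 'app':                      # rule for application
        f_t, subst = infer(env, expr[1], subst)
        arg_t, subst = infer(env, expr[2], subst)
        r = TVar()
        subst = unify(f_t, ('->', arg_t, r), subst)
        return r, subst

# (\x. x) 42 has type int
t, subst = infer({}, ('app', ('lam', 'x', 'x'), 42), {})
print(walk(t, subst))   # int
```

The structure of the program is read off from the structure of the inference rules; that is precisely the point being made about [2].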

Another good example is John Harrison's elegant treatment of exact
real arithmetic in [3]. Here a series of proofs of the required
properties of the arithmetic operations on real numbers expressed to
any given precision are used to produce the algorithms which are
written as ML functions to compute those values. The system resulting
from this specification is therefore automatically, because _by
definition,_ formally correct. I am sure numerous other examples could
be found. Often all that is necessary is a subtle shift of one's point
of view, to see the process of deriving the algorithm as a proof
process, or vice-versa, depending on the point of view from which the
author writes the description. Whatever that is, one gets the unifying
process by adding the opposite, unstated aspect: the proof or the
algorithm.
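A sketch of the flavour of [3] (the representation details here are ours, not Harrison's exact definitions): a real number is a function from a requested precision to an integer approximation, and each arithmetic operation is derived together with its error bound, so the derivation is the correctness proof.

```python
from fractions import Fraction

# A real x is represented as a function taking a precision n and returning
# an integer k with |k - 2**n * x| <= 1/2 (here achieved by exact rounding).
def from_fraction(q):
    q = Fraction(q)
    def approx(n):
        return round(Fraction(2) ** n * q)
    return approx

def add(x, y):
    def approx(n):
        # Derivation of the bound: each operand at precision n+2 carries
        # error <= 1/2; the sum carries <= 1; dividing by 4 gives <= 1/4;
        # the final rounding adds <= 1/2, so the total error is < 1.
        return round(Fraction(x(n + 2) + y(n + 2), 4))
    return approx

half = add(from_fraction(Fraction(1, 3)), from_fraction(Fraction(1, 6)))
print(half(10))   # 512 == round(2**10 * (1/3 + 1/6))
```

The function and the proof of its error bound are written in one motion; the algorithm is the proof read operationally.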

Changing the process of generation of a program is impossible to do
when it is written ad-hoc, unless it is so uniform and regular that it
could in fact have been produced by a formal mechanical process,
because then one can simply treat that program as a formal
specification. But unless it has a highly uniform, regular pattern,
which is not so elaborate that its abstract form is imperceptible to a
human being (*), an ad-hoc program cannot be restructured
automatically. Ask anyone who has applied a large number of patches,
from different sources, to an operating system kernel, say, and they
will tell you this is indeed so. And to see the truth of the clause
(*), try to abstract the form of the FFTW generating algorithm from a
visual inspection of the code it produces in different cases.

One should not think of this process of proving correctness as a
highly abstract one. It is no more or less abstract than the algorithm
itself, because in this view, the two things, the proof and the
algorithm, ARE THE SAME THING, seen from two different points of
view. This is a common experience, because people only feel they
actually understand an algorithm when they can SEE IMMEDIATELY WHY IT
REALLY DOES WORK. This is why NOBODY really understands public-key
cryptography algorithms: because no-one actually knows WHY they are
secure. Their security, such as it is, rests not on knowledge, but
solely on the ABSENCE of knowledge that they are NOT secure. So they
provide only a certificate, not actual knowledge. That's what ALL
certificates are for: they are used ONLY when actual knowledge is not
strictly necessary, because it is expected to be available at some
subsequent stage of a process. The certificate is therefore used
solely for the purpose of increasing efficiency in cases where it is
more likely than not that, when it becomes available, the actual
knowledge will concur with the certificate representing it.

There is a quite well-known problem on which the reader may enjoy
trying out this technique. One is set the task of determining which of
a certain number of otherwise indistinguishable objects (coins or
footballers, in the two instances we have heard) is either lighter or
heavier than all the others, which are the same weight, and whether
that object actually is lighter or heavier. The difficulty is that the
only measuring device one has available is a balance with fixed-length
arms, and in each version of the problem there is a certain maximum
number of weighings that one is allowed. The task is to describe an
_infallible method_ of determining the _correct_ answer. In some cases
it is obvious, and in others it is clearly impossible. For example,
suppose one is allowed three weighings to distinguish between 64
possibilities. The absolute maximum number of possible outcomes of
three weighings is 3*3*3 = 27, so one cannot hope to distinguish the
one correct answer from 64 equally probable ones.
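The counting argument can be stated in a few lines of Python (a necessary condition only; passing the bound does not by itself guarantee that an infallible method exists):

```python
# Each weighing on a fixed-arm balance has three outcomes (left heavy,
# right heavy, balanced), so w weighings distinguish at most 3**w cases.
def max_outcomes(weighings):
    return 3 ** weighings

def bound_allows(n_possibilities, weighings):
    return n_possibilities <= max_outcomes(weighings)

print(max_outcomes(3))         # 27
print(bound_allows(64, 3))     # False: 64 cases cannot fit in 27 outcomes
print(bound_allows(24, 3))     # True: 12 coins, each possibly light or heavy
```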

In some of these problems the solution is not immediately obvious, and
so it seems unlikely that it is possible, but not because of any
absolute bound on the information that the result of the given number
of measurements could contain. Of course these are the more
interesting cases, and one can use the algorithmic method of analysis
and synthesis Dijkstra mentions to solve them, or indeed to determine
that there is no possible solution. One needs only to consider how one
would convince _someone else_ that the result the method determines in
every possible case is correct; and then analyse the forms of those
arguments, and this will _automatically_ yield the algorithm, together
with its correctness proof. Of course, when the number of available
measurements is critical, as it is in the interesting cases, it is
important to make only those measurements which TOGETHER yield the
most information overall. So "maximizing the entropy," as it were
(choosing those measurements which discriminate most between the
possible outcomes of the WHOLE series of measurements) is the guiding
principle which determines the algorithm that this method generates.

The process as we have just described it is typical of the way
inference problems are tackled in Artificial Intelligence applications
such as are beautifully described by E. T. Jaynes in his book
"Probability Theory: The Logic of Science".

So we hope that we have allayed any fears the reader may have had that
the process of deriving correctness proofs is highly abstract, and
involves a great deal of incomprehensible logical and/or mathematical
symbols, and persuaded her instead that it is really an intuitive
process, and in a strong sense, one that is IDENTICAL with the actual
understanding of the algorithm with which it deals.

We can now go on to consider what Dijkstra's method has to do with
coding secure software. The problem with all security is what one
might call "the leaking abstraction." This is always a result of some
fixed concrete representation of data as being encoded by
_information_ of some kind. And the leaking abstraction occurs when
that encoding is redundant, so that there are many different possible
encodings of the same data. Simple examples are:

(1) a forged bank-note, where there are two or more representations of
the same one note. Each note carries a unique identifying number,
which is its certificate of authenticity, and the difficulty of
manufacturing a convincing replica means that the certificate is
useful, because it is more likely than not that 'at the end of the
day' the real note will be matched with the actual object of value it
represents. Some fraction of some particular gold bar, for example.

(2) a person in disguise, masquerading as another person who has some
unique privilege; for example, the right of entry to some secure
facility, such as a military compound. Here there is redundancy
introduced into the representation of the individuals, so that, as in
the case of the forged bank-note, the successful infiltrator
compromises the ability of the experimental method to infallibly
discriminate between those who have this privilege and others who do
not.

(3) 'Picking' the five-lever lock on a door. Here the redundancy is in
the representation of the 'certificate' which signifies authenticity
of the tool used to open the door. This is possible because the lock-picker's
tools are another representation of the genuine key, and the method
the levers implement is not sufficient to discriminate between the
possible outcomes of genuine and false 'certificates.' Of course the
same principle applies to an illicit copy of the key.

(4) 'Tempest' decoding of the electromagnetic emissions from an LCD
connector on a flat-panel computer display. This is possible because
of the repeated scanning of similar images successively displayed on
the screen. This causes multiple, repeated transmissions of similar
signals which allow an 'attacker' to deconvolve the signal and recover
the original modulating pattern, which is the image being displayed on
the screen. If the computer only transmitted the changes to individual
pixels when they changed, this deconvolution would not be possible in
general.
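The deconvolution point can be illustrated with plain averaging (a synthetic sketch of ours, not a model of any particular display interface): a fixed pattern retransmitted N times can be pulled out of heavy noise, because independent noise shrinks like 1/sqrt(N) while the repeated signal does not.

```python
import random

random.seed(0)
signal = [1.0 if i % 7 < 3 else -1.0 for i in range(100)]    # the fixed "image"

def noisy_frame():
    # one received copy, with noise power well above the signal
    return [s + random.gauss(0.0, 2.0) for s in signal]

single_guess = [1.0 if v > 0 else -1.0 for v in noisy_frame()]
frames = [noisy_frame() for _ in range(400)]                 # repeated scans
average = [sum(col) / len(col) for col in zip(*frames)]      # noise std ~ 0.1

recovered = [1.0 if v > 0 else -1.0 for v in average]
print(sum(g != s for g, s in zip(single_guess, signal)) > 0)  # True: one frame fails
print(recovered == signal)                                    # True: 400 frames suffice
```

If each pixel were transmitted only once, when it changed, there would be nothing to average over, and this attack would fail in general.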

Hopefully the reader will easily be able to produce many, many more
examples of what are all essentially different forms of the same
fundamental phenomenon. Abstractions which are realized in fixed
concrete representations invariably leak, simply because they
introduce redundancy into the representation of what are supposed (or
assumed) to be unique events. So if we could only find a method of
making abstractions in such a way that their concrete representations
were always changing, then we would have solved the problem of
computer and communications security in general, because there would
be no fixed 'certificate of identity' for the 'attacker' to forge. And
dually, there would be no redundancy for the 'attacker' to 'key into'
in the information which represents the data that the system processes.

In [4], Dijkstra presents a method of designing 'operating systems' in
which layers of abstraction are successively introduced, one on top of
the other, and in which some concrete representation of the physical
hardware of the machine 'disappears' as each new abstraction is
added. The disappearing physical objects are replaced by multiple
virtual representations of objects with the same or similar
functionality. I.e., forgeries of the real identifiable hardware. The
paper does not give detailed examples of how this is done, but it
seems to us more than likely that using Dijkstra's other idea of
designing algorithms by a process of analysing what would be required
of a proof of correctness, and taking the view of this analysis as
being guided by a principle of maximizing the information content
(i.e. minimising the redundancy) of successive measurements, one could
formulate a method for deriving a general class of system designs of
the kind Dijkstra describes. One would start by considering what it is
one would need to do to convince someone else that a system is secure
(i.e. that the data are represented concretely with minimal
redundancy, or equivalently, maximum entropy.) For example, that the
data stored in a disk or core memory was 'mixed up' as much as
possible. This is achieved in Dijkstra's example system by
virtualising the physical pages and storing addresses as 'segments'
which are mapped by a virtual process onto physical pages of
memory. So it is only via the virtual page-mapping process that the
segment identifiers are meaningful. But the virtual page-mapping
process is not visible to any other process except via the highly
restrictive synchronisation primitives via which processes are
constrained to communicate.
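A toy sketch (ours, not Dijkstra's design, and far simpler than a real memory system) of the virtualisation idea: processes hold only opaque segment identifiers, and the mapping process alone knows, and may change, which physical page a segment occupies, so no fixed concrete representation is ever exposed.

```python
import os

class SegmentSpace:
    """Opaque segment ids over mutable physical placement."""
    def __init__(self, n_pages, page_size=64):
        self._phys = [bytes(page_size) for _ in range(n_pages)]
        self._free = list(range(n_pages))
        self._map = {}                      # segment id -> physical page index

    def alloc(self):
        seg = os.urandom(8).hex()           # opaque id, meaningless outside the mapper
        self._map[seg] = self._free.pop()
        return seg

    def write(self, seg, data):
        self._phys[self._map[seg]] = data

    def read(self, seg):
        return self._phys[self._map[seg]]

    def remap(self, seg):
        """Move a segment to a different physical page; its id stays valid."""
        old = self._map[seg]
        new = self._free.pop()
        self._phys[new], self._phys[old] = self._phys[old], bytes(len(self._phys[old]))
        self._free.append(old)
        self._map[seg] = new

space = SegmentSpace(n_pages=4)
s = space.alloc()
space.write(s, b"secret")
space.remap(s)                 # the physical location changes...
print(space.read(s))           # ...but the abstraction does not leak
```

The point is that an attacker observing physical pages sees no stable "certificate": the segment's concrete representation keeps moving, while the abstraction behaves identically.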

In [1] Dijkstra mentions:

  "In the beginning I knew that programming posed a much greater
   intellectual challenge than generally understood or acknowledged,
   but my vision of the more detailed structure of that challenge was
   still vague. Halfway [through] the multiprogramming project in
   Eindhoven I realized that we would have been in grave difficulties
   had we not seen in time the possibility of definitely unintended
   deadlocks. From that experience we concluded that a successful
   systems designer should recognize as early as possible situations
   in which some theory was needed. In our case we needed enough
   theory about deadlocks and their prevention to develop what became
   known as "the banker's algorithm", without which the
   multiprogramming system would only have been possible in a very
   crude form. By the end of the period I knew that the design of
   sophisticated digital systems was the perfect field of activity for
   the Mathematical Engineer."

[1] Edsger W. Dijkstra. EWD1303

    https://www.cs.utexas.edu/users/EWD/transcriptions/EWD13xx/EWD1303.html

[2] Luis Damas and Robin Milner,
    Principal Type-schemes for Functional Programs,
    In: Proceedings of the 9th ACM SIGPLAN-SIGACT Symposium on
Principles of Programming Languages
    ACM, 1982, 202--212.

    http://prooftoys.org/ian-grant/hm/milner-damas.pdf

[3] John Harrison, Introduction to Functional Programming.

    http://www.cl.cam.ac.uk/teaching/Lectures/funprog-jrh-1996/all.pdf

[4] Edsger W. Dijkstra "The Structure of 'THE'--Multiprogramming System"
    Commun. ACM 11, 5 (1968), 341–346.

    http://www.cs.utexas.edu/users/EWD/ewd01xx/EWD196.PDF

_______________________________________________
Lightning mailing list
Lightning@gnu.org
https://lists.gnu.org/mailman/listinfo/lightning


* Re: Dijkstra's Methodology for Secure Systems Development
  2014-09-19 15:29 Dijkstra's Methodology for Secure Systems Development Ian Grant
@ 2014-09-19 22:22 ` Panicz Maciej Godek
       [not found]   ` <CAMFYt2YCMizV3djOhKWRxYUoFq9fMJqaQ-jBYjYJHH1puDmT7Q-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2014-09-20 12:46   ` Taylan Ulrich Bayirli/Kammer
  0 siblings, 2 replies; 21+ messages in thread
From: Panicz Maciej Godek @ 2014-09-19 22:22 UTC (permalink / raw)
  To: Ian Grant, guile-devel


Hi.
I've observed that some time ago you started sending tons of revolutionary
ideas regarding the way software should be written, and criticising
current practices.

I am not in the position to refer to those ideas, because I didn't manage
to comprehend them fully (although I am trying to figure out what is the
"system F" that you mentioned in your "thunder" essay).

I also made three other observations: firstly, that you are pointing out
significant vulnerabilities of the GNU project as a whole; secondly, that
not every addressee wishes to become acquainted with your thoughts; and
lastly, that if someone dares to criticise you, you often get impolite.

With regard to those observations, I can offer three suggestions. The first
one concerns software security and the odds of the aforementioned "Thompson
virus". As you pointed out, we cannot guarantee that there is no back door
in every GNU system installation, but I think that even if we apply your
methods, we won't be able to do so. Simply because (as some of the
participants of the discussion noted) the back doors can be implemented in
the hardware, not in the software, and you will never be able to guarantee
that no one is able to access your system. So why should we bother? If
there are some people accessing my files, why should I feel uncomfortable
with that? Why can't I trust that someone with such great power isn't going
to be mean and evil? (There are already so many things that I can't control.
I can't know for sure that I'm not going to die tomorrow, but I think that
being worried about that wouldn't make that last day of mine any better)

The second suggestion is that perhaps instead of sending all those letters
to some news groups, you should start a blog?

That way, you could watch the statistics and tell how many people are
actually interested in your concerns, and you could present your ideas in a
more coherent and systematic way. And people who didn't subscribe to the
Ian Grant newsletter would receive a few fewer unwanted e-mails per
week.

When it comes to the third question, please remember that other people have
their own issues, and may see no reason to consider your concerns more
important than theirs. When you're announcing that "there's no need to hook
guile to gdb, because if we rewrote all software with proper methodology,
there'd be no bugs", you seem to ignore the existing code base and common
practices. Of course if you can present a universal way of creating good
software, then I'm all ears, but so far I haven't seen such a presentation
(or it might have drowned in the flood of your other thoughts and
discussions).

I wish you all best with your endeavour.
M.



* Re: Dijkstra's Methodology for Secure Systems Development
       [not found]   ` <CAMFYt2YCMizV3djOhKWRxYUoFq9fMJqaQ-jBYjYJHH1puDmT7Q-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-09-19 23:17     ` Ian Grant
       [not found]       ` <CAKFjmdyH4SYVrpO64BOV3-Nfu7BCVVUnq07wzYQi43zO_VBU6g-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2014-09-20  8:27       ` Thien-Thi Nguyen
  0 siblings, 2 replies; 21+ messages in thread
From: Ian Grant @ 2014-09-19 23:17 UTC (permalink / raw)
  To: Panicz Maciej Godek, Markus Kuhn, Theo deRaadt, Linus Torvalds,
	Richard Stallman, Vaughan Pratt, schellr
  Cc: lightning, guile-devel

Hi Panicz,

What country do you live in at the moment? I ask because it makes a
difference sometimes to know someone's background. I live in Bolivia.
I am stateless, i.e. an illegal alien, and because of that have no
source of income. I live on money borrowed from friends and family. My
material possessions are a 3 yr old eepc, a sleeping bag, 2 pairs of
pants, four shirts, a fleece jacket, and that's it. I stink because I
don't care what people really think of me, and I just need to get this
job done. My teeth are rotting and falling out. That money is running
out. When it runs out, I will be incommunicado, so I am making the
most of the available bandwidth, shall we say.

Here's my blog. As you suspected, essentially no-one looks at it,
according to the stats. But I don't actually understand them, to tell
you the truth. It says there have been 600 pageviews in total (in two
years!), but then, it only details 10 % of them ... I don't know how
it works.

   http://livelogic.blogspot.com/

Thanks for reading the thunder thing. You are the first person who has
actually admitted, explicitly, to having done that! You obviously
don't worry what people will think about you as a result. Good!

The reason I am posting to a lot of people, and agitating like this, is
that I have been writing and publishing as you suggest, for over three
years now, and as you guessed, I have been largely ignored. I wrote
8,500 lines of Standard ML code the year before last to demonstrate
the ideas, and they are here:

   https://code.google.com/p/metaprogramming/

There are uses of ML functors and signatures there that I doubt you
will see in any book on Standard ML. Tell me if you do; I'd love to
know who wrote it!

I used to be a sys-admin at Cambridge University. I was sys-admin
at DAMTP, the dept. of applied maths and theoretical physics. Then later
I was at the Computer Laboratory. I worked at the University for
fifteen years or so. I installed a part of the front-line
authentication infrastructure at cl.cam.ac.uk, and Markus told me a
few weeks ago that the head sys-admin still does not want to touch
what I did, even though the protocols are outdated and the system now
sub-standard. That is a bit worrying, because it can only be because
they don't understand what I did, and that is not a clever position to
leave yourself in: after five years, wouldn't you think they could do
something about that? I also did stupid things, like maintaining
special Linux kernels in the days when Red Hat kernel source RPMs
routinely applied 100+ patches. I used to spend a week or two messing
about with them to integrate the pre-release NFS V5 patches. That's
how I know about how hard it is to integrate kernel patches from
disparate sources. And it's also how I know that it's just completely
fucking impossible to secure a distribution kernel. (The 'vanilla'
kernel is probably a different matter; the problem with a kernel with
100 separate patches is that you just can't get any idea AT ALL what
they do when combined. No chance.) Maybe Linus and/or Theo will concur?

So I spent fifteen years doing sys-admin in a high-profile computer science
research department. At that site they do work for the NSA and GCHQ,
and they had, I was told, the source code for Microsoft Windows, for
example. So there is plenty of motive for various parties to attack
them.

I also taught functional programming and discrete maths and logic and
proof to undergraduates. I did that for over five years. I enjoyed
that work far more.

I worked with the automated reasoning group mostly, and theorists.
Robin Milner was there until he died tragically a year or so after I
left Cambridge for Bolivia. I am a friend of Markus Kuhn, Larry
Paulson, Mike Gordon, Glynn Winskel and Anuj Dawar at Cambridge,
amongst others.  You can email any of them and ask for a personal
reference if you like. They will say I am insane, for sure, but
I doubt any of them will be able to tell you what is wrong with what I
have written, either here or in the documents genesis and proofreps2
on the metaprogramming site.

I know a bit about commercial s/w development too. When I was 20 - 22
yrs old I was a contract Software Engineer at IBM UK, Hursley Labs.
Before that I worked for British Telecom, also as a software engineer.
And I did various work after that on 'office automation' type systems,
mainly for companies owned or employing a venture capital 'groupie'
who was a friend of mine.

My other major achievement is that I was a guile developer back in
1999. I wrote guile-pg, which ttn has let fall to bit-rot! Shame on
him! But after maintaining it for over a decade, I can't blame him for
getting bored.

There's lots more I could say, but I'll leave you to make your
judgement of what are my motives for agitating like this. Imagine you
knew, 100% for sure, that the FSF had been totally subverted and as a
consequence all free software, and also all commercial software had
been deeply compromised. Just like the software equivalent of Day X in
the movie "Salt," did you see that?

I'll tell you what you would do: you wouldn't care if you turned out
to be wrong, or if no-one listened to you; you would just ignore them
and carry on screaming "fire" until someone takes you seriously,
because while there's the slightest chance someone will listen to you,
there's a chance that we will be able to rescue something from the
wreckage.

So at least you see that you are going to have to make a hell of a lot
stronger case than you have so far done if you want me to stop doing
this. Do you understand now?

Ian


On Fri, Sep 19, 2014 at 6:22 PM, Panicz Maciej Godek
<godek.maciek-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> Hi.
> I've observed that some time ago you started sending tons of revolutionary
> ideas regarding the way the software should be written, and crtiticising the
> current practices.
>
> I am not in the position to refer to those ideas, because I didn't manage to
> comprehend them fully (although I am trying to figure out what is the
> "system F" that you mentioned in your "thunder" essay).
>
> I also made three other observations: firstly, that you are pointing out
> significant vulnerabilities of the GNU project as a whole; secondly -- that
> not every addressee wishes to become acquainted with your thoughts, and lastly,
> that if someone dares to criticise you, you often get impolite.
>
> With regard to those observations, I can offer three suggestions. The first
> one concerns software security and the odds of the aforementioned "Thompson
> virus". As you pointed out, we cannot guarantee that there is no back door
> in every GNU system installation, but I think that even if we apply your
> methods, we won't be able to do so. Simply because (as some of the
> participants of the discussion noted) the back doors can be implemented in
> the hardware, not in the software, and you will never be able to guarantee
> that no one is able to access your system. So why should we bother? If there
> are some people accessing my files, why should I feel uncomfortable with
> that? Why can't I trust that someone with such great power isn't going to be
> mean and evil? (There's already so many things that I can't control. I can't
> know for sure that I'm not going to die tomorrow, but I think that being
> worried about that wouldn't make that last day of mine any better)
>
> The second suggestion is that perhaps instead of sending all those letters
> to some news groups, you should start a blog?
>
> That way, you could watch the statistics and tell how many people are
> actually interested in your concerns, and you could present your ideas in a
> more coherent and systematic way. And people who didn't subscribe to the Ian
> Grant newsletter would receive a few fewer unwanted e-mails per
> week.
>
> When it comes to the third question, please remember that other people have
> their own issues, and may see no reason to consider your concerns more
> important than theirs. When you're announcing that "there's no need to hook
> guile to gdb, because if we rewrote all software with proper methodology,
> there'd be no bugs", you seem to ignore the existing code base and common
> practices. Of course if you can present a universal way of creating good
> software, then I'm all ears, but so far I haven't seen such a presentation (or
> it might have drowned in the flood of your other thoughts and discussions)
>
> I wish you all the best with your endeavour.
> M.
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Dijkstra's Methodology for Secure Systems Development
       [not found]       ` <CAKFjmdyH4SYVrpO64BOV3-Nfu7BCVVUnq07wzYQi43zO_VBU6g-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-09-19 23:47         ` Ian Grant
  2014-09-20  8:50           ` Panicz Maciej Godek
  0 siblings, 1 reply; 21+ messages in thread
From: Ian Grant @ 2014-09-19 23:47 UTC (permalink / raw)
  To: Panicz Maciej Godek, Markus Kuhn, Theo deRaadt, Linus Torvalds,
	Richard Stallman, Vaughan Pratt, schellr
  Cc: lightning, guile-devel

On Fri, Sep 19, 2014 at 7:17 PM, Ian Grant
<ian.a.n.grant-gM/Ye1E23mwN+BqQ9rBEUg@public.gmane.org> wrote: a bunch of stuff, and added:

I forgot to add "you fucking ignorant jumped-up little prick!" So now
you have _textual evidence_ that I really am impolite :-)

Best wishes
Ian


* Re: Dijkstra's Methodology for Secure Systems Development
  2014-09-19 23:17     ` Ian Grant
       [not found]       ` <CAKFjmdyH4SYVrpO64BOV3-Nfu7BCVVUnq07wzYQi43zO_VBU6g-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-09-20  8:27       ` Thien-Thi Nguyen
  1 sibling, 0 replies; 21+ messages in thread
From: Thien-Thi Nguyen @ 2014-09-20  8:27 UTC (permalink / raw)
  To: guile-devel


() Ian Grant <ian.a.n.grant-gM/Ye1E23mwN+BqQ9rBEUg@public.gmane.org>
() Fri, 19 Sep 2014 19:17:04 -0400

   My other major achievement is that I was a guile
   developer back in 1999. I wrote guile-pg, which ttn
   has let fall to bit-rot! Shame on him! But after
   maintaining it for over a decade, I can't blame him
   for getting bored.

I beg your indulgence for my slow pace.  I, too, am
unemployed and have rotting teeth.  But fear not!
Since Guile-PG 0.46 was released (2013-03-14), a bunch
of changes[0] have been accumulating and will Some Day
find their way into 0.47 (unless i die beforehand -- a
new maintainer for the Free Software you begat might
choose a different version numbering scheme :-D).

In the meantime, Guile-PG home page and repo are,
respectively:

 http://www.nongnu.org/guile-pg/
 http://git.sv.nongnu.org/cgit/guile-pg.git?h=p

BTW, i also read the thunder PDF.  I understood a bit
of it and believe myself to understand the overall
thrust, but who knows -- a fool is foolish, after all,
and certifying what a fool understands is 10 GOTO 10.

[to,cc recast]

___________________________________________________
[0] $ glog v0.46..
ab4645f 2014-01-27 [build] Don't enable GCC warnings in configuration.
dd63744 2014-01-27 [pq int] Elide single-use local var.
f7177fc 2014-01-27 [pq int] Avoid ‘NUM_INT’, ‘C_INT’ for oid, size_t.
c485d36 2014-01-25 [pq slog] Avoid ‘scm_ftell’.
72eab83 2014-01-25 [doc] Add some @cindex.
58c2b40 2014-01-25 [doc] Modernize title page.
a106f93 2014-01-25 [doc] Change license to GNU FDL.
ddb73f6 2014-01-25 [maint] Update ignorance; nfc.
c68d5be 2014-01-25 [maint] Update README, HACKING; nfc.
95b3d68 2014-01-25 [build int] Consolidate some ‘if !USE_CMOD’ blocks.
709760e 2014-01-25 [pq int] De-bifurcate module init flow.
534c6a6 2014-01-24 [build] Do sofix/uninstall-sofixed unconditionally.
015392c 2014-01-24 [build int] Use Guile-BAUX for to set {un,}sofix flags.
8a62ac9 2014-01-24 [build] Avoid sofile version.
22f828c 2014-01-24 [build int] Add abstraction: bx
3abe814 2014-01-24 [maint] Update years in copyright notice; nfc.
65370e5 2014-01-24 [maint] Reduce indentation in README; nfc.
dd8bc19 2014-01-23 [maint] Update "other stuff to do" in HACKING; nfc.
bbc0c43 2014-01-23 [boot] Revamp libpq configury.
16d387c 2014-01-23 [boot] If configure cannot find sane libpq, error out.
493ce00 2014-01-23 [boot int] Move ‘PQ_LDFLAGS’ munging after ‘LDFLAGS’ restoration.
693e3c6 2014-01-23 [pq int] Don't bother w/ <postgresql/...> headers.
9bc0ff4 2014-01-22 [maint] Mention ‘INITDB’ in "make kill-daemon" blurb in README; nfc.
b4ba56e 2014-01-22 [v] Overcome PostgreSQL 9.3 "CREATE TABLE .. serial" reticence.
ac17264 2014-01-22 [v] Handle PostgreSQL 9.3 config var ‘unix_socket_directories’.
50a55ef 2014-01-21 [boot int] Use ‘m4_normalize’ in ‘AC_CHECK_FUNCS’ arg.
6804f38 2014-01-21 [maint] Update src/libpq.org; nfc.
d25da80 2014-01-20 [maint] Update "other stuff to do" in HACKING; nfc.
7e12fdb 2014-01-19 [maint] Explain ‘status’ field values in libpq.org; nfc.
1f60a9c 2014-01-19 [v int] Move "fake-cluster-control 1" invocation to runtest.
9b473c8 2014-01-19 [v] Add "make drop-database".
cff5e2c 2014-01-19 [v] Create database exactly once, after daemon up.
790753d 2014-01-19 [v int] Add abstraction: frob
00d20cf 2014-01-19 [v int] Drop basic test table when done.
277e846 2014-01-19 [v int] Use temporary tables.
7d1c65a 2014-01-18 [v int] Use ‘(guile-baux common)’ procs.
c0ac49c 2014-01-18 [v] Fix bug: Do "COMMIT TRANSACTION" after cancel.
92d7d37 2014-01-18 [maint] Update src/libpq.org; nfc.
524ada2 2014-01-17 [v int] Use ‘truly_up’ more.
91b80a0 2014-01-17 [v int] Use "DROP DATABASE IF EXISTS".
11f446b 2014-01-17 [v int] Make ‘d/c’ take symbols, not strings.
b2788ac 2014-01-17 [v] Remove lame synchronization delays.
8a579ef 2014-01-16 [v int] Use "fake-cluster-control 0" more.
d8958fe 2014-01-16 [v] Handle daemon-down case where pidfile exists but is stale.
46dfe3f 2014-01-16 [doc] Mention PQ funcs behind dropped procs.
eea5a89 2014-01-15 [pq int] Use ‘C_ULONG’ for conversion to ‘size_t’.
a536173 2014-01-10 [pq int] Use ‘VALIDATE_NNINT_COPY’ more.
6d26dc7 2014-01-10 [slog] Decruft: Don't #include <guile/gh.h> directly.
832a9b3 2014-01-10 [slog] Decruft: Use GI_LEVEL_1_8 directly.
316cd85 2014-01-10 [slog] Use "snuggle/modsup.h".
80f1596 2014-01-10 [slog] Use ‘SNUGGLE_CHECK_CLASSIC_HEADERS’.
f0c0cde 2014-01-09 [slog] Use "snuggle/defsmob.h".
9f896bc 2014-01-09 [pq] Work w/ "integer" instead of "exact".
2e207f8 2014-01-09 [slog] Use "snuggle/humdrum.h".
39e847d 2014-01-09 [pq int] Use scm_set_c[ad]r_x directly.
363a2e2 2014-01-09 [pq int] Rename macro ‘VREF’ to ‘VECTOR_REF’.
76d3aa3 2014-01-09 [slog] Use "snuggle/finangle.h".
263b6d2 2014-01-09 [slog] Use "snuggle/level.h".
b1cb667 2014-01-09 [build int] Use Guile-BAUX for src/snuggle/*.h.
770c1d3 2014-01-09 [build int] Use Guile-BAUX for snuggle.m4.
adc87ec 2014-01-09 [maint] Update license notice to use URL instead of snail mail addr; nfc.
cbaff13 2014-01-09 [pq int] Avoid ‘EVAL_STRING’.
0f8e233 2014-01-09 [pq] Fix bug: Make ‘pg-result-error-field’ DTRT.
11b8337 2014-01-09 [int] Don't bother w/ ‘lob_close’ rv.
8100df0 2014-01-09 [int] Use ‘BSTRING’ more.
4d5a7ff 2014-01-08 [maint] Update years in copyright notice; nfc.
78e0d50 2014-01-08 Drop procs: pg-{getline,getlineasync,putline,endcopy}
6dbbd2f 2014-01-08 [v] Use pg-put-copy-{data,end} for async retrieval test.
5f5c1ed 2014-01-08 [v int] Rework loop to call ‘pg-flush’ inside.
fd35d01 2014-01-08 [v] Do/check ‘pg-get-result’ after ‘pg-put-copy-end’.
40f2089 2014-01-08 [v] No longer send "\\.\n" for COPY..STDIN.
689d814 2014-01-06 [v int] Stash validated protocol version.
00eeb0e 2014-01-05 [maint] Update ignorance; nfc.
79d133b 2013-12-20 Change bug-report email address.
a31789c 2013-07-24 [maint] Reformat NEWS; nfc.
1ae068e 2013-04-02 [maint] Update "repo branch discipline" in HACKING; nfc.

-- 
Thien-Thi Nguyen
   GPG key: 4C807502
   (if you're human and you know it)
      read my lisp: (responsep (questions 'technical)
                               (not (via 'mailing-list)))
                     => nil



* Re: Dijkstra's Methodology for Secure Systems Development
  2014-09-19 23:47         ` Ian Grant
@ 2014-09-20  8:50           ` Panicz Maciej Godek
  2014-09-20 13:53             ` William ML Leslie
  0 siblings, 1 reply; 21+ messages in thread
From: Panicz Maciej Godek @ 2014-09-20  8:50 UTC (permalink / raw)
  To: Ian Grant, guile-devel


Hey,
Maybe I'm a fucking ignorant jumped-up little prick, but at least I don't
stink ;]

Actually I don't think that you have put yourself in a particularly
comfortable position, and even if you don't care what the people around you
think, maintaining personal hygiene seems like The Right Thing To Do.
Avoiding the soap is like raising an invisible fence that keeps the
external world away from you, which is not necessarily a good thing. It is
also a message -- that either you are a very poor person, or that you are
eager to treat every living person that you meet with disrespect -- because
you're simply making them feel uncomfortable, and for no particular reason.

It is still unclear to me why you chose to move to Bolivia and live in such
poor conditions, but I have a feeling that it only lessens the odds of
achieving your goal (especially when you run out of money and die of
hunger). On the other hand I do admit that you're pursuing my childhood
dream of being entirely focused on programming. I imagined myself sitting
on a sleeping bag with my laptop in a tunnel at a train station with a
label beside me stating that "I am writing free software for the great
good, please support".

On the other hand, now I see how non-linear the process of creating
software is -- that basically you need to balance on a thin line between
inspiration, motivation and focus.

Actually, I think that when your teeth are falling out, then the conditions
are probably not particularly propitious.

I've taken a look at your work and I have to say that I am really impressed
with some of your ideas, and I do agree on many points, but I have a
feeling that they are all presented in a rather messy way, so one can find
profound ideas lying next to some pile of crappy trivia. But I certainly
need to read a bit more to get a more adequate opinion.

When it comes to me, I live (and have been for the most of my life) in
Poland.

And I still find it difficult to see anything terrible in the idea that
"FSF had been subverted", when I interpret that in terms of software
security, because the way I see it, the main premise of the FSF movement is to
share the code (as opposed to restricting it), and the main goal of the GNU
Operating System is to propagate that idea, rather than to provide a secure
and reliable operating system (which are only secondary goals that are
needed to be fulfilled in order for the operating system to become popular,
respected and desired -- or to advertise the idea well enough).

But most of all, I think, software (and free software in particular) is
about fun (so in this regard I would agree with Alan Perlis' foreword to
SICP). This is also what I like about your ideas -- that they encourage
code reuse, thus requiring the programmer to write only what's important.

The issue of being afraid of touching other programmers' (or sys-admins')
work is also important, because it shows that we didn't yet manage to work
out the means of communicating software in a disciplined and comprehensible
way, and I agree that we need to work more on that (and I also agree that
we should resort to logic)

Best regards



* Re: Dijkstra's Methodology for Secure Systems Development
  2014-09-19 22:22 ` Panicz Maciej Godek
       [not found]   ` <CAMFYt2YCMizV3djOhKWRxYUoFq9fMJqaQ-jBYjYJHH1puDmT7Q-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-09-20 12:46   ` Taylan Ulrich Bayirli/Kammer
  2014-09-20 13:16     ` Eli Zaretskii
                       ` (2 more replies)
  1 sibling, 3 replies; 21+ messages in thread
From: Taylan Ulrich Bayirli/Kammer @ 2014-09-20 12:46 UTC (permalink / raw)
  To: Panicz Maciej Godek; +Cc: Ian Grant, guile-devel

Panicz Maciej Godek <godek.maciek@gmail.com> writes:

> [...]

First of all let me say I agree with you; guile-devel is the wrong place
to discuss these things.

I also feel uncomfortable about having been painted as the only person
agreeing with Ian.  According to him I was able to understand his idea
at least, but I'm not clear on how it ties in with the rest of reality,
like the possibility of hardware exploits...

Still:

> [...] the back doors can be implemented in the hardware, not in the
> software, and you will never be able to guarantee that no one is able
> to access your system.

Hopefully hardware will be addressed as well sooner or later.  In the
meanwhile, we can plug a couple of holes in the software layer.

Also, if the hardware doesn't know enough about the software's workings,
it will have a hard time exploiting it.  Just like in the Thompson hack
case: if you use an infected C compiler to compile a *new* C compiler
codebase instead of the infected family, you will get a clean compiler,
because the infection doesn't know how to infect your new source code.
Similarly, if some hardware infection hasn't been fine-tuned for a piece
of software, it will just see a bunch of CPU instructions which it has
no idea how to modify to make an exploit.
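
To make that concrete, here is a toy model of the Thompson scenario
(illustrative only: the string-matching "infection" and the compiler
sources are invented for this sketch, and bear no relation to any real
compiler):

```python
def infected_compile(source):
    """Toy Thompson-style infected compiler.

    The infection only recognizes the ORIGINAL compiler's source text;
    when it sees it, it re-inserts the backdoor into the produced
    binary.  Any other source is compiled clean, because the infection
    doesn't know how to infect code it has never seen.
    """
    binary = "BIN[" + source + "]"
    if "ORIGINAL_COMPILER" in source:
        binary += "+BACKDOOR"
    return binary

original_src = "ORIGINAL_COMPILER: parse; typecheck; codegen"
new_src = "NEW_COMPILER: tokenize; analyse; emit"

# Self-compiling the familiar codebase perpetuates the backdoor...
assert "BACKDOOR" in infected_compile(original_src)
# ...but an unfamiliar codebase escapes the infection entirely.
assert "BACKDOOR" not in infected_compile(new_src)
```

The same asymmetry is what the hardware case turns on: an exploit that
pattern-matches on one known instruction stream has nothing to match
against in an unfamiliar one.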

Which I think brings us back to the "semantic fixpoint" thingy.  If we
define some semantics that can be automatically turned into very
different series of CPU instructions which nevertheless do the same
thing in the end, it will get increasingly difficult for the infection
algorithm to understand the semantics behind the CPU instructions and
inject itself into this series of CPU instructions.
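
The diversity idea can be sketched with a toy code generator (the
"instruction set" and the three variants here are invented for this
illustration):

```python
import random

# Three semantically equivalent encodings of the function n -> n * 4.
VARIANTS = [
    ["LOAD n", "MUL 4"],                    # plain multiply
    ["LOAD n", "ADD n", "ADD n", "ADD n"],  # repeated addition
    ["LOAD n", "SHL 2"],                    # shift left by two bits
]

def codegen(seed):
    """Pick one equivalent encoding, as a diversifying compiler might."""
    return random.Random(seed).choice(VARIANTS)

def run(program, n):
    """Tiny accumulator-machine interpreter for the toy instruction set."""
    acc = 0
    for ins in program:
        op, arg = ins.split()
        val = n if arg == "n" else int(arg)
        if op == "LOAD":
            acc = val
        elif op == "ADD":
            acc += val
        elif op == "MUL":
            acc *= val
        elif op == "SHL":
            acc <<= val
    return acc

# All encodings compute the same function, but an infection matching on
# one fixed instruction pattern recognizes only one of them.
assert all(run(v, 5) == 20 for v in VARIANTS)
```

An attacker who keys on the multiply encoding sees nothing familiar
when the generator happens to emit the shift or addition form; the more
encodings there are, the harder the pattern-matching becomes.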

Unfortunately we have in-hardware AES implementations, and neither
crypto software nor C compilers are very diverse, so you possibly get a
similar-enough bunch of CPU instructions every time for malicious
hardware to inject an exploit into.  (Or maybe not; maybe binary outputs
of common crypto software are already diverse enough every time you
change a compiler flag or update your compiler so that this attack is
implausible.  In that case we *only* need to plug software holes; the
rest is taken care of by full-disk encryption etc. in software so the
hardware never sees your data and never understands what you do.)

(By the way, while a clean C compiler can be used to compile a clean GCC
so we get back all its features which aren't in our new C compiler
that's been kept super-simple, the same can't be done for hardware;
instead we will have to keep running more and more different series of
CPU instructions if we want to be safe, which will mean we will suffer
performance, since these instructions are probably not the optimal ones
to get the job done...)

> If there are some people accessing my files, why should I feel
> unfomfortable with that?  Why can't I trust that someome with such
> great power isn't going to be mean and evil?

I always like to say, "there are no James Bond villains on Earth."  The
Hollywood trope of a sociopathic villain who's consciously evil for the
sake of it is a big distraction from the fact that groups like the Nazi
party or people like Joseph Stalin have in fact existed and come into
positions of power in our very real world.  And they didn't feel they
were evil; they genuinely believed they were doing the right thing.  How
long has it been since such a "scandal" of humanity last happened?  Is
it really thoroughly implausible that it would happen again?  Has it
even really stopped happening entirely, or is one of the most powerful
countries in the world supporting (even if indirectly) the bombardment
of civilians and the torture of captives in the Middle East right now?

I think it's quite difficult to find a good balance between being too
naive, and entering tinfoil-hat territory.  I've been pretty naive for
most of my life, living under a feeling of "everything bad and dark is
in the past" and that only some anomalies are left.  That seems to be
wrong, though, so I'm trying to correct my attitude; I hope I haven't
swayed too much into the tinfoil-hat direction while doing so. :-)

Taylan




* Re: Dijkstra's Methodology for Secure Systems Development
  2014-09-20 12:46   ` Taylan Ulrich Bayirli/Kammer
@ 2014-09-20 13:16     ` Eli Zaretskii
  2014-09-20 16:45       ` Taylan Ulrich Bayirli/Kammer
  2014-09-21  9:04     ` Panicz Maciej Godek
  2014-09-22  1:03     ` William ML Leslie
  2 siblings, 1 reply; 21+ messages in thread
From: Eli Zaretskii @ 2014-09-20 13:16 UTC (permalink / raw)
  To: Taylan Ulrich Bayirli/Kammer; +Cc: ian.a.n.grant, guile-devel

> From: Taylan Ulrich Bayirli/Kammer <taylanbayirli@gmail.com>
> Date: Sat, 20 Sep 2014 14:46:01 +0200
> Cc: Ian Grant <ian.a.n.grant@googlemail.com>, guile-devel <guile-devel@gnu.org>
> > If there are some people accessing my files, why should I feel
> > uncomfortable with that?  Why can't I trust that someone with such
> > great power isn't going to be mean and evil?
> 
> I always like to say, "there are no James Bond villains on Earth."  The
> Hollywood trope of a sociopathic villain who's consciously evil for the
> sake of it is a big distraction from the fact that groups like the Nazi
> party or people like Joseph Stalin have in fact existed and come into
> positions of power in our very real world.  And they didn't feel they
> were evil; they genuinely believed they were doing the right thing.  How
> long has it been since such a "scandal" of humanity last happened?  Is
> it really thoroughly implausible that it would happen again?  Has it
> even really stopped happening entirely, or is one of the most powerful
> countries in the world supporting (even if indirectly) the bombardment
> of civilians and the torture of captives in the Middle East right now?

Evil is not about right and wrong.  Evil is about moral and immoral,
lawful and unlawful.  If you don't understand the fundamental
difference between those categories, perhaps you should refrain from
talking about Hitler, Stalin, and bombardment of civilians.  Certainly
here, where you yourself admitted this is OT.




* Re: Dijkstra's Methodology for Secure Systems Development
  2014-09-20  8:50           ` Panicz Maciej Godek
@ 2014-09-20 13:53             ` William ML Leslie
  0 siblings, 0 replies; 21+ messages in thread
From: William ML Leslie @ 2014-09-20 13:53 UTC (permalink / raw)
  To: guile-devel


On 20 September 2014 18:50, Panicz Maciej Godek <godek.maciek@gmail.com>
wrote:

>
> And I still find it difficult to see anything terrible in the idea that
> "FSF had been subverted", when I interpret that in terms of software
> security, because the way I see it, the main premise of the FSF movement is to
> share the code (as opposed to restricting it), and the main goal of the GNU
> Operating System is to propagate that idea, rather than to provide a secure
> and reliable operating system (which are only secondary goals that are
> needed to be fulfilled in order for the operating system to become popular,
> respected and desired -- or to advertise the idea well enough).
>

The GNU exists to enable users to have control and power over the computers
they own.  Proprietary software is one mechanism to subvert that right;
security flaws in the mechanisms we use to run software from other sources
are another, and an important one, because most people use a lot of
software they don't have time to audit.  Improving the security of the
platform is *key* to the goals of the GNU, but not only is it a long road,
it's not clear yet which paths are well lit.

-- 
William Leslie

Notice:
Likely much of this email is, by the nature of copyright, covered under
copyright law.  You absolutely MAY reproduce any part of it in accordance
with the copyright law of the nation you are reading this in.  Any attempt
to DENY YOU THOSE RIGHTS would be illegal without prior contractual
agreement.



* Re: Dijkstra's Methodology for Secure Systems Development
  2014-09-20 13:16     ` Eli Zaretskii
@ 2014-09-20 16:45       ` Taylan Ulrich Bayirli/Kammer
  2014-09-20 17:58         ` Eli Zaretskii
  0 siblings, 1 reply; 21+ messages in thread
From: Taylan Ulrich Bayirli/Kammer @ 2014-09-20 16:45 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: ian.a.n.grant, guile-devel

Eli Zaretskii <eliz@gnu.org> writes:

> Evil is not about right and wrong.  Evil is about moral and immoral,
> lawful and unlawful.  If you don't understand the fundamental
> difference between those categories, perhaps you should refrain from
> talking about Hitler, Stalin, and bombardment of civilians.

If any of those words have well-defined semantics in a widely accepted
school of thought, then I didn't know that.  (Except for laws, though
I'm confused on how they're relevant at all.)  I was simply going on the
assumption that we all agree on ethics based on human well-being (or
more generally, that of any relatively intelligent creature; many other
animals seem to have emotions too).  It would follow that we need to be
careful not to give power to those disagreeing with that.  If the
assumption was wrong, then never mind.

Taylan




* Re: Dijkstra's Methodology for Secure Systems Development
  2014-09-20 16:45       ` Taylan Ulrich Bayirli/Kammer
@ 2014-09-20 17:58         ` Eli Zaretskii
  2014-09-20 18:39           ` Left Right
  2014-09-20 19:11           ` Taylan Ulrich Bayirli/Kammer
  0 siblings, 2 replies; 21+ messages in thread
From: Eli Zaretskii @ 2014-09-20 17:58 UTC (permalink / raw)
  To: Taylan Ulrich Bayirli/Kammer; +Cc: ian.a.n.grant, guile-devel

> From: Taylan Ulrich Bayirli/Kammer <taylanbayirli@gmail.com>
> Cc: godek.maciek@gmail.com,  ian.a.n.grant@googlemail.com,  guile-devel@gnu.org
> Date: Sat, 20 Sep 2014 18:45:49 +0200
> 
> Eli Zaretskii <eliz@gnu.org> writes:
> 
> > Evil is not about right and wrong.  Evil is about moral and immoral,
> > lawful and unlawful.  If you don't understand the fundamental
> > difference between those categories, perhaps you should refrain from
> > talking about Hitler, Stalin, and bombardment of civilians.
> 
> If any of those words have well-defined semantics in a widely accepted
> school of thought, then I didn't know that.

And "right" and "wrong", do they have well-defined semantics?  No,
they don't, and yet you used them freely to make your point.  How's
that for consistency?

Your point was that villains act thinking they are doing the "right"
thing.  But that's exactly the weakness of "right" and "wrong": they
are intrinsically POV-relative.  When you and I have a conflict of
interests, what is "right" for you is "wrong" for me, and vice versa.

Use the words I suggested, and this problem disappears, even if others
remain.

> (Except for laws, though I'm confused on how they're relevant at
> all.)

Perhaps you don't understand why we have laws, then.

> I was simply going on the assumption that we all agree on
> ethics based on human well-being

Oh, come on, grow up!  This kind of naiveté will get you nowhere.
Tell me: when someone shoots a burglar who broke into their house and
threatened them with a weapon, what exactly happens to the "human
well-being" of the burglar?

> It would follow that we need to be careful not to give power to
> those disagreeing with that.  If the assumption was wrong, then
> never mind.

We can all agree on this assumption, but the question is what do you
do in practice with that?





* Re: Dijkstra's Methodology for Secure Systems Development
  2014-09-20 17:58         ` Eli Zaretskii
@ 2014-09-20 18:39           ` Left Right
  2014-09-21  1:33             ` Nala Ginrut
  2014-09-26 15:45             ` Ian Grant
  2014-09-20 19:11           ` Taylan Ulrich Bayirli/Kammer
  1 sibling, 2 replies; 21+ messages in thread
From: Left Right @ 2014-09-20 18:39 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: ian.a.n.grant, guile-devel

Sorry, I really only registered to submit a couple of bugs, but I
couldn't miss the opportunity! Well, you see, there is a very well
known ethical school of thought which does not hold that ethics is
relative (I don't believe that either, but for other reasons). Immanuel
Kant is by far the best known proponent of universal ethics. I also
happen to work on my future thesis, which is about formalization of
ethics (as you would guess, if that's possible to formalize, or, at
least, I believe so, then I also must believe it to be universal). The
examples I like to give in this debate (of course there are other
famous schools of ethical thought which disagree with this) is the
example of an elevator, which must implement an ethical program in
order to be considered functional (w/o going into detail, it is
possible to construct an elevator, which will be more efficient than
those we use normally, but it would be perceived as unfair).

To put a brief argument for Kant's view of the problem: he believed
that the right thing to do is to act freely; he also believed that
given the opportunity to act freely everyone would choose the same
strategy. These ideas seemed quite solid at the time, but not so much
any more. The world of philosophy of the day was deterministic and had
very weird concepts of what reality is made of :) Nevertheless, many
adopt his categorical imperative as a moral norm (which I don't think
anyone should, but that's a separate story).

Another great ethical thinker, who believed in universal ethics is
Aristotle. Surprisingly, he has a much better grounded view to offer.
The collection of his views, also known in the modern world as teleology,
has survived a lot of paradigm shifts. (I subscribe to this idea too.) It
was mostly advanced by philosophers of Abrahamic religions, and so it
is known in the modern world as Thomism or Avicennism, but it doesn't
have to be religious in nature. I think it was just comfortable for
religions, which wanted to be universal to have a doctrine, which also
wanted to be universal. Put shortly, the premise of this doctrine is
that it is good to give that which is due, and bad otherwise. This,
kind of, transfers the responsibility of answering the question of
what is good onto the question of what is due; but in the same sense we
have logical systems which don't define what is true and what is false
(this is necessarily defined outside the system), and they are still useful.

The counterexamples of ethical thought, where good and bad were
considered relative in one sense or another: of course utilitarianism,
libertarianism. Basically, everything that nowadays has to do with the
humanism of the Western world thrives on the assumption that ethics are
relative, perhaps to an individual, maybe to a group, or maybe the
time dimension makes them relative - depends on what philosopher you
pick.

----

I also read the OP, and, I think that there are thoughts that could be
useful, but it is unhelpful that the reaction creates a conflicting
situation. I would suggest the following proposition to Ian Grant, I
think it may be helpful:

It is possible to build a good, solid mathematical model (and it seems
like you are into that kind of programming, since you mention Dijkstra
and Milner very often), but it will not map well to the actual
observed phenomena. This is a very well known problem in areas like
molecular biology, particle physics and economics / social studies.
I.e. for example, it is possible to come up with a model, which, given
some input DNA will make interesting inferences about it, but will be
completely worthless for making predictions about how actual ribosomes
synthesize polypeptides. Quite similarly, the hypothesis suggested by
Milner, I think it was "well-typed programs cannot go wrong",
appears not to hold water. It is a good, consistent, even solid
theory, but it doesn't capture the nature of programming. And it
doesn't deliver on the promise. Programs in ML, too, have bugs.
I don't say this to discourage you, I think that searching for this
kind of models is important. I just wanted to say that maybe your
conclusions have been premature and lacking the statistical evidence
(lack of evidence isn't in itself a proof of the contrary).

Best,

Oleg




* Re: Dijkstra's Methodology for Secure Systems Development
  2014-09-20 17:58         ` Eli Zaretskii
  2014-09-20 18:39           ` Left Right
@ 2014-09-20 19:11           ` Taylan Ulrich Bayirli/Kammer
  2014-09-20 19:51             ` Eli Zaretskii
  1 sibling, 1 reply; 21+ messages in thread
From: Taylan Ulrich Bayirli/Kammer @ 2014-09-20 19:11 UTC (permalink / raw)
  To: Eli Zaretskii; +Cc: ian.a.n.grant, guile-devel

Eli Zaretskii <eliz@gnu.org> writes:

> And "right" and "wrong", do they have well-defined semantics?  No,
> they don't, and yet you used them freely to make your point.  How's
> that for consistency?

Since I assumed they have no well-defined meanings, I used them such
that what I mean with them would have hopefully been clear enough.
(Which was the human well-being thing.)

> Use the words I suggested, and this problem disappears, even if others
> remain.

Well, that's false.  Many people think it's immoral to be homosexual.
And many countries' laws forbid it, too.  That's why I think right/moral
and wrong/immoral are more or less synonyms, and laws are just their
concrete codification.  (And again, I was going on the assumption of an
agreement that human-rights-based morals are the "correct" morals.)

>> (Except for laws, though I'm confused on how they're relevant at
>> all.)
>
> Perhaps you don't understand why we have laws, then.

I said that because laws are just the written-down form of what a group
of people think is right.  They are the end product of a discussion on
what is and isn't right; using them to decide that would be circular
logic, so they have no place in that discussion.  (Or maybe just as a
reference on what people previously decided, as some possibly
acceptable form of appeal to authority or popular opinion; but you get
what I mean.)

> Tell me: when someone shoots a burglar who broke into their house and
> threatened them with a weapon, what exactly happens to the "human
> well-being" of the burglar?

It's traded off for the well-being of the home owner, and probably for
the well-being of future possible victims.  "Ethics calculus." ;-)


Anyway, I now suspect that the discussion might go on for dozens of
mails if we don't just abruptly stop; I had previously hoped that we
would instead quickly either agree or agree to disagree on clear points.
Or maybe we can just agree to disagree on the meaning and importance of
laws?  The other points seem cleared up, I think.  I'm desperately
looking for a way to end the discussion without requiring either side to
accept giving the other the "last word," so help me a little...

Taylan




* Re: Dijkstra's Methodology for Secure Systems Development
  2014-09-20 19:11           ` Taylan Ulrich Bayirli/Kammer
@ 2014-09-20 19:51             ` Eli Zaretskii
  0 siblings, 0 replies; 21+ messages in thread
From: Eli Zaretskii @ 2014-09-20 19:51 UTC (permalink / raw)
  To: Taylan Ulrich Bayirli/Kammer; +Cc: ian.a.n.grant, guile-devel

> From: Taylan Ulrich Bayirli/Kammer <taylanbayirli@gmail.com>
> Cc: godek.maciek@gmail.com,  ian.a.n.grant@googlemail.com,  guile-devel@gnu.org
> Date: Sat, 20 Sep 2014 21:11:33 +0200
> 
> > Use the words I suggested, and this problem disappears, even if others
> > remain.
> 
> Well, that's false.  Many people think it's immoral to be homosexual.
> And many countries' laws forbid it, too.

Yes, and the Swiss think it's immoral to litter.  So what?  How is this
related to the issues at hand?

Are there countries that have laws that allow people to kill freely, or
break into others' houses, or summarily hold people in captivity and
torture them?  No, there aren't.  So, for the purpose of this discussion,
we all agree on what is moral and what is not.

> >> (Except for laws, though I'm confused on how they're relevant at
> >> all.)
> >
> > Perhaps you don't understand why we have laws, then.
> 
> I said that because laws are just the written-down form of what a group
> of people think is right.

Yes, but why do we bother to have laws at all?  Think about this, and
perhaps you will arrive at a much more useful explanation for why we
have laws.

> > Tell me: when someone shoots a burglar who broke into their house and
> > threatened them with a weapon, what exactly happens to the "human
> > well-being" of the burglar?
> 
> It's traded off for the well-being of the home owner, and probably for
> the well-being of future possible victims.  "Ethics calculus." ;-)

Right, and Stalin "traded off" the well-being of his victims for that of
himself and his satraps.  And Hitler "traded off" the well-being of
the Jews for that of the Aryans.  Your "well-being" methodology is a dead
end: using it, you will never be able to decide whom to support and
whom to condemn in a given conflict.

> Anyway, I now suspect that the discussion might go on for dozens of
> mails if we don't just abruptly stop; I had previously hoped that we
> would instead quickly either agree or agree to disagree on clear points.
> Or maybe we can just agree to disagree on the meaning and importance of
> laws?  The other points seem cleared up, I think.  I'm desperately
> looking for a way to end the discussion without requiring either side to
> accept giving the other the "last word," so help me a little...

OK, I will now stop.




* Re: Dijkstra's Methodology for Secure Systems Development
  2014-09-20 18:39           ` Left Right
@ 2014-09-21  1:33             ` Nala Ginrut
  2014-09-26 15:45             ` Ian Grant
  1 sibling, 0 replies; 21+ messages in thread
From: Nala Ginrut @ 2014-09-21  1:33 UTC (permalink / raw)
  To: Left Right; +Cc: ian.a.n.grant, guile-devel

[-- Attachment #1: Type: text/plain, Size: 5044 bytes --]

On 21 September 2014 at 2:39 AM, "Left Right" <olegsivokon@gmail.com> wrote:
>
> Sorry, I really only registered to submit a couple of bugs, but I
> couldn't miss the opportunity! Well, you see, there is a very well
> known ethical school of thinking which does not think that ethics is
> relative (I don't believe that too, but for other reasons).

Well, I think the topic has become a little ridiculous and off-topic.
In essence, I don't believe ethics is relative either. But that's not
related to the original topic of the thread, though it's interesting, IMO.
No matter what ethics is, it's worthwhile to hack the compiler & OS
to find the problem and solve it.
Please don't look for reasons to avoid such a hack. I understand it's a
hard hack. But we can't say it's unnecessary just because most people in
the world are friendly and not evil.

> Immanuel Kant is by far the best known proponent of universal ethics. I also
> happen to work on my future thesis, which is about formalization of
> ethics (as you would guess, if that's possible to formalize, or, at
> least, I believe so, then I also must believe it to be universal). The
> example I like to give in this debate (of course there are other
> famous schools of ethical thought which disagree with this) is that
> of an elevator, which must implement an ethical program in
> order to be considered functional (without going into detail, it is
> possible to construct an elevator which would be more efficient than
> those we normally use, but it would be perceived as unfair).
>
> To put a brief argument for Kant's view of the problem: he believed
> that the right thing to do is to act freely, he also believed that
> given the opportunity to act freely everyone would choose the same
> strategy. These ideas seemed quite solid at the time, but not so much
> any more. The world of philosophy of the day was deterministic and had
> very weird concepts of what reality is made of :) Nevertheless, many
> adopt his categorical imperative as a moral norm (which I don't think
> anyone should, but that's a separate story).
>
> Another great ethical thinker who believed in universal ethics is
> Aristotle. Surprisingly, he has a much better grounded view to offer.
> The collection of his views, also known in the modern world as teleology,
> survived a lot of paradigm shifts. (I subscribe to this idea too.) It
> was mostly advanced by philosophers of the Abrahamic religions, and so it
> is known in the modern world as Thomism or Avicennism, but it doesn't
> have to be religious in nature. I think it was just comfortable for
> religions which wanted to be universal to have a doctrine which also
> wanted to be universal. Put shortly, the premise of this doctrine is
> that it is good to give what is due, and bad otherwise. This,
> in a way, transfers the responsibility of answering the question of
> what is good onto what is due; but, in the same sense, we have logical
> systems which don't define what is true and what is false (this is
> necessarily defined outside the system), and they are still useful.
>
> The counterexamples, schools of ethical thought where good and bad are
> considered relative in one sense or another: of course utilitarianism
> and libertarianism. Basically, everything that nowadays has to do with
> the humanism of the Western world thrives on the assumption that ethics
> is relative, perhaps to an individual, maybe to a group, or maybe the
> time dimension makes it relative - it depends on which philosopher you
> pick.
>
> ----
>
> I also read the OP, and, I think that there are thoughts that could be
> useful, but it is unhelpful that the reaction creates a conflicting
> situation. I would suggest the following proposition to Ian Grant, I
> think it may be helpful:
>
> It is possible to build a good, solid mathematical model (and it seems
> like you are into that kind of programming, since you mention Dijkstra
> and Milner very often), but it will not map well to the actual
> observed phenomena. This is a very well-known problem in areas like
> molecular biology, particle physics and economics / social studies.
> For example, it is possible to come up with a model which, given
> some input DNA will make interesting inferences about it, but will be
> completely worthless for making predictions about how actual ribosomes
> synthesize polypeptides. Quite similarly, the hypothesis suggested by
> Milner, I think it was "properly typed programs can't be buggy"
> appears not to hold water. It is a good, consistent, even solid
> theory, but it doesn't capture the nature of programming. And it
> doesn't deliver on the promise. Programs in ML, too, have bugs.
> I don't say this to discourage you, I think that searching for this
> kind of models is important. I just wanted to say that maybe your
> conclusions have been premature and lacking the statistical evidence
> (lack of evidence isn't in itself a proof of the contrary).
>
> Best,
>
> Oleg
>



* Re: Dijkstra's Methodology for Secure Systems Development
  2014-09-20 12:46   ` Taylan Ulrich Bayirli/Kammer
  2014-09-20 13:16     ` Eli Zaretskii
@ 2014-09-21  9:04     ` Panicz Maciej Godek
  2014-09-21 11:11       ` Taylan Ulrich Bayirli/Kammer
  2014-09-29 18:07       ` Ian Grant
  2014-09-22  1:03     ` William ML Leslie
  2 siblings, 2 replies; 21+ messages in thread
From: Panicz Maciej Godek @ 2014-09-21  9:04 UTC (permalink / raw)
  To: Taylan Ulrich Bayirli/Kammer; +Cc: Ian Grant, guile-devel

[-- Attachment #1: Type: text/plain, Size: 3194 bytes --]

2014-09-20 14:46 GMT+02:00 Taylan Ulrich Bayirli/Kammer <
taylanbayirli@gmail.com>:

> Panicz Maciej Godek <godek.maciek@gmail.com> writes:
>
> > [...]
>
> First of all let me say I agree with you; guile-devel is the wrong place
> to discuss these things.
>
>
Having this settled, let's proceed with our discussion :)


> I also feel uncomfortable about having been painted as the only person
> agreeing with Ian.  According to him I was able to understand his idea
> at least, but I'm not clear on how it ties in with the rest of reality,
> like the possibility of hardware exploits...
>
> Still:
>
> > [...] the back doors can be implemented in the hardware, not in the
> > software, and you will never be able to guarantee that no one is able
> > to access your system.
>
> Hopefully hardware will be addressed as well sooner or later.


How can we know that the enemy isn't using some laws of physics that we
weren't taught at school (and that he deliberately keeps that knowledge out
of schools)? Then our enemy will always be in control! This reasoning,
although paranoid, seems completely valid, but it does abuse the notion of
"enemy", by putting it in an extremely asymmetrical situation.

>  In the
> meantime, we can plug a couple of holes on the software layer.
>
> Also, if the hardware doesn't know enough about the software's workings,
> it will have a hard time exploiting it.  Just like in the Thompson hack
> case: if you use an infected C compiler to compile a *new* C compiler
> codebase instead of the infected family, you will get a clean compiler,
> because the infection doesn't know how to infect your new source code.
>

So if I get it right, the assumption is that the infected compiler detects
some pattern in the source code, and once we write the same logic
differently, we can be more certain that after compilation, our new
compiler is no longer infected?
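
To make that assumption concrete, here is a toy sketch (Python; the
trigger string and function names are invented for illustration, and
this is a model of the idea, not real malware). A Thompson-style
backdoor must recognise the compiler's source in order to re-insert
itself; if the recogniser is keyed to a particular form of the source,
rewriting the same logic differently slips past it:

```python
# Toy model of a Thompson-style self-propagating backdoor.
# The "recognizer" here is a naive substring match, which a rewrite defeats.

TRIGGER = 'int compile(char *src)'   # hypothetical pattern the backdoor looks for
PAYLOAD = '/* backdoor */'

def infected_compile(source: str) -> str:
    """Stand-in for a compromised compiler: 'object code' is just the
    source text, with the payload appended only when the trigger matches."""
    out = source
    if TRIGGER in source:
        out += PAYLOAD
    return out

original  = 'int compile(char *src) { return do_compile(src); }'
rewritten = 'int compile(char *source_text) { return do_compile(source_text); }'

assert PAYLOAD in infected_compile(original)       # known codebase: re-infected
assert PAYLOAD not in infected_compile(rewritten)  # same logic, new text: clean
```

A real attack could of course match more robustly than a substring, but
the principle is the same: the infection only propagates into source it
can recognise.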

And couldn't we, for instance, take e.g. the Tiny C Compiler, compile it
with GCC, and look at the binaries to make sure that there are no
suspicious instructions, and then compile GCC with TCC?
Or do we assume that the author of the Thompson virus was clever enough
that all the programs that are used for viewing binaries (that were
compiled with an infected GCC) are also malicious and show different binary code,
hiding anything that could be suspicious? [But if so, then we could detect
that by generating all possible binary sequences and checking whether the
generated ones are the same as the viewed ones. Or could this process also
be sabotaged?]
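
The GCC/TCC cross-check sketched above resembles what is elsewhere
called "diverse double-compiling": build the same source through two
independent compiler lineages and compare the outputs byte for byte.
A minimal Python sketch of the comparison step only; the two "compiler"
functions are hypothetical stand-ins, since actually invoking GCC and
TCC is beyond a sketch:

```python
import hashlib

def fingerprint(binary: bytes) -> str:
    """Compare compiler outputs by digest rather than eyeballing instructions."""
    return hashlib.sha256(binary).hexdigest()

# Stand-ins for two independent lineages (e.g. TCC-as-built-by-GCC vs
# TCC-as-built-by-itself); in reality these would run the actual compilers.
def compile_with_lineage_a(source: bytes) -> bytes:
    return b'OBJ:' + source

def compile_with_lineage_b(source: bytes) -> bytes:
    return b'OBJ:' + source

source = b'int main(void) { return 0; }'
a = fingerprint(compile_with_lineage_a(source))
b = fingerprint(compile_with_lineage_b(source))
assert a == b  # matching digests: no lineage-specific tampering in this output
```

In practice one would compare the actual object files, e.g. with
sha256sum; a mismatch means at least one lineage is altering the output.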


> I think it's quite difficult to find a good balance between being too
> naive, and entering tinfoil-hat territory.  I've been pretty naive for
> most of my life, living under a feeling of "everything bad and dark is
> in the past" and that only some anomalies are left.  That seems to be
> wrong though, so I'm trying to correct my attitude; I hope I haven't
> swayed too much into the tinfoil-hat direction while doing so. :-)


Actually the direction the discussion eventually took surprised me a bit.
So maybe, to lighten the atmosphere, I shall include a reference to an
XKCD strip (plainly it was made up to lull our vigilance): http://xkcd.com/792/



* Re: Dijkstra's Methodology for Secure Systems Development
  2014-09-21  9:04     ` Panicz Maciej Godek
@ 2014-09-21 11:11       ` Taylan Ulrich Bayirli/Kammer
  2014-09-21 19:52         ` Panicz Maciej Godek
  2014-09-29 18:07       ` Ian Grant
  1 sibling, 1 reply; 21+ messages in thread
From: Taylan Ulrich Bayirli/Kammer @ 2014-09-21 11:11 UTC (permalink / raw)
  To: Panicz Maciej Godek; +Cc: Ian Grant, guile-devel

Panicz Maciej Godek <godek.maciek@gmail.com> writes:

> How can we know that the enemy isn't using some laws of physics that
> we weren't taught at school (and that he deliberately keeps that
> knowledge out of schools)? Then our enemy will always be in control!
> This reasoning, although paranoid, seems completely valid, but it does
> abuse the notion of "enemy", by putting it in an extremely asymmetical
> situation.

That's too far-fetched.  I think it's implausible.

That being said, I have no clear criteria for what is and isn't
plausible so far, other than personal intuition.  Does anyone have
something better?

> So if I get it right, the assumption is that the infected compiler
> detects some pattern in the source code, and once we write the same
> logic differently, we can be more certain that after compilation, our
> new compiler is no longer infected?

I think that's what it boils down to.

> And couldn't we, for instance, take e.g. the Tiny C Compiler, compile
> it with GCC, and look at the binaries to make sure that there are no
> suspicious instructions, and then compile GCC with TCC?

If the Tiny C Compiler is really tiny enough to make inspecting its
binary plausible, then that should work, AFAIUI.

> Or do we assume that the author of the Thompson virus was clever
> enough that all the programs that are used for viewing binaries (that
> were compiled with infected GCC) are also mean and show different
> binary code, hiding anything that could be suspicious? [But if so,
> then we could detect that by generating all possible binary sequences
> and checking whether the generated ones are the same as the viewed
> ones. Or could this process also be sabotaged?]

That also is possible but seems implausible to me.

All in all, the TCC approach should work for a one-time verification, though a
more general verification mechanism might be useful in the long term?

> Actually the direction the discussion eventually took surprised me a
> bit.
> So maybe to discharge the atmosphere, I shall include the reference to
> XKCD strip (plainly it was made up to lull our vigilance):
> http://xkcd.com/792/

That's a hilarious one. :-)

Still, one last political remark from me:

Things are more complicated.  Google might be incapable of evil, but
then they might be a tool of the US government.  Not calling the US
government "evil" either, but consider people like Julian Assange or
Edward Snowden.  Things get unpleasant, and someone with good ideals
ends up being dubbed a terrorist.  And they might not be able to become
part of the government to push their ideals into acceptance, so they
should at least have the ability to discuss them anonymously without
ending up on a watch list.

That's part of the reason I think free software is important, and I
think many people would agree.  (If you don't, or think my reasoning is
flawed, then let's just agree to disagree so we don't continue with OT.)

Taylan




* Re: Dijkstra's Methodology for Secure Systems Development
  2014-09-21 11:11       ` Taylan Ulrich Bayirli/Kammer
@ 2014-09-21 19:52         ` Panicz Maciej Godek
  0 siblings, 0 replies; 21+ messages in thread
From: Panicz Maciej Godek @ 2014-09-21 19:52 UTC (permalink / raw)
  To: Taylan Ulrich Bayirli/Kammer; +Cc: Ian Grant, guile-devel

[-- Attachment #1: Type: text/plain, Size: 1927 bytes --]

2014-09-21 13:11 GMT+02:00 Taylan Ulrich Bayirli/Kammer <
taylanbayirli@gmail.com>:

> [...]
> Still, one last political remark from me:
>
> Things are more complicated.  Google might be incapable of evil, but
> then they might be a tool of the US government.  Not calling the US
> government "evil" either, but consider people like Julian Assange or
> Edward Snowden.  Things get unpleasant, and someone with good ideals
> ends up being dubbed a terrorist.  And they might not be able to become
> part of the government to push their ideals into acceptance, so they
> should at least have the ability to discuss them anonymously without
> ending up on a watch list.
>
> That's part of the reason I think free software is important, and I
> think many people would agree.  (If you don't, or think my reasoning is
> flawed, then let's just agree to disagree so we don't continue with OT.)


I think I'd be insane to disagree with the need for free software.
All I want to say is that the FSF has already done a great deal of work by
popularizing the notion of free software, and although I wouldn't want to
diminish the significance of Ian's concerns, it's just too hard for me to
believe that, even if we only tackle the problem post factum (if we
actually are endangered), it will be too late to handle it (but I do agree
that I might be dead wrong on this point. There's even a proverb, "mądry
Polak po szkodzie" -- "a Pole is wise only after the harm is done").
On the other hand, the idea seems very interesting in itself, and this
alone makes it worth pursuing.
If there are people out there who believe that assuring that GCC binaries
are free from the Thompson virus is a crucial part of the FSF's mission,
then I have absolutely no intention of arguing with that, although I am
strongly convinced that it is reckless if a programmer suffers from
malnutrition or neglects personal hygiene of his own free will.



* Re: Dijkstra's Methodology for Secure Systems Development
  2014-09-20 12:46   ` Taylan Ulrich Bayirli/Kammer
  2014-09-20 13:16     ` Eli Zaretskii
  2014-09-21  9:04     ` Panicz Maciej Godek
@ 2014-09-22  1:03     ` William ML Leslie
  2 siblings, 0 replies; 21+ messages in thread
From: William ML Leslie @ 2014-09-22  1:03 UTC (permalink / raw)
  To: guile-devel

[-- Attachment #1: Type: text/plain, Size: 4313 bytes --]

I wasn't really sure if I should reply to this thread again, but I guess I
should clear up some of my thoughts and experiences here.

On 20 September 2014 22:46, Taylan Ulrich Bayirli/Kammer <
taylanbayirli@gmail.com> wrote:

> Panicz Maciej Godek <godek.maciek@gmail.com> writes:
> > [...] the back doors can be implemented in the hardware, not in the
> > software, and you will never be able to guarantee that no one is able
> > to access your system.
>
> Hopefully hardware will be addressed as well sooner or later.  In the
> meantime, we can plug a couple of holes on the software layer.
>

Yes.  While paying the right people at a cash-strapped AMD, Kensington or
Seagate to include something that sets up a backdoor is feasible, with
software, existing systems can be compromised.  Vectors that are very
difficult to detect are especially dangerous.



>
> Also, if the hardware doesn't know enough about the software's workings,
> it will have a hard time exploiting it.  Just like in the Thompson hack
> case: if you use an infected C compiler to compile a *new* C compiler
> codebase instead of the infected family, you will get a clean compiler,
> because the infection doesn't know how to infect your new source code.
> Similarly, if some hardware infection hasn't been fine-tuned for a piece
> of software, it will just see a bunch of CPU instructions which it has
> no idea how to modify to make an exploit.
>

Sometimes you have to have specific knowledge of the instruction sequence
used, other times not.  Programmatically searching for common memory-safety
bugs is not infeasible.  Further, depending on what interface the hardware
uses to the system, it could already have access to arbitrary memory or to
some part of the trusted path (such as a hard disk serving up an infected
kernel).  Arbitrary code execution as root or better is sometimes just a
matter of clobbering a pointer if you're connected via PCI.  Or, you could
convince the user to install a proprietary driver.



>
> Which I think brings us back to the "semantic fixpoint" thingy.  If we
> define some semantics that can be automatically turned into very
> different series of CPU instructions which nevertheless do the same
> thing in the end, it will get increasingly difficult for the infection
> algorithm to understand the semantics behind the CPU instructions and
> inject itself into this series of CPU instructions.

Maybe.  There are really levels of this kind of attack.  It may have been
easier to achieve a semantic form of this crack - rather than recognising a
huge range of syntactic variations of "this is the C compiler", they may
have used some semantic mechanism.

Equivalence of (possibly stateful) functions is a very interesting problem,
and while it's impossible to do generally, it is also one of the common
exercises (over a specific set of functions) when learning a proof
assistant.  Nevertheless - semantic versions of the Ritchie crack are not
simply a matter of "this looks like it's writing an ELF header" or "this
seems to be register allocation", and while that does seem a different
class of exploit to one that requires figuring out if gcc will insert an
imul or a lea, there are difficulties with that kind of attack that might
make it easier to detect:

Firstly, there is performance.  Doing that kind of semantic match on every
compilation may have some overhead (in terms of compilation time) that we
can detect.  Then again, we don't know exactly what that analysis might be
looking for, so we can't say how simple it might be.  Maybe it's doing
abstract interpretation on glibc's write?

Secondly, the analysis would need to be sophisticated enough to recognise
where it can insert itself without otherwise changing the semantics of the
compiler.
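
To illustrate the imul-versus-lea point: a diversifying code generator
can emit either of two equivalent lowerings for x * 9, so an attacker
keyed to one exact instruction pattern misses the other.  A toy model
(Python standing in for the code generator; the names are invented):

```python
import random

def mul9_semantics(x: int) -> list:
    """Two semantically identical 'lowerings' of x * 9, modelling the
    imul and lea instruction patterns a compiler might choose between."""
    imul_style = x * 9          # one multiply instruction: imul r, r, 9
    lea_style = x + (x << 3)    # address arithmetic: lea r, [r + r*8], since x + 8x = 9x
    return [imul_style, lea_style]

def lower_randomly(x: int, rng: random.Random) -> int:
    # A diversifying compiler picks a variant at random; a pattern-matching
    # attacker keyed to one instruction sequence misses the other.
    return rng.choice(mul9_semantics(x))

for x in range(16):
    assert set(mul9_semantics(x)) == {x * 9}  # both variants agree on the result
```

The attack would then have to match on meaning rather than form, which
is exactly the harder, slower analysis described above.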

Nevertheless, I don't have an immediate solution to that potential
problem.


-- 
William Leslie

Notice:
Likely much of this email is, by the nature of copyright, covered under
copyright law.  You absolutely MAY reproduce any part of it in accordance
with the copyright law of the nation you are reading this in.  Any attempt
to DENY YOU THOSE RIGHTS would be illegal without prior contractual
agreement.



* Re: Dijkstra's Methodology for Secure Systems Development
  2014-09-20 18:39           ` Left Right
  2014-09-21  1:33             ` Nala Ginrut
@ 2014-09-26 15:45             ` Ian Grant
  1 sibling, 0 replies; 21+ messages in thread
From: Ian Grant @ 2014-09-26 15:45 UTC (permalink / raw)
  To: Left Right; +Cc: guile-devel

    http://livelogic.blogspot.com/2014/09/beware-good-and-wise.html




* Re: Dijkstra's Methodology for Secure Systems Development
  2014-09-21  9:04     ` Panicz Maciej Godek
  2014-09-21 11:11       ` Taylan Ulrich Bayirli/Kammer
@ 2014-09-29 18:07       ` Ian Grant
  1 sibling, 0 replies; 21+ messages in thread
From: Ian Grant @ 2014-09-29 18:07 UTC (permalink / raw)
  To: Panicz Maciej Godek; +Cc: Markus Kuhn, guile-devel, gcc@gcc.gnu.org

The following is a response to what some may think an implausible
suggestion made here:

    http://lists.gnu.org/archive/html/guile-devel/2014-09/msg00124.html

The suggestion is that the system of education has been subverted so
that there are "unknown" physical laws which give "the unseen enemy"
strange powers over the semantics of computer hardware and software.

   http://livelogic.blogspot.com/2014/09/subverting-laws-of-physics.html

I make what I think most people will find a convincing argument
that this is in fact the case.




end of thread, other threads:[~2014-09-29 18:07 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2014-09-19 15:29 Dijkstra's Methodology for Secure Systems Development Ian Grant
2014-09-19 22:22 ` Panicz Maciej Godek
     [not found]   ` <CAMFYt2YCMizV3djOhKWRxYUoFq9fMJqaQ-jBYjYJHH1puDmT7Q-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-09-19 23:17     ` Ian Grant
     [not found]       ` <CAKFjmdyH4SYVrpO64BOV3-Nfu7BCVVUnq07wzYQi43zO_VBU6g-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-09-19 23:47         ` Ian Grant
2014-09-20  8:50           ` Panicz Maciej Godek
2014-09-20 13:53             ` William ML Leslie
2014-09-20  8:27       ` Thien-Thi Nguyen
2014-09-20 12:46   ` Taylan Ulrich Bayirli/Kammer
2014-09-20 13:16     ` Eli Zaretskii
2014-09-20 16:45       ` Taylan Ulrich Bayirli/Kammer
2014-09-20 17:58         ` Eli Zaretskii
2014-09-20 18:39           ` Left Right
2014-09-21  1:33             ` Nala Ginrut
2014-09-26 15:45             ` Ian Grant
2014-09-20 19:11           ` Taylan Ulrich Bayirli/Kammer
2014-09-20 19:51             ` Eli Zaretskii
2014-09-21  9:04     ` Panicz Maciej Godek
2014-09-21 11:11       ` Taylan Ulrich Bayirli/Kammer
2014-09-21 19:52         ` Panicz Maciej Godek
2014-09-29 18:07       ` Ian Grant
2014-09-22  1:03     ` William ML Leslie

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).