Friday, 25 April 2014

What is a user interface supposed to do?

Let us assume, initially at least, that the only thing a computer can do properly is deconstruct one recursive data structure (including any iterative data structures such as lists and arrays), whilst constructing another.  What is the user interface of a general purpose computer going to look like, then?

As we use the machine, we necessarily interact with it, which means that we adjust the machine state according to our intentions. We are typically choosing from lists of available options in the given input state: in other words, through our conscious intentions, we are choosing or determining the actual state transitions from amongst the possible state transitions. So the user interface must allow us to know what the machine state is, and to alter it according to our wishes. The machine state has three parts: the structure being deconstructed, which is the input; the program that is doing the deconstruction, which is the control; and the structure it is constructing in the process, which is the output. Therefore the requirements of a user interface are that we need to see, or at least be aware of, the possible state transitions at any given instant. Then we need to be able to choose a new state from amongst those possible, and we then need to see, or at least to be aware of, the subsequent actual state, because this will be the start of the next interaction with the machine. This is the principle of a feedback loop, and it is fundamental to any cybernetic system.
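As a minimal sketch of this feedback loop, and nothing more, here is a toy three-state machine in C (the states and all the names are invented for the illustration): it shows the possible transitions from the actual state, lets the user choose one, and then reports the subsequent actual state.

#include <stdio.h>

enum state { IDLE, RUNNING, HALTED };

static const char *name[] = { "idle", "running", "halted" };

/* possible[s] lists the states reachable from s, terminated by -1 */
static const int possible[3][3] = {
    { RUNNING, -1, -1 },       /* from idle we may start the machine   */
    { IDLE, HALTED, -1 },      /* from running we may pause or halt    */
    { -1, -1, -1 }             /* halted is terminal                   */
};

int main (void) {
    int s = IDLE, choice, i;
    while (possible[s][0] != -1) {
        printf ("actual state: %s; possible transitions:\n", name[s]);
        for (i = 0; possible[s][i] != -1; i++)
            printf ("  %d: %s\n", i, name[possible[s][i]]);
        if (scanf ("%d", &choice) != 1 || choice < 0 || choice >= i)
            break;                      /* no valid choice: stop        */
        s = possible[s][choice];        /* the user determines the step */
    }
    printf ("final state: %s\n", name[s]);
    return 0;
}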

Of course it is not only the user who is interacting with the machine; the environment does too, through various connections such as internal thermal sensors, when the ambient temperature changes; rapid changes in air pressure interact with it via a microphone; and visible electromagnetic radiation interacts with it via a camera. Other machines interact with it too.  For example, a phone interacts with it via a short-range radio connection such as WiFi or Bluetooth, another computer interacts with it via a network connection, and a satellite interacts with it via a GPS radio receiver connected to a USB interface. These interactions are all composed of synchronous events, so-called because they are represented simultaneously on both sides of the interface. There are also asynchronous events, when the machine state changes spontaneously, marking the passage of internal time according to some cyclical frequency standard: in these cases, the new actual state is one from which a transition to the subsequent state in the cycle is possible. Asynchronous events are not interactions; they are just actions. There are other types of asynchronous event, such as the generation of hardware random numbers, which is typically done by counting the number of synchronous events that occur between consecutive events in an asynchronous cycle.

The phase space of a system is the set of all its possible internal states, and the system state at any moment is one single point in that space. Now in order to use a machine, we need to determine what it is supposed to be doing. So we need to be able to interpret the information we have representing the internal state. The act of interpretation of that information is what we call the meaning or significance of the computation or communication. The term significance shares a common root with words like sign, and in an automatic system, it is the significance of the state, combined with our intentions, which determines the subsequent state. This whole process of interpreting information and intentionally determining the subsequent state transitions is called operational semantics.

The amount of information we need to interpret to determine the meaning of a single point in the system phase space is effectively infinite, because that state includes not only the internal state of the machine's memory and storage, but also the state of the environment with which the system interacts. This interaction could be via waste heat and temperature sensors, via network interfaces, via stray electromagnetic emissions and radio transmitters and receivers, via loudspeakers and microphones, or via video displays and cameras, etc., etc.

Therefore, to do useful work with a computer, we need to partition the phase space into finitely many classes, such that all of the system states within any one class have the same significance. But how we choose to make these partitions depends crucially on our intentions: i.e. what we are expecting the system to do. In yet other words, it depends on what the system is supposed to do. One way to make such a partition is to conceptually divide the whole into subsystems. A subsystem is a projection of the phase space of the whole system onto a lower-dimensional space. Then we can choose to make a measurement of the state of a subsystem by determining a point of this lower-dimensional space.  Each single point of the subspace then corresponds to a region (a multi-dimensional volume consisting of many points) of the phase space of the whole system.
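As an illustration, with invented names, of a subsystem as a projection: the whole state below has several components, a measurement of the keyboard subsystem ignores all the others, and two whole states fall into the same equivalence class exactly when their keyboard projections agree.

#include <string.h>

struct whole_state {
    char keyboard[16];      /* the last few keys pressed */
    short audio[64];        /* recent audio samples      */
    unsigned long network;  /* bytes received so far     */
};

struct keyboard_state { char keys[16]; };

/* the projection of the whole phase space onto the keyboard subspace */
struct keyboard_state project (const struct whole_state *w) {
    struct keyboard_state k;
    memcpy (k.keys, w->keyboard, sizeof k.keys);
    return k;
}

/* equivalence of whole states with respect to this one subsystem */
int equivalent (const struct whole_state *a, const struct whole_state *b) {
    struct keyboard_state ka = project (a), kb = project (b);
    return memcmp (ka.keys, kb.keys, sizeof ka.keys) == 0;
}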

Some subsystem divisions are called orthogonal, because they effect a partition of the phase space of the whole system into disjoint equivalence classes, which are independent in the sense that a change of sub-state in one subspace does not affect the sub-state in any other subspace. In such a partitioning of a system, learning about the sub-state of one part tells you nothing about that of another part. For example, one typically does not expect to be able to learn anything about the keys pressed on the keyboard from observations of the state of the computer's audio input. In this sense, then, the two subsystems could be said to be orthogonal. But this is an abstraction we make, and it is not always the case, because if there is a microphone connected to the audio input then these two particular subsystems are coupled through the environment, thermally as well as electromagnetically. It is the entanglement of the discrete, finite, internal states of the system with the whole (non-determinable) environmental state that makes the actual phase space of the whole infinite.

This is not a problem, however, because it is only the infinite information in the environment which makes the observable behaviour of computers interesting: we can use them to measure things. And the environment, don't forget, includes the mind of the user of the machine who can influence the state transitions of the system according to what they actually know. So for example, the system state can tell us things that are consequences of our knowledge, but ones of which we were not previously aware. For example, it may formally prove a new mathematical theorem.

In any act of measurement, which includes any computation or communication, we interpret the information input from the environment, and that tells us about temperature, intensity of laser light, stock market prices, news headlines, mathematical theorems and so on and so forth. The act of interpretation is crucial to the whole enterprise, but, presumably because it is so natural and instinctive, it is all too often forgotten by theorists. Quantum physicists seem particularly prone to this omission. It is very rare to see the question of interpretation of physical measurements mentioned in introductory texts on quantum mechanics: almost without exception they seem to take it for granted.  But without this possibility of interpreting the environmental entanglement of the system state, computers and clocks would be deterministic and, like Turing's a-machines, utterly incapable of telling us anything we didn't already know.

The process of measurement is always a restriction to a finite amount of information representing some part of the whole environmental state of the system. For example, a microphone input measures the differences in air pressure at a rate of 48kHz, within some particular range, and interprets them as tiny voltage fluctuations which an ADC measures and interprets as binary values on a 16-bit digital input port register. Note that the finite fidelity of the input information is important: without this, we would not be able to assign any significance to a measurement, because, continuing this example, we wouldn't know whether we were measuring the aging of the microphone diaphragm, the barometric pressure change caused by an incoming weather-front, an audible sound, or an asteroid impact.
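A hedged sketch of this restriction to finite fidelity (the full-scale range, the clipping behaviour and the function names are assumptions made for the example, not a description of any particular ADC): a continuous voltage is reduced to one of 65536 values, 48,000 times a second.

#include <stdio.h>
#include <stdint.h>

#define FULL_SCALE_VOLTS 1.0   /* assumed input range: -1.0 .. +1.0 V */

int16_t quantise (double volts) {
    double x = volts / FULL_SCALE_VOLTS;
    if (x > 1.0) x = 1.0;           /* clip anything out of range ...   */
    if (x < -1.0) x = -1.0;         /* ... so every reading has a value */
    return (int16_t)(x * 32767.0);  /* 16 bits of fidelity, no more     */
}

int main (void) {
    /* one of the 48,000 measurements made each second */
    printf ("%d\n", quantise (0.000123));
    return 0;
}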

Therefore the user interface must allow us to determine the programmed responses of the system to the limited information representing the subsystems we have abstracted from its environment, because it is only by changing these programmed responses that we can determine the subsystems that we are measuring, and by which we attach significance to the information we receive.

For example, in order to receive a written message by e-mail, we typically restrict our attention to a limited part of a screen, in which the strings of character glyphs representing the written words of the message are represented by, say, black pixels on a white background. In determining the significance of the message, we ignore the rest of the screen, and all the rest of the environment with which the system state is entangled. So, for example, the intended significance of the message is not typically affected by the sound that happens to be playing through the speaker at the time, nor by the contents of messages subsequently received, but as yet unread. Of course the significance of the message may be affected by these things, but we say not typically to emphasise the importance of the intention of the user in choosing the representation of the channel this way. Had she chosen to have some of her e-mail messages read out aloud by speech synthesis, then the audio output would typically affect the significance of the messages.

Now let us give an extended example of computer use. Imagine Alice is supposed to produce a series of satellite images, overlaid with graphics representing a road network developing over a period of fifteen years or so. In her home directory, she has an ASCII text data file, giving sectors of roads of different types and different construction dates, represented as sequences of points given as eastings and northings from a certain false origin on a transverse Mercator grid with a datum on the Airy spheroid. The grid coordinates are in units of kilometres, given to 3 decimal places. The road types are just numbers 1, 2 and 3, and the dates are numeric, in the form DD/MM/YY. She also has a URL giving access to a USGS database of satellite images via a Java applet which allows her to select the mission, the image sensor, one square of the 30 arc-second mosaic defined in the WGS 84 coordinate system, the year, the month, and the minimum percentage cloud cover. The applet allows her to view thumbnail GIF images, and to request the full resolution images, which are JPEG data embedded in GEOTIFF files, and sent by e-mail within a few hours.
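The exact layout of Alice's data file is not given here, so assume, purely for illustration, that each sector occupies one line of the form 'type DD/MM/YY easting northing easting northing ...', with coordinates in kilometres to three decimal places. A sketch of a parser for such a file might look like this:

#include <stdio.h>

#define MAX_POINTS 1024

struct sector {
    int type;                  /* road type: 1, 2 or 3            */
    int day, month, year;      /* construction date DD/MM/YY      */
    int npoints;
    double e[MAX_POINTS], n[MAX_POINTS];   /* eastings, northings */
};

/* Read one sector; returns 1 on success, 0 at end of file. */
int read_sector (FILE *f, struct sector *s) {
    char line[8192];
    int pos, used;
    if (fgets (line, sizeof line, f) == NULL)
        return 0;
    if (sscanf (line, "%d %d/%d/%d%n",
                &s->type, &s->day, &s->month, &s->year, &pos) != 4)
        return 0;
    s->npoints = 0;
    while (s->npoints < MAX_POINTS
           && sscanf (line + pos, "%lf %lf%n",
                      &s->e[s->npoints], &s->n[s->npoints], &used) == 2) {
        pos += used;
        s->npoints++;
    }
    return 1;
}

int main (void) {
    struct sector s;
    int count = 0;
    while (read_sector (stdin, &s))
        count++;
    printf ("%d sectors\n", count);
    return 0;
}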

What she needs to do is parse the data file defining the roads, order the sectors of roads by increasing construction date, then find matching satellite images around the same time for the different tiles of the region of interest. Then she needs to find those images with the least cloud cover, request the files from the server, and when the e-mails arrive, extract the data files from the mail message.  She must then tile these images together using a spherical projection, according to the geographical coordinate metadata in the TIFF headers.

Now she needs to find all the sectors of roads that were constructed before the date of the most recent tile in that composite image. Then she needs to convert the grid points of the roads to WGS 84 coordinates using the same spherical projection, to give page coordinates which are used to overlay SVG paths representing the roads, according to a line-style and width determined by road type, and a colour key indicating the approximate age of the sectors. Then she needs to project the UTM grid, draw the border of the WGS84 graticule, and overlay a title and a key showing the construction dates of the roads. And she needs to do this at approximately two year intervals, covering a period of fifteen years. These images will form part of a written presentation containing other text and images, some part of which will be presented on a television programme via a Quantel video graphics system which will broadcast directly from digital representations of the images in a proprietary format, reading them from an ISO-9660 format CD-ROM and inserting them into a video stream according to synchronous cues provided by the presenter pressing a button.

While she's doing this, her phone rings. She turns down the volume of the music to answer the call. Afterwards she sends an e-mail to someone else, after looking up their e-mail address on a web page. Finally she makes a note of a scheduled meeting in her diary. That done, she turns up the volume of the music a little, and carries on with her work.

It turns out, then, that the user interface is primarily concerned with switching information channels from one source to another.  Now by assumption, a channel is some kind of recursive data structure, and a source is a process which deconstructs one such recursive structure, the input, and constructs another, which is the output.  This general form of process is called interpretation.

Now we can't yet answer our original question, but we can say this much:

   A user-interface is a means to create new information channels by plugging sources into existing channels.

So that's what a user interface actually does. It remains to say what it is supposed to do. And the answer is in that word supposed, because what it is supposed to do is what we intend it to do. This means that an effective user interface must be intensional: it must allow us to express the operation of the system intensionally. This is because it is only intensional representations which can be effectively interpreted. An intensional representation is one that is essential, in so far as it is free of any and all accidents of representation, such as superfluous parentheses, white space, and arbitrary variable names. So the ideal user interface should allow one to express the intended operation of the system using abstract syntax, rather than the concrete syntax one typically works with in a text editor.
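As a small illustration of what 'free of accidents of representation' means (the type here is invented for the example, not taken from any particular system), consider an abstract syntax for arithmetic expressions in C: there is simply nowhere in it to put superfluous parentheses or white space, and precedence is expressed by the shape of the tree alone.

struct expr {
    enum { NUM, ADD, MUL } tag;
    int value;                      /* used when tag == NUM      */
    struct expr *left, *right;      /* used when tag == ADD, MUL */
};

/* (1 + 2) * 3 and ((1 + 2)) * 3 are one and the same value here: */
/*     MUL (ADD (NUM 1, NUM 2), NUM 3)                            */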

Let us make this very clear: the notion of using text representation to express semantics is fundamentally wrong. Anyone who doesn't believe this need only take a brief look at the two examples on lists given on page 91 of Girard, Lafont and Taylor's Proofs and Types to see just what kind of a mess one inevitably gets into if one tries to deal with the semantics of concrete syntax, such as operator precedence and associativity. It is not a simple kind of mess! If you like clearing up that sort of problem, then you're welcome, but you will only ever be clearing up one particular instance, and you will never be bored again, because there will always be plenty more such problems for you to solve. The only general solution is to avoid concrete syntax altogether.

This would have the not insignificant side-benefit of avoiding so-called 'religious wars' such as that between some adherents of OCaml and Standard ML. This is a problem only because programmers insist on using linear text representations of program source code. If they switched to using abstract syntax representation then programs in either of the two languages could be translated immediately, one into the other (notwithstanding semantic insanities such as the typing of OCaml's optional arguments, which not even the OCaml compiler can interpret).

Using abstract syntax would have other advantages too, because editors could easily rename variables, merge change sets, and even translate program keywords into other languages. For example, someone in Pakistan could use Standard ML with Urdu syntax, written right to left, and with meaningful Urdu names for keywords and program variables, and these could be automatically translated, via a dictionary, into Chinese OCaml, written from top to bottom, and with variable names that mean the same things in Chinese. Then someone in China could edit the same source code in Chinese OCaml and send change sets to Pakistan, where they would appear to have been made in the Urdu dialect of Standard ML. This would be an example of religion, which means joining back together.

Our provisional answer has evolved a little:
We need user interfaces to allow us to express the intensional semantics of plugging information sources into channels.
It is the presence of that word intensional that is most crucial. The problem Alice has to solve in the extended example of a user interface we gave above would be trivial if systems were specified using only abstract syntax. But as it is now, well, just try to do what she has to do, and tell us how long it takes you. You can use any languages, libraries and tools you like. And then tell us how long you think it would take you to solve the same problem, with the same data, but after we've changed the concrete representation: i.e. the character sets, the file formats, the transmission medium, the communications protocols, the programming languages and tools, the measurement units and the coordinate systems?  The answer would be "about the same time, or rather longer," I expect. But these two instances are essentially the same problem. If only we had an intensional abstract syntax representation of the algorithms, the application, the programming languages, the file formats and the protocols, we could change any of these things with half a dozen key-strokes. Then either problem could be solved in ten minutes, even allowing for the time she has to take out to deal with the phone call and to schedule the subsequent meeting. And once given the solution to one problem, she could solve the other effortlessly in ten seconds, because it would be just a matter of a few trivial identifier substitutions.

The answer to the question "What is a user interface supposed to do?" is:
A user interface is supposed to allow us to compose abstract syntax representations of intensional descriptions of operational semantics of arbitrary languages.
Or, if we put it in less abstract terms:
A user interface is supposed to allow us to compose abstract syntax representations of the semantics of arbitrary languages.
And because it is supposed to do this, and because arbitrary languages are just arbitrary abstract syntax:
A user interface uses abstract syntax to specify abstract syntax representing the semantics of arbitrary abstract syntaxes.
So a user interface allows one to create and edit abstract syntax according to arbitrary grammars. Now if one of those grammars were the grammar of a language for expressing context-free grammars, and if that were itself a context-free grammar, then it could be expressed in its own language, and it would be the only language one would need because all others could be defined in that language.
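To make the idea of a grammar for grammars slightly more tangible, here is a sketch, with invented names, of a data structure for context-free grammars in C: a grammar is a list of rules, each rewriting one nonterminal to a sequence of symbols. The grammar of this very notation of grammars can itself be written down as a value of the type, which is the self-description suggested above.

/* A context-free grammar, represented abstractly. */
struct symbol {
    int is_terminal;            /* 1 for a terminal, 0 for a nonterminal */
    const char *name;           /* terminal spelling or nonterminal name */
};

struct rule {
    const char *lhs;            /* the nonterminal being defined         */
    int nsymbols;
    struct symbol *rhs;         /* the sequence it rewrites to           */
};

struct grammar {
    int nrules;
    struct rule *rules;
};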

Monday, 21 April 2014

What is wrong with Unix?

The problem with Unix is that "everything is a text file." Unix systems typically use linear text to represent structured data. This is a mistake, because there are many different linear sequences of characters that represent any particular piece of structured data, so the interpretation of data files is complex and therefore slow.

A parser is a program which takes a linear sequence of lexical tokens, which is a concrete representation, and produces a data structure, which is an abstract representation. Because there are many different concrete representations of any given abstract representation, the parser effectively quotients or partitions the space of possible concrete representations into equivalence classes each identified by the abstract representation of all the equivalent concrete representations in that class.

The problem is noise: a parser converting a text file into abstract syntax is in fact doing the same thing as an error correcting decoder on a communications channel. We can think of the parser as a decoder, receiving a message on the input, and interpreting it to produce an abstract representation of the semantics of the message. Just as in the case of a Hamming code, in which there are many redundant forms of transmission which lead to the same message, the parser translates each of the many different possible concrete representations of the abstract syntax into the same message.

The translation process involves thermodynamic work, because the parser must reduce the information in the machine state: there is always more information in the input textual representation of the source code than there is in the output abstract syntax representation, so the system, in parsing the input, reduces the information of the machine's state, and therefore it must produce entropy in the machine's environment. And any mechanical process which produces entropy is a process which takes time and consumes energy.

In the C programming language, for example, the term 'white space' denotes any sequence consisting entirely of space, tab or newline characters. As far as the parser of the C compiler is concerned, all sequences of white space are equivalent. Therefore white space makes the character sequence representation of programs redundant: we can replace any white space sequence of a program file with a different white space sequence, without changing the abstract syntax that program represents. Another source of redundancy in C language source code is the operator precedence and associativity rules, which mean that in certain circumstances parentheses may be redundant. So, for example, any expression enclosed in well-balanced parentheses can be replaced by the same expression with one extra pair of parentheses. The following four files all define the same abstract syntax:
#include <stdio.h>
int main(int argc, char **argv)
 {
   printf ("Hello, World.\n");
 }

#include <stdio.h>
int main(int argc,
         char **argv)
   {printf ("Hello, World.\n");}

#include <stdio.h>
int main(int argc, char **argv)
{printf (("Hello, World.\n"));}

#include <stdio.h>
#define S "Hello, World.\n"
int main(int argc, char **argv) {
   printf (S);
}

Textual representation seems like a good idea, because it offers a  very simple and elegant solution to the problem of concrete representation, which is in some sense the essence of the problem which the idea of an operating system is an attempt to solve.

For example, a C compiler may represent the number 365 as a sixteen-bit binary integer 0000000101101101. On an Intel i386 system, this number would be stored in two consecutive bytes, with the least significant byte first: 01101101 00000001. But on a Motorola 68000 system, the same number would be stored with the most significant byte first: 00000001 01101101. So, if these systems were to exchange the blocks of memory representing the number 365, over a network connection for example, then each would receive the number 27,905 from the other. If instead, numbers were represented as null-terminated ASCII strings, then the number 365 would be represented as four consecutive bytes 00110011 00110110 00110101 00000000 on both systems.
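A small illustrative program (ours, not part of the Unix example that follows) makes the point concrete: on a little-endian machine it prints the stored bytes 6d 01, on a big-endian machine 01 6d, and a sixteen-bit value whose two bytes are read in the wrong order comes back as 27,905 rather than 365.

#include <stdio.h>
#include <stdint.h>

int main (void) {
    uint16_t n = 365;                         /* 0x016d */
    unsigned char *b = (unsigned char *) &n;
    printf ("bytes in memory: %02x %02x\n", b[0], b[1]);
    printf ("byte-swapped: %u\n", (unsigned) ((uint16_t) (n << 8 | n >> 8)));
    return 0;
}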

Unix systems have a command line editor, ed, which can be used to edit text files. For example, the following shows the creation of a text file hello, containing just one line, which is the text Hello, world.:

$ ed
a
Hello, world.
.
w hello
14
q

The first line invokes the editor program ed from the shell command prompt '$'. The editor is invoked with an empty text buffer. The second line is the command 'a' to append lines at the current point. The third line is the line to append, and the fourth line, a single full stop, signals the end of the append operation. The next line 'w hello' writes the buffer to the file hello, and outputs 14, which is the number of characters written. Finally the 'q' command terminates the editor and returns the shell command prompt.

On any Unix system, the C compiler, cc, takes as an argument the name of a text file representing a C program, and compiles it to machine code, writing the output to another file. For example, the following program parses a list of numbers from the command line, adds them, and then prints the result. We can use ed to create a text file add.c containing the lines:
#include <stdio.h>

int stoi (char *s) {
    int r=0;
    char c;
    while (c = *s++) {
       if (c >= '0' && c <= '9') {
          r = r * 10 + (c - '0');
       }
    }
    return r;
}

int main (int argc, char **argv) {
    int r=0;
    while (--argc) {
        r += stoi (*++argv);
    }
    printf ("%d\n", r);
}
We can compile this program by typing the following command at the '$' shell prompt:
$ cc -o add add.c
The resulting executable machine code program, in the file add, can then be run from a shell script, which is a text file containing a sequence of shell commands. For example, we could use ed to create the shell script file test.sh containing just one line:
./add 300 65
And then we could run the shell script test.sh from the shell prompt, thus:
$ . ./test.sh
365
Now everything here was done by using text file representations of data and programs, and so this sequence of representations will work on any Unix machine, regardless of the convention for storing binary integers in memory, and regardless of the encoding used to represent characters. And this shows that the underlying machine representations of the data are arbitrary.

In Unix systems, the kernel, the shell sh, the editor ed, the C compiler, cc, and all the other system commands and applications are each just C programs, represented by some text files. So 'the Unix philosophy' of representing all data as text files and using system programming languages which interpret these text files, achieves independence from the underlying machine representation of data in memory. That is how Ken Thompson and Dennis Ritchie, the designers of Unix, abstracted the operating system from the particular accidents of the physical hardware, and it is a clever idea.

But as Thompson himself points out in his famous ACM Turing Award lecture "Reflections on Trusting Trust," this very fact means that the correspondence between the textual representation of a program's source code, and the actual operations carried out by the Unix system as it executes that program, is entirely arbitrary. In other words, the actions performed by the machine are completely independent of the concrete syntactic representation of the system. In yet other words, in case you still don't get it: the sequences of characters one sees in the source code text files are in fact utterly meaningless, and Unix is therefore, in principle, an insecure Operating System. And this was presumably the substance of the United States Air Force report on the security of an early implementation of the Multics operating system, to which Thompson attributes this idea.

As Thompson says in the lecture, the problem is a profound one, and it stems from the fact that the source code of any particular Unix system has a unique abstract syntax representation. It is essentially the uniqueness of the abstract syntax that enables the parser of the C compiler to identify the source of any system command, and thereby choose an arbitrary abstract syntax representation, which could be of any operation whatsoever: Trojan semantics.

The solution turns the problem up-side-down: if we use an abstract syntax representation of the program source, we don't need the parser, and therefore there is no opportunity for the parser to choose an arbitrary operational semantics for the source code. Then the process of compilation need not involve thermodynamic work: in converting the abstract syntax into machine code, there is not necessarily any reduction in information, because it is possible that the particular abstract syntax representation is the smallest information (i.e. the shortest message) which could produce the intended operational semantics. So the system source code is not the source code of any particular implementation, it is the source code of the general implementation.
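As an illustration of the point, and nothing more, here is a sketch in which the 'source code' is already abstract syntax, and translation to a toy stack machine is a direct structural walk: there is no parser anywhere in the chain that could quietly substitute different semantics. The expression type and the instruction names are invented for the example.

#include <stdio.h>

struct expr {
    enum { NUM, ADD, MUL } tag;
    int value;
    struct expr *left, *right;
};

/* Translate abstract syntax directly into code for a toy stack machine. */
void compile (const struct expr *e) {
    switch (e->tag) {
    case NUM: printf ("PUSH %d\n", e->value); break;
    case ADD: compile (e->left); compile (e->right); printf ("ADD\n"); break;
    case MUL: compile (e->left); compile (e->right); printf ("MUL\n"); break;
    }
}

int main (void) {
    struct expr one = { NUM, 1, 0, 0 }, two = { NUM, 2, 0, 0 };
    struct expr three = { NUM, 3, 0, 0 };
    struct expr sum = { ADD, 0, &one, &two };
    struct expr product = { MUL, 0, &sum, &three };
    compile (&product);     /* PUSH 1, PUSH 2, ADD, PUSH 3, MUL */
    return 0;
}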

Why is this turning the problem up-side-down? It is because by doing this, we can choose an arbitrary abstract syntax representation for any given operational semantics. So the tables are turned: the bad guys can no longer choose an arbitrary operational semantics for a given abstract syntax representation, because the good guys can choose an arbitrary abstract syntax representation of a given operational semantics. Rather than reflecting on trusting trust, we are reflecting on reflecting on trusting trust, which is the same as simply trusting trust.

But how can this be done in practice? We can use metaprogramming. As Thompson points out, the problem with Unix is that you cannot trust any program that you did not write yourself. But using metaprogramming, we don't write the programs we use, we write programs that write the programs we use. So all we need to do is to make sure that we wrote the top level program. But the top-level program is just the program which interprets an arbitrary abstract syntax representation according to an arbitrary abstract representation of the semantics. Then when that abstract syntax represents arbitrary abstract syntax, and that semantics is also arbitrary abstract semantics, this top-level program will produce itself, or a program which, when fed the same abstract syntax and semantics, will produce itself. This will then be the greatest fixed point and we will know the system is sound and complete.

Having produced one such system in some specific target language, Scheme, say, it would be easier to define a system for another language, such as Standard ML, because we could use the first system to compile the second. We could then construct the greatest fixedpoint again, by crossing over the abstract syntax compilation steps, so that each system compiled the input of the other. We would then have a pair of fixed points which we knew were as reliable as each other.

We could then add semantics for another language, JavaScript, say, and construct a third fixedpoint, which would be each of the three systems crossed with each of the other two. Even though the encodings of the same abstract syntax in the three cases may be entirely different, these three systems could share abstract syntax representations, and we could be certain that all three representations, though different as representations, would have identical operational semantics.

The crucial step is the use of abstract syntax using an arbitrary encoding. Because of this, there is no possibility whatsoever of a Trojan recognising which program is being compiled, so there is no possibility of it reproducing itself in the encoded representation.

This is the foundation from which one could proceed to define abstract types representing the full set of instructions for each and every processor, and the associated encoding specifications which would encode those instructions with the given arguments. From this abstract specification, one could produce tools such as Intel's XED, automatically, for all processors, and in any language. And using that, one could specify an assembler. The next stage would be to specify a code generator for a generic bytecode language, and from there one could formally specify other languages such as C.

The purpose of this higher level development is not to generate a better C compiler: it is unlikely that a C compiler specified this way would be as good as, say, GCC. The purpose of adding metaprogrammed higher-level languages would be to enable more fixedpoints to be established. If the encoding representations can be produced in C, then the C compiler would be included in the fixedpoint, and this would provide the foundation for generating GCC binaries that would be certain not to have any Trojan semantics. This would then extend to the entire GCC toolchain, and any operating systems built with GNU tools.

The aim, however, is not to secure operating systems: that is only a temporary measure. The aim is to metaprogram all applications and algorithms, so that we don't need any operating systems at all; just the one global commons, the Mother of all programs: the Foundation.

Thursday, 17 April 2014

What is the Sixth Estate?

The Sixth Estate is a new genre of cinema. The motivation for the film is the overwhelming sense of frustration felt by scientists and engineers when they see movies which gloss over explanations of apparently difficult ideas. For those who actually understand them, the ideas themselves are far more interesting and exciting than any mere movie that could ever be made, so any attempt to make those ideas more exciting and interesting to a lay-audience inevitably destroys them.

The Sixth Estate is the story of how a film was made. The story is a true story, and it is the story of the making of the film called The Sixth Estate. Thus the Sixth Estate is an educational film. It is educational in two ways: exoterically, in so far as the film recounts true historical events, and esoterically, in so far as the making of the film constitutes those very historical events which it recounts. So the Sixth Estate is a semantic fixedpoint.

The story actually began a long, long time ago: when Man first learned that we make the world together, by imagining how it could actually be. But as recounted in the film, it begins at Episode Six. The contemporary setting is the Computer Laboratory at the University of Cambridge, England, in the year 2003, and it opens with a bizarre idea conceived by bored, caffeine intoxicated research students in the Security Group, who decide, late one night, to carry out an experiment using a live subject. But the experiment is ethically questionable, because to effectively demonstrate the intended principle, the subject must never actually know that she is in fact the subject of an experiment.

The principle the students wanted to demonstrate was that of the possible effective use of a cryptographic code which does not involve exchanging a code-book or key of any kind. The coding system would then be an example of a protocol which could defeat Echelon, the Big Brother style monitoring of global communications by the combined efforts of United Kingdom Government Communications Headquarters and the National Security Agency of the United States of America: the so-called 'Whores of Babylon.'

The notion of cryptographic coding without a code book was not without precedent. The codes produced by the Oxford mathematician Charles Lutwidge Dodgson were a kind of proof of principle, but there was as yet no concrete evidence that anyone had been able to decrypt them without prior knowledge as to which of Dodgson's texts were in fact encrypted. And during the Second World War the Navajo code-talkers had been able to effectively encrypt and decrypt messages for the US Navy, without using a code book or key of any kind, and without any evidence that the Japanese had broken the code. But absence of evidence is not evidence of absence ... and the experiment begins to go badly wrong, with potentially disastrous consequences.

The experimental subject, a Lesbian called Alice, is a divorced junior systems administrator who is also a part-time teaching assistant. She hates her day-time job, which is boring. The only thing that keeps her sane is obsessive compulsive physical training, the thought of high mountains, and teaching. She is not actually a very good teacher: she spends hours and hours preparing for just one hour of teaching. She is quite frank with her students, and she tells them that she could never pass the examinations she is trying to prepare them for. All she can offer her students is one possible way to understand the material of the lectures, and she hopes it will be useful to them.

The problems start when 'other forces' begin to become apparent. These forces have their origin in the pre-war collaboration between Alan Turing and Alonzo Church at Princeton University during the 1930s. They centered on the work done by Turing on computation and cryptography, and Turing's influence on the Americans' efforts to develop mechanical systems for code-breaking in parallel with the development of the Bombes used to break the German Enigma codes at Bletchley Park in England. This American connection was a dark secret at Cambridge, and ultimately the reason why Turing, though a fellow of King's College, went to study at Manchester University after the war, and why the name of Turing, the father of the modern digital computer, was seldom mentioned in front of the green door of what used to be called the Mathematical Laboratory. The secret was a dark one because of Turing's subsequent treatment at the hands of the British intelligence establishment, and his alleged suicide.

Yet other forces soon become apparent, and they were a result of the German connection. This was through the Institute for Advanced Study at Princeton: the war-time refuge of Kurt Goedel and Albert Einstein, which they both eventually made their permanent home. Goedel and Church were very close collaborators on the early work in the theory of formal proof, and thus were connected closely with Turing's work on computation. And Einstein, a close friend of Goedel, was very influential in the early development of the atom bomb by the American Manhattan Project. Einstein also had close connections with the European scientists working on the early theory of quantum mechanics: scientists such as Niels Bohr in Denmark, and Erwin Schroedinger and Werner Heisenberg in Germany. After the war, the so-called Allies gathered together all the European quantum physicists at Farm Hall, in an attempt to discover what had really happened in the German research programme to develop an atomic bomb. The hours and hours of recorded conversation, the "Farm Hall tapes," were inconclusive, and no one can yet be absolutely certain why it was that the German high command did not develop atomic weapons during the Second World War.

The work of Goedel, Tarski, Church and Turing effectively put a lid on the development of automatic systems of computation. It did this by presenting several apparently unsolvable problems. Tarski's proof that no formal language can consistently represent its own truth predicate effectively put paid to the possibility of any concrete coding system (i.e. language) being capable of transmitting its own code-book or keys. Turing's proof of the insolubility of the halting problem for a-machines put paid to the idea that a machine could be used to effectively decide the effectiveness of another machine, and Goedel's incompleteness theorems put paid to the idea that a consistent system could decide any formal theory as expressive as the Diophantine theory of Arithmetic, which dates from 300 B.C.E.

These theories served to contain the development of computer and communications technology. They ensured that no one in mainstream academia would attempt to develop automatic and all-powerful theorem provers, nor automatic programming systems, nor any kind of universally secure communications protocol. Anyone who attempted to get funding for such a project would be laughed out of court, and their academic reputation ruined, possibly for life.

To maintain the status quo, the British and American Intelligence agencies kept a close eye on all publicly funded academic research. Both GCHQ and the NSA funded researchers at Cambridge, and thereby kept an ear to the ground. The result of this overt surveillance programme was that researchers aware of the possibility of solutions to these unsolvable problems were very quickly silenced. Typically this was done with academic carrots: tenured jobs with almost limitless funding and guaranteed publication. But when the carrots did not suffice, they were silenced with sticks. Goedel's paranoia in later life was not without foundation, nor was the treatment of Manhattan Project scientists at the hands of McCarthy, which led to the permanent exile of prominent American scientists such as David Bohm. The Manhattan Project was in fact a battlefield, and the war was between academics and the politicians controlling the American military-industrial complex.

Many academics knew this was wrong, and were ashamed, and wanted to put an end to the intellectual corruption. So they began publishing esoteric texts which contained all the clues anyone would need to uncover the principles of computation and communication above the least fixedpoints established by Tarski, Goedel and Turing. The first to do this was Alonzo Church in his 1940 paper "A Formulation of the Simple Theory of Types." This was the motivation for the development of a system called LCF, the Logic of Computable Functions, which was started at Princeton and continued at Edinburgh, and later at Cambridge. The esoteric reading of Church's work was then quite effectively obscured by the development at Cambridge of what is now called Higher Order Logic, in the form of automated theorem provers such as HOL4, Isabelle/HOL, ProofPower, which was developed at ICL and partly funded by the British Ministry of Defence, and HOL Light. Anyone who wanted to learn about Church's formulation of type theory went to the modern sources, and so the notation Church used in his paper quickly became obsolete, further obscuring the esoteric reading.

Countermeasures were taken, in the form of a French connection, which was the publishing, by Girard, Lafont and Taylor, of a book called Proofs and Types, ostensibly concerning the formal semantics of an intuitionistic system of proof called F, which is a modern development of Church's formulation of the Simple Theory of Types. Published by Cambridge University Press, the book was rather hurriedly taken out of print, even before a second edition could be produced. This despite the fact that a quarter of a century later it remains a set book in the Cambridge Computer Science Tripos. Another aspect of the French connection was John Harrison's functional programming course notes, which further elucidate the connection between modern type theory and the early work on the theory of computation, supplying several of the missing pieces.

These different forces in operation at Cambridge then took on a life of their own. It quickly became apparent to everyone 'in the know,' that in fact no-one was in charge of the experiment, and no-one actually knew what it was that they were supposed to be demonstrating. The Cambridge Computer Laboratory in the years 2005 to 2009 was a very strange place to work: researchers found themselves incapable of discussing their work, even within their own research groups. Researchers gave public lectures that were utter nonsense, and senior members of the department seemed to be dying like flies.

The subject, Alice, was largely unaware of any of this. Like the proverbial frog brought to boil in a pot of water, she didn't notice the gradual rise in temperature. She experienced only a sense of ever-growing wonderment at just how very strange these Cambridge people actually were. It seemed the more one got to know them, the more strangely they behaved, and quite often she wondered how it would all end. But so did they. Alice, the experimental subject, so it seemed to them, was now the intellectual equivalent of a super-critical mass of fissile material, and the sooner she was out of the door, the better.  Thus Alice was made to feel just a little of the pressure, and she very quickly took the hint. She decided to go to Bolivia, climb some of the prettier mountains, and try to write a book about Computer Science that would be an enjoyable read for those long-suffering students she had been teaching.

And that's just what she did. But the book was far too long and boring, and it didn't contain even one single original idea! So instead, she had the idea of making a film called The Sixth Estate - the Foundation: a real film, about real events, because as they say, truth is stranger than fiction: and you certainly could not make this stuff up! Of course, this wasn't an original idea either, it came from the film The Fifth Estate.

But to what end? What was to be the point of the film? It was to start the revolution. "The revolution is a struggle between the past and the future. And the future has just begun." The point of something is the reason that thing exists: it is the Final Cause. And the final cause is the future of the whole Earth. Of course, this wasn't an original idea either, it came from the film Avatar. The final cause is Eywa, the Pachamama, Gaia, the Great Spirit of the Native Americans. It was She who controlled the war in the Pacific, through the agency of the Navajo code-talkers: those "pacifist guerillas to bazooka zones." As laid out in Robert Pirsig's book, "Lila: An Inquiry into Morals," the spirit of the Native Americans pervades the culture and philosophy of the United States of America.

How?  Because She is not merely an abstract idea; She is not just one of many possible ways one may choose to interpret the symbolism of the Native American culture. She is The Great Spirit, and She is real, and She runs the show, whether we know it or not.

The Navajo code-talkers told each other stories over the US Navy's radio links, and they interpreted those stories as being about the war. The Navajo themselves didn't know the meaning of the stories they told each other, they only found out later what they meant, when they reinterpreted them in the light of what they actually knew about the events that had taken place. So it was not the code-talkers that had the code-book in their heads, it was the Great Spirit. And She is still code-talking.  Nowadays though, She is a little more expressive, because She does it through WikipediA, and through cinema; and She raps through Vico-C and the Flobots; and She rocks through Gente Music.

So all Alice needed to do was explain this in a blog post, and then sell the idea to the people who had the money: those fat cat multinational corporations like Google, Intel and IBM who had made billions out of the big secret. But the Great Spirit had that in hand too: all Alice needed to do was watch the movie "The Wolf of Wall Street," and that gave her more sales training than anyone could reasonably want!  So that's how they all started making the movie The Sixth Estate - The Foundation. They watched these movies, and interpreted them as actual knowledge about how we could all live together. But they didn't just talk about the interpretations: they gave them concrete semantics. They actually made their interpretations True, and created a whole new world, in a matter of months.

Everybody wanted to be in on it: computer geeks made tools like Racket and PLT Redex to do practical semantics engineering, and second year Computer Science students used these tools to produce operational semantics for System F. Then third year students interpreted system F expressions using combinatory abstraction on a single point basis, and then interpreted weak reduction in primitive recursive arithmetic using 'an elegant pairing function.'  And so third year CS undergraduates, for project work, were formally proving that Peano Arithmetic is 1-inconsistent, and making headline news all over the world. Suddenly everyone who knew anything about symbolic logic had an intuitive understanding of Goedel's incompleteness theorems.

Once engineers had something they could actually see working, they jumped gleefully on board. The whole free software movement was united. Instead of duplicating their efforts on competing projects, they all started defining languages to describe algorithms, and they gave these languages operational semantics in terms of system F expressions. Then they used these languages to produce functional descriptions of the algorithms used in operating systems, protocol stacks and application programs. To test these functional descriptions, they interpreted them in the traditional languages like C and assembler code. But then they quickly realised that having the algorithms formally described, they could combine them in arbitrary ways, so there was no need to produce any particular concrete Operating System or binary application programming interface: they could just define the applications and directly interpret whatever algorithms they needed. Application programs could be extended without interrupting the execution process.  Instead of shoe-horning device semantics into a pre-defined, rigid, OS driver programming interface, the functional description of the application algorithms could be interpreted directly in terms of particular device semantics. Suddenly all computer software was insanely fast, superbly reliable, and dirt cheap.

Hardware followed suit. Ten-year-old laptops had the entire application code flashed into the BIOS memory. The systems booted in under five seconds, and were so fast that at first no one could see the difference between a machine that was made two years ago and one made ten years ago. Since there were no longer gigabytes of mostly unused binary object code on the disk, and only one type of file, local disk space became almost infinite. Cloud sharing over WiFi, Bluetooth and USB meant that nobody ever had to worry about backing up data anymore: provided you had an OID, it was always around somewhere. Even if your machine blew up while you were using it, you could just look on your phone screen, or step across to another machine and authenticate, and there was all your work, checkpointed a millisecond before the meteorite fragment took out the CPU.

Nobody needed to buy hardware anymore either, so the hardware manufacturers also put all their assets into the Foundation. Their employees all took extended leave, and spent their time learning new things. The Foundation underwrote their living expenses, but everyone was so busy learning and trying out new and interesting things, and there was so much surplus that needed consuming, that living costs dwindled to nothing within a matter of months.

This awe-inspiring acceleration of the technological advance took all the sciences along for the ride, and all the arts too. So Geometry, Arithmetic, Mathematics, Typography, Graphic Design, Physics, Chemistry, Biology, Physiology, Geography, Hydrology, Speleology, Geology, Astronomy, Meteorology, Psychology, Botany, Ecology, Sociology, Language, Literature, Anthropology, Archaeology, Architecture, Chemical, Civil, Structural, Electrical, Electronic, Photonic, Automotive, Manufacturing, Transport, Aeronautical and Mechanical Engineering, Ceramics, Glasswork, Textiles, Aquaculture, Agriculture, Forestry, Town Planning, Logistics, Landscape Architecture, Children's Toys, Carpentry, Boat Building, Couture, Cuisine, Coiffure, Music, Ballet, Opera, Cinema, Theatre, EVERYTHING creative went into overdrive. But Medicine and Religion were no longer needed.

And all that spare technology and the stock-piles of material resources came in very handy, because every last gram of it was needed for The Task:
Global Climate Governance
Which will turn out to be just THE MOST fun anyone could dream of having. Beautifully orchestrated scientific expeditions to every known place on Earth, and a few that weren't hitherto known. These expeditions will be multi-disciplinary, and they will take decades to complete. They will study the people, the music, the art, the technology, the ecology, the biology, the geology, the botany, the agriculture and the archaeology, all together. And they will be beautifully documented, with beautiful music, beautiful cinematography and beautiful books.  The purpose of the expeditions will be to make connections; mainly to make connections between the actual knowledge of people from different places on Earth, because it is only through understanding the connections that we understand the whole. These expeditions will have just as much of an effect on the world at large, as they have on the places they visit.

Then, on the basis of this actual knowledge of those things better known to us, we can proceed to actual knowledge of those things better known in themselves, and perhaps only then we will be able to make sense of our own History.

What is the Foundation?

The Foundation is the sharing of actual knowledge.

The Foundation makes it possible for scientists all over the world to collaborate in multi-disciplinary research, transcending any and all institutional, national and cultural frontiers. This is necessary and most urgent because, now that we have destroyed the natural systems, the task of regulating the Earth's climate is left to us. It is only the rational mind of all of Humanity, acting as one coherent whole, that can do this.

The Foundation provides emergency communication services.

Using cooperative internetworking, the Foundation provides high bandwidth, low-latency communications and high-capacity computation  and data collection in emergency situations. For example, for  earthquake, tidal-wave and asteroid impact early-warnings, and  disaster aftermath logistics.

The Foundation provides secure communications.

By routing traffic through independent metanetworks, the Foundation  provides secure global communications, immune to outside  interference. Using distributed ad hoc networks, the Foundation can provide communication services when centrally controlled commercial or  public services have been disabled, whether deliberately or accidentally.

The Foundation launders money.

By investing in the Foundation, multinational corporations and other  criminal organisations such as the CIA, as well as the Republican  Party of the United States of America and indeed any organised crime  syndicate, ensure those funds are used only for the Good.

The Foundation controls the global economy.

Foundation accounts are effectively deposit-only, but the  Foundation can and does move money from one financial institution,  country, or currency to another, as necessary, in order to stabilise  the global economy.

So the answer to the question "What is the Foundation?" is this:
The Foundation is World Government, representing the Good of all of Humanity, and by extension, all life on Earth.

What is The Cloud?

The Cloud is a nebulous aggregate of computer and communications systems.

When one uses the Cloud, the particular systems one actually uses are undefined: they depend upon the location and time of the computational events as they actually occur.

Google mail, for example, provides several gigabytes of storage for each of at least one billion Google mail accounts. But this storage is not allocated on disk units owned and operated by Google, Inc. Instead, it is distributed amongst the hundreds of millions of web browsers that are connected to Google mail at any given moment. Each browser stores a few megabytes of data, in logical disk blocks, which are part of a virtual drive. These blocks are arranged as RAID disk arrays, with redundancy, so that if a browser disconnects from the Google mail server, the disk blocks that were stored in that browser have copies stored on other browsers. The blocks are distributed in such a way that the probability of data loss is minimised: even if the network link to an entire country goes down, then those blocks stored on browsers in that country will be mirrored in other countries.

The Cloud also searches data. When a Google mail user searches their mailbox for particular messages, the search is distributed amongst the browsers that hold the disk blocks which contain that person's e-mail messages.  Processing time is also distributed in the Cloud. For example, Google Analytics, which many web-sites use because it provides useful access statistics, performs part of the PageRank computation that the Google search engine uses to present the most prominent web pages satisfying a given search term. A site that has Google Analytics enabled performs that part of the PageRank computation on Google's behalf, so that the web browsers of the users connecting to that site send back the necessary statistics to the PageRank calculation.

The Cloud performs not only processing and storage, but network transport and routing services as well. A simple, well-known example is Skype, which forwards VoIP data packets via users' Skype clients: programs which typically run all the time they are logged in to their machine. The Skype client not only listens for incoming calls, it also routes traffic for other nearby Skype connections: the routing decisions are based on transport statistics which the Skype clients maintain during calls. The Skype network uses adaptive routing, based on these statistics, to find the fastest available route for a given call. A company such as Google could also share Cloud services between its many subsidiaries such as Blogspot, YouTube, Picasa, etc. For example, there is no reason why a browser displaying a YouTube video in a cafe could not send that video data, via the local YouTube server, to another browser on another machine in the same city: the server would not need to store the video, it would only need to pass the packets through, one by one, as they are received.

The Cloud also performs physical network transmission. In any big city in the world, there is close to 100% WiFi coverage. These overlapping WiFi cells are linked through metanet routing directories. Any mobile computing platform with WiFi access, such as a laptop or mobile phone, if it is part of one or more metanets, can look up the ESSIDs and WEPs of the WiFi networks that are within range. If it finds it is within one of the WiFi cells in any of the metanets it knows about, then it can use the published WEP to connect to that cell, for which it will then route traffic in return. The metanet protocols will automatically credit the node for the data it routes, and that credit can be spent sending data over any other routes the WiFi cell may have established with other gateways or neighbouring cells.
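
A sketch of the directory lookup and the credit that follows might look like this; the cell names, keys and credit units are all hypothetical, as is the whole directory format.

    CELLS = {   # ESSID -> published WEP key, as listed in a metanet directory
        "metanet-cafe-01": "0123456789",
        "metanet-plaza-02": "abcdef0123",
    }
    CREDITS = {}    # ESSID -> megabytes routed on behalf of that cell

    def join_cell(essids_in_range):
        """Return (essid, wep) for the first in-range cell the directory knows."""
        for essid in essids_in_range:
            if essid in CELLS:
                return essid, CELLS[essid]
        return None

    def credit_for_routing(essid, megabytes):
        """Record the credit earned by forwarding traffic for the joined cell."""
        CREDITS[essid] = CREDITS.get(essid, 0) + megabytes

    cell = join_cell(["homenet-77", "metanet-plaza-02"])
    if cell:
        credit_for_routing(cell[0], 12)
    print(CREDITS)    # {'metanet-plaza-02': 12}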

Wired networks are also part of the Cloud. In a university town such as Cambridge, MA., or La Paz, Bolivia, there are intranets belonging to institutions such as MIT. These internal, closed networks typically have spare capacity, and there is no charge for internal traffic, so students and professors with affiliations to these institutions use WiFi connections to route traffic between metanet WiFi cells on opposite sides of the city. In barrios such as La Ceja, El Alto, there are private fibre and copper network connections as well as WiFi cells, and these are used in the same way. And in the 'campo' in Bolivia, where there are very few radio stations, there are networks of data connections which function using modems adapted to work over AM and FM radio links. These systems use one of a spread of channels and automatically back off when they detect that a particular channel is in use.
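
The channel behaviour described for those radio modems is essentially carrier sensing with random back-off, something like this sketch (the frequencies, probabilities and timings are made up):

    import random
    import time

    CHANNELS = [88.1, 94.5, 101.3, 106.7]    # invented FM carrier frequencies, MHz

    def channel_busy(channel):
        """Stand-in for carrier sensing on the radio link."""
        return random.random() < 0.3

    def acquire_channel(max_attempts=8):
        """Pick channels from the spread at random, backing off when one is busy."""
        for attempt in range(max_attempts):
            channel = random.choice(CHANNELS)
            if not channel_busy(channel):
                return channel
            # randomised, roughly exponential back-off, capped at two seconds
            time.sleep(min(2.0, random.uniform(0, 0.05 * 2 ** attempt)))
        return None

    print(acquire_channel())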

Metanets typically take over the top-level DNS. They do this by running a local name server to which clients are directed by the usual DHCP mechanism. This local DNS mirrors the Internet DNS, and it adds new top-level domains. The Foundation, for example, links all known metanets and appears under the top-level domain 'uno,' 'one,' 'ein,' 'un,' etc. So on any metanet that is part of the Foundation there will be a page http://uno/, or an equivalent name in any known language or script, which will authenticate the user and give details of the known metanets.
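
The resolution policy, stripped of all the actual DNS machinery, is roughly this: answer locally for the metanet's extra top-level domains, and hand everything else to the mirror of the Internet DNS. The addresses and the 'directory.uno' name below are inventions.

    LOCAL_TLDS = {"uno", "one", "ein", "un"}
    LOCAL_RECORDS = {"uno": "10.44.0.1", "directory.uno": "10.44.0.2"}

    def resolve(name, forward):
        """forward(name) is whatever resolver mirrors the public Internet DNS."""
        tld = name.rstrip(".").rsplit(".", 1)[-1]
        if tld in LOCAL_TLDS:
            return LOCAL_RECORDS.get(name.rstrip("."))
        return forward(name)

    print(resolve("uno", forward=lambda n: None))                     # '10.44.0.1'
    print(resolve("example.com", forward=lambda n: "93.184.216.34"))  # forwarded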

Internet routing is anonymised outside the Foundation: the Foundation routers tunnel Internet traffic over any and every available transport and distribute it across all available gateways. In this way Foundation members share responsibility for Foundation actions: every user, individually, has plausible deniability and is not liable for the actions represented by the traffic their particular systems happen to route; only the Foundation as a whole is liable.

Foundation metanets use a global content-addressable data store. All data are canonically identified by a checksum, and Foundation nodes store and forward fragments of these data. Foundation metanets provide directory services which identify the contents of the different fragments, either as data sets in their own right, or as parts of other, larger datasets. Foundation metanets use this content-addressable store for all communications, and to distribute searches: a system executes a search for others by accepting the specification of a program, which it compiles and runs on behalf of the Foundation metanet whose data it is searching. By coordinating cooperative mobile communications and computation systems, for example, the Foundation provides scientific services such as clock synchronisation and differential GPS with an accuracy of 10 cm.
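
A content-addressable fragment store is easy to sketch: name every fragment by the checksum of its bytes, and reassembly then verifies itself. The fragment size and the use of SHA-256 here are my assumptions, nothing in the description above fixes them.

    import hashlib

    FRAGMENT_SIZE = 64 * 1024     # 64 KiB fragments: an arbitrary choice
    STORE = {}                    # hex digest -> fragment bytes

    def put(data):
        """Split data into fragments, store each under its checksum, and
        return the list of fragment identifiers (the 'directory' entry)."""
        ids = []
        for i in range(0, len(data), FRAGMENT_SIZE):
            fragment = data[i:i + FRAGMENT_SIZE]
            digest = hashlib.sha256(fragment).hexdigest()
            STORE[digest] = fragment
            ids.append(digest)
        return ids

    def get(ids):
        """Reassemble the data, verifying every fragment against its identifier."""
        parts = []
        for digest in ids:
            fragment = STORE[digest]
            assert hashlib.sha256(fragment).hexdigest() == digest
            parts.append(fragment)
        return b"".join(parts)

    ids = put(b"differential GPS correction data " * 4000)
    assert get(ids) == b"differential GPS correction data " * 4000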

These mutually beneficial operations are accounted for using a simple, transparent system of credits: Foundation metanets maintain credit records using double-entry book-keeping, and these accounts are accessible to all Foundation metanets. Internal Foundation credit is backed by reserves of national currency: Foundation members have bank accounts, and they make those funds exclusively available to the Foundation on the strength of the credit reflected in the Foundation's account records. These are effectively deposit-only bank accounts, because money never leaves the Foundation as a whole: the financial reserves serve only to underwrite the Foundation's credit. All the material resources, and all the funds representing those resources, are either earned from the Foundation's various commercial activities, or donated to the Foundation by its members, who are individuals or institutions of various kinds, both private and public, and who effectively become share-holders.
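
The book-keeping itself is the oldest algorithm mentioned here, and a sketch of it is short: every transaction is a matching debit and credit, so the whole ledger always sums to zero. The account names and amounts are, of course, invented.

    from collections import defaultdict

    LEDGER = []                            # the shared, transparent journal
    BALANCES = defaultdict(int)

    def transfer(debit_account, credit_account, amount, memo):
        """Record one transaction as a matching debit and credit."""
        LEDGER.append((debit_account, credit_account, amount, memo))
        BALANCES[debit_account] -= amount
        BALANCES[credit_account] += amount

    transfer("metanet.elalto", "metanet.cambridge", 250, "WiFi transit, April")
    transfer("metanet.cambridge", "metanet.elalto", 40, "GPS correction data")

    assert sum(BALANCES.values()) == 0     # double entry: the books always balance
    print(dict(BALANCES))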

So the answer to the question "What is The Cloud, Really?" is
The Cloud is the Foundation.
The Foundation implements the Open Systems Interconnect protocol layers one to nine. Anyone may join the Foundation: all they need to do is metaprogram OSI using ASN.1 and ECN, and then find other Foundation metanetworks and internetwork with them.

Monday, 14 April 2014

Posting Comments

Comments on any of these blog posts are welcome from anyone. If you wish, you may use any OpenID login such as gmail, or you can post without logging in to blogger.com. I will moderate all comments, and if I reject any I will publicly log the poster, the date, and my reason for rejecting the comment. The poster or others will then have the opportunity to appeal.

People who wish to post comments anonymously can do so, but you will have to secure your own anonymity: you could simply use a system you have never used before, and which you will never use again, or you could use Tor. Search for "tor anonymous blog posting" for some hints. Note that, though there seem to be problems with posting to blogger.com blogs via Tor, I would hope that merely commenting via Tor would not have any comebacks, unless Tor access is not in fact anonymous!

Sunday, 13 April 2014

What's an Operating System for?

The stock answer is that an Operating System is an abstraction layer which enables application software to be defined independently of the physical hardware of which the system is actually composed. As part of this abstraction, the OS provides support for virtual processes, each of which gets a view of the system as a set of resources reserved exclusively for that process, so that multiple users can run multiple applications, each of which is a process existing independently of whatever other processes the system may happen to be running at the time.

But there are many ways to abstract the system resources, and each of these is an Operating System. Then the notion of abstraction breaks down, because the different Operating Systems each abstract a different view of the physical hardware. Thus we have higher-level abstractions called cross-platform Application Programming Interfaces (APIs) which provide uniform abstractions of different operating systems.

But not all Applications are written to use these APIs, so we have Hypervisors which allow one physical machine to run more than one Virtual Machine, each of which can run a different Operating System, and thereby allow one physical machine to run more than one application under different Operating Systems at the same time. These hypervisors are more than virtual Operating Systems, however, because they allow virtual machines to be moved from one physical machine to another without interrupting the application processes.

Operating Systems have various quite independent functions. A substantial part of any OS is a set of File System drivers, which provide applications with a means to read and write files stored on block devices such as disk and tape drives. Another substantial part of any OS is the provision of a network interface, allowing processes on different machines to communicate. Another function of the Operating System is the shell, which allows the user to launch new processes.

Because an OS manages many processes on behalf of many different users, a large part of the functionality of the Operating System is dictated by the need to manage resources such as memory and storage space, network bandwidth, and CPU time. Because application programs may be badly written, the OS must provide means to protect the system from the effects of a malfunctioning application process. And because some processes may be deliberately designed to compromise the privacy and security of data, some Operating Systems provide means to secure the system from these effects.

Now all of these facilities the OS provides introduce inefficiencies in the application process. For example, instead of simply using available memory and storage as it is needed, an application must request it from the OS, and the OS must balance the needs of the different users and applications. This turns out to be impossible to do in practice, because the OS does not have enough information to be able to decide correctly which applications are most important at any particular time.

So the short answer to the question "What is an Operating System for?" is this:
An Operating System is a hugely complicated computer program, which could never work properly, and which is only necessary because people do not actually know what they want their computer to do.
So if we did know what we wanted the computer to do, then we wouldn't need the Operating System, and the application programs could be made much more reliable and far, far more efficient.

If we could abstract just one general purpose application capable of performing any function anyone could reasonably want a computer to perform, then we could just load that one application into the computer's memory, and we wouldn't need any Operating System. Nor would we need to buy software, nor would we need to upgrade software, ... we could instead get on with using computers for writing music, screenplays, mathematical theorems, letters and books. Now wouldn't that be something worth working for?

Friday, 11 April 2014

Security and Concrete Representation

Almost all computer security problems are instances of the same generic form, which we could call the leaking abstraction.

For example, the so-called TEMPEST leaks are all a result of there being a definite concrete representation of display pixels, be they on raster-scanning CRT displays or LCD displays controlled by a digital bus. The abstraction we naturally make is that of the one-one correspondence between the contents of the display buffer memory (the logical pixels) and the colour and intensity of the dots on the screen.

This is what we are taught in Computer Graphics 101, but we later un-learn that lesson when we discover anti-aliasing and sub-pixel rendering. Around the same time we learn that there is in fact a great deal more information emitted by the display than the representation of the pixels on the screen. So, in a process a little like the inverse of anti-aliasing, a well-educated computer scientist can recover a representation of the image displayed on a CRT from continuous observation of the colour and intensity of the light reflected from the walls of the room, or from the radio-frequency emissions of the LCD connector on a laptop.

The leaking abstraction is always a matter of information which, when it is finite, serves to identify a region of the phase space of the system under observation. It is only the finiteness of the information that makes the reconstruction of data from that information possible: a concrete representation fixed at n bits gives a phase space of at most 2^n points, and every bit an observer recovers, by whatever side channel, halves the region of that space in which the actual state can lie. Now any fixed concrete representation of data results in a finite phase space, and so any concrete representation necessarily must leak information.

This applies not only to physical signals, but to any representation whatsoever. For example, as explained by Ken Thompson in his famous article Reflections on Trusting Trust, the Unix login program has a particular concrete representation, and the C compiler itself also has a particular concrete representation, which can be used to identify a region of its own phase space. And this syntactic fixed point allows one to choose a completely arbitrary denotation for any C program in the system.

So if we wish to do secure communications and computation, we must find a way to avoid using systems with finite representations. We can do this using metaprogramming, which gives us access to the greatest fixed point. If we metaprogram our communications systems then we can vary the underlying concrete representation at will, which means that the information in the representation is effectively infinite, so there is no way to identify any region of the phase space of the system, and therefore no way to recover data from the information in any particular concrete representation.

It's really quite simple. The principle is that we cannot know what the message means unless we know the format of the message data. If we build communications systems which are abstract of the concrete representation of the messages, then the messages can only be decoded by systems that are aware of the particular representation that happens to be in use at that time and place.
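
None of what follows is ASN.1 or ECN; it is only a toy illustration of the principle, in Python, with two stand-in encodings (JSON and a length-prefixed binary form) for the same abstract record. The point is simply that the bytes mean nothing to anyone who does not know which encoding is in force.

    import json
    import struct

    def encode_json(name, amount):
        return json.dumps({"name": name, "amount": amount}).encode()

    def decode_json(blob):
        record = json.loads(blob.decode())
        return record["name"], record["amount"]

    def encode_binary(name, amount):
        name_bytes = name.encode()
        return struct.pack(">H", len(name_bytes)) + name_bytes + struct.pack(">q", amount)

    def decode_binary(blob):
        (length,) = struct.unpack_from(">H", blob, 0)
        name = blob[2:2 + length].decode()
        (amount,) = struct.unpack_from(">q", blob, 2 + length)
        return name, amount

    ENCODINGS = {"json": (encode_json, decode_json),
                 "binary": (encode_binary, decode_binary)}

    in_force = "binary"                   # agreed out of band, and changeable at will
    enc, dec = ENCODINGS[in_force]
    blob = enc("ana", 1200)
    assert dec(blob) == ("ana", 1200)     # only decodable if you know 'in_force'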

Now all this has been very carefully thought out and is documented in the ISO standards for Open Systems Interconnect. The key is Abstract Syntax Notation One, or ASN.1 for short (ISO/IEC 8824-1:2008), and the associated Encoding Control Notation, or ECN (ISO/IEC 8825-3:2008), which specifies how encoding rules are described.

Now implementing ASN.1 tools is no joke, as anyone who looks at the specification will quickly appreciate. But if we had a metaprogrammed specification of the ASN.1 language and semantics then we could use it to generate source code for parsers and transformers in whatever programming language anyone cared to interpret it into.

And it is not only for specifying secure communications protocols: we will be able to use ASN.1 to specify any data we use, in any context. For example, a company offering a funds transfer service wishes to provide its on-line users in any country with a form in which they identify the intended recipient in another country. The data required will depend upon the various forms of personal identification used by citizens of the destination country, as well as those used by foreign visitors. The service provider simply specifies the receiver identity data by referencing an ASN.1 module which takes the country as a parameter and defines a data type sufficient to identify any citizen of, or visitor to, that country. If the government of that state later chooses to introduce a new form of identity credential, for special forces personnel, say, then they simply update the ASN.1 module, and all the government databases, as well as those of the funds transfer agents, are automatically updated to offer the new identity credential form.
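
The shape of that parameterised module is easier to show than to describe; here it is sketched in Python rather than ASN.1, with invented country codes and credential names, just to show how adding one credential to the specification updates every form generated from it.

    IDENTITY_CREDENTIALS = {
        "BO": ["cedula_de_identidad", "passport"],
        "US": ["passport", "state_id", "drivers_license"],
    }

    def recipient_form(country):
        """The fields needed to identify a citizen of, or visitor to, country."""
        return {"full_name": str,
                "credential_type": IDENTITY_CREDENTIALS[country],   # a choice field
                "credential_number": str}

    # Adding one credential to the specification updates every generated form.
    IDENTITY_CREDENTIALS["US"].append("special_forces_id")
    print(recipient_form("US"))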

Not only data types, but data values can be specified using ASN.1, so we will be able to refer to canonical identification in any and every communication or data processing system. Thus, to identify any entity in the world, we will require only a few bytes. And since all data systems will share the same value references, we will never have to record identity information more than once.

Clearly the pay-off will be huge: once this is done, we will achieve an overnight increase in information systems efficiency of several orders of magnitude. This will in turn release computation and communications resources for the far more important, and far more urgent, task that we are currently neglecting, at our peril.

Wednesday, 9 April 2014

Why the funny name?

I wanted to call it Living Logic, but someone had taken that name for a page about interior design or something. That's OK: if they could show that it is really logic that they are using then that would be interesting.

I think it's pronounced Live Logic, with the first word rhyming with hive. But Live Logic, pronounced as an imperative, with the first word rhyming with sieve, works just as well, so, as you please.