Sunday 19 May 2019

On Psychobotanics and Neurology

Beware. This is what happens when you try to poison a philosopher with hallucinogenic drugs. This is a letter I wrote to a friend about four months after I had been administered a very strong drug, without any advance notice. I had been living in Guanay at the time, and I ended up on the streets here in Cochabamba for four days, because whilst still under the influence of this drug, somebody gave me alcohol spiked with some sort of opioid, I suspect. Whatever it was, I think thieves use it here to get someone to just drop all their portable possessions and go wandering off somewhere! So I had these two very weird trips back to back. Anyway, this is the letter I wrote in July.

Hi. Sorry, I didn't get any birthday message, and I forgot to send you one. I lost my access to other e-mail addresses at the same time as I lost various other things, including my passport, computer, credit cards etc. But I have no-one to blame; it's a long story which I am slowly thinking about how to write up. I learned more from those few days than I have ever learned before, and I think there is no other way I could have learned it. Basically I was given some sort of drug by someone, I don't know who or why, but it was very interesting.

My whole idea about how nervous systems produce the sense of self was upside-down. It may seem incredible, but I think now that our sense of self is not a higher function at all. I used to think that we abstract our sense of self intellectually as we learn language, so that it is ultimately a rational thing. But this is not so. It is an animal thing. All animals have it. Without it, things are really interesting: we are not confined to our physical bodies, but we can still reason and we still experience; the experience is just not of any one particular person anymore. So empathy is the basic substrate on which understanding is built. It is because we have empathy that we can get an intellectual understanding of what other people mean when they say something. And in fact we can know what other people feel from this empathy. So you were right all along: we can know what other people feel and their reasons for doing things. I always thought this was implausible, because how could we know what process someone else's reasoning would go through to produce their actions? But in fact the rational process comes after the direct experience.

This drug was some sort of scopolamine, I think. I have seen Brugmansia plants being grown in people's gardens around Guanay, and their use is mainly in brujería (witchcraft). It's an interesting plant, known only through cultivars associated with human habitation: there are no known instances of it in the wild. The way I think it might work is by severely disrupting the enteric nervous system and some other parts of the autonomic nervous system.

Among the consequences was a complete out-of-body experience: that of looking at myself through someone else's eyes. Interestingly this inverted red and green, because the person I was looking at, sitting where I was, wore a green t-shirt, whereas mine was red. I felt words being said that weren't mine; they were forced through me. I had almost no motor control at this time: my arms were spastic, and my legs too, I think. I had tremendous sensations of heat and tingling in my arms and legs. And for several days afterwards I could feel differences in the temperature of the ground, because I was barefoot (I threw my shoes away, like you do from time to time!). These differences were associated with property and people, and I had no idea what caused them. For example, the ground outside one shop would be freezing cold, and that outside the shop next door would be warm and toasty. This extended to the ground around the women selling things on the street. I also felt vibrations acutely. A few days later these senses left. I know that having one's body screwed up by something like this will have all sorts of weird effects, but these seemed to have a consistency that was beyond anything that could have been produced by a pathology, so I think there is a systemic basis for it. And as a result I am totally convinced the basis of neuroscience is completely wrong. But actually, I was convinced of that before.
The idea of neural coding, that sensor states are represented by characteristics of signals, is crazy.

I don't expect this is very coherent, but I hope it's amusing!

And the following is a letter I wrote to another friend, earlier, about some thoughts I'd had on neurology. The above experience was very interesting!

I have been thinking about brains. Quite a while ago I had a look
through a book on molecular biology, and every sense I looked at
involved the animal's motor, what? faculties? For example, E. coli
bacteria have a sensory system for finding food. This works with a
feedback loop through the flagella, these rotating hair-like
propellers they have. As they move along (and only as they move) they
can detect differences in concentration of nutrients and toxins and
they have a rudimentary neural network of chemical signals that can
put the motors in reverse if things are not to their liking. They
thereby effect a manoeuvre that I heard modern jet fighter pilots use:
they turn off the electronic flight controls and do a random tumble
before switching them on again and tearing off again in a straight
line.
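
Here is a minimal sketch of that run-and-tumble strategy in Python. The concentration field and the tumble rule are my own inventions for illustration; real E. coli modulate a stochastic tumble rate rather than tumbling deterministically:

    import math
    import random

    def nutrient(x, y):
        # Hypothetical concentration field: one food source at the origin.
        return math.exp(-(x * x + y * y) / 100.0)

    def run_and_tumble(steps=1000, step_len=0.5):
        x, y = 20.0, 20.0                    # start away from the food
        angle = random.uniform(0, 2 * math.pi)
        last = nutrient(x, y)
        for _ in range(steps):
            # Run: keep moving straight along the current heading.
            x += step_len * math.cos(angle)
            y += step_len * math.sin(angle)
            now = nutrient(x, y)
            # Tumble: if the concentration fell while moving, pick a
            # random new heading before tearing off again.
            if now < last:
                angle = random.uniform(0, 2 * math.pi)
            last = now
        return x, y

    print(run_and_tumble())   # usually ends up close to the origin

The point is that the comparison is only available while the cell is moving: the gradient is sampled in time, not in space.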

If you look at the mammalian eye too, you find that it has saccades,
micro-saccades, pupil dilation etc. and then there are the conscious
muscular actions of changing the focus and direction. I don't know
about you, but I only have a few degrees of clear vision in the centre
of my visual field. The world seems clear because I construct an image
by scanning around. I am sure a good deal of my sense of distance is a
product of the relative muscular sensations in focussing both eyes at
a given distance. According to MacKay, the image that falls on the
retina is severely blurred by chromatic aberration, to the tune of 5x
the resolution. This is probably sorted out by the micro-saccades
which are of the same angular magnitude.

I think children learn to speak at the same time they learn to
understand the words they hear. It seems to me to be the easiest way
to learn to recognise the different sounds of different people
speaking the same words with different intonation etc. If the child
learns to speak those words, by imitation, then the muscular
sensations of the act of speaking will be associated with the auditory
sensations of hearing the words. So the phonetic structure will be
revealed this way; the muscular sensations will be much more uniform
than the sounds. By phonetic structure, I mean the invariant of all
the different actual sounds of words being spoken. I am sure I am not
the first to have thought of this: it seems very obvious. I may have
even read it somewhere. This is presumably the means by which
schizophrenics hear the voices of their different selves: all the
different selves must be coupled through the sensory-motor
cortex. This makes me wonder if one could treat schizophrenia through
some sort of intensive speech therapy to couple the different selves
more tightly so that they may fuse, or at least adapt to each other a
bit. Maybe learning another language would do it.

Anyway, the idea I had was that the neural associations that develop
are those which lead to a kind of covariance of the organism and
environment. As the animal moves in all its various ways, it
influences its environment, if only just by being in a different place
in it, and the name of the game is maintaining your structure as this
happens; and this includes your neural structure. The principle is one
of least surprisal: you don't want to be surprised too often because
it's exhausting, and it is better to be surprised by things that are
significant. This suggests a kind of maximum entropy principle in the
expectation: you try to predict what will happen and if you are
surprised too often then you need to be more discriminating in your
predictions, or if you can't then you need to stop associating those
sensations with the consequences you currently do.
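
In information-theoretic terms the surprisal of an event of probability p is -log p, and not being surprised too often means keeping the expected surprisal (the entropy of your predictions) manageable. A toy version of the adjustment, where the counting scheme is just my own illustration:

    import math
    from collections import Counter

    def surprisal(p):
        # Surprisal in bits: the rarer the event, the bigger the surprise.
        return -math.log2(p)

    class Predictor:
        # Predicts the next symbol from frequency counts and adjusts
        # the counts each time the prediction is checked.
        def __init__(self):
            self.counts = Counter()

        def observe(self, symbol):
            total = sum(self.counts.values())
            # Laplace smoothing over a nominal 26-letter alphabet.
            p = (self.counts[symbol] + 1) / (total + 26)
            self.counts[symbol] += 1        # corrector step
            return surprisal(p)

    pred = Predictor()
    for s in "abababababc":
        print(s, round(pred.observe(s), 2))   # the final `c' is the big surprise

Being more discriminating would then mean splitting a context that produces too much surprisal into finer contexts.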

This is very vague, sorry. I don't know much about neurons. But I
would look for these sensory-motor correlations being established
through some sort of predictor-corrector mechanism which seems like it
could be quite simple. For example in the case of micro-saccades, I
would expect there to be a close connection between the motor neurons
that trigger the saccades and the optic nerve, which is presumably
where the deconvolution of the aberration happens. The idea is that
the saccade triggers some predictor neurons at the same time as the
motor neurons and then the circuits are tuned according to how much of
a surprise the prediction produced. Then those prediction neurons must
be thereafter involved in modulating the usual channel from the five
or so neighbouring retinal cells. And I would expect the trigger for
the micro-saccades in turn to be some function of those predictor
cells' surprisal during normal operation.
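
A toy version of such a loop, with invented numbers, might look like this: a unit predicts what a sensor will report after a self-triggered movement, and the weights are nudged in proportion to the surprise:

    import random

    weights = [0.0] * 5      # one weight per neighbouring retinal input
    rate = 0.1               # learning rate

    def world(inputs):
        # Stand-in for whatever the sensor actually reports after the
        # movement; unknown to the predictor, to be discovered by tuning.
        return 0.5 * inputs[0] + 0.3 * inputs[2] - 0.2 * inputs[4]

    for step in range(2000):
        inputs = [random.uniform(-1, 1) for _ in range(5)]
        prediction = sum(w * x for w, x in zip(weights, inputs))
        surprise = world(inputs) - prediction        # the corrector signal
        for i in range(5):
            weights[i] += rate * surprise * inputs[i]   # tune the circuit

    print([round(w, 2) for w in weights])   # settles near [0.5, 0, 0.3, 0, -0.2]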

Also in hearing: our ability to locate the direction of sounds seems
uncannily good if all we do is compare the amplitude of signals in
each ear. I suspect there is a connection between the semi-circular
canals and the hearing which allows a small head movement to pick up a
derivative of the signal with angle, which would allow much better
discrimination. This probably involves using the resulting phase shift
of higher frequencies.
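
To put numbers on it (round figures, idealised geometry):

    import math

    SPEED_OF_SOUND = 343.0   # m/s
    HEAD_RADIUS = 0.09       # m, roughly half the distance between the ears

    def interaural_delay(azimuth_deg):
        # Idealised far-field model: extra path length to the far ear.
        theta = math.radians(azimuth_deg)
        return 2 * HEAD_RADIUS * math.sin(theta) / SPEED_OF_SOUND

    # A 5-degree head turn changes the delay by about 46 microseconds,
    # which at 1 kHz is a phase shift of over 15 degrees: a far easier
    # quantity to detect than a tiny change in relative amplitude.
    for az in (0, 5):
        d = interaural_delay(az)
        phase = d * 1000.0 * 360.0     # phase shift in degrees at 1 kHz
        print(f"azimuth {az} deg: delay {d * 1e6:.1f} us, phase {phase:.1f} deg")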

Another example is learning to recognise grandmother. The word
`granny' you associate with some granny sensations like smell and
touch. The association must be in terms of some actual neuronal
activity, presumably in the same set of neurons that would have been
active had you experienced those sensations before hearing the
word. The activity in this set constitutes a prediction which then
makes them more likely to be recognised amongst the mass of possible
things one can recognise. The same goes for the activation of whatever
neuronal activity corresponds with any part of the whole grandmother
phenomenon. Memory is just an inevitable side-effect of the learning
process. :-)
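
A crude sketch of the priming I have in mind, with a Hebbian-style association and an invented threshold:

    # Toy priming: hearing `granny' spreads activity to the associated
    # sensation units, lowering the evidence needed to recognise them.

    associations = {}    # (word, sensation) -> strength

    def associate(word, sensation, amount=0.2):
        key = (word, sensation)
        associations[key] = associations.get(key, 0.0) + amount

    def recognise(sensation, evidence, heard_word=None, threshold=1.0):
        prime = associations.get((heard_word, sensation), 0.0)
        return evidence + prime >= threshold

    for _ in range(3):                     # a few co-occurrences
        associate("granny", "lavender")

    print(recognise("lavender", evidence=0.5))                       # False
    print(recognise("lavender", evidence=0.5, heard_word="granny"))  # True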

The idea is that all cognition involves some pre-action on the part of
the subject and this corresponds to a prediction which is then
tuned. The success of the whole operation should result in a
blocking-up of the world of perception in such a way that the
surprises are fairly uniform. The pre-action is always going to be
triggered by some other experience, so we proceed helplessly from one
association to the next and this cognitive incontinence is what is
called consciousness!

Maybe the whole show runs like this? So that the neurons are all
constantly tuning each other by firing and making a prediction about
what will come back from around the various loops, then somehow
collectively adjusting their surprisal which would have to be done via
some interstitial medium, I would guess. If this were the case then
sleep would be a good thing, in a stable world, because the networks
could settle down in the absence of input. A characteristic of this
would be that the firing rates would be more random in the absence of
stimulation than they would be otherwise, because the situation of
maximal adaptation would be one of least surprise and that is the
maximum entropy distribution. The way I imagine this working is that
there are networks of `tensions' which are stable, up to a point, but
there would be a sliding scale (presumably a log-law) of catastrophes
as many changes occurred at once to relieve tensions that
had built up. Sleep would be a way to redistribute tension which had
built up through external stimulus.

The important point is not to look at the brain as a stimulus-response
mechanism because you miss out on half of the information, which comes
from knowing, or having a theory about, the cause of the stimulus. An
organism is like a scientific experiment: there is a body of
knowledge which informs experiments on the world and knowing what you
did to cause the stimulus you received is half of being able to
understand what it represents. The other half is your response which
in turn will trigger another experiment. The world and the conscious
content are then two ways of describing this ongoing covariance.

I think that our whole notion of logical consistency is really an
abstraction of this principle of least surprisal. If you take this to
its `logical limit' (i.e. where every prediction is certain) then
least surprisal is just logical consistency.

Logic is abstract language. Specifically, it's abstract syntax. You
don't need any semantics to get logic. In fact, you may not even need
syntax in the sense that the phonetic structure of language contains
most if not all of the syntax. This seems necessary if children are to
be able to learn it, since children don't have access to anything but
the phonetic structure. This hypothesis is testable. It predicts
strong correlations between the phonetic structure of words that are
common to some parts of speech. For example, strong correlations between
verbs, and less strong correlations between those of nouns which are
frequently prefixed with articles or followed by conjugations of the
verb `to be', which is not really a verb at all. This may be the
reason for the apparent redundancy in many languages of having verbs
conjugated according to the subject. To get good discrimination though
one would want the individual nouns to be as different as
possible. Then to make the syntax explicit there would have to be
markers (articles) in front of nouns just to classify them. I'm going
to ask Ted Briscoe about this. I think he would know if anyone does.
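
One crude way to check the prediction, using spelling as a stand-in for phoneme strings (the word lists here are arbitrary; a real test would use phonetic transcriptions):

    from difflib import SequenceMatcher
    from itertools import combinations

    def similarity(a, b):
        # Rough proxy for phonetic similarity between two word forms.
        return SequenceMatcher(None, a, b).ratio()

    def mean_within(words):
        pairs = list(combinations(words, 2))
        return sum(similarity(a, b) for a, b in pairs) / len(pairs)

    verbs = ["walking", "talking", "running", "jumping", "singing"]
    nouns = ["dog", "river", "cloud", "philosopher", "lightning"]

    # The hypothesis predicts more uniformity within the verbs (shared
    # inflections) than within the nouns (kept maximally distinct).
    print("verbs:", round(mean_within(verbs), 2))
    print("nouns:", round(mean_within(nouns), 2))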

Logic arose from analysis of abstract deductions. The laws of logic
are really just the meanings of the non-denoting words. For example
the syllogism

x is y
all y are z
-----------
x is z

is just the meaning of the word 'all'. Or for a more modern example

A is true    B is true
----------------------
   A and B is true

is just the meaning of the word `and'.
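
In a proof assistant both rules come out as one-liners (a sketch in Lean 4, using standard library names):

    -- The syllogism: from `x is y' and `all y are z', conclude `x is z'.
    example (Y Z : Nat → Prop) (x : Nat)
        (h1 : Y x) (h2 : ∀ n, Y n → Z n) : Z x :=
      h2 x h1

    -- The `and' rule: proofs of A and of B give a proof of A ∧ B.
    example (A B : Prop) (ha : A) (hb : B) : A ∧ B :=
      And.intro ha hb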

So in a sense children use logic as they are learning language. There
is a lot of mileage to be had in thinking about processes in the world
at large as logical constructions. This is because the abstract
representations we make of them are essentially logical. When we
describe some phenomenon we describe the conditions under which it
occurs. These conditions correspond to premisses and the phenomena
they bring about correspond to conclusions. Then an explanation of the
phenomena is a derivation of the same conclusion from the same
premisses, but in more detail. Any phenomenological description can be
recast as a logical deduction. The laws of this `phenomenologic' can
be considered as a reduction scheme. But they are not necessarily
unique or fundamental. There may be two or more different ways to
reduce a phenomenon and each may be interesting or useful for different
reasons.

For example we don't just describe lightning as a blue flash
accompanied by a loud bang. We give the conditions under which it occurs:
clouds and dust storms etc. If you get this description right then it
is both necessary and sufficient and will actually predict lightning
in the sense that whenever you observe these conditions you will
observe lightning. A good explanation of lightning will start with
exactly the same conditions, but predict some phenomenon that is not
lightning itself. Electric charge build-up on clouds, say. Then there
will be another description of lightning in terms of electric charge
concentrations instead of meteorological conditions. Now the theory
can predict lightning strikes from high tension power lines as well. A
good explanation is just one that isn't too often surprised.

There is no need to reduce the phenomena to fundamental processes. You
can't do this anyway, because fundamental processes will never explain
how the clouds got to be there in the first place: clouds are
irreducibly macroscopic phenomena because they create internal
conditions in which they persist. They are systems, not just
collections of molecules. You can't explain them in terms of molecular
dynamics because the molecular dynamics are determined by the
conditions which they create as a whole. This is just a basic
self-evident fact, but it has taken me twenty years to get used to it
because the idea of reductionism was so ingrained.(*)

When you take this holistic view the problems of interpreting quantum
mechanics just go away. The macroscopic world is on the one hand a
product of quantum mechanics and on the other hand it determines the
boundary conditions for the Schrödinger equation or whatever you are
using to model the quantum states. This is no different to the
situation where a cloud determines its internal conditions which
determine the microscopic dynamics. It's only a problem if your goal
is to describe all the phenomena in the macroscopic world in terms of
fundamental particles. But that is a bizarre idea because the
phenomena in the world are always contingent and you cannot explain
contingent events using universal laws. Universal laws can only
explain spontaneous processes: i.e. things that happen for no reason
whatsoever, like photons, for example. So I am now totally at home
with the Copenhagen interpretation.

Wootters, a student of Wheeler, did his PhD on discrimination
information in quantum mechanics. He showed it was in some sense
maximal. I read about this ages and ages ago, as an aside that Wheeler
made, but I can't remember where. The way Wheeler described it was
quite funny. You imagine you are being attacked by some tribe, one of
two, but you don't know which. Each tribe has a certain distribution
of common features that members have at random. The distributions are
such that you need a minimum of observations to learn which tribe it
is that is attacking you. I think these are the distributions that
correspond to commuting measurements. I am interested in this because
least surprisal sounds like it might correspond to maximum
discrimination. This is what makes me think that a formal model of
real neural networks might not be feasible: the underlying principle
may just not be reducible. On the other hand one may consider it
just an artefact of using logic to reason about the world: we
abstracted quantum theory that way because these are the only
phenomena we recognise as independent. They correspond to exactly the
most information-efficient reduction of a description of a system into
subsystems.

A good logic will capture everything that it is possible to describe
and this is why we have a notion of universal computation. Church's
thesis doesn't necessarily claim that the world is a computer: merely
that every rational process can be represented as a computation of
some sort. The meaning of Gödel's first incompleteness theorem is that
the true facts in any domain like arithmetic are not
independent of the deduction system in which they can be proved. The
same surely holds of the world at large. Logic is in the world because
that is how we describe it: we use language, symbolic representation,
and that imposes a logical structure on whatever we describe and
thereby limits what we can know as true. Quantum information and
quantum logic are barking up the wrong tree because they are looking
for logic and information outside of experience.

One might wonder how it is that we know we have captured every
possible rational process or method one could describe. The usual
explanation offered is the evidence provided by the fact that every
time someone has constructed what appears to be a more elaborate and
more expressive system it has been found to reduce to the
Turing/Church one. But this is just evidence. One might be able to get
something more like a proof by considering what features there are in
a language that has to be learnable by children who do not have any
language to start with. Possibly that everything that is known can
only be known up to isomorphism and then must be described as a finite
string from a finite alphabet. So maybe you can prove Church's thesis
using category theory.

Then there is the question of how it is that a logic cannot capture
what is true about a domain, but a computational process can capture
everything that it is possible to describe. I think this might be
because classical logic includes the law of the excluded middle which
makes the theories not recursively enumerable. Intuitionistic logic
does not have this `problem' and then there is a one-one correspondence
between certain typed programs and proofs.
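
A concrete instance of that correspondence, again sketched in Lean 4: the same term is simultaneously a program that swaps the components of a pair and a proof that A ∧ B implies B ∧ A.

    -- Proofs-as-programs: this term is both a pair-swapping function
    -- and a proof of the proposition A ∧ B → B ∧ A.
    example (A B : Prop) : A ∧ B → B ∧ A :=
      fun h => And.intro h.right h.left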

I am still completely in two minds as to whether I `believe' classical
logic or intuitionistic logic.

Tell me what's unconvincing or unclear about any of the above and I will
make that the subject of an essay. I am writing a book on logic and
science and conscious experience and I want one chapter on neuroscience.

Best wishes


Ian
