Thursday, 23 May 2019

A General Strategy for Solving Problems by Analysis of Situational Logic

This is an e-mail I wrote to Sarah Jacob in September 2010, after she mentioned she was reading something about the tragedy of the commons for a course she was studying. It is about the practical utility of the technique of analysis of situational logic as a general strategy for enumerating and evaluating potential routes to solving problems. The example of this so-called dilemma is a good one, because it shows that, surprisingly often, things which are presented as insoluble conundra are not in fact insoluble: it is just their unquestioned premisses which make them seem so. Logical analysis is a guide to identifying exactly what these premisses are, and it will almost immediately suggest various different ways they could be changed to solve the problem.

Dear Sarah,

I haven't read the paper about the tragedy of the commons, but I know the idea. It is a consequence of a result in game theory. I'll try and write about this from memory. Here goes.

The result is that in some games (all variations of one called the Prisoner's Dilemma) the assumption of perfect rationality forces the players to choose strategies which are less than optimal.

Have you seen or read "A Beautiful Mind"? John Nash was a joint winner of the 1994 Nobel Prize in Economics for his mathematical proof that, under certain conditions, multi-player games always have equilibrium strategies. An equilibrium strategy is one that all players can adopt such that if any one player were to choose a slightly different strategy then that player would individually lose out, so they are all driven back to the equilibrium. The theory was useful because it meant that these games are guaranteed to have equilibrium strategies. This is important because a fundamental (quite notorious) principle of economic theory is that people act perfectly rationally. This, combined with the existence of a Nash equilibrium, means that the 'players' will (not can, they *will*) use the information they have about the situation to each compute the Nash equilibrium and adopt it as their strategy. The game therefore plays out according to the rules and we have established a law of economics! Hooray!! Note that what we mean by a law is not that the theory is true. No theory is true. Truth is only something that we know in relation to experience.

What the theory does is provide us with a description which we may compare with reality, and then find it true or not according to whether it accurately describes that reality; and this is a matter of judgement, which is a form of experience. For example, we may find a negotiating situation that complies with all the conditions of Nash's theorem, in particular the one that the players are all perfectly rational and have access to all the information they need, including Nash's theorem itself. Then in that situation the theorem is true or not according to whether they actually do adopt the Nash equilibrium strategy, and this is something we can test by simply asking them their strategies.

Now because I know a bit about logic I can tell you that the answer will always be that it is true: because Nash proved his result mathematically the only thing we could ever discover by comparing it with experience is that the theory was being mis-applied in some way; in other words that the situation did not in fact meet all the criteria that Nash put into the statement of the theorem.

This is how logic works: it connects language with experience. This is useful because it allows us to transform collections of statements into other statements using purely syntactic rules: rules that ignore the actual meaning of the statements altogether. Then, when we judge that our experience is described by the original statements, we can look at the transformed statements and they will also describe our experience. This is useful, because real situations always have lots of complicating detail and most of it is irrelevant to the matter at hand.

If this sounds complicated and abstract then here's a really simple example: you have 172 sheep and someone comes up holding an envelope and asks if they can buy 42 of them for $156.76 each. You type into your calculator 42 x 156.76 and press the equals button. The calculator prints out the answer, and you say "Yes" then they give you the envelope and, lo, it contains exactly that amount of money! Then you type in 172 - 42 and press equals. It prints out a number and lo and behold, that is exactly how many sheep you have left after they take their 42 away!
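The transaction can be written out as a couple of lines of code. Like the calculator, the program is pure symbol manipulation (the `round` call just tidies the floating-point arithmetic up to whole cents):

```python
# The sheep sale as symbol manipulation: the machine knows nothing
# about sheep or dollars, only the rules of arithmetic.
sheep = 172
sold = 42
price_each = 156.76

payment = round(sold * price_each, 2)  # amount in the envelope, in dollars
remaining = sheep - sold               # sheep left after the sale

print(payment, remaining)  # 6583.92 130
```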

The calculator doesn't know anything about sheep or money. Your analysis of the situation was such that you could represent it using a sum, which is just a string of symbols, and the calculator manipulated these according to very strict but very well-defined rules and came up with the answer. We are able to make machines which manipulate symbols furiously and they are very useful, because the symbols can represent anything we want. The thing is, the machines don't need to know anything about what the symbols mean: we can still make really useful machines, like iPhones for example, which pass around symbolic representations of reality and transform them in all sorts of useful ways. The ultimate such machine is probably one that can decide any statement about something called the real-closed field, which covers a surprisingly large part of applied mathematics. My friend Larry Paulson is busy making it now. The decision procedure was discovered by the logician Alfred Tarski in the '30s.

In the case of a negotiation, the politics of the people and their attitudes, the colour of their hats and many other things are irrelevant to the matter at hand, which is just how to get the best deal for everyone. This Nash equilibrium idea really helps because people are able to forget about these irrelevant details and just compute the optimal strategy, and use it! That's why these guys got a Nobel prize: the theory was used in real negotiations and it meant that people were able to collectively make the decision that was best for everyone.

This logical process of describing a situation symbolically, manipulating the symbols according to very, very strict rules, and then re-interpreting the results is what lies behind all applied mathematics.

But back to game theory: the prisoner's dilemma is a game where two prisoners are in separate cells, unable to communicate. They are both offered an ultimatum: if neither testifies against the other, they will both get a short sentence on a minor charge; if one testifies and the other stays silent, the testifier goes free and the silent one gets a long sentence; and if both testify against each other, they both get medium sentences.

Now this game is positive sum (both can win) and it has an optimal strategy, which is obviously that both should refuse to testify, because that gives the best joint outcome for all concerned. But the problem is that the optimal strategy is unstable. If either player has the slightest doubt that the other will do the right thing then it is better that he defect, because the consequences of not testifying when the other testifies are much worse than the consequences when both testify against each other. This game does conform to Nash's conditions: both testifying is a stable equilibrium, because a unilateral change by either player results in a worse outcome for him. It's not a counter-example to Nash's theorem, which holds in this case; it just happens that the stable equilibrium is less good than the unstable optimum.
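This instability is easy to check mechanically. Here is a minimal sketch in Python with illustrative sentence lengths (payoffs are negative years in prison; C = stay silent, D = testify): a strategy pair is a Nash equilibrium exactly when neither player can improve his own payoff by switching alone.

```python
# Hypothetical payoff matrix for the prisoner's dilemma. The numbers are
# illustrative: C = stay silent, D = testify; payoffs are -(years in prison).
PAYOFFS = {
    ("C", "C"): (-1, -1),    # both silent: short sentences
    ("C", "D"): (-10, 0),    # the lone testifier goes free, the silent one gets 10 years
    ("D", "C"): (0, -10),
    ("D", "D"): (-5, -5),    # both testify: medium sentences
}

def is_nash_equilibrium(a, b):
    """True if neither player can do better by unilaterally switching strategy."""
    for alt in "CD":
        if PAYOFFS[(alt, b)][0] > PAYOFFS[(a, b)][0]:
            return False  # player one would rather play alt against b
        if PAYOFFS[(a, alt)][1] > PAYOFFS[(a, b)][1]:
            return False  # player two would rather play alt against a
    return True

equilibria = [(a, b) for a in "CD" for b in "CD" if is_nash_equilibrium(a, b)]
print(equilibria)  # [('D', 'D')]
```

The only equilibrium is mutual testimony, even though mutual silence pays both players better: exactly the instability described above.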

The tragedy of the commons is a multi-player game with a similar internal logic. There is a finite common resource which is used by a group of people. There is enough for everyone, provided they don't waste it. None of them know how much the others are using, but all of them know that the resource is finite. It is in everyone's interests that it doesn't run out (so this is a positive sum game). There is an optimal strategy, which is that everyone uses only what they need: then they all benefit, because the resource will not run out. But if any one of them thinks there is a chance that any other will not adopt this strategy, then it pays to take as much as they can as quickly as possible. So the optimum is not a stable equilibrium, and the net effect is that everyone adopts the less-than-optimal strategy of grabbing all they can, because the ones who don't will have nothing when it runs out.
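The dynamics can be seen in a toy simulation (all the numbers here are illustrative assumptions, not part of the original game): a stock that regrows enough to sustain everyone taking one unit a round, but not enough once a few players start grabbing three.

```python
def run_commons(stock=100.0, rounds=30, restrained=10, grabbers=0, regrowth=10.0):
    """Toy commons: each round every user draws from the shared stock, which
    then regrows by a fixed amount. A restrained user takes 1 unit (sustainable
    if everyone does it); a grabber takes 3. Grabbers are served first,
    modelling a race for the resource."""
    n = restrained + grabbers
    payoffs = [0.0] * n
    for _ in range(rounds):
        for i in range(n):
            want = 3.0 if i < grabbers else 1.0
            take = min(want, stock)   # you can't take what isn't there
            stock -= take
            payoffs[i] += take
        stock += regrowth
    return stock, payoffs
```

With ten restrained users the stock holds steady and everyone collects one unit every round; replace two of them with grabbers and the stock is run down until, in the late rounds, some restrained users get nothing at all.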

Now I can demonstrate something useful. Logic allows you to analyse a problem like this systematically. Remember, the theory is true only in so far as actual experience conforms with the premisses. So all we need to do is go through the premisses one by one and see what we can change about the situation to make each premiss false; then the theory won't hold. So let's go:

Premiss number one: it's a multi-player game. Get rid of the other players! Or make it not a game. One of the assumptions of game theory is that everyone is out to win. Game theory doesn't apply if this isn't the case. (Well, there is a technicality: if everyone tries to lose then it is still a game, just with the scoring system upside-down.) The way to stop it being a game is to have a significant proportion of the people not playing to win. Then even if several people abuse the resource there is still some net benefit.

Premiss number two: the resource is finite. That's easy, just make it infinite! That sounds silly, but I'll come back to this later.

Premiss number three: there is enough for everyone, provided they don't waste it. So the problem goes away if there is not enough for everyone even when they don't waste it! This is because then the unstable optimal strategy is not an option. The result is the same: the people will use as much as they can as quickly as they can, but there was no better strategy available. This actually tells us something: the ironic existence of the unstable optimal strategy is not really material. We can't fault the rationality of the people if they behave the same way even when the optimal strategy isn't available. So an unstable optimal strategy is not really worth having.

Premiss number four: none of them know how much the others are using. So any way we can falsify this will solve the problem. Put people's water meters on public display, for example. Notice that this isn't a matter of shaming people: that is not how the game is described. The only way to influence people's behaviour is to change the payoff. Shame isn't part of the abstraction; people's sole motivation is reward. This means that when the meters are on display it can be anonymous: all they need to know is that no-one else is abusing the resource, so they can confidently not abuse it themselves, knowing that they won't lose out as a result.

Premiss number five: all of them know the resource is finite. This is like premiss three. If some of them don't know the resource is finite then the unstable optimal strategy is no longer available and the ironic element is gone, but the result is the same: everyone uses as much as they can as fast as they can.

Premiss number six: it's in everyone's interests that the resource doesn't run out. Make it in everyone's interest that the resource does run out! Maybe this can be turned into a method for getting people to clear up a finite amount of rubbish as quickly as possible. This is also the premiss that "government regulation" operates against: by fining a community for overgrazing, for example.

That's it for the premisses. But there is another way to get rid of the problem, and that is to change the game so that the optimal strategy is no longer unstable. This could be done by getting people to make commitments to each other. This happens naturally in many communities if there's any sense of inter-dependence. Then everyone has confidence that the others won't abuse the resource, because everyone knows everyone else is committed to looking after it.

There are other ways we can change the theory but they're not obvious because I have not been very explicit about the statement of the problem. The missing premiss is the scoring system. Every game must have a payoff matrix which defines precisely how to compute the different rewards (or penalties in negative sum games). I didn't write out the payoff matrix, but it would be something like: each person scores 1 per resource unit per time unit for the resource they take until it runs out. Then we could define the finite resource as a stream of X resource-units per time-unit.

Now we have another premiss to change, and we could do all sorts of things here. We could adjust the gain so that it is greater when the group as a whole is using something like the reasonable amount. Or we could reduce the gain for people who use more and give it to the people who use less. These all correspond to different ways to tax and reward people: they rely on central control and monitoring. They all effectively make the unstable equilibrium stable.
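A sketch of the "tax the over-users" idea, as a two-player game with illustrative payoffs (C = restrained use, D = over-use): with no fine the only stable equilibrium is mutual over-use, but subtracting a stiff enough fine from every over-user's payoff makes restraint the stable equilibrium.

```python
def payoffs_with_fine(fine):
    """Illustrative payoff matrix: C = restrained use, D = over-use.
    A fine is deducted from every over-user's payoff."""
    return {
        ("C", "C"): (3, 3),                    # both restrained: sustainable
        ("C", "D"): (0, 5 - fine),             # the over-user grabs the surplus
        ("D", "C"): (5 - fine, 0),
        ("D", "D"): (1 - fine, 1 - fine),      # both grab: resource runs down
    }

def equilibria(p):
    """Strategy pairs where neither player gains by a unilateral switch."""
    return [(a, b) for a in "CD" for b in "CD"
            if all(p[(x, b)][0] <= p[(a, b)][0] for x in "CD")
            and all(p[(a, y)][1] <= p[(a, b)][1] for y in "CD")]

print(equilibria(payoffs_with_fine(0)))  # [('D', 'D')] -- no fine: over-use is stable
print(equilibria(payoffs_with_fine(3)))  # [('C', 'C')] -- stiff fine: restraint is stable
```

The fine changes nothing about the players' rationality; it only changes the payoff matrix, which is exactly why it works within the terms of the game.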

Now let's get back to the "make the resource infinite" idea, which is one I favour. I actually do not believe that there are any finite resources. The reason for this is that people adapt. This takes a more abstract view of 'resource' to mean a type of resource, not a specific one. For example, if a community runs their well dry they will typically find water elsewhere, or relocate! I don't mean to suggest that this is the answer to the world's water problems: I am giving an example of adaptation that I hope shows what I mean by "make the resource infinite". When a community runs out of wood they might use solar power, or more efficient stoves. The running out motivates changes that might not otherwise have happened. What is interesting here is that all these instances of adaptation are instances of abstraction: when a resource runs out, look for another instance of the same class of stuff.

This seems to be the opposite of what I argued earlier, when I deduced that abstract ideas were a barrier to adaptation. So something is wrong with either this argument or the one I gave before, which I have forgotten! This is another feature of logic: logic does not contain contradiction. If you deduce a contradiction then you have made a mistake. Until you find the source of the error nothing you say can have any meaning.

Now I have got it: the earlier argument was that abstract ideas were a barrier to adaptation; this argument is not the negation of that one, because it says that abstracting is the process of adaptation. Abstract ideas are the results of the process of abstraction. So these are consistent conclusions: fixed abstract ideas are a barrier to adaptation, and adaptation itself can be described as abstracting.

The thing to note is that because we analysed an abstract description of the problem (we didn't say which community it was, nor what the resource was, nor what the amounts were), we have identified potential solutions to many quite different problems. What we have done is produce a theory: we did this by analysing just the abstract form of the problem, ignoring the particulars, which is quite literally ignoring the meaning of the statements: we said "resource" and "players" without saying what they really were. We can now take this analysis to any actual problem and translate the abstract solutions we found into that domain, to get possible solutions to the specific problem.

But what is probably more useful is that by abstracting just the salient features of the problem we were able to analyse it systematically without being distracted by irrelevancies. For example, we discovered positive uses for this kind of problem: sometimes you want people to do something as quickly as possible. We also realised that it is not actually a tragedy of the mis-application of reason: the tragedy is really just the reward scheme, which pays people to waste the resource, because they behave this way even when the optimal strategy is not available.

It is very unlikely that one would have thought of all this if one were analysing just one specific case, say, a community in Gwent that had overgrazed the common. But when we considered the problem as an abstract logical theory we had the bare bones all laid out before us, and it was completely obvious.

Logic is just common sense made rigorous. This is because the rules of logic were abstracted from actual valid common-sense arguments. When we use logic we are just using common sense, but we are able to do it better because we aren't distracted by particulars that are irrelevant. We can also reason about many different problems at once. This is exactly the way that a calculator can calculate the sum of seven plus four sheep at the same time as calculating the sum of seven plus four apples. The words seven and four don't mean anything in particular: the calculator just does the sum based on the form of the numbers. It doesn't have to know what they mean.

You may feel that you use logic whenever you try to persuade someone of something by reason, but this isn't true. You only use logic when you reason about reasoning.

If you have followed all this then you will be able to explain exactly what Bertrand Russell meant when he said that mathematics is "the subject in which we never know what we are talking about, nor whether what we are saying is true".

You will also be able to explain why you were wrong when you wrote "and this fact is true until something or someone discovers another fact that negates the first one."

And for your last assignment: you will be able to explain why I say that no, I don't give logic and science any weight at all as regards the truth.

The clue to all these assignments is: "there are no inexperienced truths." which is something that Brouwer apparently said.

Can you tell that I miss teaching?

Thanks for that. It was a good exercise and it would make a nice basis for a chapter of the book.

Koans are a really good example of mystical nonsense. You can't describe actual experience. The sound of one hand clapping is description without experience, which is the same as logic without interpretation. Experience is something that arises in description: you can't have one without the other.

Ian
