Chapter 13
The Strategy of the Individual
A strategy for seeking co-operation
Whether you are an entrepreneur, a solution builder, a contractor, a middleman, a consultant, an expert or a specialist, building up your own network of contacts when starting out in the Information Age can seem a daunting task. There are so many people, so many choices. Where do you start? How do you get people's attention and interest? How can you find and maintain a permanent network of active contacts? How do you get people to co-operate with you?
This is where it is necessary to have Zen-ness: a conceptual model that helps you to become streetwise on the Net. Such a model comes from a unique experiment conducted in the late 1970s by the political scientist Robert Axelrod. He invited contestants from universities all around the world to take part in tournaments where computer programs would compete against each other to win points according to how successful they were at co-operation. The tournament took the form of a continuous series of plays in a game situation called the Prisoner's Dilemma. This game, due originally to Merrill Flood and Melvin Dresher in about 1950, depends upon the participants co-operating with each other in order to get the best results.
For readers who want to go through the mechanics and details of the tournament form of the Prisoner's Dilemma, the rules and structure of the games are shown in figure 12.1.
In the simplified version of the Prisoner's Dilemma chosen for the tournaments, programs repeatedly interacted with each other on a round-robin basis, each deciding whether to co-operate or defect. Points were awarded according to the decisions made.
Calling one program A and the other program B, points were awarded as follows:
1) If program A decides to co-operate, it will get 3 points if program B also co-operates, but 0 points if program B defects.
2) If program A defects, it will get 5 points if program B chooses to co-operate, but only 1 point if program B defects.
As the programs are not allowed to communicate, it looks to program A as if the best policy is to defect: whatever program B decides, defection pays more (5 points rather than 3 if B co-operates, and 1 point rather than 0 if B defects).
However, this situation would look identical to program B, so, if they both made what would seem to be the best decision, they would each continuously decide to defect and each continuously receive only one point at each play. Clearly, it would be much preferable if they could somehow come to an agreement to co-operate, whereupon they would both get 3 points every time.
Figure 12.1
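The payoff rules above can be sketched as a small look-up table. The point values (3, 0, 5 and 1) come straight from the text; the dictionary layout and function name are my own illustrative choices.

```python
# Payoff table for one play of the tournament's Prisoner's Dilemma.
# 'C' means co-operate, 'D' means defect.
PAYOFF = {
    ('C', 'C'): (3, 3),  # both co-operate: 3 points each
    ('C', 'D'): (0, 5),  # A co-operates, B defects: A gets 0, B gets 5
    ('D', 'C'): (5, 0),  # A defects, B co-operates: A gets 5, B gets 0
    ('D', 'D'): (1, 1),  # both defect: only 1 point each
}

def score(a_move, b_move):
    """Return (points for A, points for B) for one play."""
    return PAYOFF[(a_move, b_move)]
```

Note how the dilemma falls out of the numbers: whatever B does, A scores more by defecting (5 > 3 and 1 > 0), yet ten plays of mutual defection earn each side only 10 points, against 30 for ten plays of mutual co-operation.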
The computer programs competed in thousands of these Prisoner's Dilemma games with the objective of trying to win the most points. They were not allowed to communicate directly with each other but were allowed to see the results of all previous plays. Some of the programs in the tournament were highly complicated, but the winner was the simplest program of all: a program called TIT FOR TAT. This program would always decide to co-operate with another program at a first encounter. At each succeeding encounter, it would copy the decision made by its opponent in the previous play.
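The TIT FOR TAT rule is simple enough to express in a few lines. This is a minimal sketch of a repeated match, using the payoff values (3, 0, 5, 1) given earlier in the chapter; the function names and the choice of ten rounds are my own assumptions, not details of the actual tournament.

```python
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Co-operate on the first encounter; afterwards copy the
    # opponent's previous decision.
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    # A maximally selfish strategy, for comparison.
    return 'D'

def play_match(strategy_a, strategy_b, rounds=10):
    """Run repeated plays and return the two total scores."""
    history_a, history_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each side sees the other's past moves
        move_b = strategy_b(history_a)
        pts_a, pts_b = PAYOFF[(move_a, move_b)]
        total_a += pts_a
        total_b += pts_b
        history_a.append(move_a)
        history_b.append(move_b)
    return total_a, total_b
```

Two TIT FOR TAT players settle into permanent co-operation (30 points each over ten rounds), while a constant defector playing TIT FOR TAT gains only on the very first play before being matched defection for defection.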
Despite the sophistication of many of the other programs, this simple strategy succeeded over all others across a variety of variations of the game. It even succeeded when the designers of other programs were made aware of the TIT FOR TAT strategy. The only superior strategy to emerge was a variation of TIT FOR TAT that allowed forgiveness: instead of invariably copying a defection, it would allow a competitor to make a few defections before following suit. This improvement on the original program allowed other programs to learn from the mistake of defecting, giving them another opportunity to co-operate once they had learnt that co-operating was a better tactic than defecting.
Axelrod used the results from these Prisoner's Dilemma tournaments to explain how, in a world of people selfishly looking after their own interests, co-operation could emerge spontaneously. (Note: Robert Axelrod collaborated with the distinguished evolutionary biologist W. D. Hamilton to produce a technical paper, "The Evolution of Cooperation in Biological Systems". This paper eventually formed the basis of Axelrod's book "The Evolution of Cooperation", first published in 1984 by Basic Books of New York, with a reprint version published in 1985.)
The original paper, written in 1981 by Axelrod and Hamilton, was an explanation of how altruism could have evolved (this was an enigma for many years and was often used to refute Darwin's theories, which were based upon survival of the fittest). Altruism does seem to be self-defeating if used in a competitive environment, but what the tit-for-tat strategy clearly demonstrated was that it could have a profoundly beneficial effect at a group and an environmental level.
Tit-for-tat involves an individual player using the same tactic that an opponent used in the previous exchange, so, if all players start off by being generous, the whole population will soon be continually co-operating with each other. If a cheat arrives in such an environment, they will gain from the first exchange (taking a favour without honouring a return obligation), but after that the cheat will consistently do badly.
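A small simulation illustrates the point made above: a lone cheat gains on the first exchange with each reciprocator but does badly thereafter. The payoff values are the ones from the tournament rules; the population size of six and the ten-round matches are my own illustrative choices.

```python
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    return 'C' if not opponent_history else opponent_history[-1]

def cheat(opponent_history):
    return 'D'  # always takes the favour, never returns it

def match(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# A generous population of five reciprocators plus one cheat,
# everyone playing everyone else once.
players = [tit_for_tat] * 5 + [cheat]
totals = [0] * len(players)
for i in range(len(players)):
    for j in range(i + 1, len(players)):
        si, sj = match(players[i], players[j])
        totals[i] += si
        totals[j] += sj
# → each reciprocator ends with 129 points; the cheat with only 70
```

The cheat wins its first play against every reciprocator (5 points against 0) but then faces nothing except retaliation, and finishes bottom of the table.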
Axelrod then looked at many real-life situations that had similarities to the Prisoner's Dilemma, studying contexts as diverse as biological systems, the behaviour of US senators and trench warfare in the 1914-18 World War. He concluded that the TIT FOR TAT strategy emerged spontaneously wherever there was a strong likelihood of further interactions. He also observed that a TIT FOR TAT strategy could emerge successful even if a population of competitors was predominantly using the more selfish strategy of constant defection.
This, it seems, is the key to understanding how to acquire co-operation: an exchange has to be associated with the expectation of many similar exchanges in the future. It is in keeping with the essence of game theory: making each play as if the identical situation is going to be repeated again and again.