Chapter 12
A question of trust
Dealing with the problem of trust
During the writing of the past few chapters, the reviewer comments have repeatedly brought up the question of trust. This was felt to be the most crucial element in the success of any e-business venture. This is absolutely true, but, in the world of the Internet, where there is often no face-to-face communication and deception is so easy, it is necessary to use rules and standards that are completely foreign to the conventional world of business and management.
Trust in the communication environment of the Internet has to be totally disassociated from the conception of trust used in the world of bricks and mortar. Subjective judgements have no place. Reputations and promises have little credibility. Friendships are uncertain, advice is always suspect, reliability is always questionable. Allegiances are subject to reversals. Defection is always a possibility. It is easy for people to renege on a deal if they get a better offer.
Conventional thinking would see this as a state of affairs in which it would be impossible for anybody to cooperate or collaborate with anyone else. However, it is no more than a condition of uncertainty, and uncertainty is easily accommodated within the framework of Game Theory.
The trick is to dispense with the notion of trust and in its place substitute the notion of risk. At a stroke, this simple expedient removes all the problems associated with trust. Instead of considering how much you trust somebody, you estimate the risk factors involved and use those in a strategy for collaborative associations. To explain this, two models might be useful: a game of Solo Whist and a game of Tit-for-tat.
In the game of Solo Whist, all the cards are dealt out to four players at a table such that they each have thirteen cards. At each trick (round of play), every player plays a card of their choice and the winner of the trick is the player who has played the highest card. In this way it is very similar to bridge, with players working out a strategy to win as many tricks as they can with the cards at their disposal.
The game starts with each player proposing how many tricks they will win. The player who proposes to win the most tricks is the one who plays to win (or lose) the points associated with that call. The highest proposal is, of course, to win every one of the thirteen tricks in the hand. Surprisingly, the second highest call is not to win twelve tricks, but to win no tricks at all. This is a special call named "Spread Misere": the word "Spread" indicating that the caller must lay down their cards at the start of play so that all the other players can see the hand they are playing against.
It is this element of playing with open cards that is important to any business strategy on the Internet. In fact, it is a good strategy in any business situation for anyone who seeks to create a relationship of trust.
Without entrepreneurial experience, the idea of playing a competitive game with all your options exposed may seem to be a foolish strategy. People can always anticipate your next move. They can know how you are playing the game. But, if you stop to think about this for a moment, who would you rather collaborate with: somebody whose moves you can anticipate, or someone who might pull an unexpected trick out of the bag?
When a player calls "Spread Misere", they will know that every card they are able to play will be obvious to everyone else. They will therefore have to feel certain that, however the others play, they will not be able to defeat the call. They can only have confidence in the call if they have anticipated all the possible ways in which the other players can combine to defeat them. Such a strategy would be disastrous in a zero-sum game, where winners win what losers lose, but in a non-zero-sum game, where everyone can be a winner, it is likely to be the optimum strategy.
In a game of cards, such as the game of Solo Whist, the term "trust" is out of place. For example, if you see somebody play a card from their open hand after they have called "Spread Misere" you wouldn't say that you trusted them to make that play: you'd say you expected them to make that play because that was the sensible card to play. It is this concept of "the sensible way to play the game" that replaces the concept of trust and it is this same concept that replaces the element of trust in dealing with people on the Internet.
For this replacement for trust to be effective, an auteur, or entrepreneur, does not have to convince people that they are trustworthy; instead they have to convince people that they are skilled at playing the game of risk. To do this, they have to make all the other players aware of the options open to them, so that everyone can see that they choose the most sensible options and are therefore skilled in playing the game. In this way a collaborator can be "trusted" to do what is expected of them.
To fully understand why such a strategy works, it is necessary to understand another game: the game of "Tit-for-Tat". This was a strategy that emerged from a competition set by the political scientist Robert Axelrod who, in the 1970s, invited academics from a number of universities to take part in a tournament to see who could devise the best computer program to play "The Prisoner's Dilemma". This is the classic Game Theory scenario where two criminals are arrested for a crime and are taken to separate cells to be questioned.
Each criminal has the choice of blaming the other, and getting off scot-free if the other stays silent, or pleading not guilty and being charged with a lesser offence that carries a small penalty. Without knowing what the other criminal may say, it would seem that the best strategy for each of them is to blame the other. However, if they both blame each other they will both receive a heavy sentence. The best joint outcome is for each to plead not guilty and trust that the other will have the sense to do the same.
In the computer version of the game, the options open to each computer program are to cooperate or defect. This is described more fully in "The Entrepreneurial Web" but, in essence, when two programs compete against each other there are four possible outcomes:
1) A program wins 3 points if it decides to cooperate and the other program also decides to cooperate.
2) A program wins no points if it cooperates but the other defects.
3) A program wins 5 points if it decides to defect and the other program decides to cooperate.
4) A program wins 1 point if it decides to defect and the other program also decides to defect.
To each of the two competing programs, the option to defect seems to be the optimum choice, but if they both defect they end up with lesser rewards than if they had both decided to take the seemingly less favourable option of cooperating.
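To make the arithmetic of the dilemma concrete, here is a minimal sketch in Python using the standard payoff values from Axelrod's tournament (3 for mutual cooperation, 5 for a lone defection, 1 for mutual defection, 0 for cooperating against a defector); the names used are purely illustrative.

    # Payoff table for a single encounter, using the standard values from
    # Axelrod's tournament: (my move, their move) -> (my points, their points).
    # 'C' = cooperate, 'D' = defect.
    PAYOFFS = {
        ('C', 'C'): (3, 3),  # mutual cooperation
        ('C', 'D'): (0, 5),  # I cooperate, the other defects
        ('D', 'C'): (5, 0),  # I defect, the other cooperates
        ('D', 'D'): (1, 1),  # mutual defection
    }

    def score(my_move, their_move):
        """Return my points for a single encounter."""
        return PAYOFFS[(my_move, their_move)][0]

    # The dilemma in miniature: whatever the other program does,
    # defecting scores more in that single encounter...
    assert score('D', 'C') > score('C', 'C')   # 5 > 3
    assert score('D', 'D') > score('C', 'D')   # 1 > 0
    # ...yet if both programs follow that reasoning, each collects
    # 1 point instead of the 3 points mutual cooperation would give.
    assert score('D', 'D') < score('C', 'C')   # 1 < 3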
The tournament was played on a round-robin basis, with each of the programs playing against every other many times. Despite the sophistication of some of the programs that took part, the winner was the simplest program of all: a program named Tit-for-tat. This program would cooperate when it first encountered another program, but at every subsequent meeting it copied whatever choice the other program had made in the previous encounter.
Although this "Tit-for-tat" program was an algorithm, it was effectively looking to establish unbroken sequences of mutual cooperation. If other programs could detect its pattern of play, they could settle into a strategy of mutual cooperation that would enable them both to gain more points. In other words, it was looking for other programs that would play the game sensibly: programs that could be trusted to choose the option most beneficial to them both in the long run.
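Expressed as a rough sketch that builds on the PAYOFFS table above, the Tit-for-tat rule and a round-robin tournament of repeated encounters might look something like this (the rival strategy and the 200-round match length are illustrative assumptions, not Axelrod's actual entries):

    def tit_for_tat(opponent_history):
        """Cooperate at the first encounter, then copy whatever the
        other program played in the previous encounter."""
        return 'C' if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        # A simple rival strategy, included only for illustration.
        return 'D'

    def play_match(strategy_a, strategy_b, rounds=200):
        """Play two strategies against each other for a fixed number
        of rounds and return their accumulated scores."""
        history_a, history_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(history_b)   # each side sees only the other's past moves
            move_b = strategy_b(history_a)
            points_a, points_b = PAYOFFS[(move_a, move_b)]
            score_a += points_a
            score_b += points_b
            history_a.append(move_a)
            history_b.append(move_b)
        return score_a, score_b

    def round_robin(strategies, rounds=200):
        """Every strategy plays every other strategy; the accumulated
        totals decide the overall winner."""
        totals = {name: 0 for name in strategies}
        names = list(strategies)
        for i, name_a in enumerate(names):
            for name_b in names[i + 1:]:
                score_a, score_b = play_match(strategies[name_a], strategies[name_b], rounds)
                totals[name_a] += score_a
                totals[name_b] += score_b
        return totals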
This simple strategy was only ever beaten at subsequent tournaments by a program that was similar but allowed the occasional exception, playing a cooperation even when the other program had played a defection in the previous encounter. This is the equivalent of occasionally forgiving another program for making a mistake, enabling a fresh start to be made and perhaps a new relationship of mutual cooperation to be established.
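This forgiving behaviour, often described as a "generous" Tit-for-tat, can be sketched as a small variation on the function above; the one-in-ten forgiveness rate is an illustrative figure only, not the value used by any actual tournament entry.

    import random

    def generous_tit_for_tat(opponent_history, forgiveness=0.1):
        """As Tit-for-tat, but occasionally cooperate even after the
        other program's defection, allowing a fresh start to be made."""
        if not opponent_history or opponent_history[-1] == 'C':
            return 'C'
        return 'C' if random.random() < forgiveness else 'D'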
Robert Axelrod used these results to describe similar activity among humans in the real world. He showed how mutual cooperation emerges spontaneously through similar strategies in all kinds of business, social and political scenarios.
His empirical observations revealed a highly critical factor: mutual cooperation is far more likely to take place when there is a good chance of further encounters, such that the anticipated cumulative rewards from future cooperation can be expected to provide more gain than an immediate defection.
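A back-of-the-envelope calculation, again assuming the standard payoff values of 3, 5, 1 and 0, shows why this matters. If w is the probability of meeting the same partner again, always cooperating with a Tit-for-tat player earns 3 points per encounter for as long as the relationship lasts, while defecting earns a one-off 5 points followed by 1 point per encounter thereafter. The sketch below compares the two streams.

    def expected_total_cooperating(w):
        """Always cooperating against Tit-for-tat: 3 points per encounter,
        with probability w of a further encounter.
        Geometric series: 3 + 3*w + 3*w**2 + ... = 3 / (1 - w)."""
        return 3 / (1 - w)

    def expected_total_defecting(w):
        """Always defecting against Tit-for-tat: 5 points at the first
        encounter, then 1 point per subsequent encounter.
        5 + (w + w**2 + ...) = 5 + w / (1 - w)."""
        return 5 + w / (1 - w)

    # With these payoffs, cooperation overtakes defection once the chance
    # of a further encounter passes one half: 3/(1-w) > 5 + w/(1-w)
    # simplifies to w > 0.5.
    for w in (0.2, 0.5, 0.8):
        print(w, expected_total_cooperating(w), expected_total_defecting(w))

At w = 0.2 defection pays (5.25 points against 3.75), at w = 0.5 the two are equal, and at w = 0.8 cooperation is clearly the more profitable choice (15 points against 9).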