Tactics: Cooperation


It's interesting how pursuing self-interest is viewed as a 'dark' thing, really. Granted, some do say that people help others out of self-interest, or at least to make themselves feel better, which could also be viewed as egotistic to some degree. But that argument tends to devolve into generalised claims that 'everything is done for the ego', which can't really be proven: it makes statements about our deepest urges and motivators without any real evidence, so in the end you can either accept it or not.

Anyway, I've always had a strange belief that codes of honour and ethics could be explained in terms of self-interest. Many ethical codes were useful at some point in time; not necessarily useful for everyone, but certainly useful for the people who installed them. These codes do not grow obsolete on their own; either the interest group endorsing them loses interest in doing so, or another interest group starts installing a new one for its own needs. The Church's ideologies didn't vanish because people thought better (or worse) of them; they vanished because new powers propagated new ideologies. Money, a new deity, didn't emerge because people thought it was a good idea; it emerged because someone, or something, profits from it. The masses still follow the dominant ideology and the dominant code of ethics, and certainly not because they thought it through and said "oh, this is good". And, just as certainly, someone profits from them doing so. However, relations between ethics and profit aside, let's embark on a little experiment. It's not mine, of course; I can't be arsed to do anything like that, slacker that I am. I stole it from somewhere.

The belief that relationships between people or groups are competitive, with one side gaining at the expense of the other, is firmly entrenched in many minds. It's often compared to playing chess or other board games. If you've played Risk, you'll remember that you gain territory by taking it from others. In Monopoly, you mostly earn money at the expense of others. Indeed, in these games with fixed rules it's very hard to do otherwise, and the rule that you gain only as much as the other player loses is pretty much correct. It's interesting that most opponents of pursuing self-interest are in fact opposed to exactly this: the competitive (rather than cooperative) attitude towards others. Of course, many people who claim to be interested only in their own profit take that attitude without thinking twice.

However, I think it's plain that relationships between people, competition between political parties, and relations between countries are far more complex, and cannot be explained by that simple 'I win, you lose' theory. An old example is found in the classic prisoner's dilemma. Two suspects in the same crime, let's call them A and B, are held in separate cells with no way to communicate. The prosecution offers both suspects a deal: if one testifies against the other (we'll call this option 'cheating', since it is) while the other stays silent, the one who testifies goes free and the other gets 5 years in prison. If both testify against each other, both get 4 years. If both refuse to testify, both get 2 years.

The paradox of this dilemma is basically this: egotistic self-interest and logical thinking tell us to cheat. However, it's only reasonable to expect that the other person will cheat too, using the same line of thinking. And yet your egotistic self-interest would profit far more from mutual cooperation. Still, how do you get people to cooperate? After all, criminals aren't exactly famous for their moral... eh, what are they called... ah, scruples, although in this particular example they could surely use some. It also gives us some insight into why cheating is punished in society. Then again, maybe not. Who punishes the state for cheating poor criminals?
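
To make the paradox concrete, here's a minimal Python sketch of the payoff logic, using the sentences from the story above (the dictionary layout and names are mine, just for illustration):

```python
# Years of prison a suspect gets, indexed by (my choice, their choice);
# 'cheat' means testifying against the other suspect.
SENTENCE = {
    ("cheat", "cooperate"): 0,      # I testify, they stay silent: I walk free
    ("cheat", "cheat"): 4,          # we both testify
    ("cooperate", "cooperate"): 2,  # we both stay silent
    ("cooperate", "cheat"): 5,      # I stay silent, they testify
}

# Whatever the other suspect does, cheating earns me a shorter sentence...
for their_choice in ("cooperate", "cheat"):
    assert SENTENCE[("cheat", their_choice)] < SENTENCE[("cooperate", their_choice)]

# ...yet if we both follow that flawless logic, we serve 4 years instead of 2.
assert SENTENCE[("cheat", "cheat")] > SENTENCE[("cooperate", "cooperate")]
```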

It's also a great way to describe trade. Imagine this situation: two smugglers agree to meet at a certain place to exchange drugs for money. They hand over their goods without really checking what's inside the cases. The option of cheating is clearly there. If one of them cheats and the other cooperates, the cheater earns, say, 100 thousand and the other loses 20. If both of them cheat, neither earns anything. If both of them cooperate, both earn 20 thousand.

The rule is, basically, that the profit from cheating a cooperator is greater than the profit if both cooperate, which is greater than the profit if both cheat, which in turn is greater than the profit if you cooperate and the other cheats you.
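
In the usual game-theory shorthand these four payoffs are called temptation, reward, punishment and the sucker's payoff, and the rule above is just the ordering T > R > P > S. A quick sketch (the function name and keyword arguments are mine) checking both examples:

```python
def is_prisoners_dilemma(t, r, p, s):
    """True when the payoffs follow the rule above: temptation (cheating a
    cooperator) > reward (mutual cooperation) > punishment (mutual
    cheating) > sucker's payoff (cooperating with a cheater)."""
    return t > r > p > s

# The smugglers' exchange, in thousands:
assert is_prisoners_dilemma(t=100, r=20, p=0, s=-20)

# The two suspects, with prison years negated so that bigger is better:
assert is_prisoners_dilemma(t=0, r=-2, p=-4, s=-5)
```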

Things get really interesting when this sort of exchange repeats itself many times. Then the one-time profit from cheating is dwarfed by the long-term profit from cooperation. Cheaters lose out not through some kind of punishment, but by not being allowed to participate in the exchange again. Eventually the strategy based on cooperation wins out in the long term and kills off the other strategies.
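
With the smugglers' numbers the arithmetic is easy to see. Assuming, just for illustration, that a cheated partner never trades with you again:

```python
future_exchanges = 10  # a hypothetical number of repeat deals

cheat_once = 100                           # 100 thousand now, then nothing
cooperate_always = 20 * future_exchanges   # 20 thousand per deal

print(cheat_once, cooperate_always)  # 100 vs. 200: past 5 deals, honesty pays
```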

A good example of this was a tournament conducted by Robert Axelrod in 1979. The competitors, of course, were computer programs, and the goal was simple: to gather as many points as possible under the following rules. Each program plays 200 rounds in a row against every other program. In each round a program can choose either cooperation or cheating; for cheating a cooperator it gets 5 points and the other gets 0, if both cooperate both get 3 points, and if both cheat both get 1 point.
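
A minimal sketch of those tournament rules in Python. The names, and the simplification that each strategy sees only the opponent's past moves, are mine; Axelrod's actual setup had a few extra wrinkles:

```python
import itertools

PAYOFF = {  # (my move, their move) -> my points, per the rules above
    ("cheat", "cooperate"): 5,
    ("cooperate", "cooperate"): 3,
    ("cheat", "cheat"): 1,
    ("cooperate", "cheat"): 0,
}
ROUNDS = 200

def play_match(strategy_a, strategy_b):
    """Play 200 rounds; each strategy is a function of the opponent's
    past moves. Returns the two total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(ROUNDS):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tournament(strategies):
    """Round-robin: every program plays every other; total points win."""
    totals = {name: 0 for name in strategies}
    for (a, strat_a), (b, strat_b) in itertools.combinations(strategies.items(), 2):
        score_a, score_b = play_match(strat_a, strat_b)
        totals[a] += score_a
        totals[b] += score_b
    return totals
```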

The winner, to nearly everyone's surprise, was the smallest program of all, 'tit for tat', submitted by Anatol Rapoport. How did it operate? Simply: it started every match with cooperation; if the opponent cheated, it would cheat back the next turn. If the opponent resumed cooperating, it would revert to cooperation. It was the best program by far. Why?
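
The whole strategy fits in a few lines. Here it is against a hypothetical always-cheating opponent, reusing the play_match sketch from above:

```python
def tit_for_tat(opponent_history):
    """Cooperate first; after that, simply repeat the opponent's last move."""
    if not opponent_history:
        return "cooperate"
    return opponent_history[-1]

def always_cheat(opponent_history):
    return "cheat"

print(play_match(tit_for_tat, always_cheat))
# (199, 204): tit for tat loses this match, but only narrowly, and it never
# gets exploited twice in a row.
```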

Good programs shared three really important traits. First, tit for tat cooperated by default. In fact, only the cooperative programs, the ones which started off on a good note, got good scores.

Second, it reacted to cheating quickly and penalised it. This matters: programs which acted independently of the opponent's moves didn't do very well, and neither did programs with complex algorithms that failed to respond promptly and efficiently to cheating.

Third, the ability to forgive, or, in terms of the experiment, to resume cooperation once the other competitor stopped cheating, also proved crucial. Ploughing on with a conflict after the other party no longer wants to pursue it isn't really sound behaviour, according to the mathematics, at least.

Similar tournaments were conducted later, and almost every time the winner was 'tit for tat'. No program lacking the three traits above made it to the top.

Even more significant was the 'evolutionary tournament', composed of a series of Axelrod's tournaments. The number of 'offspring' of each program was proportional to the number of points it scored in the previous tournament; the goal was to see what kinds of programs would survive in such a simulation of evolution. Almost all of the survivors were cooperative programs, and the best of them was tit for tat. It didn't win by defeating others; in any single match it always scored the same as or fewer points than its opponent, but it couldn't be cheated without repercussions. It won by enforcing cooperation.
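
A rough sketch of that evolutionary idea, again reusing play_match from above. This is a replicator-style simplification (the names and the 50-generation default are mine, not Axelrod's exact procedure): each generation, a strategy's share of the population grows in proportion to how well it scores against the current mix:

```python
def evolve(strategies, generations=50):
    """Start with equal population shares; each generation, grow a
    strategy's share in proportion to its average score against the
    current population."""
    shares = {name: 1.0 / len(strategies) for name in strategies}
    for _ in range(generations):
        fitness = {}
        for name_a, strat_a in strategies.items():
            # Average score against the population, weighted by how
            # common each opponent currently is.
            fitness[name_a] = sum(
                shares[name_b] * play_match(strat_a, strat_b)[0]
                for name_b, strat_b in strategies.items()
            )
        norm = sum(shares[n] * fitness[n] for n in strategies)
        shares = {n: shares[n] * fitness[n] / norm for n in strategies}
    return shares
```

Run with a mix of nice and nasty strategies, the usual result is the one described above: cheaters feast on unconditional cooperators at first, then starve once their victims die out, while tit for tat's share keeps growing.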

Cooperation, punishment and forgiveness may or may not be viewed as ethical; in most cases, however, they can be very profitable. It may be that they're viewed as ethical precisely because it's in our interest to view them that way. In any case, this is an interesting example of how behaviour based on pure self-interest can look like behaviour based on ethics, even though it isn't. It also provokes questions like: "Could it be that the world's dominant codes of ethics are dominant exactly because they're based on pure interests which value nothing but profit of some kind?" Or: "Could it mean that the people (parties, countries, and so on) we see as moral authorities behave the way they do only because they profit from it, and, if the circumstances were different, would stop doing so simply because it's no longer profitable?"

There are plenty of other questions raised here which I can't be bothered to state right now, but my final point would be this: cooperation wasn't invented because it was ethical or morally sound. It was invented because it was, and still is, in your interest (financial or otherwise) to cooperate. At least most of the time.

Tags: Dark Aspect Legacy
