Selfishness and Cooperation

Evolutionary methods can be used to shine a light on the conditions for selfish or cooperative behavior. Imagine a situation where you have to work together with a team of random strangers. The outcome depends on the sum of the individual efforts, but the success is shared equally afterwards. In a computer experiment, we investigated the evolution of cooperative behavior in two scenarios. Players were randomly divided into groups and had the chance to increase their money by paying into a pot where it was multiplied. Each player was controlled by a neural network that determined its betting strategy. Using our evolutionary design tool FREVO, we evolved the behavior in order to maximize the profit for each player. After several rounds, the more successful (thus richer) individuals were allowed to stay in the pool and produce more offspring than the less successful ones.

Game setup (from demesos.blogspot.com)
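
The post does not give the FREVO parameters or the exact selection scheme, so the Python sketch below is only a rough illustration of the setup described above, under several assumptions: each genome is reduced to a single fixed investment level (instead of an evolved neural network controller), groups of six are formed at random, every player may bet up to $20, the multiplied pot is split equally, and the richer half of the population survives and produces mutated offspring. The quadratic pot function in the last line anticipates the second scenario discussed further below and is itself a guess.

    import random

    MAX_BET, GROUP_SIZE, POP_SIZE = 20, 6, 60   # assumed values, not given in the post
    MUTATION = 2.0                              # assumed mutation step size

    def play_group(bets, payout):
        """Split the multiplied pot equally and return each member's net revenue."""
        share = payout(sum(bets)) / len(bets)
        return [share - b for b in bets]

    def evolve(payout, start, generations=300):
        # Genome = a fixed investment level in [0, MAX_BET]; the real experiment
        # evolved neural-network controllers with FREVO instead.
        pop = [min(MAX_BET, max(0.0, random.gauss(start, MUTATION)))
               for _ in range(POP_SIZE)]
        for _ in range(generations):
            order = random.sample(range(POP_SIZE), POP_SIZE)   # random grouping
            wealth = [0.0] * POP_SIZE
            for i in range(0, POP_SIZE, GROUP_SIZE):
                idx = order[i:i + GROUP_SIZE]
                for j, gain in zip(idx, play_group([pop[k] for k in idx], payout)):
                    wealth[j] = gain
            # Selection: the richer half stays and produces mutated offspring.
            ranked = sorted(range(POP_SIZE), key=lambda k: wealth[k], reverse=True)
            parents = [pop[k] for k in ranked[:POP_SIZE // 2]]
            pop = parents + [min(MAX_BET, max(0.0, p + random.gauss(0, MUTATION)))
                             for p in parents]
        return sum(pop) / POP_SIZE              # average investment after evolution

    # Starting from a cooperative population:
    print(evolve(lambda pot: 3 * pot, start=MAX_BET))        # linear pot: decays toward 0
    print(evolve(lambda pot: pot ** 2 / 30, start=MAX_BET))  # quadratic pot: stays near 20

Even in this reduced model, the linear payoff erodes a cooperative starting population, while the super-linear payoff keeps it stable.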

In the first scenario, the payout was the pot times three. So if everybody cooperates, the money you pay in gets tripled: with a maximum bet of $20, this means a $60 return, in other words a $40 revenue. But even if everybody else in your group pays in, it is better to defect: if five out of six cooperate and you keep your money, you get a $50 revenue. Under these conditions, defection turned out to be the only stable strategy. In every system state, individuals with the defecting gene could make more revenue. In other words, ruthless behavior paid off. This is especially interesting when starting with an economic situation where most of the population cooperates. What happens is that defecting players take advantage of the cooperators, and within a few iterations the cooperation strategies vanish, leaving only defecting players.
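
To make the arithmetic explicit, here is a small self-contained check of the linear scenario (group of six, maximum bet of $20, pot tripled and split equally; all numbers taken from the paragraph above):

    # Net revenue of one player in the linear scenario.
    def net_revenue(my_bet, others_bets):
        pot = my_bet + sum(others_bets)
        return 3 * pot / 6 - my_bet      # equal share of the tripled pot minus own bet

    print(net_revenue(20, [20] * 5))     # cooperate among 5 cooperators ->  $40
    print(net_revenue(0,  [20] * 5))     # defect against 5 cooperators  ->  $50
    print(net_revenue(20, [0] * 5))      # cooperate alone               -> -$10
    print(net_revenue(0,  [0] * 5))      # everybody defects             ->   $0

Whatever the rest of the group does, keeping the money yields $10 more than paying in, which is why defection is the only stable strategy here.
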
The situation changed when we introduced a nonlinear “synergy factor” into the payoffs. This meant that the money of cooperating players was not multiplied linearly, but over-proportionally. Assume you are working with some colleagues on a common project, let’s say writing a book. If you alone invest enough time into your chapter, the book still sucks because of the other chapters, which are lame or missing. If half of the authors cooperate, the book might be accepted by a mediocre publisher, but would still not be that promising. But if everybody cooperates, the result is not double the revenue of the 50% case, but much more!
In the experiment, we reflected this by using a quadratic factor in the pot function. Evolving the stable strategies showed that, after some generations of defecting players, cooperation evolved as a stable strategy!
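
The exact pot function of the experiment is not stated in the post, so the quadratic payoff below (pot squared divided by 30, with the divisor chosen only to keep the numbers comparable to the $20 bets) is a hypothetical stand-in that reproduces the qualitative effect: once enough players cooperate, defecting earns less than cooperating, while a fully defecting population still has no incentive to start investing.

    # Net revenue of one player with a quadratic "synergy" pot (hypothetical function).
    def net_revenue(my_bet, others_bets):
        pot = my_bet + sum(others_bets)
        return (pot ** 2 / 30) / 6 - my_bet   # equal share of the super-linear payout

    print(net_revenue(20, [20] * 5))   # cooperate among 5 cooperators ->  $60.0
    print(net_revenue(0,  [20] * 5))   # defect against 5 cooperators  -> ~$55.6
    print(net_revenue(20, [0] * 5))    # cooperate alone               -> ~-$17.8
    print(net_revenue(0,  [0] * 5))    # everybody defects             ->   $0.0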

Emergence and decay of cooperative behavior (note the different time scales)

When comparing the decay from a cooperating society to a selfish one in scenario 1 with the emergence of cooperation from a selfish society in scenario 2, it is interesting that the decay is much faster than the build-up.

So, be careful when tinkering with rules (e.g., taxation of revenues) in an economic system – it might be easier to destroy a working system than to rebuild cooperative behavior.

3 thoughts on “Selfishness and Cooperation”

  1. Very interesting. This is very similar to Robert Axelrod’s Iterated Prisoner’s Dilemma, and his results were similar. If I remember correctly, one of the best strategies he found was also the simplest: Tit for Tat, meaning that each player does what the other player did in the last round (see the sketch below these comments). So if you cooperated last time, I will cooperate this time. Although the best strategy was one of exploitation, where a player would lull another into cooperating, then every once in a while would defect, claiming that he “forgot the rules” or something similar, and then would go back to cooperating.

    But I think his payouts did not compound… they stayed the same each time. He also didn’t have a group of players; it was always two players. Did you ever run the experiment with only two players?

  2. And you are exactly right when it comes to evolutionary methods. I used a Genetic Algorithm in my M.S. thesis to develop strategies for people in a work environment who had the option to 1) do work, 2) lie about doing work, or 3) do nothing. It can be very powerful.

    I haven’t used the FREVO tool. Can you specify parts of the Genetic Algorithm like mutation rate, number of offspring, and crossover points?

    • Thanks for your suggestions!
      Yes, in FREVO you can specify several parameters for the Genetic Algorithm (mutation rate, population size, etc.) as well as for the neural network (number of hidden neurons, fully meshed vs. feed forward, …).
      Repeating the experiment with a different number of players should be no great effort – as soon as I have some time, I will go for it.
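
The Tit for Tat strategy mentioned in the first comment is easy to write down. Purely as an illustration (it is not part of the FREVO experiment), here is a minimal Python sketch of Tit for Tat playing an iterated Prisoner’s Dilemma against an always-defecting opponent, using the standard payoff values T=5, R=3, P=1, S=0:

    def tit_for_tat(my_history, opponent_history):
        # Cooperate on the first move, then copy the opponent's previous move.
        if not opponent_history:
            return "C"
        return opponent_history[-1]

    always_defect = lambda my_history, opponent_history: "D"

    # Payoff of the row player for each pair of moves (C = cooperate, D = defect).
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def iterated_game(strategy_a, strategy_b, rounds=10):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a = strategy_a(hist_a, hist_b)
            b = strategy_b(hist_b, hist_a)
            score_a += PAYOFF[(a, b)]
            score_b += PAYOFF[(b, a)]
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    print(iterated_game(tit_for_tat, always_defect))   # -> (9, 14)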
