The Prisoner's Dilemma Competition

Results from CEC'04 (there are also some pictures)


This page presents the results from the Iterated Prisoner's Dilemma (IPD) competition we organised at CEC'04.


Competition 1

This is a re-run of Axelrod's original experiment. We aim to see whether a tit-for-tat strategy still dominates, or whether somebody can develop a better strategy given that 20 years have passed since the original competition (for example, tit-for-2-tats was claimed to be better than tit-for-tat).

In total we had 223 entries. This was made up of 19 web-based entries, 195 Java-based entries, and 9 standard entries (these being RAND, NEG, ALLC, ALLD, TFT, STFT, TFTT, GRIM, Pavlov).
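To make the baseline strategies concrete, here is a minimal sketch of tit-for-tat and tit-for-2-tats in Java (the language most entries used). The Strategy interface and the class names are illustrative assumptions, not the competition's actual submission interface.

import java.util.List;

// Illustrative interface: true = cooperate, false = defect.
interface Strategy {
    boolean nextMove(List<Boolean> opponentHistory);
}

// Tit-for-tat: cooperate on the first move, then copy the opponent's last move.
class TitForTat implements Strategy {
    public boolean nextMove(List<Boolean> opponentHistory) {
        if (opponentHistory.isEmpty()) return true;
        return opponentHistory.get(opponentHistory.size() - 1);
    }
}

// Tit-for-2-tats: defect only after the opponent defects twice in a row.
class TitForTwoTats implements Strategy {
    public boolean nextMove(List<Boolean> opponentHistory) {
        int n = opponentHistory.size();
        if (n < 2) return true;
        return opponentHistory.get(n - 1) || opponentHistory.get(n - 2);
    }
}

public class StrategySketch {
    public static void main(String[] args) {
        // Opponent cooperated, then defected once: TFT retaliates, TFTT forgives.
        List<Boolean> history = List.of(true, false);
        System.out.println("TFT plays  " + (new TitForTat().nextMove(history) ? "C" : "D"));
        System.out.println("TFTT plays " + (new TitForTwoTats().nextMove(history) ? "C" : "D"));
    }
}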

The payoff matrix we used was as follows (each cell lists the row player's payoff first, then the column player's):

               Cooperate     Defect
  Cooperate    R=3, R=3      S=0, T=5
  Defect       T=5, S=0      P=1, P=1
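To illustrate how this matrix scores a single interaction, here is a small sketch; the class and method names are ours for illustration, not the competition's code.

public class Payoff {
    static final int R = 3; // reward for mutual cooperation
    static final int T = 5; // temptation to defect
    static final int S = 0; // sucker's payoff
    static final int P = 1; // punishment for mutual defection

    // true = cooperate, false = defect; returns {player 1's payoff, player 2's payoff}.
    static int[] score(boolean move1, boolean move2) {
        if (move1 && move2)   return new int[] {R, R}; // both cooperate
        if (!move1 && !move2) return new int[] {P, P}; // both defect
        if (move1)            return new int[] {S, T}; // 1 cooperates, 2 defects
        return new int[] {T, S};                       // 1 defects, 2 cooperates
    }

    public static void main(String[] args) {
        int[] result = score(true, false);
        System.out.println("C vs D scores " + result[0] + " and " + result[1]); // 0 and 5
    }
}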

We used a mean of 200 iterations per round, with a standard deviation of 20.
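One straightforward way to draw a game length with that mean and standard deviation is sketched below; this is an assumption about how the length might be sampled, not the competition's actual code.

import java.util.Random;

public class GameLength {
    public static void main(String[] args) {
        Random rng = new Random();
        // nextGaussian() has mean 0 and standard deviation 1, so scale by 20 and shift by 200.
        int iterations = (int) Math.round(200 + 20 * rng.nextGaussian());
        iterations = Math.max(1, iterations); // guard against an (unlikely) non-positive draw
        System.out.println("This game runs for " + iterations + " iterations");
    }
}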

The results can be seen here, with a detailed breakdown of all the interactions being available in this text file.


Competition 2

This is an identical competition to the one above, except that there is some (low) probability of noise in the data; that is, a signal to cooperate or defect could be misinterpreted.

In total we had 223 entries. This was made up of 19 web-based entries, 195 Java-based entries, and 9 standard entries (these being RAND, NEG, ALLC, ALLD, TFT, STFT, TFTT, GRIM, Pavlov). That is, we used the same entries as for competition 1.

The payoff matrix we used was as follows (each cell lists the row player's payoff first, then the column player's):

               Cooperate     Defect
  Cooperate    R=3, R=3      S=0, T=5
  Defect       T=5, S=0      P=1, P=1

We used a mean of 200 iterations per round, with a standard deviation of 20. The noise factor was set to 0.1.
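The noise factor can be read as follows: with probability 0.1, a transmitted move is flipped before the opponent sees it. The sketch below illustrates that reading; the competition's own implementation may differ in detail.

import java.util.Random;

public class Noise {
    static final double NOISE = 0.1;       // probability a signal is misread
    static final Random RNG = new Random();

    // true = cooperate, false = defect; returns the move the opponent actually observes.
    static boolean transmit(boolean intendedMove) {
        return RNG.nextDouble() < NOISE ? !intendedMove : intendedMove;
    }

    public static void main(String[] args) {
        int misread = 0, trials = 10000;
        for (int i = 0; i < trials; i++) {
            if (!transmit(true)) misread++; // a cooperate signal was seen as a defect
        }
        // Roughly 10% of signals should be misread.
        System.out.println("Misread " + misread + " of " + trials + " signals");
    }
}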

The results can be seen here, with a detailed breakdown of all the interactions being available in this text file.


Competition 3

This competition allows you to submit a strategy to an IPD with more than one player and more than one level of cooperation, that is, a multi-player, multi-choice IPD.

In total we had 15 entries for this competition.

The payoff matrix we used was as follows (entries are the payoff to Player A):

                           Player B (level of cooperation)
                       1       ¾       ½       ¼       0
  Player A       1     4       3       2       1       0
  (level of      ¾     4¼      3¼      2¼      1¼      ¼
  cooperation)   ½     4½      3½      2½      1½      ½
                 ¼     4¾      3¾      2¾      1¾      ¾
                 0     5       4       3       2       1
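Reading the table above, the payoff to Player A appears to be linear in both players' levels of cooperation, payoffA = 1 + 4*cB - cA (for example, full defection against a full cooperator gives 1 + 4 - 0 = 5, and mutual full defection gives 1). The sketch below assumes that formula; the class and method names are illustrative, not the competition's code.

public class MultiChoicePayoff {
    // cA and cB are levels of cooperation in {0, 0.25, 0.5, 0.75, 1}.
    // Assumed linear form consistent with the table above: payoff to A = 1 + 4*cB - cA.
    static double payoffToA(double cA, double cB) {
        return 1 + 4 * cB - cA;
    }

    public static void main(String[] args) {
        double[] levels = {1, 0.75, 0.5, 0.25, 0};
        for (double cA : levels) {
            StringBuilder row = new StringBuilder("A plays " + cA + ":");
            for (double cB : levels) {
                row.append(String.format(" %5.2f", payoffToA(cA, cB)));
            }
            System.out.println(row);
        }
    }
}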

We used a mean of 200 iterations per round, with a standard deviation of 20 (although there is only one round).

The results can be seen here, with a detailed breakdown of all the interactions being available in this text file.