To address these issues, Tatsuya Sasaki collaborated with colleagues Isamu Okada from Soka University and Yutaka Nakai from the Shibaura Institute of Technology in Japan. The researchers adopted a new approach, one that departs from traditional assessment rules based on compulsory moral assessment.
Their results unveil a new champion among moral assessment rules, referred to as "Staying". Sasaki and colleagues examined the Staying rule using a two-person helping game (a mover and a receiver). They consider two types of mover: "freeloading", which means refusing to help whoever the opponent is, and "cooperation", which means helping opponents who have a good reputation and refusing to help those who have a bad one.
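As a concrete reading of the two types, here is a minimal Python sketch; the function name and the boolean encoding of reputations are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the two mover types described above. The names and
# the boolean reputation encoding (True = good) are illustrative.

GOOD, BAD = True, False

def mover_helps(strategy: str, receiver_reputation: bool) -> bool:
    """Does a mover of the given type help this receiver?"""
    if strategy == "freeloading":
        return False                  # refuses, whoever the opponent
    # "cooperation": help good receivers, refuse bad ones.
    return receiver_reputation == GOOD
```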
They define the Staying moral assessment rule as follows. When the person on the receiving end has a good reputation, the Staying rule assesses the mover as good if they help and as bad if they refuse to help. This part of the rule is necessary to stabilize cooperation once it has been established.
In striking contrast to more traditional rules, under Staying, if the potential receiver has a bad reputation, the mover's reputation simply remains what it was at the prior assessment. In this case, the choice of whether or not to render aid to the potential receiver does not affect the reputation of the potential mover at all.
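Putting the two cases together, the assessment itself can be sketched as a single update function; as before, the names and boolean encoding are illustrative choices, not the paper's notation.

```python
# Sketch of the Staying assessment rule described above.

GOOD, BAD = True, False

def staying_assessment(mover_reputation: bool,
                       receiver_reputation: bool,
                       mover_helped: bool) -> bool:
    """Return the mover's reputation after one interaction."""
    if receiver_reputation == GOOD:
        # Toward a good receiver: helping is judged good,
        # refusing to help is judged bad.
        return GOOD if mover_helped else BAD
    # Toward a bad receiver, Staying makes no judgement at all:
    # the mover simply keeps their prior reputation.
    return mover_reputation
```

The final line is the whole departure from traditional rules: toward a bad receiver, the function returns the prior reputation untouched rather than issuing a new judgement.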
A game-theoretical analysis demonstrates, for the first time, that the Staying rule, under which the assessment system refrains from making moral assessments in specific cases, is more effective at establishing cooperation than traditional assessment rules. Indeed, under the Staying rule, good cooperators can proliferate no matter how many freeloaders surround them, so long as the error rate is sufficiently small.
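The intuition behind this result can be illustrated with a toy agent-based simulation. Everything below (population size, benefit, cost, error rate) is an illustrative assumption; the paper's result rests on a game-theoretical analysis, not on a simulation like this one.

```python
import random

# Toy simulation: a few cooperators surrounded by freeloaders, with
# reputations updated by the Staying rule. All parameter values are
# illustrative assumptions, not taken from the paper.

random.seed(1)

N = 100                    # population size
COOPERATORS = 10           # small minority of cooperators
BENEFIT, COST = 3.0, 1.0   # receiver's benefit, mover's cost of helping
ERROR = 0.01               # chance an assessment is recorded wrongly
ROUNDS = 100_000
GOOD, BAD = True, False

strategy = (["cooperation"] * COOPERATORS
            + ["freeloading"] * (N - COOPERATORS))
reputation = [GOOD] * N
payoff = [0.0] * N

for _ in range(ROUNDS):
    mover, receiver = random.sample(range(N), 2)
    helped = (strategy[mover] == "cooperation"
              and reputation[receiver] == GOOD)
    if helped:
        payoff[mover] -= COST
        payoff[receiver] += BENEFIT
    if reputation[receiver] == GOOD:
        # Staying assesses the mover only toward a good receiver.
        new_rep = GOOD if helped else BAD
        if random.random() < ERROR:
            new_rep = not new_rep       # rare assessment error
        reputation[mover] = new_rep
    # Toward a bad receiver, no assessment: reputation unchanged.

def mean_payoff(kind):
    scores = [p for p, s in zip(payoff, strategy) if s == kind]
    return sum(scores) / len(scores)

print("mean payoff, cooperators:", round(mean_payoff("cooperation"), 1))
print("mean payoff, freeloaders:", round(mean_payoff("freeloading"), 1))
```

In runs with these illustrative numbers, the cooperators end up far better off on average: freeloaders' reputations quickly turn bad and they are shut out of further help, while cooperators keep helping, and being helped by, one another, with occasional errors quickly repaired.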
This study suggests that avoiding moral assessment can be the best policy when judging those who refuse to help, and thereby "punish", wrongdoers. "Reputation-seeking punishment, described as 'I'll punish your bad behavior to make me look good,' may not be the best way to subvert a population of freeloaders," says Sasaki.
This study has important implications for various contemporary issues, including the potential applications of artificial intelligence (AI) in decision-making. "The results of future work examining whether AI can learn to avoid making moral judgements will be fascinating," says Sasaki.
Publication in "Scientific Reports": Sasaki T, Okada I, Nakai Y. 2017. The evolution of conditional moral assessment in indirect reciprocity. Scientific Reports 7:41870. http://dx.doi.org/10.1038/srep41870