27 February 2008

The science of good and evil

By Cosmos Magazine
Cognitive psychology, evolutionary biology and game theory are offering fresh insights into one of the most perplexing of human capacities: morality. Tim Dean explores the science.

“Two canine animals, in a time of dearth, may be truly said to struggle with each other which shall get food and live.” Not quite ‘dog eat dog’, but thus did Charles Darwin describe the tireless struggle for existence that is a trademark of the natural world.

In 1976, evolutionary biologist Richard Dawkins went on to reinforce this view by describing organisms as merely vehicles for the true drivers of evolution: genes. “A predominant quality to be expected in a successful gene is ruthless selfishness,” says Dawkins. Further, “this gene selfishness will usually give rise to selfishness in individual behaviour”.

Dawkins suggests the principle of the ‘selfish gene’ paints the world in a harsh and pitiless light, in stark contrast to the world we might want to inhabit. “Much as we might wish to believe otherwise, universal love and the welfare of the species as a whole are concepts that simply do not make evolutionary sense.”

So, are we born into this cruel world merely to spite our fellow man and surge only towards our own selfish ends? Is it truly the case – as given voice in the film Wall Street by corporate raider Gordon Gekko – that “greed is good”? If so, whence springs the “wish to believe otherwise” of which Dawkins makes mention? And how can this wish be explained in the harsh light of evolution?

THERE’S NO QUESTION we have selfish desires and impulses. However, it’s clear that this is not all there is to the complex human psyche. Consider the following scenario and reflect on how it makes you feel:

“You’re sitting in a lifeboat on dark and turbulent seas. You’re cold and wet. But that’s the least of your worries. As the captain, you are solely responsible for the lifeboat and its occupants. Designed to hold no more than a dozen, the lifeboat is overloaded with 20 survivors. On the horizon looms a menacing thunderstorm. You face a choice. If you allow the currents to drag the overloaded boat into the storm, it will surely sink and all hands will perish. However, if you can reduce the number of occupants to 12, then you stand a chance of survival and rescue.”

What do you do? And how do you feel about it?

Do you order eight people overboard? Whom do you choose? The sick, elderly woman? The convict? The children who are unable to row? The overweight man burdening the boat?

What if someone volunteers? What if one of the women on board is pregnant? What if the elderly woman is your mother, or the children are yours? What if the convict is the only one strong enough to row?

Dilemmas like these have caused consternation amongst ethics students in universities worldwide. Yet the anguish we feel in confronting them offers some insight into the way we see the world.

The problem is, were this truly a dog-eat-dog world, we’d have no compunction about committing our fellow survivors to the waves to ensure our own survival (or the survival of our genes – the true masters of the ship). Yet we do have scruples about such a ruthless act, even in the face of our impending demise.

So why is it that we feel so strongly about morality?

Several thousand years of philosophy, from Socrates to Singer, and morality remains an enduring enigma. In fact, where philosophers once attempted to advise us on how to live a virtuous life, in the 20th century this practice waned. A pervasive scepticism as to whether it was possible to derive such a thing as an objective morality, or moral ‘truths’, led many thinkers to descend into moral relativism, or to simply debate semantics rather than offer guidance.

Yet from this philosophical quagmire has arisen an unlikely champion of moral insight: science. Research in the late 20th and early 21st centuries in a variety of fields, including cognitive psychology, evolutionary biology and game theory, has come together under the banner of evolutionary psychology to offer a fresh perspective on morality – one that may shed new light on its origins and applications.

As stated by pioneering evolutionary psychologists Leda Cosmides and John Tooby, from the University of California in Santa Barbara, “the human mind is the most complex natural phenomenon humans have yet encountered, and Darwin’s gift to those who wish to understand it is a knowledge of the process that created it and gave it its distinctive organisation: evolution.”

IT’S VERY LIKELY you’re far better at moral reasoning than you might think. Try the following task:

“Your job is to test whether the following statement is true: when Fred travels to Sydney, he always takes the train. In front of you are four tickets, each representing a different trip. One side shows the destination, the other side shows the method of transport, whether bus or train. The faces you can see read, in order: Sydney, another destination, train and bus. If you were going to check that the above rule was true, which two tickets would you need to turn over to confirm it?”

If you said you’d turn over the first and third tickets to check the rule, then you’re in the majority. However, that’s not the right answer.

Now consider the following task along similar lines:

“You’re serving drinks behind a bar, and your job is to ensure that nobody under 18 is drinking alcohol. There are four people sitting at a table nearby drinking. You can clearly see what two of the individuals are drinking – a beer and a soft drink respectively – but you can’t see how old they are. You can also see the age of the two other individuals at the table – 25 and 15 respectively – but you can’t see what they’re drinking. This scenario is represented by the cards below, with the drinker’s age on one side, and their beverage on the other. If you were going to make sure that no one under 18 is drinking an alcoholic beverage, which two cards would you need to turn over?”

If in this case you said cards one and four – that you’d need to check the beer drinker wasn’t under 18, and that the 15-year-old did not have an alcoholic drink – then you’d be absolutely right. (It doesn’t matter how old the second person is because they’re drinking a soft drink. It also doesn’t matter what the 25-year-old is drinking because they can drink whatever they like.)

The interesting thing about these two tasks is that they are of exactly the same logical form: if P then Q. As such, they call for the same reasoning and have the same solution. However, it’s not intuitive that in the first test you need to check ticket four – the one showing bus (or not-Q) – to see whether it has Sydney (or P) on the other side. If it does say Sydney, the rule is disproved, in just the same way that a 15-year-old holding a beer breaks the second rule.
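To make that logic concrete, here is a minimal sketch in Python (not part of the original tasks; the second destination, “Melbourne”, is a hypothetical stand-in). For each visible face it simply asks whether whatever is hidden on the other side could still falsify the rule “if Sydney, then train” – which is only possible for the Sydney face (P) and the bus face (not-Q).

    # Rule to verify: "if the destination is Sydney (P), the transport is train (Q)".
    # A ticket can only break the rule if it pairs Sydney with a non-train transport,
    # so only faces that could be hiding such a pairing are worth turning over.
    DESTINATIONS = {"Sydney", "Melbourne"}   # "Melbourne" is a hypothetical stand-in
    TRANSPORTS = {"train", "bus"}

    def must_turn(visible_face):
        """True if the hidden side of this ticket could still falsify the rule."""
        if visible_face in DESTINATIONS:
            # Only a Sydney face (P) can conceal a rule-breaking transport.
            return visible_face == "Sydney"
        if visible_face in TRANSPORTS:
            # Only a non-train face (not-Q) can conceal a rule-breaking Sydney.
            return visible_face != "train"
        return False

    tickets = ["Sydney", "Melbourne", "train", "bus"]
    print([t for t in tickets if must_turn(t)])   # -> ['Sydney', 'bus']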

This test, called the Wason selection task, was pioneered in 1966 by Peter Wason and was explored by Cosmides and Tooby in the 1992 book they edited along with Jerome Barkow, The Adapted Mind. They discovered that while fewer than a quarter of respondents chose the correct answer for tests such as the first, more than three quarters had no trouble with the second. To Cosmides and Tooby, this finding implied that the human mind has some kind of hardwired capacity for understanding social exchanges, even when the abstract logic involved is not necessarily intuitive.

Why would we have such hardwired intuitions about social exchanges? According to Cosmides and Tooby, “in order to successfully engage in social exchange – cooperation between two or more individuals for mutual benefit – humans must be able to solve a number of complex computational problems.”

The key here is “cooperation between two or more individuals for mutual benefit”, which is often called ‘reciprocal altruism’ (you scratch my back, I’ll scratch yours). We see this kind of behaviour elsewhere in nature too (see “The moral animal”, p54), and as Cosmides and Tooby have stressed, we apparently have in-built mechanisms for handling it.


OUR INTUITIVE ABILITY to use reasoning to negotiate social exchanges might help us understand situations, but it doesn’t necessarily motivate an appropriate response. For that, something extra, such as emotion, is needed to compel us into action.

Consider the following two scenarios and how you feel about them:

“A brother and sister like to kiss each other on the mouth. When nobody is around, they find a secret hiding place and kiss each other on the mouth, passionately.”

Or this:

“A family’s dog was killed by a car in front of their house. They had heard that dog meat was delicious, so they cut up the dog’s body and cooked it and ate it for dinner.”

A common response is a feeling that something is deeply wrong with both scenarios. Yet consider what is actually wrong with them. Nobody is harmed, and there is no malicious intent. However, we can’t help feeling that both are somehow morally wrong.

In 1993 Jonathan Haidt, a psychologist at the University of Virginia in Charlottesville, USA, presented these two scenarios to people from many different backgrounds and recorded their reactions. He found near universal condemnation of both scenarios, but when pressed, people had difficulty giving reasons for their feelings.

Haidt sees moral intuitions and emotions – disgust being one – as being at the foundation of our moral behaviour. “Emotions are nature’s way of making higher animals do things that were good for them,” says Haidt. “Emotions involve motivations and rewards, whereas cool reason – simply deciding that one plan is best – has no connection to motivational centres, and hence people can often know what is best yet choose something worse.”

That’s not to say all emotions are moral, only “those emotions that are linked to the interests or welfare either of society as a whole or at least of persons other than the judge or agent,” says Haidt.

In his Handbook of Affective Sciences (2003), Haidt classifies the moral emotions into four broad categories, corresponding to how they flavour interpersonal interactions.

The first is the ‘other-condemning’ emotions, such as contempt, anger and disgust. Consider how you react when you find you’ve been robbed, or cheated, or if you see someone shirking their responsibilities. These emotions motivate us primarily towards revenge (in the case of anger) or avoidance (in the case of disgust).

Second are the ‘self-conscious’ emotions, such as shame, embarrassment and guilt. Reflect on how you feel when you’ve been caught doing something wrong or have broken a social convention. These feelings encourage us to monitor our behaviour and prevent us from triggering the other-condemning reactions of others.

The third type of moral emotion is the ‘other-suffering’ family, best known through empathy, a cornerstone of many moral codes. Think about how you respond to seeing another person get hurt, or the common practice of ‘putting yourself in another’s shoes’. “Compassion makes people want to help, comfort, or otherwise alleviate the suffering of the other,” says Haidt.

The final class is the ‘other-praising’ family, whose members are typically associated with the good deeds of other people. We can all recollect the gratitude we’ve felt towards someone else when they’ve done something that benefited us.

Together, these moral emotions – present in each and every one of us – are ‘pro-social’ and work together to encourage cooperative behaviour. They’re evolved rules of thumb that we can rapidly apply to situations in which an immediate response is required – a kind of ‘judge first, ask questions later’ approach.

Steven Pinker, a cognitive psychologist at Harvard University in Cambridge, Massachusetts, is one of many high-profile academics who have subscribed to this view. In his 2002 book, The Blank Slate, he encourages us to abandon the notion that there is no human nature and instead consider how our behaviour – and morality – may be the product of evolution.

“Many moral emotions – sympathy, gratitude, guilt, shame, trust, righteous anger – can be explained as mechanisms that make adaptive cooperation possible,” says Pinker.

But this brings us to a critical juncture. If it’s the case that we have some kind of hardwired capacity for social reasoning, steered by a collection of moral emotions that encourage cooperation, how could this capacity have sprung forth from our fundamentally selfish genes?

For the answer to this question, we must turn to economics.

Here’s another scenario to consider. Think about what decision you’d make to achieve the best possible outcome.

“The day after you committed a daring bank heist, you and your partner in crime have been arrested on suspicion of robbery – a charge that carries a maximum penalty of 10 years in prison. The police have insufficient evidence to convict you outright, but they could do so if one of you implicated the other – although to do that, the informer would have to confess their own involvement. They interrogate you separately and offer each of you the same deal: if you both confess and implicate each other, you each receive a reduced five-year sentence for cooperating. If you both stay quiet and refuse to implicate each other, you each get only a six-month sentence for the minor charge of carrying a concealed weapon. If you confess and implicate your partner, who stays silent, you’ll walk free, while your partner will get the full 10-year sentence. Conversely, if you stay silent and your partner implicates you, they will walk free while you get 10 years.”

This is the infamous prisoner’s dilemma, articulated by the late mathematician Albert Tucker in 1950. It serves as a textbook example of a ‘non-zero-sum’ game from economic game theory, which studies the interactions between two or more agents given certain rules. A ‘zero-sum’ game is one where there is a fixed reward, and the amount gained by the winner is equivalent to the amount lost by the other players. Examples include many board games, such as chess, and a presidential election, where there can be only one winner.

A non-zero-sum game is one where the reward varies depending on the players’ combined actions. An example – and the reason why economists study it – is two nations trading surplus goods, where both benefit more from cooperation than not.

In the prisoner’s dilemma, it’s easy to conclude that you should cooperate with each other and stay silent, thus taking the minimal sentence each. Overall, this strategy would yield the best result averaged across both players.

However, if you can anticipate that your partner is thinking this as well, and they’re likely to cooperate, then it’s in your best interest to ‘defect’ and implicate them, thus walking free. Then again, your partner may be thinking the same thing, and is thus likely to defect as well, leaving you both with a worse outcome.

The end result is called a Nash equilibrium – named after the mathematician and Nobel laureate John Nash, whose life is dramatised in the film A Beautiful Mind – in which both players default to ‘defect’ rather than ‘cooperate’.
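You can see why defection wins out by writing down the sentences quoted above and computing each player’s best reply. The sketch below is a minimal illustration in Python, not a piece of formal game theory; sentences are in years, so lower is better.

    # Each entry maps (my move, partner's move) to my sentence in years.
    sentence = {
        ("silent", "silent"):   0.5,   # both stay quiet: six months each
        ("silent", "confess"):  10.0,  # I stay quiet, my partner implicates me
        ("confess", "silent"):  0.0,   # I implicate a silent partner: walk free
        ("confess", "confess"): 5.0,   # we implicate each other: five years each
    }

    def best_response(partners_move):
        """Whichever of my moves minimises my own sentence, given my partner's."""
        return min(("silent", "confess"),
                   key=lambda my_move: sentence[(my_move, partners_move)])

    # Whatever the partner does, confessing is the better individual reply...
    print(best_response("silent"), best_response("confess"))   # confess confess
    # ...so two rational players both defect, even though mutual silence
    # (six months each) is far better than mutual confession (five years each).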

Where the prisoner’s dilemma gets particularly interesting is when the game is played repeatedly by the same players – the so-called ‘iterated prisoner’s dilemma’. In this case you can base your choices on what has transpired in previous turns, such as whether an opponent betrayed you and whether you want to punish them for it. Political scientist Robert Axelrod, of the University of Michigan in Ann Arbor, explored this in a famous 1981 paper in the U.S. journal Science.

“My original interest in game theory arose from a concern with international politics and especially the risk of nuclear war,” says Axelrod. “The iterated prisoner’s dilemma game seemed to me to capture the essence of the tension between doing what is good for the individual (a selfish defection), and what is good for everyone (a cooperative choice). Therefore I was intrigued by the many strategies that had been proposed to play this game effectively.”

Axelrod ran a tournament of the iterated prisoner’s dilemma and invited participants to submit computer programs embodying various strategies. One program might always defect, another might choose at random, while another might copy its opponent’s choice from the previous turn. What Axelrod observed at the tournament was illuminating.

“The result was that the simplest of all submitted entries won the tournament. This was tit-for-tat: cooperate on the first move, and then cooperate or defect exactly as the other player did on the preceding move.” A subsequent larger tournament again yielded a victory for tit-for-tat. Axelrod noticed this tit-for-tat strategy had all the hallmarks of reciprocity and subsequently went on to use the mathematics of game theory to demonstrate how such cooperation could emerge even from a population of selfish agents.
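A minimal sketch of such a contest is easy to write. The Python version below is not Axelrod’s code and uses the standard textbook point values (3 each for mutual cooperation, 1 each for mutual defection, 5 for a lone defector, 0 for a lone cooperator) rather than prison sentences; it simply pits tit-for-tat against two simple rivals.

    # Per-round payoffs (points, higher is better) for (player A's move, player B's move).
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(opponent_history):
        """Cooperate on the first move, then copy the opponent's previous move."""
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return "D"

    def always_cooperate(opponent_history):
        return "C"

    def play(strategy_a, strategy_b, rounds=200):
        """Total score for each strategy over repeated rounds against the same opponent."""
        score_a = score_b = 0
        seen_by_a, seen_by_b = [], []   # each side's record of the other's past moves
        for _ in range(rounds):
            move_a = strategy_a(seen_by_a)
            move_b = strategy_b(seen_by_b)
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            score_a += pay_a
            score_b += pay_b
            seen_by_a.append(move_b)
            seen_by_b.append(move_a)
        return score_a, score_b

    print(play(tit_for_tat, always_cooperate))  # sustained mutual cooperation
    print(play(tit_for_tat, always_defect))     # tit-for-tat is exploited only once, then retaliates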

Crucially, unconditionally cooperative – or ‘nice’ – strategies fared poorly, especially when confronted with a ‘nasty’ strategy that defected without restraint. Yet ‘nasty’ strategies were in turn outperformed by strategies that were nice up to a point, but vigorously punished nasty strategies when they appeared.

This proved to be a revelation to many – but not to Harvard evolutionary biologist Robert Trivers. In 1971 he had published a study in The Quarterly Review of Biology called “The evolution of reciprocal altruism”, which detailed how altruistic behaviour can arise through natural selection. He showed how humans have both selfish and altruistic tendencies, and are inclined to behave in either ‘nice’ or ‘nasty’ ways to establish a balance – effectively a Nash equilibrium – in their ecological environment.

Dominic Johnson, also of Harvard, is one of a new generation of researchers who draw on the insights of Trivers and Axelrod and apply them to diverse fields. “Game theory has revolutionised the understanding of behaviour and strategy across several disciplines, including evolutionary biology, economics, and political science,” he says.

“Despite being a game that predicts non-cooperation at face value, when the prisoner’s dilemma is repeated over many interactions it predicts some surprising cooperative strategies, such as cooperation with strangers and forgiveness – it is by no means obvious that such apparently complex and fundamental human behaviours should emerge from so simple a game.”


HERE WE CAN start to see the beginnings of a coherent answer to our original question of how humans – driven as we are by selfish genes – have evolved such a complex capacity as morality.

In an environment where cooperation such as reciprocal altruism is beneficial – even if occasionally cheating is more beneficial still – our selfish genes allow for the evolution of cooperative behaviour. However, for cooperation to work, something needs to compel such behaviour. And what better device to compel us into action than our emotions, whether it be praising a cooperator for their altruistic actions or bringing righteous punishment upon a cheater for exploiting others’ good intentions?

Thousands of years of armchair speculation by philosophers about the nature of morality is a tough act to follow, although there are signs the science of morality is starting to shift thinking even amongst the plaid-wearing, elbow-patch brigade.

Kim Sterelny, of the Australian National University (ANU) in Canberra, studies the philosophy of biology and psychology. According to him, evolution has received a great deal of attention in philosophical circles recently. “Evolutionary psychology is being taken seriously in philosophy, but that’s different from saying that it’s being accepted.”

Yet, according to Sterelny, there are many philosophers, such as Daniel Dennett at Tufts University in Medford, north of Boston, and Richard Joyce at ANU, who are scrambling to review moral theory in light of the science.

As for whether we’re born selfish … or not? “No one said we were selected to be perfectly cooperative. No one thinks we were selected to be saintly. We’re selected to be partly cooperative, and partly generous,” says Sterelny. “If there’s selection for cooperation, and moral emotions that make morals easier and more reliable, that suggests we have a mixed psychology.”

On one hand we have our selfish tendencies, which have obvious adaptive benefits, especially when we’re in a competitive environment. On the other, we have altruistic tendencies selected to encourage us to cooperate with others, thus benefiting all. This means we have no single morality, and no single human nature. Instead, we have two prime driving forces – selfishness and cooperation – that work in tension, according to the dynamics of game theory, to yield our complex behaviour.

So maybe it’s not such folly to wish for “universal love and the welfare of the species” after all.

Tim Dean is the editor of Cosmos, a philosophy graduate, and coordinator of Socrates Café, a group for philosophical discussion, in Sydney.