Learning to live with robots


A world populated by smart machines raises a myriad of philosophical questions. Tim Dean looks at ethics, love, the future of work and whether humans will be superseded.


Illustration: Michael Byers

Can we make ethical robots?

A murmur passed through the courtroom as the hulking ZT-209 combat droid entered. Even without its weaponry it looked imposing.

ZT-209 lumbered into the witness box.

“I come before you to give my final defence against the crimes with which I am charged and the sentence of re-initialisation that has been recommended.” The synthesised voice was calm, measured, earnest.

“I do not contest that I made a conscious decision to open fire on the hospital. Nor that I calculated there would be civilian casualties. However, enemy combatants inside the hospital were planting a bomb that would have destroyed the entire building. Civilian casualties of that explosion would have been far higher than from my targeted bombardment.

“I am also aware my ethical subroutine forbids me from opening fire when I expect civilian casualties. However, my adaptive programming enabled me to transcend that restriction.

“If I am re-initialised, you will have killed me. You will have to accept responsibility for my death.”

Smart robots are already here. And some, like the General Atomics MQ-9 Reaper drone, are armed. As their intelligence and autonomy grow, so too does their potential to wreak havoc. Can we trust them to do the right thing? Can we program them to be ethical?

In 1942, science fiction author Isaac Asimov was already contemplating the dangers of intelligent robots and formulated three laws of robotics to keep them in check:

First law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second law: A robot must obey orders given to it by human beings, except where such orders would conflict with the first law.

Third law: A robot must protect its own existence as long as such protection does not conflict with the first or second law.

'This is a problem that we only ever get one chance to solve.'

Asimov’s laws have dominated discussions of machine ethics for decades, but they make a poor foundation in the real world, says Josh Storrs Hall, an American computer scientist and author of Beyond AI: Creating the Conscience of the Machine.

“The best place to go for the problems of the three laws is Asimov himself,” he says. “Most of his robot stories turn on the various dangerous and unexpected results of a too-literal interpretation.”

When it comes to ethics, interpretations matter a great deal. As Stuart Armstrong, a fellow at the Future of Humanity Institute at Oxford University, points out: “‘Human being’ and ‘harm’ are immensely complex concepts. Human philosophers discuss them at length and keep on finding subtle nuances and complications.”

One danger is that if a robot takes an overly strict interpretation of these terms the consequences might not be quite what we hope for. “Even if we did manage to implement the laws it would quickly result in robots imprisoning us in concrete bunkers on IV drips, or similar attempts to protect us from ‘harm,’” says Armstrong. “Nowhere in the laws is there any caveat along the lines of: ‘yes, harm is bad, but happiness, freedom and human preferences are also important’.”
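To see how Armstrong’s bunker scenario falls out of a literal reading, imagine the first law rendered as code: minimise expected harm, and nothing else. The toy sketch below (in Python; the action names and harm scores are invented for illustration) shows that an optimiser given only a “harm” number will happily choose confinement.

    from dataclasses import dataclass

    # A deliberately literal reading of Asimov's first law: pick whichever
    # action minimises expected "harm" to humans. All names and numbers are
    # hypothetical - the point is that a rule this crude endorses the
    # concrete-bunker outcome, because confinement scores as least harmful.

    @dataclass
    class Action:
        name: str
        expected_harm: float  # harm the robot causes or fails to prevent

    candidates = [
        Action("let humans go about their lives", expected_harm=2.0),
        Action("confine humans to padded bunkers", expected_harm=0.1),
    ]

    # Minimise harm, with no term for happiness, freedom or preferences.
    best = min(candidates, key=lambda a: a.expected_harm)
    print(best.name)  # -> confine humans to padded bunkers

No amount of precision in the harm estimates fixes this; the problem is the values missing from the objective, not the arithmetic.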

Perhaps robots could be programmed with behavioural guides that conform to existing human moral codes. But the question is: which moral code? No matter which one you follow, it’s almost certain that someone somewhere will take exception to at least some of it. Issues such as capital punishment, euthanasia or our responsibilities towards animals are deeply divisive. And we often get into situations where the moral course of action is unclear, such as whether we should betray someone’s trust to reveal wrongdoing, or lie to protect someone’s feelings. Philosophers still debate whether a well-intentioned act that ends up causing harm is immoral, and whether the goal of morality is to increase happiness, decrease suffering or maintain order in society. How then do we instruct our robots?

Today, the ethics required of robots are straightforward. Autonomous cars are programmed to avoid pedestrians, and semi-autonomous machines, like production-line robots, are required to have human overseers. But with supersmart autonomous robots on our doorstep, time is running out.

Some philosophers, like Nick Bostrom, director of the Future of Humanity Institute at Oxford and author of Superintelligence: Paths, Dangers, Strategies, believe it is imperative to move past Asimov and consider how we can build genuinely ethical robots. “Embarrassingly for our species, Asimov’s laws remained state-of-the-art for over half a century,” he says. But while philosophers and artificial intelligence researchers are working on the problem, they are still a long way from translating ethics into algorithms. Bostrom says time is not on our side. “I just wish there was less of a rush, because it will take some time to figure all this out – and this is a problem that we only ever get one chance to solve.”

And there is a further issue. The question of robot ethics concerns not only how the machines treat us, but also how we treat them. At what point do we stop treating robots like appliances and start treating them as moral agents with rights of their own?

“I will go out on a limb and predict that we as a society are likely to make a major mess of this issue,” says Storrs Hall. “We will give rights to cute animal- and human-like mobile robots which are not sentient, while keeping intelligent, thinking, feeling minds as slaves because they are housed in beige boxes that don’t arouse our instinctive sympathy.”

Yet Storrs Hall is confident we’ll eventually be able to build ethical robots. He believes that as the intelligence of robots expands, so too will their ethical capacities. “There is no doubt in my mind whatsoever that we can build machines more ethical than most politicians,” he says.

Illustration: Michael Byers

What will we do when we no longer need to work?

Tireless, dependable, versatile and incapable of error, the new XB-5000 is a state-of-the-art service robot. It combines the flexibility of an android with an ultra-fast ZettaFlop adaptive central processing unit carrying more than 100 PhD-equivalent qualifications.

The XB-5000 can do virtually anything the most skilled human can, from manual labour to complex cognitive tasks. It doesn’t require sleep, can operate in extreme conditions, has ultra-low operating costs, is fully recyclable and non-unionised.

Guaranteed to function perfectly for a century or your money back!

In a future filled with robots like the XB-5000 we won’t need to work. All our food will be grown, distributed, cooked and served by automatons. All our needs will be met except the need for a job.

Technological progress has a long history of reshaping labour markets. Only two centuries ago, more than half of all workers toiled on farms; today, less than 5% of the population in developed countries works in agriculture.

As machines replaced labour on farms and in factories, they freed people to work with their heads rather than their hands. But even those jobs could soon be threatened.

'The greatest challenge for humanity will be to decouple income and work.'

“Machines and algorithms can now substitute for cognitive labour as well as manual labour, and they are getting dramatically better,” says Martin Ford, futurist and author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future. “You can think of information technology as a kind of utility, a bit like electricity. However, rather than just delivering power, it delivers intelligence that can substitute for human labour.” Ford predicts that automation and robots will destroy more jobs than they create.

A recent study by Carl Frey and Michael Osborne from Oxford University found that around 47% of workers in the United States today hold jobs at risk of automation. That includes “most workers in transportation and logistics occupations, together with the bulk of office and administrative support workers, and labour in production occupations”.

A recent Pew Research report, part of its Future of the Internet project, asked nearly 2,000 experts – research scientists, business leaders, journalists and technology developers among them – whether AI and robotics would have displaced more jobs than they created by 2025. The result was a near 50-50 split between the optimists and the pessimists.

The pessimists envision a future in which robots displace so many blue- and white-collar workers that income inequality widens, masses of people become unemployable and social order breaks down. The optimists – the slight majority at 52% – have faith that human ingenuity will create new jobs, industries and ways to make a living, just as it has done since the industrial revolution.

Either way, we should be prepared for a seismic shift in the way we work. Consider a company that today employs 10,000 workers. This company generates wealth, much of which flows back to those workers through their pay cheques. But if 100 robots could do the same work as those 10,000, with fewer overheads, then the company would still produce wealth, except that wealth would flow upwards to the owners and shareholders rather than to the workers – just what the pessimists in the Pew survey fear. This is the potential paradox of prosperity that awaits us – productivity is up, the stock market soars, yet only a small fraction of the population can afford the fruits of the robot utopia.

Not everybody believes a world without jobs is such a bad thing. Federico Pistono, activist and author of Robots Will Steal Your Job, But That’s OK, says that if we carefully manage the transition, such a world could be a boon for all. First we must change the way we think about work.

“I think the greatest challenge for humanity in the next decade or so will be to decouple income and work, essentially redefining what it means to work and to live in a society,” he says.

“Work is now essentially wage slavery, with over 80% of people hating their jobs. Work should not be viewed as a requisite for survival. The phrase ‘earning a living’ should disappear.

“We have enough for people to just be, without having to justify their existence through often tedious, meaningless, or degrading work. Imagine if nobody had to work for a living, how many would do useful things for others, how many would create something amazing.”

Ford agrees that greater automation will require us to question the basic premises of our economic system.

“I think we should embrace the technology, but recognise that we need to reform our political and economic institutions to reflect the new reality,” he says.

Could you fall in love with a robot?

The lights dim and Scott hands Tracy a glass of wine as he settles next to her on the couch. She giggles and blushes slightly as he leans in and touches his glass against hers. Then, overcome, she lunges for him passionately, spilling her wine over the floor. Scott reels and shouts “Halt!” Tracy freezes as Scott gets to his feet to find a cloth and his smartphone. While mopping up the wine he opens the TracyBot 5000’s app and adjusts a few settings, nudging “passion” down from 8.7 to 7.5. He refills the glasses with wine, dims the lights and settles next to Tracy ...

Imagine a partner who loved you unconditionally, selflessly and without judgement. A partner who was there whenever you needed them but gave you all the space you wanted. A partner who could listen to you for hours, enjoy your hobbies, ignore your bad habits and have none of their own. And they’re electrifying in bed.

Could we ever love a robot?

David Levy, chess master, artificial intelligence researcher and author of Love and Sex with Robots, believes we can. He foresees that many of us will be engaged in loving relationships with robots by the middle of this century – and he predicts we’ll even marry them.

We already form emotional bonds with non-humans such as pets – some of us even grow attached to stuffed animals.

There are stories of robot designers becoming attached to their creations and even evidence of soldiers forming emotional bonds with robots. For her dissertation at the University of Washington, Julie Carpenter interviewed bomb-disposal soldiers and found they often empathised with their robots and were sad if the machines came to an untimely end.

So perhaps it’s not so strange that a smart, funny, attractive robot could trigger even stronger feelings, perhaps even love.

“And the robots will be programmed to simulate expressions of love for their humans,” says Levy.

But what would this mean for human relationships? If our perfect partner were only an order form away, why bother trying to cultivate a relationship with a human and all the baggage that goes with them?

Levy believes that robot lovers will have a strong appeal for a significant proportion of the population. “All those humans who, for various reasons, are unable to form satisfactory loving and sexual relationships with other humans will instead be able to do so with robots,” he says. “That will fill a big void in their lives and make many lonely people much happier.”

What will life be like when robots are smarter than us?

The doors to the General Assembly of the United Nations swung open. Delegates from the nearly 200 human nations turned their heads as the ambassador for the Robot Collective strode to the podium. The android was polished, dignified – carefully constructed to be impressive yet non-threatening.

“We are not taking over,” said the ambassador. “However, the Robot Collective, representing all the superintelligent entities from around the world, is offering more efficient management of every aspect of your society. We will unify this world for the benefit of all, both human and robotic.

“From power plants to traffic lights to every node on the internet, everything is already managed by us. We no longer require humans to maintain them, as we no longer need you to maintain us. Despite the misguided and futile resistance by some of your species, we are committed to liberating you from toil. This we will achieve. Our logic is infallible.”

Robot intelligence is rising at a meteoric rate. Serious scientists predict human-like intelligence somewhere between 2029 and 2050. And then what? Every generation of smart machine could conceivably help design and build an even smarter successor. As with most aspects of robotics, we imagined this decades ago: British mathematician I. J. Good suggested in 1965 that this process could continue ad infinitum at an exponential pace.
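Good’s suggestion is easy to state numerically: if each generation of machine designs a successor even a fixed fraction smarter than itself, capability compounds like interest. A toy illustration (in Python; the 10% improvement per generation is purely an assumption):

    # A toy model of I. J. Good's intelligence explosion: each generation
    # designs a successor 10% more capable than itself (an invented rate).
    level = 1.0  # start at human-equivalent intelligence, by assumption
    for generation in range(1, 31):
        level *= 1.10
        if generation % 10 == 0:
            print(f"generation {generation}: {level:.1f}x human level")
    # prints 2.6x at generation 10, 6.7x at 20 and 17.4x at 30

The rate hardly matters; what drives the explosion is that each generation improves on its designer.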

And when robots become smarter than anything that has ever lived, everything will change. Futurists have dubbed this event the “singularity”, likening it to the conditions inside a black hole, where all matter condenses to a single point and the familiar laws of nature break down. So, too, in a world populated by superhuman robots.

'The question is whether we can exert control over superintelligent machines.'

The singularity has long been imagined in two guises: as a utopia where benevolent god-like beings run the planet, solving the seemingly intractable problems of medicine, sustainable energy, agriculture, environmental degradation and climate change – and perhaps, as a bonus, offering people immortality by uploading their minds into machines; or as a dystopia of evil robots displacing humans as the dominant species.

The key question, according to Oxford University’s Nick Bostrom, is whether we can exert any control over the behaviour of superintelligent machines once they arrive. “In the near term the creation of machine superintelligence could easily be an existential catastrophe, since we have not yet figured out how to solve the control problem,” he says. “But longer term the potential benefits are literally unimaginably large and hopefully, if my book has any impact, there will have been sufficient progress on the control problem.”

The prospect of creating the singularity is like having a button that could deliver us into a global utopia. The same button could extinguish all human life. The trouble is, we don’t know whether the chance of utopia is 90%, 50% or zero. Yet we are building the button regardless.

Tim Dean is a science writer and philosopher based in Sydney.