How do we cope with the new robot angst?

“Tomorrow can be a wonderful age. Our scientists today are opening the doors … to achievements that will benefit our children and generations to come.” So proclaimed Walt Disney in 1955 when he launched Tomorrowland amidst friendly robots and rocket backpacks.

Disney’s timing was auspicious. That same year, the polio vaccine put an end to the scourge that paralysed or killed half a million people each year. We also got the atomic clock (paving the way to GPS satellites) and Velcro. A year later came videotape, the hard disk and the first nuclear power station connected to a national grid. Two years later, Sputnik was launched. Could robots be far behind?

The 21st century is not shaping up to be quite as sunny as the one imagined at Tomorrowland. The latest worry is that robots smarter and stronger than us might take over the world if we don’t watch out.

It’s not just luddites who worry. The latest angst comes from some of the most technologically savvy men on the planet. At last year’s World Economic Forum in Davos, SpaceX and Tesla founder Elon Musk, Peter Thiel of PayPal and other Silicon Valley entrepreneurs were so concerned that they personally committed $1 billion to fund OpenAI, a new non-profit company that proclaims artificial intelligence should remain “an extension of individual human wills”. They were supported by physicist Stephen Hawking, who has warned that “the development of full artificial intelligence could spell the end of the human race”. Bill Gates also told startled reporters: “I am in the camp that is concerned about super intelligence.”

If Bill Gates is worried, shouldn’t we all be? And given the level of concern, how adequate is the response? To find out, I looked up some of the institutions that share Hawking’s concern about the potential dangers of AI: the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute and the Future of Life Institute. All of these impressively named bodies are linked – the same people sit on their boards, and the same donors fund them.

It is good that such bodies exist, but most of the researchers and board members are older white men. Of the roughly 120 people across all the institutes, nine are women, one is an Asian man (at the Future of Humanity Institute) and one is a person of colour (actor Morgan Freeman, on the Future of Life Institute Science Board).

So here is one of my reservations. OpenAI may seek to be “an extension of individual human wills”, but whose wills are represented? One of the women who is not on these boards is bioethicist Justine Cassell. She notes that most concern about a robot takeover comes from super-competitive, power-hungry men whose expertise lies quite far from the AI field.

She wonders if they are afraid of robots made in their own image. But there is another narrative. Universal basic income (UBI), a movement once advocated by socialists, is now embraced by Silicon Valley entrepreneurs. They look forward to a future in which intelligent machines not only power the economy but also do all the dangerous and mind-numbing jobs, leaving humans time for leisure and creative pursuits.

We need to design artificial intelligence with a full understanding of what is at stake. In Cassell’s words, “since we make the robots, we can make the future”. We are moral agents, not helpless observers.

AI is our collective brainchild. And it takes a village to raise this powerful, smart and growing kid. Rules will be needed, and curfews set. But too much fear blinds us to the possibilities that intelligent machines may open up – a future in which jobs of drudgery, mechanical complexity and danger are no longer part of human existence. We need an inclusive discussion to find the right parenting balance.

And as Cassell reminds us, unlike human children, with these kids you can always pull the plug.
