Dark intentions: should we fear AI with purpose?

It’s hard to ignore dystopian pronouncements about how Artificial Intelligence (AI) is going to take over our lives, especially when they come from luminaries in tech. Entrepreneur Elon Musk, for instance, says “If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.” And IT maven Erik Brynjolfsson, at MIT, has quoted Vladimir Putin’s claim that “The one who becomes leader in this sphere will be ruler of the world.”

I understand the angst. AI is chalking up victories over human intelligence at an alarming rate. In 2016 AlphaGo, training itself on millions of human moves, beat the world Go champion, Lee Sedol. In 2017 the upgrade – AlphaGo Zero – trained itself to champion level in three days, without studying human moves.

Watson, which in 2011 beat human champions at the TV quiz show Jeopardy!, can now diagnose pneumonia better than radiologists. And Kalashnikov is training neural networks to fire machine guns. What’s not to fear?

The real danger would be AIs with bad intentions and the competence to act upon them outside their normally closed and narrow worlds. AI is a long way from having either.

AlphaGo Zero isn’t going to wake up tomorrow, decide humans are no good at playing board games — not compared to AlphaZero, at least — and make some money beating us at online poker. 

And it’s certainly not going to wake up and decide to take over the world. That’s not in its code. It will never do anything but play the games we train it for. Indeed, it doesn’t even know it is playing board games. It will only ever do one thing: maximise its estimate of the probability that it will win the current game. Other than that, it has no intentions of its own.
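To make that concrete, here is a minimal sketch in Python of the only decision such a system ever makes: pick the legal move with the highest estimated chance of winning. The `estimate_win_probability` function is a hypothetical stand-in for the value estimate a system like AlphaGo Zero learns through self-play, not its actual code.

```python
# A minimal sketch of the single "intention" a game-playing AI acts on:
# choose the legal move that maximises its estimated chance of winning.
# `estimate_win_probability` is a hypothetical stand-in for the learned
# value estimate; nothing here reasons about anything beyond the game.

def choose_move(position, legal_moves, estimate_win_probability):
    """Return the move with the highest estimated win probability."""
    return max(
        legal_moves,
        key=lambda move: estimate_win_probability(position, move),
    )
```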

However, some machines already exist that do have broader intentions. But don’t panic. Those intentions are very modest and are still played out in a closed world. For instance, punch a destination into the screen of an autonomous car and its intent is to get you from A to B. How it does that is up to the car.

Deep Space 1, the first fully autonomous spacecraft, also has limited human-given goals. These include things like adjusting the trajectory to get a better look at a passing asteroid. The spacecraft works out precisely how to achieve such goals for itself. 

There’s even a now rather old branch of robot programming, based on beliefs, desires and intentions, that goes by the acronym BDI.

In BDI, the robot has “beliefs” about the state of the world, some of which are programmed and others derived from its sensors. The robot might be given the “desire” of returning a book to the library. The robot’s “intentions” are the plan to execute this desire. So, based on its beliefs that the book is on my desk and my desk is in my office, the robot goes to my office, picks up the book, and drives it down the corridor to the library. We’ve been building robots that can achieve such simple goals now for decades.
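As a rough illustration only (not the API of any particular BDI framework, and with the beliefs and plan steps invented for the example), the library-book scenario might look something like this:

```python
# A toy sketch of the BDI (beliefs-desires-intentions) pattern from the
# library-book example. Beliefs describe the world, a desire names a goal,
# and the intention is the concrete plan the robot commits to executing.

beliefs = {
    "book_location": "my_desk",
    "desk_location": "my_office",
    "library_location": "end_of_corridor",
}

desire = "return_book_to_library"

def form_intention(beliefs, desire):
    """Turn a desire into a concrete plan (the robot's 'intention')."""
    if desire == "return_book_to_library":
        return [
            ("go_to", beliefs["desk_location"]),
            ("pick_up", "book"),
            ("go_to", beliefs["library_location"]),
            ("put_down", "book"),
        ]
    return []

for action, target in form_intention(beliefs, desire):
    print(f"{action}: {target}")
```

The point is how bounded this is: every belief, desire and plan step is supplied or derived from what we gave the robot, which is why such intentions stay modest.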

So, some machines already do have simple intentions. But there’s no reason to go sending out alarmed tweets. These intentions are always human-given and of rather limited extent. 

Am I being too complacent? Suppose for a moment we foolishly gave a machine some evil intents. Perhaps robots were given the goal of invading some country. The first flight of stairs or closed door would likely defeat their evil plans.

One of the more frustrating aspects of working on AI is that what seems hard is often easy and what seems easy is often hard. Playing chess, for instance, is hard for humans, but we can get machines to do it easily. On the other hand, picking up the chess pieces is child’s play for us but machines struggle. No robot has anything close to the dexterity of a three-year-old.

This is known as Moravec’s paradox, after Carnegie Mellon University roboticist Hans Moravec. Steven Pinker has said that he considers Moravec’s paradox to be the main lesson uncovered by AI research in 35 years.

I don’t entirely agree. I would hope that my colleagues and I have done more than just uncover Moravec’s paradox. Ask Siri a question. Or jump in a Tesla and press Autopilot. Or get Amazon to recommend a book. These are all impressive examples of AI in action today.

But Moravec’s paradox does certainly highlight that we have a long way to go in getting machines to match, let alone exceed, our capabilities.

Computers don’t have any common sense. They don’t know that a glass of water, when dropped, will fall, likely break, and surely wet the carpet. Computers don’t understand language with any real depth. Google Translate finds nothing strange in translating “he was pregnant”. Computers are brittle and lack our adaptability to work on new problems. Computers have limited social and emotional intelligence. And computers certainly have no consciousness or sentience.

One day, I expect, we will build computers that match humans. And, sometime after, computers that exceed humans. They’ll have intents. Just like our children (for they will be our children), we won’t want to spell out in painful detail all that they should do. We have, I predict, a century or so to ensure we give them good intents.