Nick Bostrom, a philosopher who directs the Future of Humanity Institute at the University of Oxford, and whom we interviewed last October, sparked a debate on the future of intelligent machines with his book Superintelligence.
He believes it is imperative to start considering now how we can build genuinely ethical robots. He warned Cosmos that time was not on our side:
“…while philosophers and artificial intelligence researchers are working on it, they are still a long way from translating ethics into algorithms. I just wish there was less of a rush, because it will take some time to figure all this out – and this is a problem that we only ever get one chance to solve.”
Ford is more sanguine and points to the lack of progress towards truly intelligent machines.
No one is suggesting that anything like superintelligence exists now. In fact, we still have nothing approaching a general-purpose artificial intelligence or even a clear path to how it could be achieved. Recent advances in AI, from automated assistants such as Apple’s Siri to Google’s driverless cars, also reveal the technology’s severe limitations; both can be thrown off by situations that they haven’t encountered before. Artificial neural networks can learn for themselves to recognize cats in photos. But they must be shown hundreds of thousands of examples and still end up much less accurate at spotting cats than a child.
But he concludes that we need to keep an eye on progress.
For the average person, there’s no reason to lose sleep over scary robots. We have no technology that is remotely close to superintelligence. Then again, many of the largest corporations in the world are deeply invested in making their computers more intelligent; a true AI would give any one of these companies an unbelievable advantage. Those companies should also be attuned to its potential downsides and working out how to avoid them.