The day is coming when AI will do everything better than we can – sooner than we think.
It’s our human superpower to invent tools that amplify ourselves. We’ve invented tools that amplify our muscles and allow us to move things further and faster, and lift them higher. And now, through AI, we’re inventing machines that can amplify our minds. This opens up vast possibilities.
It would be terribly conceited to suppose that we humans are as smart as anything could possibly be. Of course, we are smart, but there are lots of limitations. The human brain works at biological speeds measured in hertz. Computers work in the gigahertz – roughly a billion times faster.
Human memory is limited by the size of our brains, which is limited by the size of the birth canal. We’ve still got a long way to go to build machines that match all of our capabilities, but in narrow domains, machines already match human abilities, and in some cases have greatly surpassed us. The best chess-playing programs are now far better than the best grandmasters – if Magnus Carlsen played the best chess program today, he would lose 100 to nil. Similarly, we have built computer programs that can interpret X-rays more accurately, more quickly and more cheaply than any human doctor.
We’ve still got a way to go to match humans’ creativity, adaptability and flexibility. But I imagine that at some point we will exceed all those abilities too, and that point isn’t so far away. I surveyed 300 of my colleagues, other experts around the world working in the field. And the median answer they gave for when machines would match humans in all of their abilities was 2062.
Unfortunately, we’re already seeing machines being a little too persuasive. Machine-learning algorithms at the heart of Twitter’s and Facebook’s news feeds are manipulating how we vote, and probably changed the outcome of the Brexit referendum and the Trump presidential election. We are discovering that humans are easily hacked, and artificial intelligence is the perfect tool to do that – at speed, at scale, and at low cost.
Increasingly, we’re waking up to the idea that the digital space is one that needs to be regulated, and that we do need to be protected from some of its potential harms. I spend an increasing amount of my time talking to politicians and people in civil society, trying to inform them that this is a really important conversation that we need to have, because these are technologies that will touch everyone.
It begins with trying to explain the opportunities for AI technology. If you wind the clock back even half a dozen years, most people’s ideas were derived from Hollywood movies: everyone would put up a picture of the Terminator and discuss rather fantastical ideas about robots taking over. But the reality tends to be far more prosaic than Hollywood would have us believe. The good thing is that those conversations have moved on, and now tend to be much more nuanced about how we have to carefully regulate and control the use and misuse of such technologies – about the bias of algorithms, for instance, or how problematic it is that facial recognition is becoming a pervasive technology.
Autonomous killing machines are definitely a threat, but these aren’t humanoid robots like the Terminator. They’re the drones that we sadly see in the skies above Ukraine and elsewhere. Drones are increasingly autonomous, and the same facial recognition software that opens our smartphones is being used by drones to potentially identify, track and kill humans on the ground. And that takes us to a very troubling place. One of my colleagues quite rightly compares them to weapons of mass destruction.
The wonderful thing about computers is that if you can get them to do something once, you can get them to do it 10,000 times, or a million times – they can repeat things long beyond human patience. And that’s true on the battlefield too. If you can get one drone to kill one person, you can get 1,000 drones to kill 1,000 people. And that takes us to a dangerous place. These will be weapons of terror.
But the positive potential of AI is immense. We already have machines that can do the four Ds – the dirty, the dull, the difficult and the dangerous. But there are still plenty of things that people do today that they shouldn’t have to, whether it be working down mines or in warehouses. We can get machines to do that work, and it will liberate humans to concentrate on the finer things.
If we look broadly at the arc of history, life has improved significantly since the Industrial Revolution. In Australia, life expectancy has nearly doubled. And, relatively speaking, we live like kings and queens today. We have machines that wash our clothes and wash our dishes, microwave ovens that can prepare food quickly, and many luxuries that would have been undreamt of 200 years ago.
Technologies like artificial intelligence can help us live even better lives – and work less. We forget that the “weekend” was an invention of the Industrial Revolution: workers in the north-east of England demanded to share some of the spoils of the industrialisation of work, to have Sundays off to go to church, and then it became Saturday afternoon off to rest, and then all of Saturday. But then we somehow stopped – we thought “that’s enough, we only need two days off out of every seven”.
There’s nothing about the Earth going around the Sun that says you can only have two days off every seven! There are some big experiments happening now in Europe and elsewhere trialling the four-day week. And these three-day weekend experiments have already discovered two interesting things. First of all, people are just as productive – they do as much work in four days as they did in five, so we can pay them just as much. And secondly, and who would have imagined this, people are happier. People spend more time with their families, spend more time doing their hobbies, volunteering in their communities, doing whatever brings them joy and satisfaction in life.
Maybe the machines will help give us that.
As told to Graem Sims for Cosmos Weekly.
Toby Walsh is a Laureate Fellow and Scientia Professor of Artificial Intelligence in the School of Computer Science and Engineering at UNSW Sydney, and also an adjunct fellow at CSIRO Data61. He is the author of several books, including Machines Behaving Badly: The Morality of AI, and 2062: The World That AI Made (Black Inc Books).