Rules to encourage well-behaved artificial intelligence

My spine still shivers when I remember the nuclear stand-off between the Soviet Union and the United States in 1962. As a nine-year-old I felt helpless in the face of two leaders poised to push the button.

It was MAD – mutually assured destruction – but sanity prevailed and by the end of the 1960s we had détente.

In the decades since I have felt comfortable with the dazzling march of technology that has reduced global poverty, given us longer lives, delivered the information superhighway and created my zero-emissions Tesla.

Yes, there are disappointments – the internet, for example, has not raised the calibre of conversation but instead has created echo chambers of bigotry and forums for lies and harassment.

But now for the first time since the 1960s something is tickling my worry beads: artificial intelligence. I fear AI’s capacity to undermine our human rights and civil liberties.

While AI has been in backroom development since the 1950s and increasingly implemented by businesses and government in the past few years, I believe 2018 will go down as the year the AI future arrived.

I am well aware of previous impressive developments such as an AI named AlphaGo beating the world Go champion, but I don’t play Go. I do, however, rely on my executive assistant. So this year, when Google publicly demonstrated a digital assistant named Duplex calling a hairdressing salon to make an appointment for its boss, speaking in a mellow female voice filled with human pauses and colloquialisms, I knew AI had arrived.

Shortly afterwards IBM demonstrated Project Debater arguing an unscripted topic against a skilled human. Some in the audience judged Project Debater the winner.

The simplest definition of AI is computer technology that can do tasks that ordinarily require human intelligence. More formally, AI is the combination of machine learning algorithms, big data and a training procedure. This mimics human intelligence: the combination of innate ability, access to knowledge and a teacher. 
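
To make that definition concrete, here is a minimal sketch of the three ingredients in code: an algorithm (logistic regression), data (a toy set standing in for big data) and a training procedure (a gradient-descent loop playing the role of the teacher). Every name and number in it is an illustrative assumption of mine, not something drawn from any real system.

```python
import math

# Data: toy examples pairing a feature (hours of study) with an outcome
# (0 = fail, 1 = pass). In deployed AI this would be "big data".
data = [(1.0, 0), (2.0, 0), (3.0, 0), (4.0, 1), (5.0, 1), (6.0, 1)]

# Algorithm: logistic regression - map the feature to a probability.
def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Training procedure: gradient descent - the "teacher" that repeatedly
# nudges the parameters w and b to shrink the prediction error.
w, b, lr = 0.0, 0.0, 0.1
for epoch in range(1000):
    for x, y in data:
        error = predict(w, b, x) - y   # gradient of log-loss w.r.t. the logit
        w -= lr * error * x
        b -= lr * error

print(f"P(pass | 3.5 hours of study) = {predict(w, b, 3.5):.2f}")
```

Swap in more expressive algorithms and vastly more data, and the same three-part recipe is, at heart, what lies beneath systems like Duplex and Project Debater.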

Also like humans, when it comes to AI there are the good, the bad and the ugly.

The good: digital assistants, medical AIs to diagnose cancer, satellite navigation that figures out the best way home and systems that somehow know that your credit card has been used fraudulently.

The bad: biases such as that discovered in the COMPAS risk-assessment software used to help judges in the US determine a sentence by forecasting the likelihood of a defendant reoffending. An evaluation that tracked defendants over two years found that COMPAS had overestimated re-offence rates for black defendants and underestimated them for white defendants. Every human I know is biased, so why worry when an AI is biased? Because there is a good chance it will be replicated and sold by the millions, thus spreading the bias across the planet.
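
The kind of audit that exposes such a bias can be sketched in a few lines: compare the system’s error rates across groups. The snippet below uses invented records (not COMPAS data) to compute each group’s false positive rate, that is, how often people who did not go on to reoffend were nevertheless flagged as high risk.

```python
# Each record: (group, predicted_high_risk, actually_reoffended).
# The records are invented for illustration; they are not COMPAS data.
records = [
    ("A", True,  False), ("A", True,  False), ("A", True,  True),
    ("A", False, False), ("B", False, True),  ("B", True,  True),
    ("B", False, False), ("B", False, False),
]

def false_positive_rate(group):
    """Share of non-reoffenders in `group` wrongly flagged as high risk."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)  # assumes the group has negatives

for g in ("A", "B"):
    print(f"Group {g}: false positive rate = {false_positive_rate(g):.0%}")
# A persistent gap between the groups is the statistical signature of the
# bias described above.
```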

The ugly: think Orwell’s 1984. Now look at the social credit score in China, where citizens are watched in the streets and monitored at home, losing points for littering or paying their bills late, and as a consequence being denied a bank loan or their right to travel.

So how can we utilise the good but avoid the bad and the ugly? We must actively manage the integration of AI into our human society, as we have done with electricity, cars and medicines. Australia can lead the way, as we did for IVF by becoming the first country to collate and report on birth outcomes and the first to publish national ethics guidelines. Capturing the benefits and avoiding the pitfalls requires public discussion. In July the Australian Human Rights Commission launched a project on human rights and digital technology. In my keynote speech I finished with the question: “What kind of society do we want to be?”

While the debate unfolds, here are a few starting suggestions.

First, adopt a voluntary, consumer-led certification standard for commercial AI akin to the Fairtrade stamp for coffee. I call it the ‘Turing Certificate’, in honour of Alan Turing, the persecuted father of AI. It won’t stop criminals and rogue states but it will help with the smartphones and home assistants we choose to purchase.

Second, adopt the ‘Golden Rule’ proposed by the head of Australia’s Department of Home Affairs, Michael Pezzullo: that no one should be deprived of their fundamental rights, privileges or entitlements by a computer rather than an accountable human.

Third, never forget that AI is not actually human. It is a technology. We made it. We are in charge. Hence I propose the ‘Platinum Rule’: that every AI should have an off switch.
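
For what the Platinum Rule might look like in software, here is a toy sketch of my own (not any real product’s design): an autonomous loop that checks a human-held off switch before every action it takes.

```python
import threading

class OffSwitch:
    """A human-held override; the Platinum Rule in miniature."""
    def __init__(self):
        self._stop = threading.Event()

    def press(self):
        self._stop.set()

    def engaged(self):
        return self._stop.is_set()

def autonomous_agent(switch, steps):
    # The agent consults the switch before every action; the human always wins.
    for step in range(steps):
        if switch.engaged():
            print("Off switch engaged - halting.")
            return
        print(f"Step {step}: acting autonomously")

switch = OffSwitch()
autonomous_agent(switch, steps=2)  # runs: no one has pressed the switch
switch.press()                     # a human presses the off switch
autonomous_agent(switch, steps=2)  # halts before taking any action
```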

Alan Finkel will be joining machine learning expert Professor Anton van den Hengel and data policy specialist Ellen Broad for a round table discussion at the Royal Institution of Australia’s headquarters on 5 October 2018.
