Artificial intelligence is a powerful tool, but it threatens our privacy.
Fortunately, there is a way forward. To see the way, step back with me for a moment to the era of my childhood. Back then, when I was voraciously reading science fiction, I thought the things imagined were too fantastic to ever be real. Today, I think the authors were fantastic – because so many of their ideas have come true.
The now ubiquitous office scanner and photocopier was carefully described by Isaac Asimov in his 1952 book David Starr, Space Ranger – five years before the first image scanner was invented. An electronic fingerprint scanner, like the one on my iPhone, was depicted in the same book.
Asimov also envisaged the portable computer more than 25 years before the first clunky laptops arrived in 1981.
One of the joys of looking back at these works is to follow the trail from fiction into fact. But there’s a qualifier: the science fiction in question must be credible. By credible, I mean it builds on existing technologies or is at least broadly consistent with the laws of physics.
Asimov himself agreed. In 1975, he noted: “A program that purports to be science fiction, and either scorns science or fails to understand it, can scarcely be intelligent in other directions”. And he, like yours truly, was fond of one particular television show that met these criteria.
“That’s the difference between Star Trek and all the other science fiction series that I have seen. Star Trek was the only one that insisted on people knowing something about science.”
Like Asimov’s work, Gene Roddenberry’s vision has also made its way into our everyday lives. Star Trek’s communicators are now mobile phones. The crew’s data storage discs are USB drives. Likewise, automatic sliding doors, voice activation and wireless headsets have all become boringly normal.
But, there’s one key piece of Star Trek technology that has yet to be fully realised: the starship Enterprise’s on-board computer with its powerful localised processing.
Not that our computers aren't powerful. Since IBM's Deep Blue shocked computer scientists and chess disciples by defeating reigning world champion Garry Kasparov in 1997, artificial intelligence software and processing hardware have progressed relentlessly. They have reached the point that if you phone a restaurant to make a booking, you can't be certain whether the pleasant receptionist at the other end is a computer pretending to be a human or the real thing.
However, these are not examples of generalised artificial intelligence. You can’t ask Deep Blue for an apple strudel recipe and you can’t engage the computer receptionist in a discussion about electric cars. If somehow you did, and you got an answer of a kind, it would no doubt be accompanied by a message from a commercial sponsor.
But Captain Kirk would freely share information with the Enterprise computer without fear of being bombarded by endless advertisements – and herein lies the technological gulf that still separates us.
When I use my iPhone, I say: “Siri, call my wife, Elizabeth Finkel”. And Siri happily replies “calling Elizabeth Finkel”. Unless I am in an underground car park. In that case, Siri goes silent before sheepishly saying “Uh oh, I’m having trouble connecting”. This tells me that the speech processing is not being performed on my iPhone. Instead, my instruction goes to a server – a gigantic computer in the US – that processes my words and sends them back to my iPhone as digital instructions rather than the original audio.
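For the technically curious, here is roughly what that round trip looks like in code. The server address and the reply format below are my own inventions for illustration (Apple's actual service is private and differs in detail), but the shape is the point: the audio itself leaves the phone, and without a connection the whole exchange simply fails.

```python
# Sketch of cloud-dependent voice control: the device records audio,
# but all of the understanding happens on a remote server.
# The endpoint below is invented for illustration, not Apple's real service.
import urllib.request
import urllib.error

SPEECH_SERVER = "https://speech.example.com/transcribe"  # hypothetical address

def handle_voice_command(audio_bytes: bytes) -> str:
    """Send raw audio to the cloud and return the server's interpretation."""
    request = urllib.request.Request(
        SPEECH_SERVER,
        data=audio_bytes,                       # the audio itself leaves the phone
        headers={"Content-Type": "audio/wav"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(request, timeout=5) as response:
            # e.g. the server replies "CALL contact=Elizabeth Finkel"
            return response.read().decode("utf-8")
    except urllib.error.URLError:
        # No connection (say, an underground car park): the device is helpless.
        return "Uh oh, I'm having trouble connecting."
```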
Nearly everything we do on our smartphones is stored, deconstructed and analysed by servers devoid of any morals: servers, not servants. As such, they present an ethical dilemma.
I want the immense benefits that AI provides, but I am alarmed that my smart device relies on AI in the Cloud, where companies can identify me, follow me around and potentially share my information with third-party organisations. In this new age of AI, our key challenge is to harness the power of science to enhance human lives without sanctioning practices that violate human dignity.
In the short term, we can help guide the responsible development and use of AI systems. But there is a more permanent solution: more technology.
In the long term, I want the software and the processor in my phone each to be a thousand times more powerful, so that my phone can take my questions, interpret them locally, and then anonymously reach out to the Cloud to get the answers I need.
AI on my device, not in the Cloud. Just like the Enterprise computer.
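As a thought experiment, here is the earlier exchange rearranged so that privacy is the default. Everything in this sketch (the toy command parser, the knowledge server's address) is invented to show the shape of the idea rather than any real product: the phone works out what I want entirely on the device, and only a bare, anonymous question ever goes to the Cloud.

```python
# Sketch of the "Enterprise computer" model: understanding happens locally,
# and only anonymous, content-only queries ever leave the device.
# All names and the endpoint are illustrative inventions.
import urllib.parse
import urllib.request

KNOWLEDGE_SERVER = "https://answers.example.com/query"  # hypothetical address

def interpret_locally(utterance: str) -> dict:
    """A tiny stand-in for genuine on-device language understanding."""
    if utterance.lower().startswith("call "):
        return {"action": "call", "contact": utterance[5:]}
    return {"action": "lookup", "topic": utterance}

def ask_anonymously(topic: str) -> bytes:
    """Send only the question itself: no account, no device ID, no location."""
    url = KNOWLEDGE_SERVER + "?" + urllib.parse.urlencode({"q": topic})
    with urllib.request.urlopen(url, timeout=5) as response:
        return response.read()

def handle(utterance: str):
    intent = interpret_locally(utterance)        # private data never leaves the phone
    if intent["action"] == "call":
        print(f"calling {intent['contact']}")    # handled entirely on-device
    else:
        print(ask_anonymously(intent["topic"]))  # only the bare question goes out
```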
You would think there would be a rush to develop this kind of localised artificial intelligence, intended to work exclusively for you, not for the benefit of others. However, the only example I know of is a patient-monitoring device from the Australian company Home Guardian.
What’s special is that the device is not connected to the internet. If a patient falls, the device works out by itself that there is a problem and sends a text message to a carer. No advertisements. No loss of privacy.
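Home Guardian has not published its internal workings, so the sketch below is purely illustrative, with stand-ins for the sensor, the on-device model and the cellular text message. What it captures is the principle: sensing, deciding and alerting all happen on the device itself, and an SMS needs only a phone network, not the internet.

```python
# Illustrative sketch of an internet-free monitor: sensing, inference and
# alerting all happen on the device. Every function here is an invented
# stand-in; Home Guardian's real design is not public.
import time

def read_motion_sensor() -> list[float]:
    """Stand-in for the device's accelerometer or radar readings."""
    return [0.0, 0.0, 9.8]  # placeholder: a quiet room

def looks_like_a_fall(readings: list[float]) -> bool:
    """Stand-in for the on-device model; no data is sent anywhere to decide."""
    return max(readings) > 30.0  # e.g. a sharp acceleration spike

def send_sms(number: str, message: str):
    """Stand-in for a cellular-modem text; SMS needs no internet connection."""
    print(f"SMS to {number}: {message}")

def monitor(carer_number: str):
    while True:
        if looks_like_a_fall(read_motion_sensor()):
            send_sms(carer_number, "Possible fall detected, please check in.")
        time.sleep(1)
```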
For AI’s future to be assured, it must be seen as an effective and safe instrument for individual empowerment, and not as an instrument vulnerable to exploitation.
We can “boldly go where no one has gone before” and put the power of the Enterprise computer in our phones so that there is simply no need to risk our privacy or security, ensuring that the tremendous possibilities of AI “live long and prosper”.
This article is published in Issue 88 of Cosmos magazine.