The potential of AI to improve our lives becomes more apparent every day. Unfortunately, so does its potential to harm them – and we’d better wake up to the horrifying possibilities.
When I read the words “without being overly alarmist” in a scientific paper, I get a bit … alarmed. That’s not a phrase one normally comes across in scientific literature – it being, well, more than a bit alarmist. Yet sometimes such alarms appear to be thoroughly justifiable. Scientists perform their ethical duty when alerting the rest of us to the more disturbing implications of their research. The future holds dangers, they’re saying, but we’re letting you know, so we can take the proper precautions before it arrives.
For the past several years, research pharmacologists have been developing AI-powered tools to aid the discovery of new drugs. Their AIs can synthetically permute a known chemical structure, match each new structure against similar compounds, and use those matches to estimate its potential effectiveness as a treatment. These tools mean drug researchers can move far more quickly (and cheaply) from a concept to a drug ready to be tested.
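That permute-and-score loop can be sketched in miniature. Everything below is illustrative: the “molecules” are plain strings rather than real chemical representations such as SMILES, and the scoring function is a crude similarity stand-in for the predictive models real tools use, not actual pharmacology.

```python
import itertools

# Hypothetical "known compounds" – just strings, for illustration only
known_actives = {"CCO", "CCN", "CCC"}
alphabet = "CNO"

def variants(structure: str):
    """Permute one position of a known structure at a time."""
    for i, letter in itertools.product(range(len(structure)), alphabet):
        if structure[i] != letter:
            yield structure[:i] + letter + structure[i + 1:]

def score(candidate: str) -> float:
    """Crude stand-in for predicted effectiveness: position-by-position
    similarity to the closest known active compound."""
    return max(sum(a == b for a, b in zip(candidate, known))
               for known in known_actives) / len(candidate)

# Rank every single-position permutation of a starting compound
# by how promising it looks under the (toy) scoring model.
ranked = sorted(variants("CCO"), key=score, reverse=True)
```

Real systems search a vastly larger chemical space with learned generative models, but the shape of the loop – generate structural variants, score them against what is already known, keep the best – is the same.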
In the wrong dose, all drugs can become poisons. But what if you started out with an incredibly potent poison and used an artificial intelligence to improve it?
Before we peer into this dangerous future, let’s cast a glance back at the past – the ancient past. Between that past and our future we get a real sense of an arc of possibilities covering most of the range between heavenly and hellish.
When the ancient past speaks to us, it most often does so incompletely. Surprisingly few texts survive in anywhere near complete form from before the 15th-century invention of moveable type. Although a legion of monks and scribes spent many a lifetime painstakingly duplicating manuscript copies of the few texts that survived the collapse of the Western Roman Empire and the tumultuous Middle Ages, most of this involved replication of the same handful of texts, the core of the canon: the Bible, the Church Fathers, Aristotle, Cicero, Homer, Virgil – and a few others. Nothing like the famed Library of Alexandria – with its tens of thousands of papyrus manuscripts – has survived. When there’s a significant find – such as the manuscripts discovered in a cave at Qumran that became known as the Dead Sea Scrolls – it adds enormously to our understanding of the ancient past.
Even these wondrous finds are woefully incomplete. Entropy does its work, while insects and worms and weather do much of the rest. Bits fall out. Across two millennia, messages get scrambled, and, even in the best cases, are rarely ever more than partially received.
As a result, our ancient past consists primarily of fragments: bits of papyrus, one corner of a clay tablet, a parchment that’s worn through to complete transparency, or a stone inscription eroded away. We see the past through a glass darkly, and do our best to make some sense of it.
Making that sense has become one of the central (and most complex) tasks of the archaeologists, anthropologists and philologists studying the ancient world. They might find an inscription – incomplete, naturally – and then scour their own memories for similar inscriptions, because a known parallel can unlock the meaning of a damaged text. But even the best human memories have limits; it took a computer database to multiply that memory across a far broader body of experience.
The Packard Humanities Institute in California created that database of ancient inscriptions – well over three million characters in Ancient Greek – and made it searchable. A researcher can type in the bits of the inscription they’ve got at hand, and the service will respond with any inscriptions that it recognises as a match. That’s better than the best memory – and a great help. But that was only the beginning.
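The kind of lookup such a database enables can be sketched in a few lines. The “corpus” below is made up for illustration – it is not drawn from the Packard database – and the use of `?` to mark an illegible character is an assumed convention, not the service’s actual query syntax.

```python
import re

# A toy corpus standing in for the searchable inscription database.
# These lines are illustrative, not real catalogue entries.
corpus = [
    "ΕΔΟΞΕΝ ΤΗΙ ΒΟΥΛΗΙ ΚΑΙ ΤΩΙ ΔΗΜΩΙ",
    "ΕΔΟΞΕΝ ΤΩΙ ΔΗΜΩΙ ΚΑΙ ΤΗΙ ΒΟΥΛΗΙ",
    "Ο ΔΗΜΟΣ Ο ΑΘΗΝΑΙΩΝ",
]

def search(fragment: str) -> list[str]:
    """Find inscriptions matching a fragment in which '?' marks a lost letter."""
    # Each '?' becomes a regex wildcard matching any single character
    pattern = re.compile(fragment.replace("?", "."))
    return [text for text in corpus if pattern.search(text)]

# A researcher types in what survives; '?' marks the illegible characters
matches = search("ΕΔΟΞΕΝ Τ?Ι ΒΟΥΛ?Ι")
```

Even this crude wildcard matching shows why a database beats unaided memory: the machine checks every inscription it holds, every time, without forgetting one.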
In an earlier column I pointed to an artificial intelligence that “read” millions of code samples to create GitHub’s Copilot – a tool that helps programmers write code by offering just the code snippet they need at just the moment they need it. It works – imperfectly. But with a human behind the wheel, Copilot improves both the speed and accuracy of writing computer programs.
Something very similar has been going on in the field of ancient inscriptions. Google’s London-based DeepMind AI lab digested the 35,000-plus Ancient Greek inscriptions in the Packard Humanities Institute database, and built a model that did its best to fill in the blanks – the missing bits of the inscriptions. This program, known as Ithaca, hypothesises based on the large set of samples it has, and provides its own best guess as to what that missing part of the papyrus (or clay or stone etc) might have originally spelled out.
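Ithaca’s core idea – ranking candidate restorations by what the surviving context suggests – can be caricatured with a toy character model. The one-sentence English “corpus” and single-character context below are assumptions for illustration only; the real model is a deep neural network trained on whole inscriptions.

```python
from collections import Counter

# A one-line toy corpus standing in for the 35,000-plus inscriptions
# Ithaca learned from (an illustrative assumption, not real data).
corpus = "the council and the people of the city decreed these things"

def restore(left: str, right: str) -> list[tuple[str, int]]:
    """Rank candidates for one lost character by how often each letter
    appears between the same surviving neighbours in the corpus."""
    counts = Counter()
    for i in range(1, len(corpus) - 1):
        if corpus[i - 1] == left and corpus[i + 1] == right:
            counts[corpus[i]] += 1
    return counts.most_common()

# A damaged reading "t?e": which letter best fills the gap,
# given everything the model has seen before?
guesses = restore("t", "e")
```

The output is a ranked list of guesses rather than a single answer – which, as the next step shows, is exactly what makes the tool useful to a human expert.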
Ithaca is far from perfect. As detailed in a paper in Nature, it makes a correct guess only around two-thirds of the time. That’s far better than a researcher working alone, who gets it right only about a quarter of the time. But it turns out that when the two work together – when Ithaca and a researcher partner up, and the researcher uses Ithaca’s recommendations to guide their own efforts at filling in the blanks – the result is better than either alone, getting it right nearly three-quarters of the time.
That partnership – similar to the “pair programming” fostered by GitHub Copilot – tells us that artificial intelligence best reaches its heights when in the service of a human expert. Both do better together.
DeepMind have released all the code behind Ithaca, so other researchers can build on their work. They’ve even shared a publicly accessible version so you too can have a go at decoding your own bits of Ancient Greek. The DeepMind team promises versions for other ancient languages, from Akkadian (Ancient Mesopotamian) to Demotic (Ancient Egyptian), Hebrew, even Mayan. With a bit of luck, we could soon understand much more of what the ancients wrote.
Now back to the present: there’s a war on. Wars act as phenomenal accelerators of scientific and technological advancement: the Second World War opened with a cavalry charge and ended in a mushroom cloud. Eighty years later and our weapons have changed. Nations fight stealthy battles in cyberspace, each seeking to corrupt the others’ command, control and communications systems – or simply sow chaos and disinformation. That much we already know. In the middle of March 2022, we got a reminder that these are not the only tools we have to hand. Our most amazing tools possess a dual nature that we have either been ignorant of, or simply chose to ignore.
So back to the question I asked at the start: what if you used artificial intelligence to improve a potent poison? That was the question troubling a group of researchers who decided to find out what the limits were.
Writing in Nature Machine Intelligence, the authors started with a hypothetical – what if you began with the VX nerve agent, possibly the most poisonous substance known: could an artificial intelligence improve it? They quickly learned that yes, it certainly could:
“In less than six hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible.”
The AI not only re-invented VX, it went on a bit of a spree, and “invented” more potentially lethal nerve agents – some already known, plus many others that no one had yet discovered. And it did all of this in just six hours.
Having the chemical structure for a drug isn’t the same thing as having a working compound in hand, much less something deployed in a weapon. There’s a huge gap between potential and realisation – fortunately. But with so much potential available so quickly and so effortlessly, accelerated almost beyond belief by the clever redirection of a tool already in widespread use, the authors warn that the hardest part of that gap has already been crossed. This tool for generating an endless supply of manifestly horrible weapons already exists, is already usable, and can’t simply be un-invented. While there are strong international prohibitions against the use of chemical weapons, the floor has just fallen out of the process of discovering them.
One thing is already clear: this discovery can and will be repeated across many disciplines. The law – and our civilisation – now need to catch up. Our computers are getting very good at filling in the blanks. Partnering with them is vital to get the best out of them, and it’s also the only long-term solution to ensuring that these incredibly potent tools can be used safely and responsibly.
Mark Pesce invented the technology for 3D on the Web, has written seven books, was for seven years a judge on the ABC's "The New Inventors", founded postgraduate programs at USC and AFTRS, holds an honorary appointment at Sydney University, is a multiple-award-winning columnist for The Register, pens another column for IEEE Spectrum, and is a professional futurist and public speaker. Pesce hosts both the award-winning "The Next Billion Seconds" and "This Week in Startups Australia" podcasts.