Artificial intelligence is used across myriad disciplines to trawl through troves of data too complex for the human brain – and indeed the average computer – to process, as well as to solve seemingly unsolvable problems.
It’s posited that these technological super-brains could help us develop medicines and vaccines, solve economic problems, or engineer next-generation technology, among many other helpful applications.
But in one of science’s most difficult and often abstract fields, the power of the artificial mind is finally starting to prove itself. For the first time, scientists are using machine learning to formulate conjectures – rather than simply combing through raw data – in some of the most confounding fields of mathematics.
As described in a new study in the journal Nature, researchers from the universities of Sydney and Oxford have been working with AI lab DeepMind, based in London, to apply machine learning to suggest new avenues for inquiry, and to attempt to prove mathematical theorems.
“Problems in mathematics are widely regarded as some of the most intellectually challenging problems out there,” says Geordie Williamson, director of the Sydney Mathematical Research Institute at the University of Sydney and one of the world’s foremost mathematicians. Williamson also acts as a consultant in pure mathematics for DeepMind, a subsidiary of Alphabet.
“While mathematicians have used machine learning to assist in the analysis of complex data sets, this is the first time we have used computers to help us formulate conjectures or suggest possible lines of attack for unproven ideas in mathematics.”
Williamson applied the AI, which uses the same kinds of techniques as the company’s famous AlphaGo, to his particular branch of mathematics: representation theory, the field that explores higher-dimensional space using linear algebra.
Specifically, he used these machine learning techniques to attack an old conjecture about Kazhdan-Lusztig polynomials that had remained unproven for 40 years. But more on that later.
What, or who, is DeepMind?
DeepMind entered the AI stage with their program AlphaGo, the first computer program to defeat a professional human Go player.
Go is an ancient game of strategy that originated in China some 4,000 years ago. Two players face off across a board marked with 19 vertical and 19 horizontal lines, forming 361 intersections. One player has a supply of black pieces, called stones, and the other white (traditionally 181 and 180 stones respectively), which they place on vacant intersections.
The goal is for each player to gain territory by enclosing vacant points within boundaries made of their own stones. The game is widely regarded as one of the most complex and intellectually challenging board games in history.
“Go is a phenomenally complex game,” agrees Williamson. “It has more board positions than there are atoms in the observable universe.”
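Williamson’s figure is easy to sanity-check. Each of the 361 intersections can be empty, black or white, giving a naive upper bound of 3^361 board configurations, roughly 10^172, which dwarfs the commonly cited estimate of 10^80 atoms in the observable universe. A quick back-of-the-envelope check (our illustration, not part of the study):

```python
# Back-of-the-envelope check of the "more positions than atoms" claim.
# Each of the 19 x 19 = 361 points is empty, black or white, giving a
# naive upper bound of 3**361 configurations (not all of them legal).

board_points = 19 * 19                # 361 intersections
upper_bound = 3 ** board_points       # naive upper bound on positions
atoms_in_universe = 10 ** 80          # commonly cited estimate

print(f"positions < 10^{len(str(upper_bound)) - 1}")              # ~10^172
print(f"positions / atoms ~ 10^{len(str(upper_bound)) - 1 - 80}") # ~10^92
```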
According to DeepMind, Go was long thought far too complex for computers, even AI, to master: standard AI models simply couldn’t handle the sheer number of possible moves, let alone evaluate the relative wisdom of each one.
AlphaGo, however, was designed to combine an advanced tree search with deep neural networks. These networks take a description of the Go board as input and process it through a series of network layers, each of which contains millions of neuron-like connections, mimicking a brain.
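That division of labour, a network that scores moves and positions wrapped inside a search over future play, can be sketched in a few lines of code. The toy model below is purely illustrative: the layer sizes and names are our assumptions, not AlphaGo’s actual architecture, and the real system couples far larger networks to a Monte Carlo tree search.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the policy/value-network idea behind AlphaGo.
# The board goes in (19x19 grid, 3 planes: empty/black/white) and the
# network returns move preferences (policy) and a position score (value).
# Layer sizes are illustrative, not DeepMind's.

class ToyGoNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.policy_head = nn.Linear(32 * 19 * 19, 19 * 19)  # one logit per point
        self.value_head = nn.Linear(32 * 19 * 19, 1)         # who is ahead?

    def forward(self, board):
        h = self.trunk(board).flatten(1)
        return self.policy_head(h).softmax(-1), self.value_head(h).tanh()

net = ToyGoNet()
policy, value = net(torch.zeros(1, 3, 19, 19))  # an empty board
print(policy.shape, value.item())               # 361 move probabilities
```

In the real system, the policy and value outputs guide which branches of the game tree are worth exploring, so the search never has to enumerate every possible continuation.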
In 2016, AlphaGo was pitted against Lee Sedol, an 18-time world champion Go player. AlphaGo won, four games to one. Crucially, in the second game, AlphaGo did something no one thought an AI would ever do.
“Something quite remarkable happened,” says Williamson. “There’s this famous move 37; the computer played a move that humans would never play.
“There’s popular wisdom in Go that you should avoid certain moves close to certain pieces, and what the computer noticed is that you should avoid those situations in almost all cases, but sometimes you can play them in very specific cases.”
What was so striking about the AI’s choice of move was that it was deeply risky.
“Commentators said that it was, from a human point of view, an incredibly courageous move.”
The word ‘courageous’ jars with our understanding of what a computer is; it implies a leap of faith. But that, according to Williamson, is exactly what it was – a risk that was deemed worthwhile for the potentially significant reward.
In fact, its programmers asked AlphaGo what the probability was that a human would have played the same move; the computer put the chance at one in 10,000.
“So, the goal of this project was, can you imagine an AI making such a move in mathematics?” says Williamson. “And how would we use that to further our mathematical knowledge?”
Williamson says that it’s this risk-taking, almost creative behaviour that makes the AI suitable for complex mathematics.
“It’s probably not well understood in the general public, but research in mathematics is very creative, intuitive, and imaginative,” he says. “A lot of the difficulties that we have are being able to imagine something about, for example, 10-dimensional space, or imagining what life is like on the quantum scale, or imagining what life is like if you’re as large as a galaxy – these are the thought experiments that we’re conducting every day.
“And traditionally, computers are very much used on the exact side of things. Whereas this work is very interesting, because we’re using AI to suggest approaches to problems, or to point out which aspects of a given problem are important.
“So really, it’s using AI to tap into what you might think of as the more creative or intuitive aspects of mathematical research.”
What has machine learning been used for?
Williamson applied machine learning to a decades-old problem concerning Kazhdan-Lusztig polynomials, part of a branch of mathematics called representation theory, which studies abstract algebraic structures by representing their elements as linear transformations of vector spaces.
He explains Kazhdan-Lusztig polynomials by comparing them with the periodic table: “In representation theory we seek things like periodic tables, so we might call this, for example, the character table of the group.
“Where Kazhdan-Lusztig polynomials come in is they tell you something like the atomic number. So, in the periodic table you have helium and hydrogen and all the different elements, but one needs to know more fine-grained information in order to use the periodic table. Under this analogy, this fine-grained information is provided by Kazhdan-Lusztig polynomials.
“These polynomials are still poorly understood, and difficult to compute. But there’s been a suspicion for 40 years that you can deduce this polynomial simply from a graph that’s rather easy to compute.”
Using these machine learning models, Williamson may be close to proving the conjecture: “We haven’t proved it yet, but we have a relationship that we’ve tested on millions of examples, and it holds up.”
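Stripped of the mathematics, the experiment has a familiar machine-learning shape: feed in the easy-to-compute graph, ask a model to predict the hard-to-compute polynomial’s coefficients, and see whether it succeeds on unseen examples. The sketch below shows only that shape; the graph encoding, model and random data are hypothetical stand-ins (the published study used graph neural networks on the relevant intervals), not the team’s actual code.

```python
import torch
import torch.nn as nn

# Hypothetical shape of the conjecture-hunting experiment: if the easy
# graph really determines the hard polynomial, a model trained on
# (graph -> coefficients) pairs should predict well on unseen graphs.
# encode_graph and the random data below are placeholders.

def encode_graph(adjacency: torch.Tensor) -> torch.Tensor:
    """Toy fixed-length encoding of a graph: simple degree statistics."""
    degrees = adjacency.sum(dim=1)
    return torch.stack([degrees.mean(), degrees.std(), degrees.max()])

model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 4))
optimiser = torch.optim.Adam(model.parameters())

def train_step(graphs, coefficients):
    """One gradient step towards predicting polynomial coefficients."""
    x = torch.stack([encode_graph(a) for a in graphs])
    loss = nn.functional.mse_loss(model(x), coefficients)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

# Smoke test on random stand-in data; real pairs would come from a
# computer-algebra system that computes the polynomials directly.
graphs = [torch.randint(0, 2, (6, 6)).float() for _ in range(8)]
coefficients = torch.randn(8, 4)
print(train_step(graphs, coefficients))
```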
Meanwhile the Oxford team, study co-authors Marc Lackenby and András Juhász, applied the same kind of machine learning to knot theory, the branch of mathematics concerned with mathematical knots.
Knots in mathematics are similar to, and inspired by, the kind of knot you might tie in a piece of string. But unlike knots in the real world, mathematical knots have their ends joined to form a closed loop, so they cannot be undone without cutting.
“You want to know, ‘can I undo this knot, or not?’” says Williamson. “And one of the things that you do is you associate to a knot various numbers that measure how twisted it is.
“We’ve developed, over the last couple of hundred years, many different ways of associating numbers to knots, so to any knot there’s probably hundreds of different numbers that you can associate to it, and they measure its character in various ways.”
Lackenby and Juhász wanted to know whether there were relationships between these traits that hadn’t yet been noticed. So, they asked the computer whether it could predict certain ‘personality traits’ of a knot when given some of the others.
As it turned out, it could. With the help of AI, the Oxford researchers have now established a completely new theorem in their field.
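Concretely, this is a regression problem: given a table of invariants for many knots, can a model predict one column from the rest, and which inputs does it lean on? In the published study the predicted quantity was the knot’s signature; everything else in the sketch below, including the random data, is an illustrative placeholder.

```python
import torch
import torch.nn as nn

# Hypothetical version of the knot-theory experiment: predict one knot
# invariant from several others, then inspect the input gradients to
# see which invariants drive the prediction. Attribution of this kind
# is what pointed the researchers towards the relationship they proved.
# The data here is random; real rows would come from knot tables.

n_knots, n_invariants = 1000, 12
X = torch.randn(n_knots, n_invariants)   # stand-ins for other invariants
y = torch.randn(n_knots, 1)              # stand-in for the target invariant

model = nn.Sequential(nn.Linear(n_invariants, 64), nn.ReLU(),
                      nn.Linear(64, 1))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(200):                     # quick training loop
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    optimiser.step()

# Saliency: average gradient magnitude of the output with respect to
# each input, a crude signal for which invariants matter most.
X.requires_grad_(True)
model(X).sum().backward()
print(X.grad.abs().mean(dim=0))
```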
What’s all this maths good for?
If all of this sounds confusing, that’s because it is. Pure mathematics is the kind of field that eludes most people’s understanding, because it’s associated with realms, dimensions, shapes and scales that most human brains never consciously contend with.
But Williamson says pure mathematics, while valuable in and of itself as a way to interrogate the universe of numbers, has a way of cropping up all over our day-to-day lives, whether we know it or not.
“In some sense, in pure mathematics, we’re just fascinated by the problems, but mathematics also has this recurring habit of being extremely useful,” he says.
“Every time you do a bank transaction, you’re using prime numbers. Every time you send a WiFi signal, you’re using coding theory. Riemann found the laws that govern curved space 50 years before Einstein needed them to write down general relativity.”
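The bank-transaction example refers to public-key cryptography such as RSA, whose security rests on how hard it is to factor the product of two large primes. A toy version with deliberately tiny (and hopelessly insecure) primes shows the idea:

```python
# Toy RSA with textbook-sized primes: illustrative only, wildly insecure.
# Real systems use primes hundreds of digits long.

p, q = 61, 53
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120, kept secret
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e

message = 42
ciphertext = pow(message, e, n)      # anyone can encrypt with (e, n)
recovered = pow(ciphertext, d, n)    # only the holder of d can decrypt

assert recovered == message
print(n, ciphertext, recovered)
```

Recovering the private exponent from the public pair requires factoring n; with tiny primes that takes an instant, but at the prime sizes used in real transactions it is computationally infeasible.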
And Williamson says the mathematical problems the researchers are attacking with AI may yet throw up new possibilities for practical application.
“One of the very interesting things about this work is that very similar techniques helped in both cases, but these two questions are rather different areas of mathematics. So, you’ve got to assume that these techniques are more broadly applicable, and that this will grow into a tool for the working mathematician, which could have a huge impact.”
But ultimately, for Williamson, the joy of mathematics is in the creativity it affords; something AI can now help with: “The striking thing for me is that what I love about mathematics is intuition and creativity, and this seems to be a tool that fosters that.”
In a way, he notes, working with machine learning models is a little bit like working with other mathematicians, who all have their own unique ways of seeing the world.
“We’re a very collaborative species, mathematicians, and every time you talk to someone, they’ve got a slightly different take on things.
“Interacting with this model is a little bit like that, [it provides] that slightly different take on things. It’s nothing like a human collaborator, but it’s still pushed me in unexpected ways, which I find very exciting.”
And while these AI models are certainly nowhere near as complex as a human brain, they possess eerily human-like characteristics.
“AI is good at the kinds of soft human tasks computers are typically very bad at, like speech recognition, image recognition,” Williamson says. “It’s good at tasks that we do effortlessly but which we struggle to program on a computer.
“These are tasks which belong to a very different part of the human experience.”