Mathematicians in Portugal and Italy say they have shown that artificial intelligence systems can learn to recognise complex images more quickly than they do now, using a theory they concede is novel and rather abstract.
Writing in the journal Nature Machine Intelligence, the team from the Champalimaud Centre for the Unknown in Portugal describes its application of what is called topological data analysis, or TDA.
First developed 25 years ago by co-author Patrizio Frosini, now at the University of Bologna, Italy, TDA is based on topology, a sort of extended geometry that, instead of measuring lines and angles in rigid shapes, such as triangles or squares, classifies highly complex objects according to their shape.
It has applications in everything from cosmology and theoretical physics to robotics and biology.
The point in this case, the researchers say, is that the neural networks that are the basis of AI are not very good at topology.
Systems that are essentially electronic models of networks of biological neurons can learn, for example, to recognise virtually any human face by looking at and learning from thousands of images, but they cannot apply any real-world knowledge to simplify the process.
For instance, they do not recognise rotated objects as the same object. Each rotation looks completely different to the network, so it has to memorise every configuration separately.
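The contrast can be illustrated with a toy sketch (not the paper's own method): raw coordinates change completely under rotation, while a descriptor based purely on the object's shape, here the sorted pairwise distances between points, does not.

```python
import numpy as np

# A toy "object": five 2-D points (think of them as landmark pixels).
points = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 2.0]])

def rotate(pts, angle):
    """Rotate a set of 2-D points about the origin."""
    c, s = np.cos(angle), np.sin(angle)
    return pts @ np.array([[c, -s], [s, c]])

def pairwise_distances(pts):
    """Sorted pairwise distances: a shape descriptor that ignores orientation."""
    diffs = pts[:, None, :] - pts[None, :, :]
    d = np.sqrt((diffs ** 2).sum(-1))
    return np.sort(d[np.triu_indices(len(pts), k=1)])

rotated = rotate(points, np.pi / 3)

# The raw coordinates change completely under rotation...
print(np.allclose(points, rotated))  # False
# ...but the shape descriptor is identical.
print(np.allclose(pairwise_distances(points), pairwise_distances(rotated)))  # True
```

A network fed raw coordinates must learn each orientation separately; one fed the shape descriptor gets rotation invariance for free.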
Moreover, the researchers add, much as these machines are increasingly successful at pattern recognition, nobody really knows what goes on inside them as they learn their task.
This led them to ask: is there a way to inject some knowledge into the neural network, before training, in order to cause it to explore a more limited space of possible features instead of considering them all – including those that are impossible in the real world?
“We wanted to control the space of learned features”, says first author Mattia Bergomi. “It’s similar to the difference between a mediocre chess player and an expert: the former sees all possible moves, while the latter sees only the good ones.”
Their study, he adds, addressed one simple question: “When we train a deep neural network to distinguish road signs, how can we tell the network that its job will be much easier if it only has to care about simple geometrical shapes such as circles and triangles?”
TDA can be seen, the researchers say, as a tool for finding meaningful internal structure – topological features – in any complex object that can be represented as a huge set of numbers, by looking at the data through certain well-chosen “lenses” or filters.
If the data is about faces, it becomes possible to teach a neural network to recognise faces without having to present it with each of the different orientations faces might assume in space.
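The "lens" idea can be sketched in miniature. Persistent homology, TDA's main tool, is far richer than this, but one of its simplest instances is a sublevel-set filtration: sweep a threshold across the data and track how connected components appear and merge. The function name and signal below are illustrative, not from the paper.

```python
# Minimal sketch of a sublevel-set filtration on a 1-D signal:
# at each threshold, count the connected runs of samples lying below it.
# The thresholds at which components appear and merge form a crude
# "persistence" summary of the signal's shape.

def sublevel_components(signal, threshold):
    """Number of connected runs of samples with value <= threshold."""
    count, inside = 0, False
    for v in signal:
        if v <= threshold and not inside:
            count += 1
            inside = True
        elif v > threshold:
            inside = False
    return count

signal = [3, 1, 4, 1, 5, 9, 2, 6]
for t in [0, 1, 2, 4, 9]:
    print(t, sublevel_components(signal, t))
# As t rises, the two valleys at value 1 are born as separate components,
# then merge with their neighbours until a single component remains.
```

The threshold plays the role of the "lens": different filtrations expose different topological features of the same data.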
To test this, they set out to teach a neural network to recognise hand-written digits, which can be very ambiguous. Depending on the writer, two different digits may prove indistinguishable, while two instances of the same digit may look entirely different.
They built a set of a priori features they considered meaningful and forced the machine to choose between these different “lenses” to look at the images.
They found that the number of images the TDA-enhanced neural network needed to see to learn to distinguish between 5s and 7s – and thus the time it took to do so – decreased significantly.
“What we mathematically describe in our study is how to enforce certain symmetries, and this provides a strategy to build machine learning agents that are able to learn salient features from a few examples, by taking advantage of the knowledge injected as constraints”, says Bergomi.
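One generic way to enforce a symmetry, not necessarily the construction used in the paper, is group averaging: symmetrise a feature over a (discretised) group of transformations, so the result is invariant by construction. Both functions below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of symmetry enforcement by group averaging.
# The group here is the four 90-degree rotations of a small image.

def raw_feature(img):
    """An arbitrary feature that is NOT rotation-invariant: a weighted pixel sum."""
    weights = np.arange(img.size).reshape(img.shape)
    return float((img * weights).sum())

def invariant_feature(img):
    """Average the raw feature over all four rotations: invariant by construction."""
    return float(np.mean([raw_feature(np.rot90(img, k)) for k in range(4)]))

img = np.arange(9.0).reshape(3, 3)
rot = np.rot90(img)

print(raw_feature(img) == raw_feature(rot))                         # False
print(np.isclose(invariant_feature(img), invariant_feature(rot)))   # True
```

A network restricted to features like `invariant_feature` never wastes capacity distinguishing orientations, which is one way constraints of this kind can shrink the space it must explore.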
And, he adds, if humans can be allowed to drive the learning process of such machines, we can start to “move towards a more intelligible artificial intelligence and reduce the skyrocketing cost in time and resources that current neural networks require in order to be trained”.
Nick Carne is the editor of Cosmos Online and editorial manager for The Royal Institution of Australia.