Self-learning quantum-enhanced artificial intelligence is one step closer following the invention of a prototype quantum device that can generate all possible futures in a simultaneous quantum superposition.
The device – created in a collaboration between researchers at Australia’s Griffith University and the Nanyang Technological University in Singapore – is a custom-designed photonic quantum information processor in which the potential future outcomes of a decision process are represented by the locations of photons.
“When we think about the future, we are confronted by a vast array of possibilities,” says Mile Gu of Nanyang, who was in charge of developing the quantum algorithm that underpins the device.
“These possibilities grow exponentially as we go deeper into the future. For instance, even if we have only two possibilities to choose from each minute, in less than half an hour there are 14 million possible futures. In less than a day, the number exceeds the number of atoms in the universe.”
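Gu’s figures follow from simple doubling: with two options each minute, the number of distinct futures after n minutes is 2^n. A minimal illustrative sketch in Python (the minute counts below are chosen for illustration, not taken from the paper):

```python
# Two choices per minute means 2**n distinct futures after n minutes.
# The minute counts below are illustrative, not figures from the paper.
for minutes in (10, 24, 266):
    print(f"{minutes:4d} min -> {2 ** minutes:.3e} possible futures")

# 2**24 = 16,777,216 -- already past 14 million within half an hour.
# 2**266 is roughly 1.2e80, exceeding a commonly cited ~1e80 estimate of
# the number of atoms in the observable universe, well within a day.
```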
The device constructed by the Australian-Singaporean team is considerably more modest in its abilities. It can hold just 16 possible futures in simultaneous superposition, weighted by their probability of occurrence. The algorithm that underpins it, however, can in principle be scaled up without limit.
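One way to picture such a probability-weighted superposition – purely as an illustration, not the team’s actual photonic encoding – is as a vector of 16 amplitudes whose squares reproduce the probabilities of the 16 futures:

```python
import numpy as np

# Illustration only: a superposition over 16 possible futures, with
# amplitude sqrt(p_i) for a future occurring with probability p_i.
# The probabilities here are randomly generated, not from the experiment.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(16))   # 16 probabilities summing to 1
amplitudes = np.sqrt(probs)          # the probability-weighted superposition

assert np.isclose(np.sum(amplitudes ** 2), 1.0)  # state is normalised
print(amplitudes)
```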
“Our approach is to synthesise a quantum superposition of all possible futures for each bias,” lead author Farzad Ghafari from Griffith University says.
“By interfering these superpositions with each other, we can completely avoid looking at each possible future individually. In fact, many current artificial intelligence algorithms learn by seeing how small changes in their behaviour can lead to different future outcomes, so our techniques may enable quantum-enhanced AIs to learn the effect of their actions much more efficiently.”
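A toy illustration of that interference idea – a sketch under made-up coin-flip probabilities, not the published quantum algorithm – is to prepare two such superpositions for two slightly different biases and compute their overlap, a single quantity that reflects how the whole set of futures shifts without examining each future individually:

```python
import numpy as np

def future_state(probs):
    """Amplitude vector sqrt(p_i) for a set of future probabilities."""
    probs = np.asarray(probs, dtype=float)
    return np.sqrt(probs / probs.sum())

# Toy example: the 16 futures of 4 coin flips under two different biases.
# A 4-flip outcome with k heads occurs with probability p**k * (1-p)**(4-k).
def four_flip_probs(p):
    return np.array([p ** bin(i).count("1") * (1 - p) ** (4 - bin(i).count("1"))
                     for i in range(16)])

state_a = future_state(four_flip_probs(0.50))
state_b = future_state(four_flip_probs(0.55))

# The overlap (the Bhattacharyya overlap of the two future distributions)
# summarises in one interference-style number how a small change in bias
# shifts all 16 futures at once.
overlap = np.dot(state_a, state_b)
print(f"overlap between the two sets of futures: {overlap:.6f}")
```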
Such outcomes, however, are still a long way off. Holding 16 possible outcomes in quantum superposition is impressive, but a practically useful device would need to hold orders of magnitude more.
It is, nevertheless, a start. And in making that start the researchers are quick to place it in its historical context.
Co-author Jayne Thompson credits the late Nobel laureate physicist Richard Feynman with providing inspiration for the research.
“When Feynman started studying quantum physics, he realised that when a particle travels from point A to point B, it does not necessarily follow a single path,” she says.
“Instead, it simultaneously traverses all possible paths connecting the points. Our work extends this phenomenon and harnesses it for modelling statistical futures.”
Co-author Geoff Pryde compares the team’s success with that of the classical computing researchers of half a century ago.
“This is what makes the field so exciting,” he says.
“It is very much reminiscent of classical computers in the 1960s. Just as few could imagine the many uses of classical computers in the 1960s, we are still very much in the dark about what quantum computers can do.
“Each discovery of a new application provides further impetus for their technological development.”
The research is published in the journal Nature Communications.