Quantum oracle: AI predicts and fixes qubit failure


Machine learning kept unstable quantum bits in line – even before they wavered. Cathal O'Connell reports.


Qubits hold great processing potential but are too delicate to be useful yet. This may change, thanks to artificial intelligence.
EQUINOX GRAPHICS / SCIENCE PHOTO LIBRARY

Imagine predicting your car will break down and being able to replace the faulty part before it becomes a problem. Now Australian physicists have found a way to do this – albeit on a quantum scale.

In Nature Communications, they enlisted machine learning to “foresee” the future failure of a quantum bit, or qubit, and make the necessary corrections to stop the failure happening.

Quantum computing is a potentially world-changing technology, able in principle to complete in minutes tasks that would take current computers thousands of years. But practical, large-scale quantum technology is probably a long way off.

One of the major challenges is maintaining qubits in the delicate, zen-like state of superposition they need to do their business.

Any tiny nudge from the environment – such as the jiggly atom next door – knocks the qubit off balance.

So physicists go to great lengths to stabilise qubits, cooling them to more than 200 degrees below zero to reduce atomic jiggling. Even so, superposition typically lasts only a tiny fraction of a second, cutting quantum number-crunching time short.

A team led by Michael Biercuk at the University of Sydney found a new way of stabilising qubits against noise in the environment. It works by predicting how a qubit will behave and acting pre-emptively. In a quantum computer, the technique could make qubits twice as stable as before.

The team used control theory and machine learning (a kind of artificial intelligence) to estimate how the future of a qubit would play out.

Control theory is the branch of engineering that deals with feedback systems, such as the thermostat keeping your room temperature constant. The thermostat reacts to changes in the environment, pumping warm or cool air into the room.
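The thermostat analogy can be sketched as a simple reactive controller. This is a generic illustration, not code from the study; the function name and the temperature "comfort band" are invented for the example.

```python
def thermostat_step(temperature, setpoint, heater_on, band=0.5):
    """Decide whether the heater should run, reacting only
    after the temperature has already drifted."""
    if temperature < setpoint - band:
        return True   # too cold: start pumping warm air
    if temperature > setpoint + band:
        return False  # too warm: stop heating
    return heater_on  # within the comfort band: leave it as is
```

The key point of the analogy is that this controller is purely reactive: it waits for an error to appear, then responds.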

Newer machine-learning algorithms, meanwhile, look at how a system behaved in the past and use this information to predict how it will react to future events.

First, Biercuk’s team made a qubit by trapping a single ion of ytterbium in a beam of laser light. To train their algorithm, they simulated noise, tweaking the light to disturb the atom in a controlled way. Their algorithm monitored how the qubit responded to these tweaks and made a prediction for how it would behave in future.

Next, they let events play out for the qubit to check their algorithm’s accuracy. The longer the algorithm watched the qubit, the more accurate its predictions became.

Finally, the team used the predictions to help the system self-correct. The qubit was twice as stable with the algorithm as without it.
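The predict-then-correct idea can be illustrated with a toy simulation. Nothing here comes from the team's actual code: the slowly drifting noise and the simple extrapolation predictor are invented stand-ins for their machine-learning approach, meant only to show why pre-emptive correction beats doing nothing.

```python
import random

def drift_sequence(n, step=0.05, seed=1):
    """A slowly wandering noise signal, standing in for the
    environmental drift that knocks a qubit off balance."""
    random.seed(seed)
    x, out = 0.0, []
    for _ in range(n):
        x += random.uniform(-step, step)
        out.append(x)
    return out

def predict_next(history):
    """Linear extrapolation from the last two observations --
    a toy stand-in for a learned predictor."""
    if len(history) < 2:
        return history[-1] if history else 0.0
    return history[-1] + (history[-1] - history[-2])

noise = drift_sequence(200)
uncorrected = [abs(x) for x in noise]          # qubit left to drift
corrected, history = [], []
for x in noise:
    guess = predict_next(history)              # correction applied in advance
    corrected.append(abs(x - guess))           # only the prediction error remains
    history.append(x)

print(sum(corrected) < sum(uncorrected))       # pre-emption shrinks the error
```

Because the drift changes slowly, even this crude predictor cancels most of it before it accumulates, which is the same intuition behind applying the team's predictions to keep the qubit stable for longer.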

While similar machine-learning algorithms have been used in other advanced feedback systems, such as those used to stabilise the interferometer that detected gravitational waves last year, this is their first use in a quantum technology.

Because the technique is software based, it could be easily adapted by other quantum technology efforts around the world. To help, the team is making the computer code available to other scientists.

"The quantum future is looking better all the time," Biercuk says.

Cathal O'Connell is a science writer based in Melbourne.