New hardware used in ultra-fast analogue deep learning device

Artificial intelligence (AI), or machine learning, is taking the computing world by storm, although it's been under development for decades. AI tools are changing the way we use data and computers in fields ranging from medicine to traffic control. New research shows how we can make AI even more efficient and useful.

The name “artificial intelligence” often stirs the imagination, conjuring images of sentient robots. But the reality is different. Machine learning does not emulate human intelligence. What it does do, however, is mimic the complex neural pathways that exist in our own brains.

This mimicry is what gives AI its power. But that power comes at great cost – both financially and in the energy required to run the machines.

Research from the Massachusetts Institute of Technology (MIT), published in Science, is part of a growing subset of AI research focused on architectures that are cheaper to build, faster and more energy efficient.

The multidisciplinary team used programmable resistors to produce “analogue deep learning” machines. Just as transistors are the core of digital processors, the resistors are built into repeating arrays to create a complex, layered network of artificial “neurons” and “synapses”. The machine can achieve complicated tasks such as image recognition and natural language processing.
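
To picture how such an array computes, here is a minimal sketch of a resistor crossbar performing a neural network layer's weighted sum in one analogue step. The dimensions and values are illustrative assumptions, not details from the MIT paper.

```python
import numpy as np

# Illustrative sketch: an analogue crossbar array computes a weighted sum
# using physics. Each crosspoint resistor has a programmable conductance G
# (the "synaptic weight"). Applying input voltages V to the rows makes each
# resistor pass a current I = G * V (Ohm's law), and the currents flowing
# into each column sum automatically (Kirchhoff's current law).

rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 3                                    # toy dimensions
conductances = rng.uniform(0.1, 1.0, (n_outputs, n_inputs))   # toy values, siemens
input_voltages = rng.uniform(0.0, 0.5, n_inputs)              # toy values, volts

# The physics performs the whole matrix-vector product at once:
output_currents = conductances @ input_voltages

print(output_currents)  # one current per output "neuron"
```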

Humans learn through the weakening and strengthening of the synapses which connect our neurons – the brain cells.

Whereas digital deep learning weakens and strengthens the links between artificial neurons through software algorithms, analogue deep learning does so by increasing or decreasing the electrical conductance of the resistors.

Increased conductance in the resistors is achieved by pushing more protons into them, attracting more electron flow. This is done using a battery-like electrolyte that allows protons to pass but blocks electrons.
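
A rough way to picture this training mechanism: each voltage pulse drives protons into (or out of) the oxide, nudging its conductance up or down, much as a digital optimiser nudges a stored weight. The sketch below is a toy model under that assumption; the class, pulse response and parameter values are illustrative, not measured device behaviour.

```python
# Toy model of a protonic programmable resistor (an artificial "synapse").
# Assumption: conductance rises roughly in proportion to the number of
# protons inserted into the oxide channel, within a fixed device range.

class ProtonicResistor:
    def __init__(self, g_min=0.1, g_max=1.0, g_per_pulse=0.01):
        self.g_min, self.g_max = g_min, g_max   # conductance limits (toy units)
        self.g_per_pulse = g_per_pulse          # conductance change per pulse
        self.conductance = g_min

    def pulse(self, polarity):
        """Apply one voltage pulse: +1 inserts protons, -1 extracts them."""
        g = self.conductance + polarity * self.g_per_pulse
        self.conductance = min(self.g_max, max(self.g_min, g))

r = ProtonicResistor()
for _ in range(20):
    r.pulse(+1)           # potentiate: push protons in, conductance rises
print(r.conductance)      # 0.1 + 20 * 0.01 = 0.3
```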

“The working mechanism of the device is electrochemical insertion of the smallest ion, the proton, into an insulating oxide to modulate its electronic conductivity. Because we are working with very thin devices, we could accelerate the motion of this ion by using a strong electric field and push these ionic devices to the nanosecond operation regime,” says senior author Bilge Yildiz, professor in the Nuclear Science and Engineering, and Materials Science and Engineering departments at MIT.

Using phosphosilicate glass (PSG), an inorganic compound, as the base material for the resistors, the team found their analogue deep learning device could process information one million times faster than previous attempts. This makes their machine about one million times faster than the firing of our own synapses.

“The action potential in biological cells rises and falls with a timescale of milliseconds, since the voltage difference of about 0.1 volt is constrained by the stability of water,” says senior author Ju Li, professor of materials science and engineering. “Here we apply up to 10 volts across a special solid glass film of nanoscale thickness that conducts protons, without permanently damaging it. And the stronger the field, the faster the ionic devices.”
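
The figures in these quotes imply an enormous electric field. As a back-of-the-envelope check, assuming a film thickness of around 10 nanometres (an illustrative assumption; the quote only says "nanoscale"), the field and the speed-up work out roughly as follows.

```python
# Back-of-the-envelope check of the figures quoted above.
# Assumption: film thickness of ~10 nm ("nanoscale" in the quote).

film_voltage = 10.0      # volts, from the quote
film_thickness = 10e-9   # metres, assumed illustrative value

field = film_voltage / film_thickness
print(f"field across film: {field:.0e} V/m")   # ~1e9 V/m

# Timescale comparison: nanosecond device vs millisecond synapse.
synapse_time = 1e-3      # seconds, action-potential timescale from the quote
device_time = 1e-9       # seconds, the nanosecond operation regime
print(f"speed-up: {synapse_time / device_time:.0e}x")  # ~1e6: a million times
```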

The resistor can run for millions of cycles without breaking down because the protons don't damage the material.

“The speed certainly was surprising. Normally, we would not apply such extreme fields across devices, in order to not turn them to ash. But instead, protons ended up shuttling at immense speeds across the device stack, specifically a million times faster compared to what we had before. And this movement doesn’t damage anything, thanks to the small size and low mass of protons,” says lead author and MIT postdoc Murat Onen.

“The nanosecond timescale means we are close to the ballistic or even quantum tunnelling regime for the proton, under such an extreme field,” adds Li.

PSG also makes the device extremely energy efficient, and because it is compatible with silicon fabrication techniques, the device can be integrated into commercial computing hardware.

“With that key insight, and the very powerful nanofabrication techniques, we have been able to put these pieces together and demonstrate these devices are intrinsically very fast and operate with reasonable voltages,” says senior author Jesús A. del Alamo, a professor in MIT’s Department of Electrical Engineering and Computer Science (EECS). “This work has really put these devices at a point where they now look really promising for future applications.”

“Once you have an analog processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft,” Onen adds.

Analogue deep learning has two key advantages over its digital cousin.

Onen says computation is performed within the memory device itself, rather than data being shuttled between memory and a processor.

Analogue processors also perform their operations in parallel, so a larger network does not need proportionally more computation time.
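
A small sketch of why that matters, under the simplified cost model that a digital processor performs multiply-accumulate operations one at a time while a crossbar settles in roughly one analogue step regardless of size (an idealisation that ignores real-world read-out and precision costs):

```python
# Idealised cost model (an assumption for illustration, not a benchmark):
# a digital processor does one multiply-accumulate per step, while an
# analogue crossbar computes the whole matrix-vector product in ~one step,
# because every resistor conducts simultaneously and currents sum in place.

def digital_steps(n_inputs, n_outputs):
    return n_inputs * n_outputs   # sequential multiply-accumulates

def analogue_steps(n_inputs, n_outputs):
    return 1                      # one settle-and-read of the whole array

for n in (10, 100, 1000):
    print(n, digital_steps(n, n), analogue_steps(n, n))
# The digital cost grows quadratically with layer width; the analogue
# array's single-step cost does not (its cost is paid in hardware area).
```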

Now that the effectiveness of the device has been shown, the team aims to engineer the devices for high-volume manufacturing. They also plan to remove factors that limit the voltage needed to move the protons efficiently.

“The collaboration that we have is going to be essential to innovate in the future. The path forward is still going to be very challenging, but at the same time it is very exciting,” Professor del Alamo says.
