Robo-skin mimics our neural architecture

Researchers from the National University of Singapore have created an electronic skin that allows robots to detect temperature, pressure and the sudden, unintended movement of objects in their grasp.

The membrane, dubbed Asynchronously Coded Electronic Skin, or ACES, overcomes several problems associated with previous attempts to bestow a sense of touch on robots and should, the researchers say, “enable them to work collaboratively and naturally with humans to manipulate objects in unstructured living environments.”

Current designs for pressure-sensitive robo-skin fall foul of information processing hurdles. Sensors are read serially, and thus periodically, to create a two-dimensional map of changing pressure inputs.

This approach, known as time-division multiple access (TDMA), results in significant delays – known as “latency” in the business – in signal receipt. The issue creates processing bottlenecks, which become worse as more sensors, and more wiring, are added to the system.
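To see why serial polling scales badly, consider a toy sketch (not the authors' implementation, and with purely illustrative numbers): if each sensor gets its own read slot, the worst-case wait before any one sensor is read again grows linearly with the size of the array.

```python
# Toy illustration of TDMA-style serial polling: each sensor is read
# in its own time slot, so the worst-case latency before a sensor is
# sampled again grows linearly with the number of sensors.
# The per-read time is an assumption, not a figure from the paper.

READ_TIME_US = 10  # assumed time to read one sensor, in microseconds

def scan_latency_us(num_sensors: int) -> int:
    """Worst-case delay before a given sensor is read again:
    one full serial pass over every sensor in the array."""
    return num_sensors * READ_TIME_US

for n in (100, 1_000, 10_000):
    print(f"{n:>6} sensors -> {scan_latency_us(n) / 1000:.1f} ms per full scan")
```

Under these assumptions a 10,000-sensor array would wait a full tenth of a second between reads of the same sensor, which is the kind of bottleneck an asynchronous scheme sidesteps.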

Writing in the journal Science Robotics, Wang Wei Lee and colleagues report that ACES works on a different set of engineering protocols. Their invention uses “neuromimetic architecture”, which permits the near-simultaneous transmission of information, in systems modelled, so far, with more than 10,000 interconnected sensors.

In a proof-of-concept described in their paper, the researchers report the operation of an array of 240 mechanoreceptors, linked by a single electrical conductor. The system operated with latencies of just one millisecond – sufficient, they write, for resolving the “fine spatiotemporal features necessary for rapid tactile perception”.

Sensors within an ACES array work independently, sending a series of pulses to a central processing unit in response to stimuli. The pulses form a signature which permits a decoder unit at the receiving end to identify each transmitting sensor.
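The gist of that scheme can be sketched in a few lines. The code below is a hypothetical toy model, not the ACES encoding itself: each sensor is assigned a fixed set of pulse time-bins (its signature), active sensors merge their pulses onto one shared wire, and the decoder identifies a sensor whenever enough of its signature bins appear in the combined signal.

```python
# Toy sketch of signature-based decoding on a single shared conductor,
# inspired by the ACES idea but NOT the authors' actual encoding.
# Eight sensors, 32 time bins; the signature scheme below is a made-up
# construction chosen so that distinct sensors share few pulse bins.

NUM_SENSORS = 8
NUM_BINS = 32  # four blocks of eight time bins

def make_signature(sensor_id: int) -> set[int]:
    """One pulse per block of eight bins, at an offset derived from
    the sensor id (hypothetical scheme; any two distinct signatures
    overlap in at most two bins)."""
    return {8 * (k - 1) + (sensor_id * k) % 8 for k in range(1, 5)}

signatures = {sid: make_signature(sid) for sid in range(NUM_SENSORS)}

def transmit(active_ids: set[int]) -> set[int]:
    """Merge the pulse trains of all active sensors onto one wire:
    the decoder only sees which time bins carry a pulse."""
    line: set[int] = set()
    for sid in active_ids:
        line |= signatures[sid]
    return line

def decode(line: set[int], threshold: float = 0.8) -> set[int]:
    """Deem a sensor active when the fraction of its signature bins
    seen on the wire meets the decoder threshold."""
    return {sid for sid, sig in signatures.items()
            if len(sig & line) / len(sig) >= threshold}

print(sorted(decode(transmit({2, 5}))))
```

Because any two of these made-up signatures collide in at most two of their four bins, an inactive sensor can never reach the 0.8 threshold, so the decoder recovers exactly the set of firing sensors from the single shared line.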

Because the sensors all act independently, or asynchronously, Lee and colleagues report, the system is able to withstand localised damage. The loss of any individual sensor, or group of sensors, does not affect the operation of the system as a whole.

This property is seen as an advantage for use in real-world situations. In industrial applications, for example, the slippage of an object being handled could shred or lacerate the skin without destroying its overall function.

The deployment of ACES has yet to be scaled up to industrial levels. The researchers concede that the system will require a hefty increase in computational power, compared to TDMA models, primarily to accommodate the decoding unit.

The proof-of-concept model consumed power at a rate around the middle of the range used by TDMA applications, but Lee and colleagues predict that will change for the better.

“It is important to note that our reported power consumption should be seen as an upper boundary, because these microcontroller-based prototypes were meant to be an early demonstration of ACES using off-the-shelf components,” they write.

Image description

(A) Artificial sensors, or receptors, on electronic skin generate tactile events with spatiotemporal structures (dashed lines) that encode the stimulation sequence. (B) Pulse signatures are combined and propagated via a single conductor (wire). (C) Decoders match pulse signatures with the sensors.

A strong match, with a correlation exceeding a predefined decoder threshold, indicates an event (such as the slippage of an object, or applied pressure) at the particular receptor. The schematic on the right shows the structures and pathways of the human nervous system that the ACES system mimics.
