‘Mind-reading’ may help those who cannot speak

Scientists are a step closer to helping people who cannot speak to communicate by thought alone.

For the first time, a research team has been able to decode brain activity in a simple question-and-answer session, recognising in real time both what a person hears and what they wish to say in response.

Edward Chang and colleagues from the University of California San Francisco, US, say their work is an important step towards the development of a speech neuroprosthesis – a device to help people who can no longer speak because of illness or injury.

Tools such as eye-tracking devices and electrodes attached to the scalp are already helping patients who cannot communicate otherwise, but their letter-by-letter approach is slow, sometimes producing fewer than eight words a minute.

So far there is no speech prosthetic system that enables users to interact at the speed of normal human conversation.

Chang and his team wanted to tackle this problem by leveraging current knowledge about the role of the cortex – the wrinkly, outermost layer of the brain – in communication.

Areas of the cortex are activated during listening and speaking, producing neural signals that can be recorded with electrodes and potentially decoded.

To date, research in this field has treated the functions of hearing and speaking as separate, but the new research is aimed at bringing those functions together to mimic real-world communication.

Using high-density electrocorticography, the researchers first recorded activity in the brains of three patients as they listened to a series of questions and spoke aloud the full set of prepared answers.

The question and answer sets were deliberately constrained in scope. For example, to the question “how is your room currently?” the valid answers were: bright, dark, cold, hot or fine.

Having captured the neural signals associated with the questions and all valid answers, the researchers asked the questions again, this time allowing the patients to choose whichever answer they preferred from the valid options.

By reading neural signals in the high gamma frequency range, the researchers were able to identify which question the patient was hearing 76% of the time.
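The sketch below is not the team's pipeline; it is a minimal illustration of how high-gamma activity is typically extracted from electrode recordings before any decoding, using standard Python signal-processing tools. The array names, sampling rate, frequency band edges and the classifier suggested in the comments are all assumptions made for illustration.

```python
# Illustrative sketch only -- not the authors' method. Assumes a NumPy
# array `ecog` of shape (n_channels, n_samples) sampled at `fs` Hz, and
# extracts high-gamma (roughly 70-150 Hz) amplitude features that a
# classifier could then map to question identity.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_power(ecog: np.ndarray, fs: float,
                     band=(70.0, 150.0)) -> np.ndarray:
    """Band-pass each channel and return its analytic-amplitude envelope."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, ecog, axis=-1)       # zero-phase band-pass
    envelope = np.abs(hilbert(filtered, axis=-1))  # high-gamma amplitude
    return envelope

# A toy decoding step: average the envelope over a trial window to get one
# feature per channel, then fit any off-the-shelf classifier.
# (`X_trials` and `question_labels` are hypothetical placeholders.)
# from sklearn.linear_model import LogisticRegression
# features = np.stack([high_gamma_power(trial, fs).mean(axis=-1)
#                      for trial in X_trials])
# clf = LogisticRegression(max_iter=1000).fit(features, question_labels)
```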

Once the question was identified, the researchers knew that only certain responses were possible, which made decoding the answer from cortical activity easier. The process produced an accurate translation of the answer 61% of the time, compared with a chance rate of just 7%.
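The toy example below illustrates that context idea, not the team's actual model: the decoded question probabilities act as a prior over which answers are plausible, and that prior is combined with the answer probabilities read from cortical activity. All question labels, answer sets and numbers here are hypothetical.

```python
# Toy illustration of context-constrained decoding -- not the authors' model.
import numpy as np

# P(question | neural data), e.g. "How is your room currently?" vs another question.
question_probs = {"room": 0.8, "music": 0.2}

# Which answers are valid for each question (the constrained answer sets).
valid_answers = {
    "room": ["bright", "dark", "cold", "hot", "fine"],
    "music": ["violin", "drums", "piano"],
}

# P(answer | neural data) from the answer decoder, before applying context.
answer_likelihood = {"bright": 0.30, "dark": 0.25, "cold": 0.10, "hot": 0.05,
                     "fine": 0.10, "violin": 0.08, "drums": 0.07, "piano": 0.05}

# Weight each answer's likelihood by the probability of the questions it
# could answer, then renormalise to get a context-aware posterior.
scores = {}
for ans, like in answer_likelihood.items():
    prior = sum(p for q, p in question_probs.items() if ans in valid_answers[q])
    scores[ans] = like * prior
total = sum(scores.values())
posterior = {ans: s / total for ans, s in scores.items()}

print(max(posterior, key=posterior.get))  # most likely answer given the context
```

Because the "room" question is far more likely here, the implausible music-related answers are heavily down-weighted, which is the intuition behind why constraining answers by the detected question improves accuracy.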

While the results were not perfect, the approach shows obvious potential in a care context, where a patient without speech or mobility may be asked a simple question such as their level of pain on a scale from one to 10.

Only 10 answers are possible, and if the answer can be detected in real time from neural signals the benefit to the patient may be considerable.

The instant nature of the translation is a major step towards more naturalistic applications, says Chang.

“For some impaired individuals, such as patients with locked-in syndrome, who are conscious but unable to communicate naturally due to paralysis, restoration of limited communicative capability is associated with significant increases in self-reported quality of life.”

The findings are published in the journal Nature Communications.

Earlier this year, a team led by Chang reported on a related project measuring brain activity related to the movements made by the jaw, larynx, lips, and tongue when people are attempting to speak.
