How the brain processes speech in a crowded room

How well can you hear someone in a crowded room? What if they’re speaking quietly compared to the other chatter?

A study published in PLOS Biology has identified two distinct ways our brains process speech when other conversations are going on around us.

The answers could help develop hearing technology that lets people focus on specific sounds in noisy environments – current devices simply amplify all the sound in a room.


The researchers recruited seven participants, all of whom were undergoing brain surgery to treat epilepsy.

As part of their treatment, the patients had electrodes implanted in their brains, allowing the researchers to collect neural data that can't be obtained with less invasive methods.

The researchers recorded intracranial EEG data from each patient while they listened to a series of male and female voices talking over each other at different volumes.

While they listened, patients were asked to focus on one voice or the other.

Sometimes the voice they had to focus on was louder than the competing voice (referred to as "glimpsed" speech); sometimes it was quieter ("masked").
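In essence, "glimpsed" versus "masked" comes down to a moment-by-moment comparison of the target talker's level against the background. The short Python sketch below illustrates that labelling idea with made-up envelope values and a hypothetical `label_segments` helper – it is not the study's actual analysis code.

```python
import numpy as np

def label_segments(target_env, background_env):
    """Label each time frame of the target talker as 'glimpsed' or 'masked'.

    A frame counts as 'glimpsed' when the target's envelope exceeds the
    background envelope, and as 'masked' when it falls below it.
    """
    target_env = np.asarray(target_env, dtype=float)
    background_env = np.asarray(background_env, dtype=float)
    return np.where(target_env > background_env, "glimpsed", "masked")

# Toy example: a target talker drifting above and below steady background chatter.
t = np.linspace(0, 1, 100)
target = 0.5 + 0.4 * np.sin(2 * np.pi * 2 * t)   # fluctuating target envelope
background = np.full_like(t, 0.5)                # constant background level
print(label_segments(target, background)[:10])
```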

The brain activity showed that masked and glimpsed speech were processed differently in two areas of the brain: the primary and secondary auditory cortex.

Glimpsed speech was encoded in both regions, but masked speech was encoded only when it came from the voice the listener was attending to.
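Studies like this typically measure "encoding" by how well the activity recorded at an electrode can be predicted from the speech signal itself. The following sketch shows that general idea using a simple lagged ridge regression on toy data – an assumed, minimal stand-in for the authors' actual pipeline, not a reproduction of it.

```python
import numpy as np

def encoding_accuracy(envelope, neural, max_lag=20, alpha=1.0):
    """Fit a lagged linear (ridge) model predicting a neural channel from a
    speech envelope; return the correlation between predicted and actual
    responses. Higher correlation = stronger 'encoding' of that speech.
    """
    # Design matrix of time-lagged copies of the envelope.
    X = np.column_stack([np.roll(envelope, lag) for lag in range(max_lag)])
    X[:max_lag, :] = 0  # discard wrapped-around samples
    # Ridge regression: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(max_lag), X.T @ neural)
    pred = X @ w
    return np.corrcoef(pred, neural)[0, 1]

# Toy data: a neural channel that tracks the envelope with a short delay plus noise.
rng = np.random.default_rng(0)
env = rng.random(2000)
neural = np.roll(env, 5) + 0.5 * rng.standard_normal(2000)
print(round(encoding_accuracy(env, neural), 2))
```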

“When listening to someone in a noisy place, your brain recovers what you missed when the background noise is too loud,” explains lead author Vinay Raghavan, a PhD candidate at Columbia University, US.

“Your brain can also catch bits of speech you aren’t focused on, but only when the person you’re listening to is quiet in comparison.”

The findings could help to develop the emerging field of hearing neurotechnology, which uses brain signals to pick out and amplify the sounds a listener is focusing on.
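One way this kind of neurotechnology is often sketched is "auditory attention decoding": reconstruct a speech envelope from brain signals, match it against each talker in the room, and boost the best match. Below is a deliberately simplified, hypothetical illustration of that loop – the function name, gain values and toy data are invented for illustration, not taken from any real device.

```python
import numpy as np

def remix_by_attention(decoded_env, talker_envs, talker_audio, boost=4.0):
    """Crude neuro-steered remixing: compare an envelope decoded from brain
    activity with each talker's envelope, then amplify the best match.
    """
    scores = [np.corrcoef(decoded_env, env)[0, 1] for env in talker_envs]
    attended = int(np.argmax(scores))  # talker whose envelope best matches the brain signal
    gains = [boost if i == attended else 1.0 for i in range(len(talker_audio))]
    mix = sum(g * a for g, a in zip(gains, talker_audio))
    return attended, mix

# Toy example with two talkers; the "decoded" envelope resembles talker 0.
rng = np.random.default_rng(1)
env0, env1 = rng.random(1000), rng.random(1000)
decoded = env0 + 0.3 * rng.standard_normal(1000)
attended, mix = remix_by_attention(decoded, [env0, env1], [env0 - 0.5, env1 - 0.5])
print("attended talker:", attended)
```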
