Echoes – while sometimes beautiful – can make speech harder to understand, and tuning them out of recordings is particularly challenging for audio engineers.
The human brain appears to solve the problem by separating out the direct speech from its echo, according to a new study in the journal PLOS Biology.
“Echoes strongly distort the sound features of speech and create a challenge for automatic speech recognition. The human brain, however, can segregate speech from its echo and achieve reliable recognition of echoic speech,” the authors write.
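The distortion the authors describe can be pictured with a toy signal model. The Python sketch below is not from the study; it simply treats an echo as a single delayed, attenuated copy of a signal added back onto the direct sound (the sample rate, delay, and attenuation are illustrative assumptions), which smears the slow amplitude envelope that speech intelligibility depends on.

```python
import numpy as np

# Illustrative values only - not parameters from the study.
fs = 16000                      # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)   # one second of audio

# Toy stand-in for speech: a 220 Hz carrier with a slow 4 Hz envelope,
# mimicking the amplitude modulations that carry speech information.
speech = np.sin(2 * np.pi * 4 * t) * np.sin(2 * np.pi * 220 * t)

delay_s = 0.1                   # 100 ms echo delay (assumed)
attenuation = 0.8               # echo quieter than the direct sound (assumed)
delay_n = int(delay_s * fs)

# An echo modeled as a delayed, attenuated copy summed with the original:
# the overlapping copy blurs the slow envelope, which is why echoes
# "strongly distort the sound features of speech".
echoic = speech.copy()
echoic[delay_n:] += attenuation * speech[:-delay_n]
```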
Researchers from Zhejiang University in China used magnetoencephalography (MEG), a technique that measures the magnetic fields produced by the electrical activity of neurons, to record auditory cortex responses in participants as they listened to normal and echoed speech.
They found that participants understood echoic speech with more than 95% accuracy, even when their attention was diverted by a silent movie. The MEG activity observed in the auditory cortex was best explained by a model in which the direct sound and the echo are separated from one another.
“Future studies, possibly requiring intracranial neural recordings from humans or animal neurophysiology, are required to analyse where the segregation between speech and echo emerges along the auditory pathway,” they write.