The videos above show off a new speech synthesiser that converts vocal tract movements into intelligible speech.
This “articulatory-based” speech synthesiser, developed by a French team and unveiled in PLOS Computational Biology, uses deep learning algorithms. It could help build a brain-computer interface to restore speech for individuals with severe paralysis.
Speaking activates a region of the brain that controls the movement of the different organs (called articulators) of the vocal tract, including the tongue, lips and larynx.
Some people with language impairments still have this part of the brain intact, but the signals between the brain and the muscles that move the articulators are interrupted.
Using deep learning algorithms, Florent Bocquelet of the French National Institute of Health and Medical Research (INSERM) and colleagues translated recorded articulator movements into audible, comprehensible speech.
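To give a flavour of the general idea, here is a minimal sketch, in Python with PyTorch, of an articulatory-to-acoustic mapping: a small neural network that turns per-frame articulator coordinates into acoustic features that a vocoder could render as sound. The dimensions, layer sizes and feature choices are illustrative assumptions, not the architecture used by Bocquelet and colleagues.

```python
import torch
import torch.nn as nn

# Assumed dimensions for illustration only: 18 articulatory coordinates
# per frame (e.g. positions of sensors on the tongue, lips and jaw)
# mapped to 25 acoustic parameters per frame (e.g. spectral coefficients
# plus pitch/voicing) that a vocoder could turn into a waveform.
ARTICULATORY_DIM = 18
ACOUSTIC_DIM = 25

class ArticulatoryToAcoustic(nn.Module):
    """Frame-by-frame feedforward mapping from vocal-tract positions to
    acoustic features. A real system would add temporal context and a
    vocoder stage to produce the actual audio."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ARTICULATORY_DIM, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, ACOUSTIC_DIM),
        )

    def forward(self, articulatory_frames: torch.Tensor) -> torch.Tensor:
        return self.net(articulatory_frames)

# Toy usage: a batch of 100 articulatory frames -> 100 acoustic frames.
model = ArticulatoryToAcoustic()
frames = torch.randn(100, ARTICULATORY_DIM)  # stand-in for sensor data
acoustic = model(frames)                     # would be fed to a vocoder
print(acoustic.shape)                        # torch.Size([100, 25])
```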
The study paves the way toward a brain-computer interface in which the synthesiser would be controlled directly from the brain.
Originally published by Cosmos as "Voice synthesiser produces speech from muscle movement".