Can how we speak reflect changes in our mental health? Can ambient sound bring a sense of order to the days of loved ones living with dementia?
Worried about an aging relative with dementia, or a friend suffering from mental illness? New help may be on its way from physicists studying sound – how we produce it, and how we interact with it.
Researchers say it should soon be possible to produce apps that use the science of sound to monitor mental health disorders, and help people living with dementia better cope with their condition.
Speaking of mental illness…
Mental illness changes the manner in which people speak, says Carol Espy-Wilson, a professor of electrical and computer engineering at the University of Maryland, in the US.
At a recent virtual meeting of the Acoustical Society of America, she explained how depression causes psychomotor slowing, which means that depressed people can’t think as fast or move as quickly as non-depressed people.
The effect is evident in the way they talk, though to the casual listener their speech is perfectly intelligible. In fact, Espy-Wilson says, their enunciation actually becomes clearer.
Normally, she says, as we move our mouth, tongue and larynx in the process of speaking, our minds are thinking ahead, planning to make the motions for the words to follow in our sentences. The result is that we tend to get ahead of ourselves and blur sounds by creating a bit of overlap.
“We anticipate upcoming sounds and start to produce them before finishing the current ones,” Espy-Wilson says. “You can actually overlap them – partially, or fully. In normal speech, they overlap considerably.” The effect is subtle, but to acoustical engineers using the right software, it’s readily apparent.
In people with severe depression, however, everything slows down. “It results in slow speech with more, and longer, pauses,” Espy-Wilson says.
The slowing also shows up in articulatory coordination – instead of anticipating the next sound and mashing it together with the current one, depressed people tend to separate sounds more distinctly, with much less overlap.
“Words may be completely separated, like beads on a string,” Espy-Wilson says. “That is not how we normally talk.”
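Espy-Wilson's analysis relies on far more sophisticated acoustic modelling, but the basic signal – more, and longer, pauses – can be illustrated with a toy sketch. This is not her software; the frame size, energy threshold and minimum pause length below are illustrative assumptions:

```python
import numpy as np

def pause_stats(signal, rate, frame_ms=25, energy_thresh=0.01, min_pause_ms=200):
    """Count long pauses (runs of low-energy frames) in a speech signal.

    Returns (number of pauses, mean pause length in seconds).
    """
    frame = int(rate * frame_ms / 1000)
    n = len(signal) // frame
    # Short-time energy: mean squared amplitude per frame
    energy = np.array([np.mean(signal[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n)])
    silent = energy < energy_thresh

    # Group consecutive silent frames into candidate pause segments
    runs, run = [], 0
    for s in silent:
        if s:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)

    # Keep only runs long enough to count as a real pause
    min_frames = min_pause_ms / frame_ms
    pauses = [r * frame_ms / 1000 for r in runs if r >= min_frames]
    mean_len = sum(pauses) / len(pauses) if pauses else 0.0
    return len(pauses), mean_len
```

A patient-monitoring app would track how these counts drift over days or weeks, rather than judging any single recording.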
In people living with schizophrenia, the reverse happens: their speech articulation becomes more complex, with sounds overlapping to a greater degree – not because they are trying to talk more quickly, but because their minds appear to be engaged in two conversations at once.
“A lot of the times these subjects are not only answering the question the interviewer asked, they’re also talking to someone else nobody can see,” Espy-Wilson says.
In tests of people diagnosed with major depression or schizophrenia, she says, her software already distinguishes them from controls with 85–90% accuracy, and is only likely to get better with refinement.
More importantly, she says, the technology could be included in an easy-to-use smartphone app. Future versions of the software may be expanded to assess other major mental health illnesses, such as anxiety disorder and bipolar disorder.
Not that the goal is a do-it-yourself diagnostic app – diagnosis should still be left to clinicians. Rather, she says, the goal is to create an app that clinicians can give to patients already diagnosed with major mental health problems, allowing their status to be monitored between therapy visits via changes in the way they speak.
When necessary, she says, the app could then alert both the patient and the clinician of the need to schedule an emergency visit. “This is particularly important between therapy appointments, when many fall through the cracks and are at increased risk of suicide,” Espy-Wilson says.
Carol Posluszny, a clinical social worker in Portland, Oregon, concurs – but, she says, patients must be willing to actually turn the app on when needed. “People with depression and schizophrenia may lack the motivation to make this work,” she says. “If they agree to participate, it sounds great.”
Espy-Wilson also notes that the app can only work if patients are willing to use it. “Hopefully,” she says, “they would be motivated to do that because they don’t want to get into a depressed or psychotic state.”
Birdsong and percolating coffee
Meanwhile, Arezoo Talebzadeh, an architect and soundscape researcher at Ghent University, Belgium, is looking to use machine learning and acoustic software to make life easier for dementia patients. People suffering from dementia can easily become disoriented, confused not only about where they are, but even about such basic things as the time of day.
One way to help them, Talebzadeh says, is with sounds that can help anchor them to a time schedule: morning, noon, afternoon, evening, night. “Sound can help people feel safe, improve their mood, and make them feel comfortable,” she says.
To assist with this, she is using an app that plays through in-room sound systems to create personalised soundscapes for patients living with dementia.
The app uses natural environmental sounds, ranging from birdsong or a boiling kettle at dawn to midday sounds such as children playing in a park. All are the noises of everyday activities, played very quietly in the background, “so people don’t really realise there is something playing,” Talebzadeh says.
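The day-anchoring idea amounts to a mapping from clock time to a slot of suitable sounds. A minimal sketch follows; the hour boundaries, and the sounds listed for afternoon, evening and night, are illustrative assumptions (the article only names the dawn and midday examples):

```python
from datetime import datetime

def daypart(hour):
    """Map an hour of the day (0-23) to one of the five soundscape slots."""
    if 5 <= hour < 11:
        return "morning"
    if 11 <= hour < 14:
        return "noon"
    if 14 <= hour < 18:
        return "afternoon"
    if 18 <= hour < 22:
        return "evening"
    return "night"

# Illustrative sound choices for each slot, echoing the article's examples
SLOT_SOUNDS = {
    "morning": ["birdsong", "boiling kettle"],
    "noon": ["children playing in a park"],
    "afternoon": ["rustling leaves"],
    "evening": ["crickets"],
    "night": ["soft rain"],
}

def current_sounds(now=None):
    """Return the candidate background sounds for the current time of day."""
    hour = (now or datetime.now()).hour
    return SLOT_SOUNDS[daypart(hour)]
```

In practice the real system personalises these choices per patient; the point of the sketch is only the anchoring of sound to the daily cycle.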
The goal is to trigger memories that can help anchor patients in their daily cycles. “Different sounds bring different memories, especially for people with cognitive difficulties,” Talebzadeh says.
In hospital trials at a rehabilitation institute in Toronto, Canada, she reports, two-thirds of patients showed lowered stress, as assessed by their nurses and other caregivers – who also provided the feedback the app needs to refine its choice of daily sounds. “These caregivers understand the patients and what’s going on,” she says.
Additional feedback in her tests came from wristbands worn by the patients, which monitored such factors as the patients’ heart rates and sleep quality, the latter of which, Talebzadeh says, is very important for people with dementia, “especially in the later stages”.
If a nurse or caregiver feels a patient isn’t responding well to a sound, she says, the nurse provides that feedback to the app: “If a sound doesn’t get a good rating, that sound is removed and doesn’t play again.”
The patient’s family may also be able to provide important cues to the type of sounds any given patient might remember from childhood. “We have a questionnaire we ask family members or caregivers [to answer],” she says. “We ask about culture, background, life growing up. Did they have a cat? Did they live in the city or the countryside?” In the library of possible sounds, she says, “we even have a church bell”.
An unexpected benefit, she says, is that the system seems to help not only the patient, but also the caregiver. “We want to evaluate that,” she says.
Social worker Posluszny applauds the idea: “It would be particularly helpful in dementia-care facilities, where the institutional environment is the same sterile sounds, 24 hours a day.”
Originally published by Cosmos as Listen up: the sound of science
Richard A Lovett
Richard A Lovett is a Portland, Oregon-based science writer and science fiction author. He is a frequent contributor to Cosmos.