I am a walker. One of my most common routes takes me to the top of a 200-metre butte via a corkscrewing road that at one point passes through a curving tunnel with smooth, concrete-coated walls.
The tunnel itself is an intriguing enough place, but walking through it one day, I had the eerie sensation of hearing a conversation that seemed so close behind that I immediately turned back to see who was about to step on my heels. Nobody was there.
The speakers were out of sight around the curve of the tunnel, where something in its acoustics was focusing their conversation on me so perfectly that it almost felt as though it was directly in my head.
To the extent the science of acoustics ever enters most of our lives, it’s usually related to that type of experience. Why does my singing voice sound so good in the shower? Why is one concert hall or theatre so well tuned that every word, every note, is discernible in the third balcony, while others are acoustic blenders in which “indecipherable” is the only accurate description?
But there are people for whom acoustics is a lot more than music, auditoriums, or the occasional odd experience like mine in the tunnel. For them, it’s a source of endless fascination, ranging from wondering what a thunderstorm sounds like in the heart of the world’s largest organism to trying to figure out what a symphony might sound like on another planet. Twice a year, they gather under the umbrella of the Acoustical Society of America (ASA), where in addition to presenting things that are just plain cool, they share the latest in practical sonic innovations that might improve people’s lives on more than just an auditory level.
Their latest showcase of sound was in Chicago, Illinois, where one of the most attention-catching presentations did indeed involve the world’s largest organism, Pando.
Pando is a tree. But not just a tree. His name means “I spread,” and he’s a grove of male aspens (yes, individual aspen trees are either male or female), derived from a single seedling that sprouted perhaps 10,000 years ago. Since then, Pando has expanded through his roots, sprouting enough trees to cover 43 hectares.
“What look like individual trees are actually branches extending from a single root system,” says Jeff Rice, an audio engineer from Seattle, Washington. “There are 47,000 of them.”
Rice has been recording the sounds of nature in the American West for two decades, so when he heard of Pando, he was drawn to it. His first target was to record the sound of a billion or so leaves all trembling to the same breeze. “This is probably what you think about when you think about aspen sounds,” he says, “a nice staticky calm noise, almost like rain.”
But that’s what we hear above ground. Below ground, Rice says, the sound of the leaves is conducted down the trunks and into the root system, where, he found, it manifests as a low, droning noise, particularly apparent in a thunderstorm when the wind is high.
“This is basically vibrations of the entire organism,” he says, calling it part of a “subterranean soundscape” that includes the entire Pando forest. “It is very resonant,” he adds.
The ultimate uses of Rice’s recordings aren’t fully clear, but he thinks he may have tapped into something of potential scientific value.
“We got this sound we weren’t expecting,” he says, “so I thought it would be interesting to come [to ASA] to present it to scientists who can maybe do some research on this type of thing.”
His recordings are also part of a growing Acoustic Atlas he directs at Montana State University, in which the sounds of ecosystems are preserved for possible use as baselines against which to assess the health of future ecosystems. “From an ecological perspective, there’s a lot you can learn,” he predicts.
Meanwhile, biomedical engineer Ashley Alva of Georgia Institute of Technology is working on a shorter-term acoustical project: attaching tiny gas-filled bubbles to a class of tumour-attacking white blood cells called macrophages.
It’s an important project because microbubbles strongly reflect ultrasound, making it straightforward to map where they wind up – a method long used by injecting them into the blood and tracking their flow through the heart with ultrasound. Attaching them to macrophages offers the opportunity to track the cells the same way, though in this case, the idea is that the tagged macrophages will migrate to tumours, carrying the microbubbles along for the ride.
There are, of course, technical difficulties, starting with figuring out how to get the bubbles to attach to the macrophages and stay attached long enough to be useful. But Alva’s team has managed to do it in the lab and is now preparing to test whether the tagging survives when the cells are injected into the bloodstream. If it does, it would be a bit like doing an echocardiogram for cancer, and could be used for everything from monitoring the effectiveness of cancer treatments to looking for metastases too small to spot by conventional methods.
Meanwhile, other researchers are probing the fine details of an entirely different type of sound: speech. One of them is Dr Georgia Zellou, a linguist at the University of California, Davis, who has found that in this age of increasingly good voice recognition programs like Siri, Alexa, and Google Assistant, people are developing a distinctive way of talking to them, involving an exaggerated register called clear speech, a bit like what we sometimes use when talking to visitors from foreign countries.
“People talking to a device produce louder speech, slower speech, and speech with higher pitch [sometimes] in a narrower pitch range,” she says.
She compares it to how we talk to pets. “In certain contexts, we produce speech differently,” she says, noting that we often do this for our own reasons, not for the benefit of the pets.
In the case of computer devices, she says, “We have a conceptualization that a device is going to have a hard time understanding us.” Though, she adds, we may also be subconsciously choosing to be “a little bit robotic, like the machine.”
Our change in speech when talking to computers is important, Zellou says, because, among other things, it means that devices need to be trained to recognize the type of speech being directed at them, rather than the more naturalistic speech that may be in their training sets. It may also be a factor for programs designed to teach second languages. It’s possible that the software package you bought to prepare for an upcoming trip to Italy might inadvertently be inducing you to speak like an Italian robot.
At the same time, Zellou’s studies have found that people say they better comprehend their device when it also uses a more robotic means of speaking, even though text-to-speech programs available today can produce extremely naturalistic speech. Why people react this way isn’t fully clear; other studies have shown that listeners react differently to the same recordings depending on whether they are told the recordings come from a human or a machine.
“Simply thinking you’re talking to a human involved better comprehension,” she says.
The bottom line is that acoustics is complex, interdisciplinary, and sometimes unexpected, like my experience in the tunnel, or Rice’s discovery that Pando’s roots vibrate in a complex resonance across 43 hectares. And in this case, maybe it’s just a reminder that however sophisticated our instruments are for measuring it in the lab, what really matters for our own processing of sound isn’t the laboratory equipment, but the mishmash of experience, expectations, and emotions that is the human mind.