Think you can hear a smile? Check your cheeks


Facial muscles are better than eyes, ears or brains when it comes to detecting smiles. Lydia Hales reports.


Detecting a smile doesn't require visual or auditory cues.
Katy Martincak / EyeEm / Getty Images

Your zygomaticus major muscles – the ones that pull the corners of the mouth up in a smile – might be better than your brain at detecting a smile in a voice.

A team of French scientists created software that can simulate how speech sounds when the lips are stretched into a smile. They took recordings of 20 voices and manipulated them to sound more or less smiley, then had 35 participants each rate 120 recordings on a scale from ‘unsmiling’ to ‘smiling’.

Importantly, other aspects of the voices, such as pitch contour and speed, which can provide emotional clues, were left unchanged. They also placed sensors on the listeners’ cheeks, to measure the electrical activity of their zygomaticus major.
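The paper's algorithm isn't detailed here, but the underlying acoustics are well known: stretching the lips shortens the vocal tract and shifts the spectral envelope upward. A minimal sketch of that idea, assuming a simple frequency-warping approach (the function name, warp factor and interpolation method are illustrative assumptions, not the authors' actual method):

```python
import numpy as np

def warp_spectrum(freqs, mags, factor=1.05):
    """Crude stand-in for the formant raising caused by stretched lips:
    energy at frequency f moves up to f * factor, so the warped spectrum
    at f takes its value from the original spectrum at f / factor."""
    return np.interp(freqs / factor, freqs, mags)

# Toy spectrum: a single formant-like peak at 1000 Hz.
freqs = np.linspace(0, 8000, 801)   # 10 Hz resolution
mags = np.exp(-((freqs - 1000.0) ** 2) / (2 * 100.0 ** 2))

warped = warp_spectrum(freqs, mags, factor=1.05)
print(freqs[np.argmax(mags)], freqs[np.argmax(warped)])  # peak moves 1000 -> 1050
```

In the real study the pitch contour and speech rate were held fixed, so only this kind of envelope cue distinguished the ‘smiling’ from the ‘unsmiling’ versions.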

In cases where the voice had been manipulated with the ‘smiling’ algorithm but its pitch contour or speech rate still suggested negative emotion, some participants rated it ‘not smiling’ – yet their own smile muscles still responded.

“That’s activity that we call unconscious imitation,” says researcher Pablo Arias, an audio engineer and cognitive scientist from the Institute for Research and Coordination in Acoustics/Music in Paris, who led the study.

“It’s like saying the zygomatic muscle is very tuned to these acoustic cues that we manipulate. They followed the manipulation algorithm.”
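The dissociation Arias describes, where explicit ratings can miss the manipulation while the zygomaticus still tracks it, can be illustrated with simulated numbers (every value below is invented for illustration; the actual study used EMG recordings, not this toy model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical trial labels: +1 = 'smile' manipulation, -1 = 'unsmile'.
manipulation = rng.choice([-1.0, 1.0], size=n)

# Conscious ratings are noisy: other cues (pitch contour, speed)
# often pull the judgement away from the manipulated smile.
rating = manipulation + rng.normal(0.0, 2.0, size=n)

# Simulated zygomaticus EMG follows the manipulation more tightly,
# standing in for the 'unconscious imitation' in the study.
emg = 0.5 * manipulation + rng.normal(0.0, 0.3, size=n)

r_rating = np.corrcoef(manipulation, rating)[0, 1]
r_emg = np.corrcoef(manipulation, emg)[0, 1]
print(f"rating tracks manipulation: r = {r_rating:.2f}")
print(f"EMG tracks manipulation:    r = {r_emg:.2f}")
```

Under these assumptions the muscle signal correlates with the manipulation far better than the conscious ratings do, which is the pattern the researchers reported.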

So why should we care about tiny twitches in people’s cheek muscles?

Understanding the emotions being displayed by other people has always been important to humans – whether you’re deciding if that stranger is friend or foe, or showing you’re really paying attention to the person you love.

Arias says there are two significant aspects to their study.

“The first is the idea that there might be shared neural mechanisms to process auditory and visual emotional cues, so this mimicry mechanism may be shared across modalities,” he explains.

But most studies of mimicry, or mirroring the expressions of others, have been visual studies. While visual cues, such as a smile, usually go with noticeable changes in a person’s voice, not much is known about how we process or react to these ‘heard’ smiles.

To investigate the second aspect, the researchers plan to explore whether people who are blind from birth have these same unconscious mimicking behaviours, despite never having seen a smile.

“That would allow us to say that these mechanisms that we observe are actually visually independent, they can exist without being mediated by visual experience,” Arias says.

Their new ‘smiling voice effect’ could also be useful in other areas – while it’s not their focus, Arias expects it would be possible to apply the software to medical devices or voice assistants.

“To do this work we developed software to transform emotional cues onto the voice in real time, so that transformation can also be applied to other types of voice synthesis,” he explains.

“No one has done it in real time, using transformations. What people used to do was more synthesis – creating a voice from scratch. But when you create voices from scratch you often get a lot of artefacts.”

One current focus is a collaboration with the world’s first paediatric hospital, the Hôpital Necker – Enfants Malades, also in Paris, where they plan to extend the research to people with autism spectrum disorders.

“We also have software that manipulates the video,” Arias says. “We are building congruent and incongruent audio-visual smiles – so, for instance, voices that convey a smile but the face doesn’t, or the opposite, or both conveying a smile and both conveying the opposite.”

“And using eye-tracking, measuring your gaze and your pupil size, and by measuring these during the presentation of the stimuli, we can see by the strategies of face exploration whether people are perceiving these incongruences or not.”

The work may also help understand more about the evolution of smiling in humans.

“Smiles are recognised across cultures. It’s a gesture that human cultures across the globe use to communicate different things,” Arias notes.

“What we know now is that it can have several functions – you can do several kinds of smiles: it can be more dominant, it can be more affiliative, it can be more submissive, rewarding, but every culture uses this, so that’s very intriguing. Why do people contract their zygomatic muscles to communicate emotions across cultures? Why on Earth does that happen? We don’t have a clue.

“And babies, long before they know how to talk, they are already producing these smiles, so it seems that this gesture is very profound in the human behavioural repertoire, so that’s the theoretical basis of our work, that’s why we’re exploring these auditory smiles.”

You can hear two pairs of manipulated voices speaking in English below (the differences are subtle, so pop your headphones on to hear them more clearly). The first sentence of each pair was altered to damp its smiling tone; the second, to heighten it.

The paper was published in the journal Current Biology.

  1. https://www.cell.com/action/showPdf?pii=S0960-9822%2818%2930752-8