Appreciating a good song, it turns out, is very much a whole-brain business, with research revealing that specific regions in the left and right hemispheres decode words and music separately.
The fact that different sides of the brain are responsible for processing the lyrical and melodic content of music has been known for years, but until now the mechanisms involved have remained unclear.
The simplest explanation – that areas of the left and right hemispheres somehow respond differently to auditory cues – has long been regarded as more a generalisation than a penetrating insight.
To try to clarify things a bit more, a team of researchers headed by Philippe Albouy from McGill University in Montreal, Canada, recruited two small cohorts of French and English speakers and plunged into the realms of a cappella.
The researchers constructed 10 sentences across both languages and composed 10 nifty melodies. Every sentence was then paired with every melody, resulting in a collection of 100 a cappella songs.
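That cross-matching is simple combinatorics: 10 sentences sung to 10 melodies gives 100 combinations. The toy Python snippet below spells it out; the placeholder names are purely hypothetical and stand in for the actual recordings.

```python
from itertools import product

# Hypothetical placeholders for the study's sentences and melodies
sentences = [f"sentence_{i}" for i in range(10)]
melodies = [f"melody_{j}" for j in range(10)]

# Every sentence paired with every melody: 10 x 10 = 100 a cappella stimuli
stimuli = list(product(sentences, melodies))
assert len(stimuli) == 100
```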
The volunteers were then asked to listen to selections while strapped into functional magnetic resonance imaging machines. As they did so, Albouy and colleagues progressively degraded either the recorded integrity of the words themselves or the melody to which they were sung, and then watched what happened.
The distinction between music and sung words goes deeper, neurologically speaking, than the way they each encode different types of information.
Processing words, the researchers found, relies on temporal – time-based – cues, while music is processed in relation to its “spectral” components – the way its energy is distributed across frequencies. (Indeed, some researchers have suggested that analysing the spectral elements of music is all that is required to construct a precise classification system for music genres.)
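To make that temporal/spectral distinction concrete: one simplified way to degrade a recording along just one of those dimensions is to blur its spectrogram along either the time axis or the frequency axis and then resynthesise the sound. The Python sketch below does exactly that; it illustrates the idea only, not the filtering pipeline the study itself used, and the function name, window size and parameters are assumptions.

```python
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import uniform_filter1d

def degrade(audio, fs, axis="time", smear=9):
    """Blur the spectrogram along 'time' or 'frequency', then resynthesise.
    (Illustrative only; not the study's actual method.)"""
    f, t, Z = stft(audio, fs=fs, nperseg=1024)
    mag, phase = np.abs(Z), np.angle(Z)
    # Rows of Z are frequency bins, columns are time frames
    smooth_axis = 1 if axis == "time" else 0
    mag = uniform_filter1d(mag, size=smear, axis=smooth_axis)
    _, out = istft(mag * np.exp(1j * phase), fs=fs, nperseg=1024)
    return out
```

Smearing along time washes out the rapid changes that carry syllable boundaries, while smearing along frequency washes out the fine pitch structure that carries the melody – mirroring the two kinds of degradation described above.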
The research revealed that words and music were processed in a region adjacent to the auditory cortex in each hemisphere, specifically an area known as the lateral anterior superior temporal gyrus.
Fooling around with the precision of the words in the a cappella samples resulted in decreased function in the left hemisphere while the melody-detecting right one remained at full strength. Leaving the words crisp and clear but degrading the melody had the opposite outcome.
The results were identical for the French and English speakers, indicating that the process is innate and not a product of culture.
“Humans have developed two means of auditory communication: speech and music,” the researchers write in the journal Science.
“Our study suggests that these two domains exploit opposite extremes of the spectrotemporal continuum, with a complementary specialization of two parallel neural systems, one in each hemisphere, that maximises the efficiency of encoding of their respective acoustical features.”