Brain activity recorded from 29 people listening to Pink Floyd’s 1979 song “Another Brick in the Wall, Pt. 1” has been used to recreate a “recognisable” version of the track.
The research might lead to new technologies that allow users to connect with computers directly through thought patterns.
The University of California, Berkeley study sought to understand how the human brain represents auditory experience. It is published in the journal PLOS Biology.
Apart from recreating the ‘70s rock classic, the scientists identified a new subregion of the brain underlying rhythm recognition – in this case the rhythm of the guitar.
Participants had a total of 2,668 electrodes placed on the surface of their brains. Researchers found that 347 of these electrodes responded significantly to the music. Most were located in three regions of the brain, predominantly in the right hemisphere: the superior temporal gyrus (STG), the sensory-motor cortex (SMC) and the inferior frontal gyrus (IFG).
Then, non-linear modelling algorithms were used to decode the brain activity and reconstruct the song.
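The idea of non-linear decoding can be illustrated with a toy example. The sketch below is an illustration only, not the authors’ pipeline: it trains a small two-layer neural network in NumPy to map simulated “electrode” activity to simulated “spectrogram” bins, the same kind of non-linear regression the study describes. All shapes, data and hyperparameters here are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: rows are time points, columns are 'electrodes',
# and the targets play the role of spectrogram frequency bins.
n_samples, n_electrodes, n_bins = 500, 20, 8
X = rng.standard_normal((n_samples, n_electrodes))
W_true = rng.standard_normal((n_electrodes, n_bins))
Y = np.tanh(X @ W_true) + 0.1 * rng.standard_normal((n_samples, n_bins))

# Tiny two-layer network trained with full-batch gradient descent.
hidden, lr = 32, 0.05
W1 = 0.1 * rng.standard_normal((n_electrodes, hidden))
b1 = np.zeros(hidden)
W2 = 0.1 * rng.standard_normal((hidden, n_bins))
b2 = np.zeros(n_bins)

def forward(X):
    H = np.tanh(X @ W1 + b1)          # non-linear hidden layer
    return H, H @ W2 + b2             # linear readout to 'spectrogram' bins

_, pred0 = forward(X)
mse0 = np.mean((pred0 - Y) ** 2)      # error before training

for step in range(2000):
    H, pred = forward(X)
    err = pred - Y                    # gradient of 0.5 * MSE w.r.t. pred
    gW2 = H.T @ err / n_samples
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)    # backprop through tanh
    gW1 = X.T @ dH / n_samples
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
mse = np.mean((pred - Y) ** 2)        # error after training: should drop
```

In the study, a decoder like this would be fitted per participant, and the predicted spectrogram frames would then be converted back into audio.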
Computer modelling has previously been used to decode and reconstruct speech from brain activity, but a predictive model for music – one that can pick out elements such as pitch, melody, harmony and rhythm – had not been achieved until now.
The researchers identified which regions of the brain were most important for each musical element by isolating that element and examining the corresponding brain activity.
Removing electrodes on the brain’s right hemisphere most affected the reconstruction. Removal of electrodes in locations related to sound onset or rhythm also degraded the accuracy of the reconstruction, highlighting their importance to the perception of music.
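This kind of “remove-and-retrain” test can be sketched with a simple toy decoder. In the hypothetical example below (invented data, and an ordinary least-squares fit rather than the study’s models), the target depends heavily on one group of simulated electrodes; refitting without that group degrades the fit far more than removing a less informative group, which is the logic behind the ablation result described above.

```python
import numpy as np

rng = np.random.default_rng(1)

n_samples, n_electrodes = 400, 12
X = rng.standard_normal((n_samples, n_electrodes))

# The first 4 'electrodes' carry most of the signal (large weights);
# the rest contribute only weakly.
w = np.concatenate([rng.normal(0, 2.0, 4), rng.normal(0, 0.2, n_electrodes - 4)])
y = X @ w + 0.5 * rng.standard_normal(n_samples)

def r_squared(X, y):
    """Fraction of variance explained by a least-squares fit."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

full = r_squared(X, y)
without_important = r_squared(X[:, 4:], y)   # ablate the high-signal group
without_minor = r_squared(X[:, :-4], y)      # ablate a low-signal group
# Dropping the important group should hurt accuracy far more.
```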
Original song waveform transformed into a magnitude-only auditory spectrogram, then transformed back into a waveform. Credit: Bellier et al., 2023, PLOS Biology, CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Reconstructed song excerpt using non-linear models fed with all 347 significant electrodes from all 29 patients. Credit: Bellier et al., 2023, PLOS Biology, CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Reconstructed song excerpt using non-linear models fed with the 61 significant electrodes from a single patient. Credit: Bellier et al., 2023, PLOS Biology, CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
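The captions above mention a magnitude-only auditory spectrogram. The forward half of that transform – waveform to magnitude spectrogram – can be sketched in a few lines of NumPy. This is a generic short-time Fourier transform, not the specific auditory spectrogram model used in the paper, and the frame length, hop size and test tone are arbitrary choices for illustration.

```python
import numpy as np

def magnitude_spectrogram(x, n_fft=256, hop=64):
    """Slice the signal into overlapping windowed frames and take |FFT|."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft + 1, hop)]
    # Rows are frequency bins, columns are time frames.
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T

# One second of a 440 Hz tone at an 8 kHz sample rate.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)

S = magnitude_spectrogram(tone)
peak_bin = S.mean(axis=1).argmax()
peak_hz = peak_bin * sr / 256        # should land near 440 Hz
```

Because the magnitude spectrogram discards phase, turning it back into a waveform (as in the clips above) requires estimating the missing phase, which is why the round-trip audio sounds slightly degraded even before any brain decoding is involved.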
The authors suggest users might one day be able to connect with computers directly through thought patterns using music. Such devices could take the form of prosthetics designed to improve prosody perception – the recognition of rhythm and melody in speech.
“Our findings show the feasibility of applying predictive modelling on short datasets acquired in single patients, paving the way for adding musical elements to brain–computer interface (BCI) applications,” they write.