AI Recreates Pink Floyd Song From Brain Scans: How Did Researchers Do It?
With an AI tool detailed in a study published in the journal PLOS Biology, a University of California, Berkeley team managed to do the unthinkable.
Artificial intelligence recently recreated a Pink Floyd song through brain scans. Yep, you read that right! Usually when we hum a song within the confines of our head, nobody can hear it. But with an AI tool detailed in a study published in the journal PLOS Biology, a University of California, Berkeley team managed to do the unthinkable.
Using brain activity to recreate music
The team developed a machine learning model that decodes brain activity to recreate songs, and used it to reconstruct the Pink Floyd song "Another Brick in the Wall, Part 1." The researchers believe the approach could eventually feed into brain-computer interface technologies that let people produce music with their minds.
"We reconstructed the classic Pink Floyd song "Another Brick in the Wall" from direct human cortical recordings, providing insights into the neural bases of music perception and into future brain decoding applications," said Ludovic Bellier, a computational research scientist at UC Berkeley, in a press release.
The decoder reconstructs what people hear from recordings of their brain activity. The Berkeley team built the model to capture various elements of the song, including its pitch, melody, and rhythm, The Daily Beast reported.
The researchers recorded from 2,668 intracranial electroencephalography (iEEG) electrodes implanted in the brains of 29 patients while they listened to "Another Brick in the Wall." They found that electrodes over the superior temporal gyrus (STG), a region heavily involved in auditory processing and rhythm perception, provided the best data for recreating the music.
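The core idea of this kind of decoding is to learn a mapping from electrode activity to the audio spectrogram of the song, which can then be turned back into sound. Below is a minimal sketch of that approach using synthetic stand-in data and a regularized linear decoder; the study's actual model, features, and preprocessing are more sophisticated, and all sizes here are illustrative, not the study's.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: 2,000 time windows of activity from 300 electrodes,
# paired with a 32-band audio spectrogram of the song at the same time points.
n_windows, n_electrodes, n_bands = 2000, 300, 32
true_weights = rng.normal(size=(n_electrodes, n_bands))
neural = rng.normal(size=(n_windows, n_electrodes))
spectrogram = neural @ true_weights + 0.1 * rng.normal(size=(n_windows, n_bands))

X_train, X_test, y_train, y_test = train_test_split(
    neural, spectrogram, random_state=0
)

# Regularized linear decoder: predicts the spectrogram from neural activity.
decoder = Ridge(alpha=1.0).fit(X_train, y_train)
score = decoder.score(X_test, y_test)
print(f"held-out R^2: {score:.2f}")
```

In a real pipeline, the predicted spectrogram would be converted back to a waveform (e.g., with an audio vocoder), which is how a listenable clip of the song emerges from brain recordings.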
The reconstructed clip was recognizably similar to the original recording. In the process, the team also found that feeding data from more electrodes into the model made the reconstruction more accurate. "Combining unique iEEG data and modeling-based analysis, we provided the first recognizable song reconstructed from direct brain recordings," the authors wrote.
"We showed the feasibility of applying predictive modeling on a relatively short dataset, in a single patient, and quantified the impact of different methodological factors on the prediction accuracy of decoding models."
Such tech could one day help people with paralysis or other neurological conditions create music on their own.