In a crowded room, you can hold a conversation while ignoring the cacophony that surrounds you. Researchers have now figured out how your brain pulls off this feat, dubbed the cocktail party effect, and say the findings may lead to better hearing aids, cochlear implants and smartphones.
The brain processes all the sound you hear but filters out the unnecessary parts, allowing the listener to select which speaker to attend to. The researchers call it one of the remarkable things humans can do naturally.
To figure out how people pick speech out of a noisy crowd, researchers turned to three volunteers who had electrodes implanted in their brains to monitor epileptic seizures. The same electrodes could also record how their brains process speech.
Speech cannot be tracked by conventional brain scan methods because of its speed, researchers noted in their study.
Study participants listened to two different speech samples simultaneously and had to focus on one voice while ignoring the other. By analyzing the brain recordings alongside the speech samples, the researchers found they could predict which speaker a participant was listening to, and when that participant's attention strayed to the other one.
"The algorithm worked so well that we could predict not only the correct responses, but also even when they paid attention to the wrong word," Dr. Edward Chang, study coauthor and assistant professor of neuroscience at the University of California San Francisco, said in a statement.
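The study does not spell out its algorithm here, but one common way to decode attention from neural data is stimulus reconstruction: rebuild a spectrogram from brain activity, then ask which speaker's spectrogram it resembles most. The sketch below is purely illustrative and uses simulated data; the random spectrograms, the noise model, and the function name `decode_attended` are assumptions, not details from the study.

```python
import numpy as np

# Illustrative sketch of attention decoding by stimulus reconstruction.
# Assumption (not from the study): the neural "reconstruction" is the
# attended speaker's spectrogram plus Gaussian noise.
rng = np.random.default_rng(0)

def decode_attended(reconstruction, spectrograms):
    """Return the index of the speaker whose spectrogram correlates
    best with the spectrogram reconstructed from neural activity."""
    corrs = [np.corrcoef(reconstruction.ravel(), s.ravel())[0, 1]
             for s in spectrograms]
    return int(np.argmax(corrs))

# Two competing speakers: 32 frequency bins x 200 time frames each.
speaker_a = rng.standard_normal((32, 200))
speaker_b = rng.standard_normal((32, 200))

# The listener attends to speaker A, so the reconstruction is a
# noisy copy of A's spectrogram.
reconstruction = speaker_a + 0.5 * rng.standard_normal((32, 200))

print(decode_attended(reconstruction, [speaker_a, speaker_b]))  # prints 0
```

The same correlation comparison, run over short sliding windows instead of the whole recording, is what would reveal moments when attention drifts to the other speaker.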
The term cocktail party effect was coined in 1953, and scientists have studied the phenomenon extensively since.
The findings could help scientists better understand why people with conditions such as attention deficit disorder or autism have difficulty focusing on a particular speaker.
"People with these disorders have problems with the ability to focus on a certain aspect of the environment," Chang told ABC News. "They can't always hear things correctly."
In addition to paving the way for better hearing aids and cochlear implants, the findings may also lead to better smartphone technology, according to the study authors. Voice recognition programs such as Apple's Siri are unable to process sounds the way humans do and can be unreliable or unusable in noisy environments. A better understanding of how humans process sound could lead to better designed programs, researchers said.
"Following one speaker in the presence of another can be trivial for a normal human listener, but remains a major challenge for state-of-the-art automatic voice recognition algorithms," the authors wrote.
The journal Nature published the study on Wednesday.