Scientists develop brain-controlled hearing aid that amplifies the voices you WANT to hear
Scientists have created a hearing aid that relies on the user’s own brain waves to tune into specific voices, drowning out background noise.
The device, developed at Columbia University in New York, combines speech-separation algorithms with neural networks, complex mathematical models that imitate the brain’s natural abilities.
The system first separates out the voices of individual speakers from a group, then compares the voices of each speaker to the brain waves of the person listening.
Whichever voice pattern most closely matches the listener’s brain waves will then be amplified over the rest.
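The three steps described above can be sketched in code. This is a minimal illustration, not the Columbia team’s actual method: it assumes the voices have already been separated, and it stands in for decoded brain waves with a simple amplitude signal, using plain correlation to pick the attended voice. All function names and parameters here are invented for the sketch.

```python
import numpy as np

def select_attended_voice(separated_voices, brain_signal):
    """Pick the separated voice whose amplitude envelope best matches
    the listener's neural signal (a stand-in for decoded brain waves)."""
    scores = []
    for voice in separated_voices:
        # Correlate each voice's envelope with the neural signal.
        envelope = np.abs(voice)
        r = np.corrcoef(envelope, brain_signal)[0, 1]
        scores.append(r)
    # The best-matching voice is assumed to be the one being attended to.
    return int(np.argmax(scores))

def amplify_attended(separated_voices, brain_signal, gain=4.0):
    """Remix the voices, boosting the attended one over the rest."""
    idx = select_attended_voice(separated_voices, brain_signal)
    mix = sum(gain * v if i == idx else v
              for i, v in enumerate(separated_voices))
    return idx, mix
```

In the real system the separation itself is done by neural networks and the matching uses decoded neural activity rather than a raw envelope, but the selection logic, amplify whichever stream tracks the listener’s brain waves most closely, follows this shape.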
It is still in the early stages of development, but experts say the technology is a major step toward helping people who are hard of hearing communicate better with those around them.
‘The brain area that processes sound is extraordinarily sensitive and powerful; it can amplify one voice over others, seemingly effortlessly, while today’s hearing aids still pale in comparison,’ said Nima Mesgarani, PhD, a principal investigator at Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute and the paper’s senior author.
‘By creating a device that harnesses the power of the brain itself, we hope our work will lead to technological improvements that enable the hundreds of millions of hearing-impaired people worldwide to communicate just as easily as their friends and family do.’
Modern hearing aids amplify speech and suppress background noise like traffic.
But that’s as precise as they get.
They cannot boost the volume of an individual voice over others – leading to the classic ‘cocktail party problem’, when multiple voices blend together at loud parties.
‘In crowded places, like parties, hearing aids tend to amplify all speakers at once,’ said Dr Mesgarani, who is also an associate professor of electrical engineering at Columbia Engineering.
‘This severely hinders a wearer’s ability to converse effectively, essentially isolating them from the people around them.’
Dr Mesgarani’s device, described today in the journal Science Advances, attempts to overcome that by focusing on the listener’s own brain waves as well as external input.
‘Previously, we had discovered that when two people talk to each other, the brain waves of the speaker begin to resemble the brain waves of the listener,’ said Dr Mesgarani.
In 2017, the team produced a version that did the same thing, but it could not adapt spontaneously: it had to be trained to selectively amplify pre-selected voices.
‘If you’re in a restaurant with your family, that device would recognize and decode those voices for you,’ explained Dr Mesgarani. ‘But as soon as a new person, such as the waiter, arrived, the system would fail.’
The new version uses a speech-separation algorithm that ‘could recognize and decode a voice – any voice – right off the bat.’
In one study on patients with epilepsy, it worked well, adjusting the volume of each voice according to where the listener directed their attention.
However, it has so far only been tested indoors, and the device remains bulky.
Dr Mesgarani hopes to develop an easy-to-wear version and to refine the algorithm so it works outdoors, too.
‘So far, we’ve only tested it in an indoor environment,’ said Dr Mesgarani. ‘But we want to ensure that it can work just as well on a busy city street or a noisy restaurant, so that wherever wearers go, they can fully experience the world and people around them.’