Avoiding the sound of silence

“At the age of 80, about 80% of people are hard of hearing, so most people will need a hearing device in old age.” For Werner Hemmert of the Technical University of Munich (TUM), this is a growing issue for society. EU projections reinforce his point: in 1960, just 1.4% of Europeans were over 80, but by 2060 the proportion is projected to rise to 11.5%. Directly or indirectly, hearing impairment will soon affect everyone.

Thomas Behrens, head of audiology at Danish hearing-aid manufacturer Oticon, points out the wider significance. “The consequences of hearing loss are underestimated, especially with regard to accelerated cognitive decline”, he says. At least 12% of the population over the age of 70 will suffer from either mild cognitive impairment or dementia, and among them the proportion of hearing-impaired people is much higher than average.

Connected hearing aids

About 90% of diagnosed hearing loss is sensorineural. This means impairment occurs when sensitive hair cells in the inner ear are damaged or die, limiting the amount of acoustic information that can be transmitted to the brain. Consequently, high-frequency sounds, such as female or children’s voices, may become difficult to hear. It may also be harder to hear sounds such as “s”, “f” and “th”. “Modern hearing aids attempt to compensate by pre-processing the signal – amplifying only one particular voice in a noisy restaurant – before the information bottleneck, reducing the brain’s workload”, explains Torsten Dau of the Technical University of Denmark (DTU).
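
As a rough illustration of such compensation (not any manufacturer’s actual processing; the cut-off frequency and gain below are invented for this sketch), a few lines of Python can boost only the high-frequency region where sensorineural loss typically bites:

```python
import numpy as np

fs = 16_000                                   # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)
# Toy signal: a low vowel-like tone plus a quiet high "s"-like component
signal = np.sin(2 * np.pi * 300 * t) + 0.3 * np.sin(2 * np.pi * 4000 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
gain = np.where(freqs > 2000, 4.0, 1.0)       # roughly +12 dB above 2 kHz
boosted = np.fft.irfft(spectrum * gain, n=len(signal))
```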

In the area of connectivity, Behrens and his colleagues are leading the way. “Our Oticon Opn devices have two different wireless radios: one optimised to communicate with other devices using a low-energy version of Bluetooth, and another optimised for communicating over short distances using near-field magnetic radio technology”, he explains. The latter allows hearing aids on either side of the head to communicate with one another. Oticon Opn is also compatible with the “If This Then That” platform, which allows the functions of connected devices to be synchronised. For instance, a notification might be set to sound when someone arrives at the front door.
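
The platform’s logic is essentially a table of trigger-action rules. A purely illustrative sketch, with event names and actions invented for this example rather than taken from Oticon’s or IFTTT’s real interfaces:

```python
# Invented trigger->action rules in the spirit of "If This Then That";
# none of these names come from Oticon's or IFTTT's actual API.
RULES = {
    "front_door_arrival": "play doorbell chime in the hearing aids",
    "phone_battery_low": "play low-battery tone",
}

def handle_event(event: str) -> None:
    """Look up the trigger and fire its action, if a rule exists."""
    action = RULES.get(event)
    if action:
        print(f"trigger '{event}' -> {action}")

handle_event("front_door_arrival")
```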

Researchers in the US have shown how neural networks can be used to remove much of the noise from incoming sound, allowing people with poor hearing to better comprehend what they hear. The networks are trained to recognise a multitude of features within short slices of sound that distinguish speech from background chatter and other sources of noise. However, as pointed out by Jan Larsen of DTU, people with hearing aids want to do more than simply have conversations. They also want to watch TV and listen to concerts – environments that demand different settings on the hearing aid. To address this need, he is developing what he calls “user-in-the-loop” systems, which means involving patients in setting up their own hearing aids.
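
As a sketch of the general approach (the architecture and sizes below are assumptions for illustration, not the US group’s published model), a small network can learn to output a per-frequency gain mask for each short slice of sound, suppressing the bins dominated by noise:

```python
# Minimal sketch of mask-based speech enhancement, assuming
# short-time spectral frames as input.
import torch
import torch.nn as nn

class DenoisingMaskNet(nn.Module):
    """Predicts a 0..1 gain per frequency bin for each frame;
    multiplying the noisy spectrum by the mask keeps speech-dominated
    bins and attenuates noise-dominated ones."""
    def __init__(self, n_bins=257, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_bins), nn.Sigmoid(),  # mask in [0, 1]
        )

    def forward(self, noisy_magnitude):               # (frames, n_bins)
        return self.net(noisy_magnitude)

# Supervised recipe: learn the "ideal" mask computed from clean speech
# mixed with noise (dummy tensors stand in for real training data here).
model = DenoisingMaskNet()
noisy = torch.rand(100, 257)
ideal_mask = torch.rand(100, 257)
loss = nn.functional.mse_loss(model(noisy), ideal_mask)
loss.backward()                                       # one training step
```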

Larsen’s approach is to use a machine-learning algorithm designed to work out which of a pair of sound bites a patient is likely to hear better, given a certain mix of their hearing aid’s various parameters (such as frequency response and noise reduction). The sound bites are duly played to the patient, and if the algorithm guesses correctly it remains unaltered; if not, it is tweaked and then presented with another pair of sound bites. The process is repeated until the optimal setting has been found.
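
A minimal sketch of that guess-check-tweak loop follows; the linear preference model, learning rate and simulated patient are assumptions made for illustration, not Larsen’s published algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
# Model over two settings: [frequency_response, noise_reduction]
weights = np.zeros(2)

def predicted_winner(a, b):
    """Guess which settings vector the patient will prefer."""
    return a if weights @ a >= weights @ b else b

def patient_prefers(a, b):
    """Stand-in for playing two sound bites and asking the patient."""
    true_taste = np.array([1.0, 0.5])   # hidden, for simulation only
    return a if true_taste @ a >= true_taste @ b else b

for _ in range(200):
    a, b = rng.uniform(0, 1, 2), rng.uniform(0, 1, 2)
    guess, answer = predicted_winner(a, b), patient_prefers(a, b)
    if not np.array_equal(guess, answer):
        weights += 0.1 * (answer - guess)   # wrong guess: tweak the model

best = max((rng.uniform(0, 1, 2) for _ in range(500)),
           key=lambda s: weights @ s)       # best-rated settings found
print("estimated preferred settings:", best)
```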

Algorithms in cochlear implants

For all their wizardry, hearing aids have real limitations if neurons and receptor cells are already dead. By contrast, cochlear implants circumvent the damaged hair cells entirely, stimulating different parts of the auditory nerve via electrodes. Bernhard Seeber and his colleagues at TUM are particularly interested in studying how well people with these implants can work out the direction of specific sounds when surrounded by lots of noise.

The ability to pinpoint where a sound is coming from allows us to pick out that one voice in a crowded room. We manage this largely because our two ears sit on opposite sides of the head: the signals arriving at each ear differ slightly in arrival time and intensity. But Seeber and his team have shown that people wearing cochlear implants struggle to make out these differences in a noisy environment.
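
These two cues can be made concrete in a few lines. The sketch below, with an invented delay and level drop standing in for a source off to one side, estimates the interaural time difference by cross-correlating the two ear signals:

```python
import numpy as np

fs = 44_100                                    # sample rate (Hz)
t = np.arange(0, 0.02, 1 / fs)
source = np.sin(2 * np.pi * 500 * t)           # 500 Hz tone burst

delay = 20                                     # ~0.45 ms: source off-axis
left = source
right = np.concatenate([np.zeros(delay), source[:-delay]])
right *= 0.8                                   # head shadow: quieter far ear

corr = np.correlate(right, left, mode="full")
lag = corr.argmax() - (len(left) - 1)          # lag (samples) of best match
print(f"estimated time difference: {lag / fs * 1e3:.2f} ms")
print(f"level difference: {20 * np.log10(0.8):.1f} dB")
```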

To overcome this problem, the group has been developing a new algorithm to convert incoming sound into the electrical signals sent to the brain. The algorithm shapes the signals so that they have a sharper leading edge preceded by a slight delay, allowing the brain to pick up the difference in timing. The researchers have successfully tested the algorithm on people with normal hearing and on a small number of patients with implants; they are now preparing for a larger study.
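
In the spirit of that description (a toy sketch only, not the TUM team’s actual coding strategy; the envelope and gap length are invented), one way to obtain a delayed, steeper leading edge is to blank the first moments of each stimulation burst:

```python
import numpy as np

fs = 10_000                                    # envelope sample rate (Hz)
t = np.arange(0, 0.04, 1 / fs)
env = np.clip(np.sin(2 * np.pi * 50 * t), 0, None)   # bursty envelope

def sharpen_onsets(envelope, fs, gap_ms=2.0):
    """Hold the output at zero for a short gap after each burst begins;
    when stimulation resumes, the envelope has already risen, so the
    leading edge becomes a steep step preceded by a slight delay."""
    gap = int(gap_ms * fs / 1000)
    eps = 1e-9
    out = envelope.copy()
    onsets = np.flatnonzero((envelope[:-1] <= eps)
                            & (envelope[1:] > eps)) + 1
    for i in onsets:
        out[i:i + gap] = 0.0                   # the slight delay
    return out

shaped = sharpen_onsets(env, fs)               # sharper, delayed edges
```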

While some groups espouse the benefits of one type of device over another, the key to good results lies with the audiologist’s skill in identifying an individual’s needs. In fact, the next trend may combine hearing aids and cochlear implants. “The challenge here is that acoustic stimulation (hearing aids) vs. electric stimulation (cochlear implants) lead to different representations of information and we need to know more about how this ‘mismatch’ is combined and processed by the brain”, explains DTU’s Dau. “Researchers have recently demonstrated that combining the two device types in the same ear can lead to substantial performance improvements.”

