‘Hear More’ seminar, Lima, Peru

On Thursday I had the pleasure of addressing GN Hearing’s ‘Hear More’ seminar in Lima, Peru, via Skype. It turns out I am “very famous in Latin America”, no doubt thanks to the Spanish version of this video. At any rate, when I was revealed onscreen, an enormous cheer went up from around 100 Latin American audiologists, so I suppose that must mean something!

I was interviewed by Paula Duarte for about an hour. I told my story first of all, and then went on to report on my recently completed research project into the consequences of Ménière’s disease for musicians. This included some very interesting findings, such as the fact that all the Ménière’s musicians I interviewed had diplacusis (even if they had never heard that word before), and the effects of that and other symptoms on musical perception. The resulting paper should be published soon and I will include a link to it here when that happens.

I passed on to the audience some of the comments about hearing care and hearing technologies from the musicians I interviewed. I always have to tread carefully when discussing this, because musicians generally are rather frustrated by audiology and hearing aids, whereas audiologists tell me repeatedly that musicians can be very challenging clients! The way I describe it, there is a difference in expectations between musicians and audiologists. Musicians are generally disillusioned with the shortcomings of hearing aids, frustrated by the lack of consideration given to sound quality (rather than just amplification), disappointed that hearing tests restrict themselves to frequencies in the middle and upper range, and downhearted by an apparent lack of empathy. Audiologists, on the other hand, have to deal with an array of new and unfamiliar terminologies (the languages of music and hearing science are really quite different) and the fact that they have certain professional priorities which are not necessarily those of the musician/client. Their training does not fully equip them to deal with the kind of questions musicians frequently raise.

My solution to this, as always with interdisciplinary exchanges, is to try to find common areas and develop a shared language and understanding. This is not easy: audiological training does not generally study music (any more than ophthalmologists study painting) and musical training can be surprisingly indifferent to both sound and hearing. But there is evidently a will amongst audiologists to move towards better and more supportive care for musicians, which is great. With that in mind I shared a few musical aspirations:

Let’s give users more control of their hearing aids (e.g. full EQ, sound mixing, filtering capabilities);
Why can’t hearing aids reduce sound as well as amplifying it?
Improvements to localisation perception would be great, especially for those with uneven hearing loss;
Could a hearing aid correct diplacusis?
Please can we have benchmark consistency in everything that is heard!

Hearing aids are designed mainly for speech, as everybody knows, but increasing their potential for music is becoming more important all the time. I also suggested some more creative uses… how about a hearing aid that could identify birds when they sing in nearby trees? Or how about some kind of hearing aid-based Pokémon Go? Then it would be really cool to wear a hearing aid! AI seems to offer a way forwards here.

After all this, I talked about the Aural Diversity project, which everybody found fascinating and very valuable, to judge by the comments I have received subsequently.

Questions from the floor focused on some of the technical details. They were very interested in the extent to which the hearing aids have really helped me to hear music again. This is something I followed up with some individuals subsequently in chat. The essence of my response is that I am still finding out. Listening to music without hearing aids is now more or less impossible for me. It is unpleasant and the pitch distortions turn it into a kind of acoustic mush. The hearing aids improve on this: they ‘flatten out’ the diplacusis – not by removing it, but by lessening it and making it more predictable. Also, the increased flow of information means that my brain can fill in the gaps and make better sense of the music. So, for example, pitches below the octave below middle C become more audible thanks to the increased upper headroom. This seems crazy: how can more high frequencies improve perception of missing low frequencies? I think it is because the available overtones provide my brain with enough information to be able to figure out what the bass note should be. This combines with the residual hearing in my good ear to create a pretty convincing bass note.
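This is the well-known ‘missing fundamental’ effect, and the idea can be sketched numerically: if the audible partials are harmonics of a common bass note, that note’s frequency is (approximately) the greatest common divisor of their frequencies. The specific frequencies below are purely illustrative, not measurements from my own hearing.

```python
from math import gcd
from functools import reduce

def implied_fundamental(overtones_hz):
    """Return the fundamental (in Hz) implied by a set of harmonic overtones.

    For exact integer harmonics of one note, this is simply their
    greatest common divisor.
    """
    return reduce(gcd, overtones_hz)

# Suppose a 65 Hz bass note (roughly two octaves below middle C) is itself
# inaudible, but some of its higher harmonics come through the hearing aids:
audible_partials = [325, 390, 455, 520]  # 5th, 6th, 7th and 8th harmonics

print(implied_fundamental(audible_partials))  # → 65
```

So even though nothing at 65 Hz reaches the ear, the spacing of the audible overtones is enough to pin down the bass note, which is one plausible account of why boosting the upper frequencies can make the bass feel more present.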

However, I would not want to overstate the case here. Hearing aids create an artificial listening experience. I am aware that I am not hearing what is really there. And the sound is still pretty thin compared to natural acoustics. But I am so grateful for any meaningful sound input I can get. I become emotionally overwhelmed quite quickly, just listening through the music programme on my hearing aids, so thank you GN! Whereas I had given up listening to music altogether, I do now listen more, even though I tend to stick to fairly simple music that does not become too muddy. Also I cannot listen for long periods without making the tinnitus worse, so I have to be careful.