Aural Diversity: re-thinking the concert experience.

I was recently approached by a leading music venue that wanted to discuss how to improve concert experiences for “deaf and hearing impaired people”. They have been looking at the Aural Diversity project and evidently reckon there are ideas we could usefully explore together.

It’s really great that large venues are taking an interest in these issues. I think our ideas could scale up well to such a setting. The mantra of Aural Diversity is that “everybody hears differently”, so this should have a wider benefit for all, not just those who are deaf or hearing impaired.
All this got me thinking. What would tempt me back into a concert hall after more than a decade of mostly avoiding them? How could we re-think the concert experience from an aurally diverse perspective?

This post has in mind a typical classical/contemporary venue capable of accommodating an orchestra, but my comments could equally well apply in pop/rock or other contexts. I’ll outline three main challenges – the people, the music, the environment – and then propose some solutions.


The People
People are so diverse, ranging from D/deaf people whose hearing may be absent from birth or profoundly impaired in some way, to people with hyperacusis (an extreme sensitivity to sound), for whom everyday noises such as clattering cutlery can be extremely painful. There is a vast array of hearing types, including: tinnitus (ringing in the ears), unbalanced hearing impairment (different levels of loss in each ear), diplacusis (hearing two different pitches from a single note), presbycusis (age-related loss), notch losses (hearing deficits in selected frequency bands), acoustic shock (trauma to the head or hearing mechanism) and all sorts of other sensorineural losses, auditory processing disorders, conductive impairments, and mixtures of all the above. Given that every human being’s hearing begins to decline after adolescence, almost all the people involved in a concert could benefit from some kind of re-thinking of the experience. But how do we accommodate all these different hearing types at the same time?

The Music
Most concerts contain a simple message for the listener: the only way to enjoy this experience is for your ‘ears’ (i.e. you) to measure up to the music. In some music, this is explicitly built in to the listening experience. It’s merciless. These days, there is a prevailing preference to give listeners a hard time by treating listening almost as a sporting feat. How many concerts are programmed with questions of loudness, intensity, granularity, variation, texture and frequency range in mind? Not many, I suspect. Instead we have great long symphonies which give the ears almost no break. In between movements, the silence is oppressive, requiring nearly as much concentration as the music. And bear in mind that this applies to musicians as well as audiences! Musicians are 40% more likely than non-musicians to develop hearing disorders. It’s pretty obvious why that might be. But why should we have to adapt to music? Why can’t music adapt to us?

The Environment
People assume that a concert is what happens when you enter a concert hall, but of course the truth is that the concert experience includes everything from the moment you arrive at the venue to the moment you leave. Most venue environments are terrible for hearing impaired people. No quiet spaces. Too much loud conversation. Cafe/bar dispensing noise. Horrendous lighting. Pinging gongs and tannoy announcements. Confusing etiquette. For those on the autism spectrum, the sensory overload can be completely debilitating. And once we enter the auditorium, a different set of rules applies. There’s no escape without incurring the wrath of those around you. Silence must be maintained. The pressure on the listener is intense. If this week’s Autism Hour is teaching us anything, it is that small but significant environmental changes can benefit not just neurodivergent people but also the wider population. So, how can we make the environment more suitable for aurally diverse people?

Possible Solutions

Alternative listening strategies
There are so many more ways to listen than just the conventional synchronous acoustic experience in a shared space. Even within that situation there is room for variation. People should be able to move around both between and during pieces to improve the listening experience. Consider the possibilities of listening acoustically in other spaces outside the concert hall, in neighbouring rooms, even outside. How about streaming to hearing aids or wireless headphones, allowing audiences to wander about? Maybe pipe the music to listening stations outside? For D/deaf members of the audience, there should be BSL interpretation and live captioning throughout as standard. This can be as musical as anything done with sound. Consideration should be given to cochlear implant wearers and hearing aid users, and to how these devices affect the listening experience. Every piece should come with video interpretation, viewable somehow (perhaps on mobile phones). Then, there should be an array of tactile and haptic interfaces to enable full-body listening. People could touch instruments as they are played, perhaps, or at least touch objects attached to instruments. Vibrating floors. Wearable sensors. And how about non-cochlear listening that relies on verbal descriptions or evocations of the music, rather than anything ‘heard’ in the conventional sense?

Diversity-friendly programming
This is for the musicians just as much as the audience… Each piece on the programme should be analysed for its loudness, texture, intensity, instrumentation, duration, frequency ranges, etc. Those details should be presented in the programme so that audiences can decide how best to listen. It’s a bit like the spiciness recommendations in an Indian restaurant. D/deaf people may enjoy a piece that features a lot of sub-bass, whereas people with Ménière’s disease will probably prefer something with lots of mid to high frequencies, while cochlear implant wearers might prefer music that has less complicated or ‘muddy’ textures. There should be plenty of time between pieces so that people can relocate accordingly. The programme should consider the needs of its performers and audiences much more carefully. Anything that involves listening for more than 40 minutes should be risk-assessed for its acoustic impact. Pieces that contain inbuilt aural rests should be programmed alongside other material. How often does a programmer consider that there might be too much piccolo, or too much brass, or whatever, in a given piece? This year’s “accessible Prom” programmed Tchaikovsky and Rachmaninov. Why? What made those choices particularly suitable to that audience? Music that can adapt to the needs of its performers and audiences should be the goal.

Relaxed etiquette
One thing we can learn from the D/deaf and autistic communities is that applause is very painful for many people. ‘Flapplause’, or ‘jazz hands’ or whatever we may call it, may attract howls of derision from certain quarters, but I can guarantee that it makes an enormous difference to aurally diverse listeners and is far preferable to clapping. More generally, there should be respect for the listening needs of others and less fierceness in insisting on the ‘right’ way to listen. Concerts need to relax and become more approachable for all sorts of people. This also means more tolerance of audience behaviour. Here we hit a real difficulty, because of course some audience behaviours (e.g. shouting out suddenly) may have a negative impact on others. In my experience, there are always common-sense solutions to such problems that may deploy some of the listening strategies described above.

Reconfigured environment
The concert hall itself, with its rows of fixed seats, may not easily be reconfigured. But even so, more attention could be paid to sensory issues such as light, smell, touch etc. Flat-attenuation earplugs should be provided free, and there could be access to noise-cancelling headphones too. The noises made by chairs can be a particular problem, so these need to be silenced somehow. But the main environmental improvements would come outside the auditorium. A quiet room would be a great advantage, especially if it can also be used for silent listening to the performance. Attention should be paid to noises in cafes and bars, and in general the environment should not feel like a waiting room but rather a destination in its own right, given that not every concertgoer will enter the auditorium. Acoustic design of this space could even include a musical component that provides a unique listening experience aimed at aurally diverse audiences. This is not simply a matter of ‘coping with disability’ but rather of giving such audiences a musical experience that does not solely depend on their ability to sit still in a concert hall for 90 minutes.

New technologies
Some of the solutions described above rely on new technologies that are still being developed. Mobile phones and similar smart portable technologies provide the platform for many of these, but some (e.g. vibrating floors) are bespoke, purpose-built pieces of equipment. One thing about ‘disabled’ people is that they are frequently, perforce, users and even developers of new technologies, often built around their own needs. These needs should be taken into account by the venue. When buying tickets, audiences could be asked whether they need to bring their own technologies, and consideration could then be given to how these would be plugged into the venue’s infrastructure. In general, venues should connect with engineers and designers to support and innovate. This will prove mutually beneficial in the long run. For example, neural interfaces are increasingly entering the real world, but how many concerts include a capacity for their use? More commonly, cochlear implants and hearing aids are a staple of hearing impairment, but their capacity as listening devices is rarely exploited by venues beyond the required ‘hearing loop’ compliance.

If music is to be a shared experience, we need to think about what ‘sharing’ means. Aural Diversity is committed to the live concert. Standard recordings and reproductions simply will not do, because they reinforce the requirement for a pair of otologically ‘normal’ ears that are perfectly balanced. So, listening to a broadcast or recording of an Aural Diversity concert is an unsatisfactory substitute for the experience of attending the live event. This emphasis on liveness should be welcome to concert venues, but to be credible it has to be more than just an exercise in making things a bit more accessible to deaf and hearing impaired people. It really is a complete re-think of what a ‘concert’ might be and how this shared experience might be collectively understood by people whose perceptual apparatus varies so widely.

‘Hear More’ seminar, Lima, Peru

On Thursday I had the pleasure of addressing GN Hearing’s ‘Hear More’ seminar in Lima, Peru, via Skype. It turns out I am “very famous in Latin America”, no doubt thanks to the Spanish version of this video. At any rate, when I was revealed onscreen, an enormous cheer went up from around 100 Latin American audiologists, so I suppose that must mean something!

I was interviewed by Paula Duarte for about an hour. I told my story first of all, and then went on to report on my recently completed research project into the consequences of Ménière’s disease for musicians. This included some very interesting findings, such as the fact that all the Ménière’s musicians I interviewed had diplacusis (even if they had never heard that word before) and the consequences of that and other symptoms for musical perception. The resulting paper should be published soon and I will include a link to it here when that happens.

I passed on to the audience some of the comments about hearing care and hearing technologies from the musicians I interviewed. I always have to tread carefully when discussing this, because musicians generally are rather frustrated by audiology and hearing aids, whereas audiologists tell me repeatedly that musicians can be very challenging clients! The way I describe it, there is a difference in expectations between musicians and audiologists. Musicians are generally disillusioned with the shortcomings of hearing aids, frustrated by the lack of consideration given to sound quality (rather than just amplification), disappointed that hearing tests restrict themselves to frequencies in the middle and upper range, and downhearted by an apparent lack of empathy. Audiologists, on the other hand, have to deal with an array of new and unfamiliar terminologies (the languages of music and hearing science are really quite different) and the fact that they have certain professional priorities which are not necessarily those of the musician/client. Their training does not fully equip them to deal with the kind of questions musicians frequently raise.

My solution to this, as always with interdisciplinary exchanges, is to try to find common areas and develop a shared language and understanding. This is not easy: audiological training does not generally study music (any more than ophthalmologists study painting) and musical training can be surprisingly indifferent to both sound and hearing. But there is evidently a will amongst audiologists to move towards better and more supportive care for musicians, which is great. With that in mind I shared a few musical aspirations:

Let’s give users more control of their hearing aids (e.g. full EQ, sound mixing, filtering capabilities);
Why can’t hearing aids reduce sound as well as amplifying it?
Improvements to localisation perception would be great, especially for those with uneven hearing loss;
Could a hearing aid correct diplacusis?
Please can we have benchmark consistency in everything that is heard!

Hearing aids are designed mainly for speech, as everybody knows, but increasing their potential for music is becoming more important all the time. I also suggested some more creative uses…how about a hearing aid that could identify birds when they sing in nearby trees? Or how about some kind of hearing aid-based Pokémon Go? Then it would be really cool to wear a hearing aid! AI seems to offer a way forwards here.

After all this, I talked about the Aural Diversity project, which everybody found fascinating and very valuable, to judge by the comments I have received subsequently.

Questions from the floor focused on some of the technical details. They were very interested in the extent to which the hearing aids have really helped me to hear music again. This is something I followed up with some individuals subsequently in chat. The essence of my response is that I am still finding out. Listening to music without hearing aids is now more or less impossible for me. It is unpleasant and the pitch distortions turn it into a kind of acoustic mush. The hearing aids improve on this: they ‘flatten out’ the diplacusis – not by removing it, but by lessening it and making it more predictable. Also, the increased flow of information means that my brain can fill in the gaps and make better sense of the music. So, for example, pitches below the octave below middle C become more audible thanks to the increased upper headroom. This seems crazy: how can more high frequencies improve perception of missing low frequencies? I think it is because the available overtones provide my brain with enough information to be able to figure out what the bass note should be. This combines with the residual hearing in my good ear to create a pretty convincing bass note.

However, I would not want to overstate the case here. Hearing aids create an artificial listening experience. I am aware that I am not hearing what is really there. And the sound is still pretty thin compared to natural acoustics. But I am so grateful for any meaningful sound input I can get. I become emotionally overwhelmed quite quickly, just listening through the music programme on my hearing aids, so thank you GN! Whereas I had given up listening to music altogether, I do now listen more, even though I tend to stick to fairly simple music that does not become too muddy. Also I cannot listen for long periods without making the tinnitus worse, so I have to be careful.