Thoughts before the Aural Diversity conference

Tomorrow is the first day of the Aural Diversity conference, which also includes the second Aural Diversity concert. I thought this would be a good moment to reflect on how far I have come and what lies ahead.

It was back in September 2017 that I attended the Hearing Aids for Music conference at the University of Leeds. This was a very important event for me, because it showed very clearly that there should be no obstacle to my talking freely about my own condition and that, furthermore, there were positive benefits in so doing. Shortly afterwards I wrote my text ‘Ménière’s and me’, which attracted a lot of attention and revealed for the first time that I had severe hearing loss, tinnitus, balance problems and all the other symptoms of Ménière’s. This was a pretty big step, because I had kept it a secret for over ten years, out of a mix of professional pride and fear.

At the same conference, I first made the acquaintance of Miguel Angel Aranda de Toro, Director of External Relations at GN Hearing, and through him a whole range of people at GN ReSound, including audiologists, engineers and many people working in hearing care and hearing technologies. I was fitted with LiNX Quattro hearing aids, which enabled me to consider making music once again.

My approach to adversity has always been to seek to understand through research and then to try to turn it – whatever ‘it’ might be – into a creative opportunity. So, the first thing to do was to research Ménière’s and its consequences for musicians. I undertook a qualitative study, interviewing several musicians with Ménière’s and several with other forms of hearing loss. The results of this will be presented in my keynote at the conference.

However, as is my way, I wanted to do something larger and more strategic too, that also offered opportunities for others. ‘Auraldiversity’ was a term coined by Professor John Levack Drever as a kind of auditory corollary of ‘neurodiversity’. He elaborated it most recently in this Organised Sound article. I’ve known John for many years and have always enjoyed his ideas about hearing and listening in relation to sound studies and acoustic ecology.

I thought that ‘Aural Diversity’ neatly sums up the differences in hearing between individuals, both in musical contexts and in daily life. I decided to start a project that would explore these differences in a musical context. How can musicians with a range of hearing conditions play together? And how can audiences with a range of hearing conditions experience such music? What does this mean for music itself? I recorded interviews with BBC Radio 3 and BBC Radio Leicester that explain these ideas.

GN ReSound very generously provided financial support for the project and this was further enhanced when I was awarded an Arts Council England grant. With that funding in place, we were able to stage the first ever Aural Diversity concert at a wonderful venue near Bath: the Old Barn, on Kelston Roundhill. This was a fabulous and memorable event, which is summarised in this video. We tried out many different ways of listening and performed a wide variety of music, with musicians whose hearing ranged from profound deafness to hyperacusis and everything in between.

The potential of ‘Aural Diversity’ is so strong that a call for papers produced a remarkable international response. Perusal of the conference programme will reveal a fascinating and diverse collection of topics coming from a range of disciplines, including: medicine, hearing sciences, acoustics, engineering, creative computing, psychology, therapy, various arts and humanities fields, and of course music and sound studies. This diversity reflects the diversity inherent in the speakers themselves and the field as a whole.

The conference, which takes place at the University of Leicester, is accompanied by a second concert at the Attenborough Arts Centre, a venue which has a long and noble tradition of supporting disability and access to the arts. Once again, there will be many ways of listening and an aurally diverse collection of musicians. We have also worked with local groups such as the Hearing Impaired Unit at Beauchamp College. The concert will follow our set of conventions and include BSL and video interpretation alongside streaming to remote headphones, haptic (touch) interfaces and vibrating floors.

I am hoping that the conference will provide both the foundations of a research network and a collection of future directions for the Aural Diversity project. I will be working to develop a concept map to define aims and objectives within each line of research. The delegates represent a self-defined grouping that will no doubt provide plenty of energy and momentum for our future endeavours.

Whilst Aural Diversity has come from my own experiences and interests, I know very well that it is not, and could never be, a project just about me. It relies on active participation and engagement by a cohort of musicians and researchers and therein lies the future, I think. Advocating for change in respect of aural diversity is important not just in music but for society as a whole. This is a topic that is barely discussed, but which affects all of us to some degree. I hope that in future we can achieve changes in attitude and indeed in policy in respect of all this, as well as re-evaluating how music works. Music should adapt to us as individuals and our hearing needs, and not require people to measure up to the standard of a pair of “normal” and perfectly balanced ears.

Aural Diversity: re-thinking the concert experience

I was recently approached by a leading music venue, wanting to discuss how to improve concert experiences for “deaf and hearing impaired people”. They have been looking at the Aural Diversity project and evidently reckon there are things we could usefully discuss.

It’s really great that large venues are taking an interest in these issues. I think our ideas could scale up well into such a situation. The mantra of Aural Diversity is that “everybody hears differently”, so this should have a wider benefit for all, not just those who are deaf or hearing impaired.

All this got me thinking. What would tempt me back into a concert hall after more than a decade of mostly avoiding them? How could we re-think the concert experience from an aurally diverse perspective?

This post has in mind a typical classical/contemporary venue capable of accommodating an orchestra, but my comments could equally well apply in pop/rock or other contexts. I’ll outline three main challenges – the people, the music, the environment – and then propose some solutions.

Challenges

The People
People are so diverse, ranging from D/deaf people whose hearing may be absent from birth or profoundly impaired in some way, to people with hyperacusis (an extreme sensitivity to sound), for whom everyday noises such as clattering cutlery can be extremely painful. There is a vast array of hearing types, including: tinnitus (ringing in the ears), unbalanced hearing impairment (different levels of loss in each ear), diplacusis (hearing two different pitches from a single note), presbycusis (age-related loss), notch losses (hearing deficits in selected frequency bands), acoustic shock (injury to the hearing mechanism caused by sudden, unexpected loud sounds) and all sorts of other sensorineural losses, auditory processing disorders, conductive impairments, and mixtures of all the above. Given that every human being’s hearing begins to decline after adolescence, almost all the people involved in a concert could benefit from some kind of re-thinking of the experience. But how do we accommodate all these different hearing types at the same time?

The Music
Most concerts contain a simple message for the listener: the only way to enjoy this experience is for your ‘ears’ (i.e. you) to measure up to the music. In some music, this is explicitly built into the listening experience. It’s merciless. These days, there is a prevailing preference to give listeners a hard time by treating listening almost as a sporting feat. How many concerts are programmed with questions of loudness, intensity, granularity, variation, texture and frequency range in mind? Not many, I suspect. Instead we have great long symphonies which give the ears almost no break. In between movements, the silence is oppressive, requiring nearly as much concentration as the music. And bear in mind that this applies to musicians as well as audiences! Musicians are 40% more likely than non-musicians to develop hearing disorders. It’s pretty obvious why that might be. But why should we have to adapt to music? Why can’t music adapt to us?

The Environment
People assume that a concert is what happens when you enter a concert hall, but of course the truth is that the concert experience includes everything from the moment you arrive at the venue to the moment you leave. Most venue environments are terrible for hearing impaired people. No quiet spaces. Too much loud conversation. Cafe/bar dispensing noise. Horrendous lighting. Pinging gongs and tannoy announcements. Confusing etiquette. For those on the autism spectrum, the sensory overload can be completely debilitating. And once we enter the auditorium, a different set of rules applies. There’s no escape without incurring the wrath of those around you. Silence must be maintained. The pressure on the listener is intense. If this week’s Autism Hour is teaching us anything, it is that small but significant environmental changes can benefit not just neurodivergent people but also the wider population. So, how can we make the environment more suitable for aurally diverse people?

Possible Solutions

Alternative listening strategies
There are so many more ways to listen than just the conventional synchronous acoustic experience in a shared space. Even within that situation there is room for variation. People should be able to move around both between and during pieces to improve the listening experience. Consider the possibilities of listening acoustically in other spaces outside the concert hall, in neighbouring rooms, even outside. How about streaming to hearing aids or wireless headphones, allowing audiences to wander about? Maybe pipe the music to listening stations outside? For D/deaf members of the audience, there should be BSL interpretation and live captioning throughout as standard. This can be as musical as anything done with sound. Consideration should be given to cochlear implant wearers and hearing aid users, and how these devices affect the listening experience. Every piece should come with video interpretation, viewable somehow (perhaps on mobile phones). Then, there should be an array of tactile and haptic interfaces to enable full-body listening. People could touch instruments as they are played, perhaps, or at least touch objects attached to instruments. Vibrating floors. Wearable sensors. And how about non-cochlear listening that relies on verbal descriptions or evocations of the music, rather than anything ‘heard’ in the conventional sense?

Diversity-friendly programming
This is for the musicians just as much as the audience… Each piece on the programme should be analysed for its loudness, texture, intensity, instrumentation, duration, frequency ranges, etc. Those details should be presented in the programme so that audiences can decide how best to listen. It’s a bit like the spiciness recommendations in an Indian restaurant. D/deaf people may enjoy a piece that features a lot of sub-bass, whereas Ménière’s people will probably prefer something with lots of mid to high frequencies, while cochlear implant wearers might prefer music that has less complicated or ‘muddy’ textures. There should be plenty of time between pieces so that people can relocate accordingly. The programme should consider the needs of its performers and audiences much more carefully. Anything that involves listening for more than 40 minutes should be risk-assessed for its acoustic impact. Pieces that contain inbuilt aural rests should be programmed alongside other material. How often does a programmer consider that there might be too much piccolo, or too much brass, or whatever, in a given piece? This year’s “accessible Prom” programmed Tchaikovsky and Rachmaninov. Why? What made those choices particularly suitable to that audience? Music that can adapt to the needs of its performers and audiences should be the goal.
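
To make the idea concrete, here is a minimal sketch of what such a per-piece listening profile might look like. The field names, example values and the 40-minute threshold check are hypothetical illustrations of the proposal above, not an agreed standard:

```python
# A sketch of per-piece listening metadata, as proposed above.
# All field names and example values are hypothetical; a real venue
# would agree its own vocabulary with performers and audiologists.
from dataclasses import dataclass

@dataclass
class PieceProfile:
    title: str
    duration_min: int       # listening time, for acoustic risk assessment
    peak_loudness: str      # e.g. "pp", "mf", "ff"
    texture: str            # e.g. "sparse", "dense/muddy"
    dominant_range: str     # e.g. "sub-bass", "mid", "high"
    aural_rests: bool       # does the piece contain built-in silences?

    def needs_risk_assessment(self) -> bool:
        # The post suggests risk-assessing anything over 40 minutes.
        return self.duration_min > 40

programme = [
    PieceProfile("Example symphony", 50, "ff", "dense/muddy", "full", False),
    PieceProfile("Example miniature", 8, "pp", "sparse", "mid", True),
]
for piece in programme:
    print(piece.title, "- risk-assess:", piece.needs_risk_assessment())
```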

Relaxed etiquette
One thing we can learn from the D/deaf and autistic communities is that applause is very painful for many people. ‘Flapplause’, or ‘jazz hands’ or whatever we may call it, may attract howls of derision from certain quarters, but I can guarantee that it makes an enormous difference to aurally diverse listeners and is far preferable to clapping. More generally, there should be respect for the listening needs of others and less fierceness in insisting on the ‘right’ way to listen. Concerts need to relax and become more approachable for all sorts of people. This means also more tolerance of audience behaviour. Here we hit a real difficulty, because of course some audience behaviours (e.g. shouting out suddenly) may have a negative impact on others. In my experience, there are always common-sense solutions to such problems that may deploy some of the listening strategies described above.

Reconfigured environment
The concert hall itself, with its rows of fixed seats, may not easily be reconfigured. But even so, more attention could be paid to sensory issues such as light, smell, touch etc. Flat-attenuation earplugs should be provided free, and there could be access to noise-cancelling headphones too. The noises made by chairs can be a particular problem, so these need to be silenced somehow. But the main environmental improvements would come outside the auditorium. A quiet room would be a great advantage, especially if it can also be used for silent listening to the performance. Attention should be paid to noises in cafes and bars, and in general the environment should not feel like a waiting room but rather a destination in its own right, given that not every concertgoer will enter the auditorium. Acoustic design of this space could even include a musical component that provides a unique listening experience aimed at aurally diverse audiences. This is not simply a matter of ‘coping with disability’ but rather of giving such audiences a musical experience that does not solely depend on their ability to sit still in a concert hall for 90 minutes.

New technologies
Some of the solutions described above rely on new technologies that are still being developed. Mobile phones and similar smart portable technologies provide the platform for many of these, but some (e.g. vibrating floors) are bespoke, purpose-built pieces of equipment. One thing about ‘disabled’ people is that they are frequently, perforce, users and even developers of new technologies, often built around their own needs. These needs should be taken into account by the venue. When buying tickets, audiences can be asked whether they need to bring technologies and then consideration given as to how these would be plugged in to the infrastructure. In general, venues should connect with engineers and designers to support and innovate. This will prove mutually beneficial in the long run. For example, neural interfaces are increasingly entering the real world, but how many concerts include a capacity for their use? Less unusually, cochlear implants and hearing aids are a staple of hearing impairment, but their capacity as listening devices is rarely exploited by venues beyond the required ‘hearing loop’ compliance.

Conclusion
If music is to be a shared experience, we need to think about what ‘sharing’ means. Aural Diversity is committed to the live concert. Standard recordings and reproductions simply will not do, because they reinforce the requirement for a pair of otologically ‘normal’ ears that are perfectly balanced. So, listening to a broadcast or recording of an Aural Diversity concert is an unsatisfactory substitute for the experience of attending the live event. This emphasis on liveness should be welcome to concert venues, but to be credible it has to be more than just an exercise in making things a bit more accessible to deaf and hearing impaired people. It really is a complete re-think of what a ‘concert’ might be and how this shared experience might be collectively understood by people whose perceptual apparatus varies so widely.

‘Thirty Minutes’ for diplacusis piano

The Aural Diversity conference is fast approaching. This includes the second Aural Diversity concert, which is being curated by Duncan Chapman. I have been asked to contribute a performance on the diplacusis piano. The idea is that my performance should be done as an installation in the intervals between the more formal sections (three of them) of the concert. I like this format very much. The audience may come and go as they please, and there is less pressure on me and my hearing to deliver a typical concert performance.

Which brings me to the composition itself. Previous blog entries have detailed just how hard it is for me to compose for this instrument. The ‘diplacusis piano’ is a digital instrument that reproduces what I actually hear when I play a normal piano. In the low to mid register, notes are unevenly ‘split’ between the actual pitch and a detuned pitch, which may be anything up to a minor third flat. In the low register, I cannot hear fundamentals, which means that the overtone structures that I do hear are similarly pitch-distorted. High register is not too bad, although the top two octaves sound increasingly harsh. And the whole thing is unbalanced by the fact that my right ear has much less hearing than my left, and everything is heard through a wall of ever-changing tinnitus (which I do not reproduce on the instrument).

Not surprisingly, therefore, composing for this instrument is hard because it sounds like endlessly self-reflecting mirrors. It is psychologically and acoustically distressing. My objective is to make something beautiful out of this, so I persist. But it is very hard to do.

My solution this time is to compose thirty one-minute pieces that may be played in any order. This way, I only need to listen for short periods, and I can vary the range of listening required, which makes it easier for me. I am forcing the music (and the instrument) to adjust to what I can do, rather than trying to push myself to meet the demands of the instrument. I hope that this kinder, gentler approach will reflect in music that is more approachable for another listener. At any rate, if someone does not like a particular piece, they only have to wait one minute for something different. That’s aural diversity!

As before, I am using a visual composition method, involving a scrolling spectrogram (see below). However, I have now also included a Lissajous vectorscope, which shows the behaviour of the various notes within the stereo field. You can get the idea from this video.

Spectrogram display

The music is very diverse: everything from Feldman-esque pianissimo minimalism to textural builds, pretty melodies, tintinnabulations and even the occasional silent piece. The visual display will be projected throughout and a poster will explain what is going on to the audience.

‘Hear More’ seminar, Lima, Peru

On Thursday I had the pleasure of addressing GN Hearing’s ‘Hear More’ seminar in Lima, Peru, via Skype. It turns out I am “very famous in Latin America”, no doubt thanks to the Spanish version of this video. At any rate, when I was revealed onscreen, an enormous cheer went up from around 100 Latin American audiologists, so I suppose that must mean something!

I was interviewed by Paula Duarte for about an hour. I told my story first of all, and then went on to report on my recently completed research project into the consequences of Ménière’s disease for musicians. This included some very interesting findings, such as the fact that all the Ménière’s musicians I interviewed had diplacusis (even if they had never heard that word before) and the consequences of that and other symptoms for musical perception. The resulting paper should be published soon and I will include a link to it here when that happens.

I passed on to the audience some of the comments about hearing care and hearing technologies from the musicians I interviewed. I always have to tread carefully when discussing this, because musicians generally are rather frustrated by audiology and hearing aids, whereas audiologists tell me repeatedly that musicians can be very challenging clients! The way I describe it, there is a difference in expectations between musicians and audiologists. Musicians are generally disillusioned with the shortcomings of hearing aids, frustrated by the lack of consideration given to sound quality (rather than just amplification), disappointed that hearing tests restrict themselves to frequencies in the middle and upper range, and downhearted by an apparent lack of empathy. Audiologists, on the other hand, have to deal with an array of new and unfamiliar terminologies (the languages of music and hearing science are really quite different) and the fact that they have certain professional priorities which are not necessarily those of the musician/client. Their training does not fully equip them to deal with the kind of questions musicians frequently raise.

My solution to this, as always with interdisciplinary exchanges, is to try to find common areas and develop a shared language and understanding. This is not easy: audiological training does not generally study music (any more than ophthalmologists study painting) and musical training can be surprisingly indifferent to both sound and hearing. But there is evidently a will amongst audiologists to move towards better and more supportive care for musicians, which is great. With that in mind I shared a few musical aspirations:

Let’s give users more control of their hearing aids (e.g. full EQ, sound mixing, filtering capabilities);
Why can’t hearing aids reduce sound as well as amplifying it?
Improvements to localisation perception would be great, especially for those with uneven hearing loss;
Could a hearing aid correct diplacusis?
Please can we have benchmark consistency in everything that is heard!

Hearing aids are designed mainly for speech, as everybody knows, but increasing their potential for music is becoming more important all the time. I also suggested some more creative uses… how about a hearing aid that could identify birds when they sing in nearby trees? Or how about some kind of hearing aid-based Pokémon Go? Then it would be really cool to wear a hearing aid! AI seems to offer a way forward here.

After all this, I talked about the Aural Diversity project, which everybody found fascinating and very valuable, to judge by the comments I have received subsequently.

Questions from the floor focused on some of the technical details. They were very interested in the extent to which the hearing aids have really helped me to hear music again. This is something I followed up with some individuals subsequently in chat. The essence of my response is that I am still finding out. Listening to music without hearing aids is now more or less impossible for me. It is unpleasant and the pitch distortions turn it into a kind of acoustic mush. The hearing aids improve on this: they ‘flatten out’ the diplacusis – not by removing it, but by lessening it and making it more predictable. Also, the increased flow of information means that my brain can fill in the gaps and make better sense of the music. So, for example, pitches below the octave below middle C become more audible thanks to the increased upper headroom. This seems crazy: how can more high frequencies improve perception of missing low frequencies? I think it is because the available overtones provide my brain with enough information to be able to figure out what the bass note should be. This combines with the residual hearing in my good ear to create a pretty convincing bass note.
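
For those who like the arithmetic, this ‘filling in’ matches the well-known missing-fundamental effect, and a toy example shows how it works. The numbers below are illustrative choices of mine, not measurements:

```python
# Toy illustration of the missing-fundamental effect described above.
# Suppose the fundamental of A1 (55 Hz) is inaudible, but hearing aids
# make some upper partials audible. The common spacing of those
# partials still implies the missing bass note.
from math import gcd

audible_partials_hz = [110, 165, 220, 275]  # 2nd-5th harmonics of A1

# math.gcd accepts multiple arguments from Python 3.9 onwards.
implied_fundamental = gcd(*audible_partials_hz)
print(implied_fundamental)  # 55 -> the brain can infer the A1 that isn't heard
```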

However, I would not want to overstate the case here. Hearing aids create an artificial listening experience. I am aware that I am not hearing what is really there. And the sound is still pretty thin compared to natural acoustics. But I am so grateful for any meaningful sound input I can get. I become emotionally overwhelmed quite quickly, just listening through the music programme on my hearing aids, so thank you GN! Whereas I had given up listening to music altogether, I do now listen more, even though I tend to stick to fairly simple music that does not become too muddy. Also I cannot listen for long periods without making the tinnitus worse, so I have to be careful.

Aural Diversity: the first concert

The first Aural Diversity concert took place at the Old Barn, Kelston Roundhill, on Saturday July 6th.

The old barn pre-concert
Outside seating

This was an extraordinary and unique event of musical performances by aurally diverse people for an aurally diverse audience. The audiences included people who are deaf/blind, profoundly deaf, hearing impaired or autistic, people with tinnitus, and many other hearing types. The concert offered ways for all of them to access the music, including video and BSL signing, vibrating floors and haptic interaction, and streaming to radio headphones.

The concert was “relaxed”, meaning that people could sit anywhere, move about during performances, listen outside (the weather was great), or adopt any other listening mode that suited them. Our audiences for the two concerts took full advantage of these opportunities. For some individuals, it was a very powerful experience. One deaf/blind person said that, for the first time in his life, his cochlea had responded to music, as a result of combining the vibrating floor with the input stream.

Vibrating floors.
Vibrating floors in use.

My personal feelings are ones of great pride that we managed to pull off such an extraordinary event, and of great excitement about the possibilities of Aural Diversity as a project for the future. The next event is the conference and concert in November.

Here is a picture of some of the musicians and me rehearsing in the barn. In the background you can see our terrific BSL interpreter, Elizabeth Oliver.

The musical programme offered an enormous diversity of music that reflected the diversity of hearing approaches of the composers and musicians. I have severe hearing loss, tinnitus and diplacusis due to Ménière’s Disease. Anya experiences hyperacusis. John has notch losses and tinnitus. Matthew’s hearing was severely damaged by childhood meningitis and has worsened over time. Simon has lost much upper frequency hearing due to head trauma. Sam has a notch loss in the higher register. Ruth was deaf from birth and wears cochlear implants.

The concert began with Arbometallurgism by Anya Ustaszewski. This is an electroacoustic piece featuring some exquisitely delicate sounds. I listened on the roving headphones and it worked brilliantly outdoors.

Ruth Mallalieu and her husband Jonathan performed some jazz standards on clarinet and piano. Ruth’s profound deafness and cochlear implants mean that she has to transpose the music within a range that works for her limited hearing, and the accompaniment must be pared down. It was quite moving to witness her performance.

I played my own piece “Where two rivers meet, the water is never calm” for my specially constructed “diplacusis piano” (see previous postings). This used a rolling spectrogram to convey, both to myself and the audience, what I cannot hear. The sound of the instrument is quite disturbing, so this was a very minimal piece. The virtuosity was in the listening. It was extremely hard to compose and perform this piece.

Matthew then sang some lovely and sad Cornish folksongs that he had composed, accompanying himself on banjo and guitar and with lyrics signed by Elizabeth.

Simon Allen’s ‘Map Fragments’ introduced a fascinating range of sounds, including two home-built instruments, rubbed fishbowls, gongs, rustling leaves, rubbed surfaces, viol, piano, and percussion. Elizabeth also signed a poem which was understood only by those who could read BSL. There was a video accompaniment too. The piece made a memorable and lasting impression.

John Drever had recorded the musicians imitating hand-dryers; this recording was then played back while the musicians made further imitations of the imitations through radio microphones, allowing them to wander around the performance space and outside. The result was partly comical and partly mysterious, but underpinned by a powerful message about the way hand dryers are damaging the hearing of children, in particular.

‘Meditations on Hildegard’ by Matthew Spring featured the composer singing and playing hurdy-gurdy and handbells. The piece evokes the 12th Century music of Hildegard von Bingen, but adds a new interpretation. It worked brilliantly in an environment made largely of stone, and seemed to connect with the ancient history of Kelston Roundhill.

Anya Ustaszewski’s Vox Random is another electroacoustic piece, this time using vocal sounds. It evoked very effectively an attempt at communication across hearing limitations.

The matinée performance had continued for so long that I was obliged to drop the next piece, my “St. George’s Island Revisited”, but I was able to include it in the evening performance, when we had speeded up a bit. This little chorale has sentimental value for me and Matthew Spring, whose parents loved it. It evokes Looe Island. It was also the hardest piece to perform well, because it relied on quite conventional musical abilities which are normally taken for granted: the ability to hear in tune, to stay in time, to produce good tone. These are always challenged by hearing impairment. Nevertheless, it sounded good!

My own “Kelston Birdsong” gave people the opportunity to listen outside, or to watch a slideshow of the featured birds. Each bird triggers a particular musician who plays a call. Hearing the call, the other musicians play a response. This process repeats. All the birds, calls and responses sit within the comfortable hearing range of a particular musician. The piece feels quite profound and beautiful as it meditatively pays homage to these creatures that are steadily disappearing.

The final event was Sensonic by Sam Sturtivant. This is a low-frequency/sub bass installation that gets the most out of the vibrating floors. This was very popular with those who enjoy vibrations and ‘feeling’, rather than ‘hearing’ music and sound.

All in all this was a very successful first concert. There were things that went wrong or were less effective, of course. We had a couple of disturbing moments of feedback with the roving mikes in John’s piece, and an unexpected crash of an object falling over during the final rendition of “Kelston Birdsong”. The “silent disco” headphones worked very well, but unfortunately the ear pieces were not big enough to sit around hearing aids. “Streaming to hearing aids” was advertised but did not work (nobody asked for it, in fact). We really needed a loop system and some dedicated devices for other forms of streaming.

No doubt there will be more issues raised when we read the feedback forms from the audiences, but that was part of the purpose of this event: to learn and develop.

Here’s to many more Aural Diversity concerts! I very much hope that other people will get involved and start staging similar events elsewhere.

Composing for Aural Diversity

The first Aural Diversity concert is now approaching fast. I have composed three pieces for this concert.

“Where two rivers meet, the water is never calm” is written for my diplacusis piano and reflects my hearing without aids.

“St. George’s Island Revisited” and “Kelston Birdsong”, on the other hand, show what I can do when I wear my GN ReSound LiNX Quattro hearing aids.

This video explains the Aural Diversity concept, but I wanted to reflect on the composition of the three pieces and the challenges they involved in this blog post.

The main challenge for me as a composer with severe hearing impairment is whether to compose ‘normal’ music whose sound I can imagine (if not hear), or to compose music that reflects my hearing as it actually is.

“Where two rivers meet, the water is never calm” adopts the latter path and was extremely difficult to compose. First I had to build an instrument that accurately reproduces my hearing. This includes severe hearing loss, fluctuating tinnitus, and diplacusis (wherein you hear two different pitches when a single note is played). Composing for such an instrument is laborious and painful, because I hear my own diplacusis with diplacusis! It’s like endlessly receding mirrors. I developed a visual method using a scrolling spectrogram to enable me to match frequencies from the overtone structures of each sound. What I found was that very minimal music works best, because otherwise the results get muddy very quickly and sound simply like an out-of-tune piano. I have tried to make something beautiful out of what is quite a distressing flow of two different information streams, hence the title.

“Kelston Birdsong” is written with the hearing aids, which reduce the diplacusis and increase the audibility of the sounds as far as the Ménière’s will allow (lower pitches are still lost). I composed the piece to theatricalise the listening of the great musicians who are taking part in the concert: Simon Allen (percussion), John Drever (digital sound), Ruth Mallalieu (clarinet), Matthew Spring (viol), Anya Ustaszewski (flute). The way the piece works is that a birdsong is played from a pool of 35 songs. Each birdsong is assigned to one of the musicians and sits within their comfortable hearing range. On hearing the song, they play a ‘call’ from a sheet. When the rest of the band hear that musician’s call, they play a response from a menu that is geared to each individual’s hearing range. The process then repeats until all 35 birds have been heard.
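
For the curious, the call-and-response process can be sketched in a few lines of Python. This is emphatically not the actual performance system: the hearing ranges, the representative frequencies and the trigger mechanism below are simplified placeholders, and in the real piece the birdsongs are recordings and the calls and responses are played live from sheets:

```python
# A minimal sketch of the call-and-response process in "Kelston Birdsong".
# All numbers are placeholders, not the musicians' real hearing ranges.
import random

# Each musician has a comfortable hearing range (Hz), per the description above.
musicians = {
    "percussion": (200, 2000),
    "clarinet": (300, 1500),
    "viol": (150, 1000),
    "flute": (500, 4000),
    "digital sound": (100, 8000),
}

# 35 birdsongs, each with a representative frequency (invented here).
birdsongs = [("bird_%02d" % i, random.uniform(150, 3500)) for i in range(35)]

random.shuffle(birdsongs)
for bird, freq in birdsongs:
    # The birdsong triggers a musician who can comfortably hear it.
    caller = next(name for name, (lo, hi) in musicians.items() if lo <= freq <= hi)
    print(f"{bird}: {caller} plays a call")
    # The rest of the band answer with responses geared to their own ranges.
    for responder in musicians:
        if responder != caller:
            print(f"  {responder} plays a response within their range")
```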

The idea is that the audience can go for a walk outside during the piece, wearing radio headphones which stream the music to them. They can then hear the songs of the kinds of birds encountered on Kelston Roundhill.

Finally, “St. George’s Island Revisited” features Matthew Spring on viol. It is a simple but lovely tune for the entire ensemble to play. Matthew and I go back a long way together and I have always admired his great musicality and his cheerful disregard of his own hearing limitations, which he has had for much longer than I.

Anyway, I do hope we have a good audience for the concerts. There will be two performances, one at 2.30 pm and one at 6 pm. Do come along!

Creating a visual language for the diplacusis piano

In previous posts I have discussed the construction of a “diplacusis piano”, a digital instrument that reproduces accurately what I actually hear. Diplacusis is a phenomenon in which you hear two different pitches, one in each ear. In my case, the left ear is mostly in tune, whereas the right ear is mostly out of tune, by fairly random amounts.

The problem with composing for the resulting instrument is twofold: firstly, because of my hearing loss I cannot hear the (quiet) sounds it produces very well; secondly, what I do hear I hear with diplacusis, so diplacusis on diplacusis!

How then to compose for this instrument, given that I have only a poor idea of what a person with normal hearing would hear? My solution is to develop a visual language based on the spectrograms of each note. I have been steadily learning about the character of each spectrogram as I go.

Here are some stills of most of the keyboard. The image quality has been reduced for speed of upload, but they are clear enough for you to be able to see how they vary. It’s really intriguing. My idea now is to start to connect together the various overtones to begin to create some kind of “harmony”. You’ll see that I have put gridlines on each image to help with this.

These are static images (generated with Pierre Couprie’s wonderful EAnalysis software). In the live performance, I will work with spectrograms that continuously evolve over time. This, I hope, will act both as a kind of score and, for listeners who have even less hearing than I do, as a visual version of the music that can be enjoyed without necessarily hearing everything.
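
For anyone who wants to experiment with the same idea, here is a rough Python sketch of a spectrogram with overtone gridlines. It uses a synthesised stand-in for one note, since the real images come from piano samples analysed in EAnalysis; the pitches, decay time and overtone counts are all illustrative:

```python
# A sketch of the spectrogram-with-gridlines idea, using a synthesised
# two-pitch "split" note as a stand-in for a real diplacusis piano sample.
import numpy as np
import matplotlib.pyplot as plt

sr = 22050
t = np.linspace(0, 3, 3 * sr, endpoint=False)
f_left, f_right = 261.6, 247.0   # C4 and a roughly semitone-flat twin (illustrative)
decay = np.exp(-t / 1.2)         # piano-like decay envelope

# Sum a few overtones of each pitch to mimic the split note.
note = sum(np.sin(2 * np.pi * k * f * t) / k
           for f in (f_left, f_right) for k in range(1, 6)) * decay

plt.specgram(note, NFFT=2048, Fs=sr, noverlap=1024)
plt.ylim(0, 3000)
# Gridlines at the expected overtone frequencies, as in the stills above.
for k in range(1, 6):
    plt.axhline(k * f_left, color="white", linewidth=0.3)
plt.xlabel("time (s)")
plt.ylabel("frequency (Hz)")
plt.show()
```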

So, here is a selection of the keyboard, just to give you an idea:

And here are just two notes for comparison at higher quality. You can see how different they are in terms of both structure and behaviour over time. This gives me a starting point for composition.

C4 (middle C)
C5

Building the “Diplacusis Piano”, Part 3/3: Making Music!

In the last two posts (here and here) I have described the process of building a digital “piano” that reproduces my diplacusis. Having constructed the instrument with the help of Professor Craig Vear, I have begun to muse on the creative possibilities that this has revealed.

It is immediately clear that this is not really a piano at all, despite having piano sounds as its raw material. If I play a common chord, or attempt to play some classical piano music, all one hears is an out-of-tune piano. It’s a bit like a honky-tonk but worse – some kind of abandoned instrument. Interestingly, the brain filters out the “rubbish” from the signal, and the out-of-tuneness quickly recedes until one seems to be hearing a normal piano again.

So, to avoid sounding like I’m just trying to write piano music for a bad instrument, I must find a new way of thinking about composing for this diplacusis piano. This echoes my experience with diplacusis and hearing loss generally. I need to find new ways of listening if I am to appreciate and enjoy music now. My aim is to create something beautiful, despite the supposed limitations imposed by my condition.

Craig was keen to describe how each note, each adjusted sample, made a different sonic journey lasting 10 seconds. What he could hear was a fascinating mixture of rhythmical beats, emerging harmonics, clusters of partials, percussive noise, all evolving over time. Every single note has its own character, which he was able to describe to me in some detail, waving his arms expressively as he did so. So this is not a piano, but rather an 88-note composition with a total duration of just under 15 minutes!

The problem is, of course, that I cannot hear them! To me, each sample lasts about 3 seconds, and I do not trust what I hear even within that time frame. So, how can I possibly write music for this instrument if I cannot hear it properly?

Once again, new digital technologies come to my aid. Firstly, there are my wonderful GN ReSound LiNX Quattro hearing aids. During the building of the instrument, I removed the hearing aids, so as to capture as accurately as possible my diplacusis. Now, by reinserting them, I can gain a much better impression of the sounds of the instrument. I can hear them for longer and understand the complex shifting interactions between the higher partials. However, the hearing aids alone are insufficient, especially in the lower registers. Even with my unvented mould, which prevents sound escaping from my right ear, the low-end response is not enough.

As we worked on the instrument, we used a spectrogram to understand what was happening in each sample. This was fascinating, because it conveyed rich information about each note’s “story”, showing the strange rhythmic pulsations that arise from beats, the emergence and withdrawal of various overtones, the intensity of different registers, and so on.

So, my way of composing is becoming clear: I must familiarise myself with the story that each of my 88 mini compositions tells. Then I can string these together in ways which create a convincing musical narrative. There may be many such narratives – that remains to be seen – but each will have its own unique and engaging storyline that listeners can perceive.

To help them in this, I plan to add a video component to the performance, showing the spectrograms as they change, any musical descriptions (in text) or notations that are relevant, and perhaps a more imaginative interpretative layer. Multiple windows on a single screen, conveying the story of the piece.

This will help people in the Aural Diversity concert (where this will be premiered) whose hearing diverges from my own. They will be able to experience the composition in several ways at once. My performance will not resemble a traditional piano recital much. The keys on the keyboard are merely triggers for sonic navigations to begin. But it will hopefully prove engaging as I convey the emotional nature of the discoveries described in these posts and combine that with an informative and stimulating visual display.

Building the “Diplacusis Piano”, Part 2/3: In the studio

In the previous post I described the background to this project to construct a digital piano that renders my diplacusis audible to others. This post describes my studio session with Craig Vear, during which we assembled the entire instrument.

We worked in the Courtyard Studio at De Montfort University, which was the very first space I constructed when I started up the Music Technology programme back in 1997. Craig Vear is a former student of mine who is now a Professor. I’ve known him from the days of the BA Performing Arts (Music) degree at Leicester Polytechnic, where I started my academic career in 1986. It seems that past investments are repaying me handsomely! Here’s Craig in the studio, attempting to describe to me how one of the notes unfolds:

First we created middle C (C4) using Bösendorfer samples. This was something I had already done in my previous attempt, but the difference this time is that Craig’s ears were able to hear the interesting journey the difference tones take as the edited and filtered sample unfolds. This is the first clue about the creative possibilities that will subsequently emerge.

We matched the extent of my hearing loss in the right channel, in particular, and panned the stereo channels hard left and hard right. We introduced some filters to take out the lower frequencies as appropriate (it gets much more extreme in the lower registers) and some high ones too, using my audiogram as a guide. Finally, we detuned the samples. In most cases this was an adjustment only to the right channel, but sometimes it also entailed adjusting the left. Detuning meant converting frequency information in Hertz into cents (i.e. hundredths of a semitone). It’s a bit hard to make out in this photo, but the two high screens show an online hertz/cents converter on the left and my original diplacusis chart on the right. The desktop screens show the samples on the left and the filters and tuning information on the right.
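
For the technically minded, the Hertz-to-cents conversion is the standard one: cents = 1200 × log2(f_detuned / f_reference). Here is a minimal sketch; the example values are illustrative, not entries from my actual chart:

```python
# The standard Hz-to-cents conversion used in the detuning step above.
# A cent is a hundredth of an equal-tempered semitone.
from math import log2

def hz_to_cents(f_reference: float, f_detuned: float) -> float:
    """Signed detune in cents between a reference pitch and its detuned twin."""
    return 1200 * log2(f_detuned / f_reference)

print(round(hz_to_cents(261.6, 246.9)))  # C4 heard about a semitone flat: ~ -100
print(round(hz_to_cents(261.6, 220.0)))  # a minor third flat: ~ -300
```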

I had already decided that none of the sounds will rise above piano (i.e. soft). This is because my hyperacusis also means that I find any loud sounds distressing nowadays. Having tried to play a conventional piano recently, I realised that the mechanical sound of hammers hitting strings is too painful for me, regardless of the diplacusis. So this will be a soft and gentle instrument.

So, to give an idea what this sounds like, here is the original sample plus its “diplacusis” version:

Untreated C4
Diplacusis-adjusted C4

We repeated this process across the entire 88-note range of the piano, following the findings described in the previous post. Here are some more C-diplacusis notes, to give an idea of the sheer range and variety of sounds that resulted:

C1
C2
C3
C5
C6 (N.B. – this is unaffected by diplacusis)
C7
C8

The final step in the building process was to create an instrument in Logic (my sequencer of choice) using the EXS24 sampler. This maps the various samples across the whole instrument. In the range that I had specified using my singing method, we made individual samples for each note. In the other ranges we transposed samples up or down across a minor 3rd.
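
For anyone curious about the mapping logic, here is a rough Python sketch of how such key zones might be expressed. The real mapping was built inside EXS24; the exact zone widths and boundaries below are my illustrative assumptions, not the settings we actually used:

```python
# A sketch of the sample-to-key mapping described above. MIDI note
# numbers: F#2 = 42, C4 = 60 (middle C). Zone grouping is an assumption.
def zone_for_key(key: int) -> tuple[int, int]:
    """Map a MIDI key to (sample_root_key, transpose_in_semitones)."""
    if 42 <= key <= 60:            # F#2-C4, the individually measured range:
        return key, 0              # one individually built sample per note
    # Elsewhere, one sample serves a four-key zone spanning a minor 3rd.
    root = key - (key % 4) + 1     # hypothetical zone boundaries
    return root, key - root

for key in (36, 50, 70):
    root, transpose = zone_for_key(key)
    print(f"key {key}: sample {root}, transpose {transpose:+d} semitones")
```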

Building the “Diplacusis Piano”, Part 1/3: Background

Introduction

In a previous post I described my struggles with diplacusis and my intention to build a “piano” that could reproduce the sounds that I actually hear for the benefit (?) of others. This series of posts will document the progress I have made so far and the exciting compositional possibilities that are opening up as a result.

Diplacusis is a disturbing phenomenon in which the two ears hear a given musical note at two different pitches. It is yet one more from the smorgasbord of symptoms associated with Ménière’s Disease (see this post for a detailed account of my Ménière’s experiences), alongside vertigo, hearing loss, tinnitus and aural fullness.

I decided to try to build a musical instrument that would convey to others what this sounds like. I wanted this to offer me a creative opportunity to make some beautiful music. What it is in fact providing is not just that, but a whole new direction for my composition.

This post is a detailed account of the first steps in building this instrument. It is necessarily a digital instrument: there is no way this could be done using traditional technologies. I have been greatly helped by my GN ReSound LiNX Quattro hearing aids and by my friend, the composer Professor Craig Vear, who provided not just technical fluency in the studio and an otologically “normal” pair of ears, but also the ability to describe each sound to me as it emerged from this new instrument.

Starting Points

I decided to start with a piano simply because that is the instrument I used to play back in the days when I regularly made music. Piano sounds also have a pleasing decay which I instinctively felt would work well with this phenomenon. Nobody wants to listen to sustained diplacusis!

In my previous scientific study of my own diplacusis, I mapped the differences in pitch across my own singing range by laboriously stopping the good ear, singing the pitch I heard, measuring its frequency in Hertz, and comparing it with the correct pitch. This gave me a verified chart from F#2 (~92Hz) to C4 (~261Hz). To understand what comes next, you need to see my audiogram:

Andrew Hugill’s audiogram (July 2017)

This one is a little bit out of date, but my hearing has not changed much since then. Observe that (as is usual in audiology) the right and left ears are reversed in the image. You will also notice that audiology takes no interest in frequencies below 125Hz or above 8kHz. This is because audiology is mainly interested in speech and, frustratingly, takes little account of music.

Anyway, you will see quite clearly that my right ear (in red) is way below my left ear. This is what severe hearing loss looks like. My left ear has normal hearing (above 10dB) in the region between 1500 Hz and 4000 Hz. This is my salvation in speech situations. But there is quite a lot of hearing loss around that. Nevertheless, my pitch perception in that ear is tolerable.

One other thing to notice is that the lower frequencies show a marked decline in both ears. This is typical of Ménière’s Disease, where the bass disappears first. By contrast, in age-related hearing loss (presbycusis) the high frequencies deteriorate first, which is why so many hearing aids concentrate on the high end.

First efforts

Now you can see why the next step in preparing for the instrument was so daunting and has taken me many months of struggle to figure out. I could no longer rely on either my audiogram or my singing voice to help me understand my own pitch perception, because the rest of the piano keyboard is simply out of range. To make matters worse, every time I tried it was like working in a hall of endlessly reflecting mirrors. I would listen to my diplacusis with my diplacusis… it was very uncomfortable and very tiring.

So, with considerable effort, I set about understanding my own hearing by trial and error. Gradually a number of key features emerged (summarised in the code sketch after this list):

  1. There is an octave between F#5 (~740Hz) and F#6 (~1480Hz) where there is no diplacusis at all. In other words, I hear a piano just like a normal piano, as anyone else would, albeit with greatly reduced hearing in one ear.
  2. In the range above that, the diplacusis gradually reappears, getting worse the higher up you go. However, since the piano sounds pretty metallic in that register anyway the effect is not as disturbing as you might expect.
  3. The range from C4 (~261Hz) down to F2 (~87Hz) is affected by random amounts of diplacusis as per the chart from the earlier study.
  4. Below E2 (~82Hz) this random diplacusis effect continues, but now a new phenomenon enters, presumably resulting from the general loss in low frequency hearing. First the fundamental frequency of each note, and then the first and second partials, gradually disappear, leaving a thudding sound and a collection of higher overtone frequencies. This complex spectrum is then subject to precisely the same diplacusis that affects the higher register, resulting in a perceptible shift in spectrum but no discernible change in pitch.
  5. And this is, I think, a novel finding: every diplacusis-induced detuning is flat! This seems to contradict the received wisdom that diplacusis notes are sharp. I need to do more research into this.
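
Here is the sketch mentioned above: a rough piecewise model encoding the five findings. The random detune amounts are placeholders for my measured chart, and the region between C4 and F#5, which I have not fully mapped, is treated like the range below it; both are assumptions:

```python
# A rough piecewise model of the five findings above (right ear).
# The random detune amounts are placeholders, not my measured values.
import random

def diplacusis_model(f_hz: float) -> dict:
    """Roughly describe how a pitch at f_hz is perceived."""
    if 740 <= f_hz <= 1480:            # finding 1: F#5-F#6, no diplacusis
        return {"detune_cents": 0, "fundamental_audible": True}
    if f_hz > 1480:                    # finding 2: diplacusis creeps back in
        severity = min((f_hz - 1480) / 3000, 1.0)
        return {"detune_cents": -300 * severity, "fundamental_audible": True}
    if f_hz >= 87:                     # findings 3 and 5: random, always flat
        return {"detune_cents": -random.uniform(0, 300), "fundamental_audible": True}
    # finding 4: below E2, same flat detune but fundamental and low partials lost
    return {"detune_cents": -random.uniform(0, 300), "fundamental_audible": False}

print(diplacusis_model(1000))   # inside the clean octave
print(diplacusis_model(65.4))   # C2: detuned overtones, missing fundamental
```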

Given the difficulties of translating the above into any kind of instrument, I eventually had to admit defeat and seek help. This is where Craig Vear enters the picture and the account of our building session yesterday will be the subject of my next post.