‘Thirty Minutes’ for diplacusis piano

The Aural Diversity conference is fast approaching. This includes the second Aural Diversity concert, which is being curated by Duncan Chapman. I have been asked to contribute a performance on the diplacusis piano. The idea is that my performance will run as an installation during the intervals between the three more formal sections of the concert. I like this format very much. The audience may come and go as they please, and there is less pressure on me and my hearing to deliver a typical concert performance.

Which brings me to the composition itself. Previous blog entries have detailed just how hard it is for me to compose for this instrument. The ‘diplacusis piano’ is a digital instrument that reproduces what I actually hear when I play a normal piano. In the low to mid register, notes are unevenly ‘split’ between the actual pitch and a detuned pitch, which may be anything up to a minor third flat. In the low register, I cannot hear fundamentals, which means that the overtone structures that I do hear are similarly pitch-distorted. High register is not too bad, although the top two octaves sound increasingly harsh. And the whole thing is unbalanced by the fact that my right ear has much less hearing than my left, and everything is heard through a wall of ever-changing tinnitus (which I do not reproduce on the instrument).

Not surprisingly, therefore, composing for this instrument is hard because it sounds like endlessly self-reflecting mirrors. It is psychologically and acoustically distressing. My objective is to make something beautiful out of this, so I persist. But it is very hard to do.

My solution this time is to compose thirty one-minute pieces that may be played in any order. This way, I only need to listen for short periods, and I can vary the range of listening required, which makes it easier for me. I am forcing the music (and the instrument) to adjust to what I can do, rather than trying to push myself to meet the demands of the instrument. I hope that this kinder, gentler approach will reflect in music that is more approachable for another listener. At any rate, if someone does not like a particular piece, they only have to wait one minute for something different. That’s aural diversity!

As before, I am using a visual composition method involving a scrolling spectrogram (see below). However, I have now also included a Lissajous vectorscope, which shows the behaviour of the various notes within the stereo field. You can get the idea from this video.

Spectrogram display

The music is very diverse: everything from Feldman-esque pianissimo minimalism to textural builds, pretty melodies, tintinnabulations and even the occasional silent piece. The visual display will be projected throughout and a poster will explain what is going on to the audience.

‘Hear More’ seminar, Lima, Peru

On Thursday I had the pleasure of addressing GN Hearing’s ‘Hear More’ seminar in Lima, Peru, via Skype. It turns out I am “very famous in Latin America”, no doubt thanks to the Spanish version of this video. At any rate, when I was revealed onscreen, an enormous cheer went up from around 100 Latin American audiologists, so I suppose that must mean something!

I was interviewed by Paula Duarte for about an hour. I told my story first of all, and then went on to report on my recently completed research project into the consequences of Ménière’s disease for musicians. This included some very interesting findings, such as the fact that all the Ménière’s musicians I interviewed had diplacusis (even if they had never heard that word before) and the consequences of that and other symptoms for musical perception. The resulting paper should be published soon and I will include a link to it here when that happens.

I passed on to the audience some of the comments about hearing care and hearing technologies from the musicians I interviewed. I always have to tread carefully when discussing this, because musicians generally are rather frustrated by audiology and hearing aids, whereas audiologists tell me repeatedly that musicians can be very challenging clients! The way I describe it, there is a difference in expectations between musicians and audiologists. Musicians are generally disillusioned with the shortcomings of hearing aids, frustrated by the lack of consideration given to sound quality (rather than just amplification), disappointed that hearing tests restrict themselves to frequencies in the middle and upper range, and downhearted by an apparent lack of empathy. Audiologists, on the other hand, have to deal with an array of new and unfamiliar terminologies (the languages of music and hearing science are really quite different) and the fact that they have certain professional priorities which are not necessarily those of the musician/client. Their training does not fully equip them to deal with the kind of questions musicians frequently raise.

My solution to this, as always with interdisciplinary exchanges, is to try to find common areas and develop a shared language and understanding. This is not easy: audiological training does not generally study music (any more than ophthalmologists study painting) and musical training can be surprisingly indifferent to both sound and hearing. But there is evidently a will amongst audiologists to move towards better and more supportive care for musicians, which is great. With that in mind I shared a few musical aspirations:

Let’s give users more control of their hearing aids (e.g. full EQ, sound mixing, filtering capabilities);
Why can’t hearing aids reduce sound as well as amplifying it?
Improvements to localisation perception would be great, especially for those with uneven hearing loss;
Could a hearing aid correct diplacusis?
Please can we have benchmark consistency in everything that is heard!

Hearing aids are designed mainly for speech, as everybody knows, but increasing their potential for music is becoming more important all the time. I also suggested some more creative uses…how about a hearing aid that could identify birds when they sing in nearby trees? Or how about some kind of hearing aid-based Pokémon Go? Then it would be really cool to wear a hearing aid! AI seems to offer a way forwards here.

After all this, I talked about the Aural Diversity project, which everybody found fascinating and very valuable, to judge by the comments I have received subsequently.

Questions from the floor focused on some of the technical details. They were very interested in the extent to which the hearing aids have really helped me to hear music again. This is something I followed up with some individuals subsequently in chat. The essence of my response is that I am still finding out. Listening to music without hearing aids is now more or less impossible for me. It is unpleasant and the pitch distortions turn it into a kind of acoustic mush. The hearing aids improve on this: they ‘flatten out’ the diplacusis – not by removing it, but by lessening it and making it more predictable. Also, the increased flow of information means that my brain can fill in the gaps and make better sense of the music. So, for example, pitches below the octave below middle C become more audible thanks to the increased upper headroom. This seems crazy: how can more high frequencies improve perception of missing low frequencies? I think it is because the available overtones provide my brain with enough information to be able to figure out what the bass note should be. This combines with the residual hearing in my good ear to create a pretty convincing bass note.

However, I would not want to overstate the case here. Hearing aids create an artificial listening experience. I am aware that I am not hearing what is really there. And the sound is still pretty thin compared to natural acoustics. But I am so grateful for any meaningful sound input I can get. I become emotionally overwhelmed quite quickly, just listening through the music programme on my hearing aids, so thank you GN! Whereas I had given up listening to music altogether, I do now listen more, even though I tend to stick to fairly simple music that does not become too muddy. Also I cannot listen for long periods without making the tinnitus worse, so I have to be careful.

Composing for Aural Diversity

The first Aural Diversity concert is now approaching fast. I have composed three pieces for this concert.

“Where two rivers meet, the water is never calm” is written for my diplacusis piano and reflects my hearing without aids.

“St. George’s Island Revisited” and “Kelston Birdsong”, on the other hand, show what I can do when I wear my GNResound Linx Quattro hearing aids.

This video explains the Aural Diversity concept, but I wanted to reflect on the composition of the three pieces and the challenges they involved in this blog post.

The main challenge for me as a composer with severe hearing impairment is whether to compose ‘normal’ music whose sound I can imagine (if not hear), or to compose music that reflects my hearing as it actually is.

“Where two rivers meet, the water is never calm” adopts the latter path and was extremely difficult to compose. First I had to build an instrument that accurately reproduces my hearing. This includes severe hearing loss, fluctuating tinnitus, and diplacusis (wherein you hear two different pitches when a single note is played). Composing for such an instrument is laborious and painful, because I hear my own diplacusis with diplacusis! It’s like endlessly receding mirrors. I developed a visual method using a scrolling spectrogram to enable me to match frequencies from the overtone structures of each sound. What I found was that very minimal music works best, because otherwise the results get muddy very quickly and sound simply like an out-of-tune piano. I have tried to make something beautiful out of what is quite a distressing flow of two different information streams, hence the title.

“Kelston Birdsong” is written with the hearing aids, which reduce the diplacusis and increase the audibility of the sounds as far as the Ménière’s will allow (lower pitches are still lost). I composed the piece to theatricalise the listening of the great musicians who are taking part in the concert: Simon Allen (percussion), John Drever (digital sound), Ruth Mallalieu (clarinet), Matthew Spring (viol), Anya Ustaszewski (flute). The way the piece works is that a birdsong is played from a pool of 35 songs. Each birdsong is assigned to one of the musicians and sits within their comfortable hearing range. On hearing the song, they play a ‘call’ from a sheet. When the rest of the band hear that musician’s call, they play a response from a menu that is geared to each individual’s hearing range. The process then repeats until all 35 birds have been heard.

The idea is that the audience can go for a walk outside during the piece, wearing radio-controlled headphones which stream the music to them. They can then hear the songs of the kinds of birds encountered on Kelston Roundhill.
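
The call-and-response process behind “Kelston Birdsong” can be sketched in code. This is only an illustration of the logic described above: the song names, pitch values and hearing ranges below are invented, not the actual materials of the piece.

```python
import random

# Hypothetical pool of 35 birdsongs (the real piece uses recordings)
BIRDSONGS = [f"bird_{n:02d}" for n in range(35)]

# Each musician covers a comfortable hearing range in Hz (assumed values)
MUSICIANS = {
    "percussion": (200, 8000),
    "digital sound": (50, 16000),
    "clarinet": (150, 2000),
    "viol": (60, 1200),
    "flute": (250, 2500),
}

def assign(song_pitch_hz, musicians):
    """Pick a musician whose comfortable range contains the song's pitch."""
    candidates = [name for name, (lo, hi) in musicians.items()
                  if lo <= song_pitch_hz <= hi]
    return random.choice(candidates) if candidates else None

def perform(songs, musicians):
    """Play each birdsong once: the assigned musician plays a 'call',
    then the rest of the band respond, until all songs are heard."""
    events = []
    random.shuffle(songs)
    for song in songs:
        pitch = random.uniform(250, 2000)  # placeholder pitch per song
        leader = assign(pitch, musicians)
        responders = [m for m in musicians if m != leader]
        events.append((song, leader, responders))
    return events
```

Running `perform(list(BIRDSONGS), MUSICIANS)` yields one call-and-response event per bird, which is the overall shape of the piece.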

Finally, “St. George’s Island Revisited” features Matthew Spring on viol. It is a simple but lovely tune for the entire ensemble to play. Matthew and I go back a long way together and I have always admired his great musicality and his cheerful disregard of his own hearing limitations, which he has had for much longer than I.

Anyway, I do hope we have a good audience for the concerts. There will be two performances, one at 2.30 and one at 6 pm. Do come along!

Creating a visual language for the diplacusis piano

In previous posts I have discussed the construction of a “diplacusis piano”, a digital instrument that reproduces accurately what I actually hear. Diplacusis is a phenomenon in which you hear two different pitches, one in each ear. In my case, the left ear is mostly in tune, whereas the right ear is mostly out of tune, by fairly random amounts.

The problem with composing for the resulting instrument is twofold: firstly, because of my hearing loss I cannot hear the (quiet) sounds it produces very well; secondly, what I do hear I hear with diplacusis, so diplacusis on diplacusis!

How then to compose for this instrument, given that I have only a poor idea of what a person with normal hearing would hear? My solution is to develop a visual language based on the spectrograms of each note. I have been steadily learning about the character of each spectrogram as I go.
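
For anyone curious to experiment, note-by-note spectrograms of this kind can be generated with standard tools. Here is a minimal sketch using NumPy and SciPy, with a synthetic stand-in for a piano sample (the real input would be one of the instrument’s recorded notes):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 44100  # sample rate

# Stand-in for a recorded note: a decaying tone with a few overtones
t = np.arange(0, 2.0, 1 / fs)
note = sum(np.sin(2 * np.pi * 261.6 * k * t) / k for k in range(1, 5))
note *= np.exp(-1.5 * t)  # piano-like decay envelope

# Spectrogram: a matrix of frequency bins x time frames,
# the raw material for the visual language described above
f, times, Sxx = spectrogram(note, fs=fs, nperseg=4096, noverlap=3072)

# Overtone "gridlines": the partials we expect to see for this note
gridlines = [261.6 * k for k in range(1, 5)]
```

Plotting `Sxx` (e.g. with matplotlib’s `pcolormesh`) and overlaying the gridlines gives an image comparable to the stills shown here.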

Here are some stills of most of the keyboard. The image quality has been reduced for speed of upload, but they are clear enough for you to be able to see how they vary. It’s really intriguing. My idea now is to start to connect together the various overtones to begin to create some kind of “harmony”. You’ll see that I have put gridlines on each image to help with this.

These are static images (generated with Pierre Couprie’s wonderful EAnalysis software). In the live performance, I will work with spectrograms that continuously evolve over time. This, I hope, will act both as a kind of score and, for listeners who have even less hearing than I do, as a visual version of the music that can be enjoyed without necessarily hearing everything.

So, here is a selection of the keyboard, just to give you an idea:

And here are just two notes for comparison at higher quality. You can see how different they are in terms of both structure and behaviour over time. This gives me a starting point for composition.

C4 (middle C)

Building the “Diplacusis Piano”, Part 3/3: Making Music!

In the last two posts (here and here) I have described the process of building a digital “piano” that reproduces my diplacusis. Having constructed the instrument with the help of Professor Craig Vear, I have begun to muse on the creative possibilities that this has revealed.

It is immediately clear that this is not really a piano at all, despite having piano sounds as its raw material. If I play a common chord, or attempt to play some classical piano music, all one hears is an out-of-tune piano. It’s a bit like a honky-tonk but worse – some kind of abandoned instrument. Interestingly, the brain soon filters out the “rubbish” from the signal, and the out-of-tuneness recedes until one hears a normal piano again.

So, to avoid sounding like I’m just trying to write piano music for a bad instrument, I must find a new way of thinking about composing for this diplacusis piano. This echoes my experience with diplacusis and hearing loss generally. I need to find new ways of listening if I am to appreciate and enjoy music now. My aim is to create something beautiful, despite the supposed limitations imposed by my condition.

Craig was keen to describe how each note, each adjusted sample, made a different sonic journey lasting 10 seconds. What he could hear was a fascinating mixture of rhythmical beats, emerging harmonics, clusters of partials, percussive noise, all evolving over time. Every single note has its own character, which he was able to describe to me in some detail, waving his arms expressively as he did so. So this is not a piano, but rather an 88-note composition with a total duration of just under 15 minutes!

The problem is, of course, that I cannot hear them! To me, each sample lasts about 3 seconds, and I do not trust what I hear even within that time frame. So, how can I possibly write music for this instrument if I cannot hear it properly?

Once again, new digital technologies come to my aid. Firstly, there are my wonderful GNResound Linx Quattro hearing aids. During the building of the instrument, I removed the hearing aids, so as to capture as accurately as possible my diplacusis. Now, by reinserting them, I can gain a much better impression of the sounds of the instrument. I can hear them for longer and understand the complex shifting interactions between the higher partials. However, the hearing aids alone are insufficient, especially in the lower registers. Even with my unvented mould, which prevents sound escaping from my right ear, the low end response is not enough.

As we worked on the instrument, we used a spectrogram to understand what was happening in each sample. This was fascinating, because it conveyed rich information about each note’s “story”, showing the strange rhythmic pulsations that arise from beats, the emergence and withdrawal of various overtones, the intensity of different registers, and so on.

So, my way of composing is becoming clear: I must familiarise myself with the story that each of my 88 mini compositions tells. Then I can string these together in ways which create a convincing musical narrative. There may be many such narratives – that remains to be seen – but each will have its own unique and engaging storyline that listeners can perceive.

To help them in this, I plan to add a video component to the performance, showing the spectrograms as they change, any musical descriptions (in text) or notations that are relevant, and perhaps a more imaginative interpretative layer. Multiple windows on a single screen, conveying the story of the piece.

This will help people in the Aural Diversity concert (where this will be premiered) whose hearing diverges from my own. They will be able to experience the composition in several ways at once. My performance will not resemble a traditional piano recital much. The keys on the keyboard are merely triggers for sonic navigations to begin. But it will hopefully prove engaging as I convey the emotional nature of the discoveries described in these posts and combine that with an informative and stimulating visual display.

Building the “Diplacusis Piano”, Part 2/3: In the studio

In the previous post I described the background to this project to construct a digital piano that renders my diplacusis audible to others. This post describes my studio session with Craig Vear, during which we assembled the entire instrument.

We worked in the Courtyard Studio at De Montfort University, which was the very first space I constructed when I started up the Music Technology programme back in 1997. Craig Vear is a former student of mine who is now a Professor. I’ve known him from the days of the BA Performing Arts (Music) degree at Leicester Polytechnic, where I started my academic career in 1986. It seems that past investments are repaying me handsomely! Here’s Craig in the studio, attempting to describe to me how one of the notes unfolds:

First we created middle C (C4) using Bösendorfer samples. This was something I had already done in my previous attempt, but the difference this time is that Craig’s ears were able to hear the interesting journey the difference tones take as the edited and filtered sample unfolds. This is the first clue about the creative possibilities that will subsequently emerge.

We matched the extent of my hearing loss in the right channel, in particular, and panned the stereo channels hard left and hard right. We introduced some filters to take out the lower frequencies as appropriate (it gets much more extreme in the lower registers) and some high ones too, using my audiogram as a guide. Finally, we detuned the samples. In most cases this was an adjustment only to the right channel, but sometimes it also entailed adjusting the left. Detuning meant converting frequency information in Hertz into cents (i.e. hundredths of a semitone). It’s a bit hard to make out in this photo, but the two high screens show an online hertz/cents converter on the left and my original diplacusis chart on the right. The desktop screens show the samples on the left and the filters and tuning information on the right.
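
For anyone wanting to reproduce the hertz-to-cents conversion, the standard formula is cents = 1200 × log2(f₁/f₂). A minimal sketch, using one value from my diplacusis chart as the example:

```python
import math

def hz_to_cents(f_heard, f_true):
    """Deviation of a heard pitch from the true pitch, in cents
    (hundredths of an equal-tempered semitone). Negative = flat."""
    return 1200 * math.log2(f_heard / f_true)

# Example from the diplacusis chart: F#2 (92 Hz) heard at 86 Hz
# in the right ear comes out just over a semitone flat.
deviation = hz_to_cents(86, 92)  # about -117 cents
```

This is the same arithmetic the online converter performs; detuning a sample by `deviation` cents reproduces the measured pitch shift.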

I had already decided that none of the sounds will rise above piano (i.e. soft). This is because my hyperacusis also means that I find any loud sounds distressing nowadays. Having tried to play a conventional piano recently, I realised that the mechanical sound of hammers hitting strings is too painful for me, regardless of the diplacusis. So this will be a soft and gentle instrument.

So, to give an idea what this sounds like, here is the original sample plus its “diplacusis” version:

Untreated C4
Diplacusis-adjusted C4

We repeated this process across the entire 88-note range of the piano, following the findings described in the previous post. Here are some more C-diplacusis notes, to give an idea of the sheer range and variety of sounds that resulted:

C6 (N.B. – this is unaffected by diplacusis)

The final step in the building process is to create an instrument in Logic (my sequencer of choice) using the EXS24 sampler. This maps the various samples across the whole instrument. In the range that I had specified using my singing method, we made individual samples for each note. In the other ranges we transposed samples up or down by as much as a minor 3rd.
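
The mapping a sampler performs can be sketched as follows. This is not the EXS24’s actual implementation, just an illustration of the logic: each key either triggers its own sample or a neighbouring sample resampled by at most a minor 3rd, where transposing by n semitones means changing the playback rate by 2^(n/12).

```python
def playback_rate(semitones):
    """Rate change that transposes a sample by the given interval."""
    return 2 ** (semitones / 12)

def build_keymap(sampled_keys, lowest=21, highest=108):
    """Map every MIDI key (A0=21 .. C8=108) to the nearest sampled
    key, provided it lies within a minor 3rd (3 semitones)."""
    keymap = {}
    for key in range(lowest, highest + 1):
        nearest = min(sampled_keys, key=lambda s: abs(s - key))
        if abs(nearest - key) <= 3:
            keymap[key] = (nearest, playback_rate(key - nearest))
    return keymap

# e.g. hypothetical samples recorded every 6 semitones still
# cover all 88 keys, since no key is more than 3 semitones away
keymap = build_keymap(sampled_keys=list(range(21, 109, 6)))
```

The design choice here mirrors the text: individually sampled notes get a rate of exactly 1.0, while the remaining keys borrow a neighbour within a minor 3rd.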

Building the “Diplacusis Piano”, Part 1/3: Background


In a previous post I described my struggles with diplacusis and my intention to build a “piano” that could reproduce the sounds that I actually hear for the benefit (?) of others. This series of posts will document the progress I have made so far and the exciting compositional possibilities that are opening up as a result.

Diplacusis is a disturbing phenomenon in which the two ears hear a given musical note at two different pitches. It is yet one more from the smorgasbord of symptoms associated with Ménière’s Disease (see this post for a detailed account of my Ménière’s experiences), alongside vertigo, hearing loss, tinnitus and aural fullness.

I decided to try to build a musical instrument that would convey to others what this sounds like. I wanted this to offer me a creative opportunity to make some beautiful music. What it is in fact providing is not just that, but a whole new direction for my composition.

This post is a detailed account of the first steps in building this instrument. It is necessarily a digital instrument: there is no way this could be done using traditional technologies. I have been greatly helped by my GNResound Linx Quattro hearing aids and by my friend, the composer and Professor Craig Vear, who provided not just technical fluency in the studio and an otologically “normal” pair of ears, but also the ability to describe each sound to me as it emerged from this new instrument.

Starting Points

I decided to start with a piano simply because that is the instrument I used to play back in the days when I regularly made music. Piano sounds also have a pleasing decay which I instinctively felt would work well with this phenomenon. Nobody wants to listen to sustained diplacusis!

In my previous scientific study of my own diplacusis, I mapped the differences in pitch across my own singing range by laboriously stopping the good ear, singing the pitch I heard, measuring it in hertz, and comparing it with the correct pitch. This gave me a verified chart from F#2 (~92Hz) to C4 (~261Hz). To understand what comes next, you need to see my audiogram:

Andrew Hugill’s audiogram (July 2017)

This one is a little bit out of date, but my hearing has not changed much since then. Observe that (as is usual in audiology) the right and left ears are reversed in the image. You will also notice that audiology takes no interest in frequencies below 125Hz or above 8kHz. This is because audiology is mainly interested in speech and, frustratingly, takes little account of music.

Anyway, you will see quite clearly that my right ear (in red) is way below my left ear. This is what severe hearing loss looks like. My left ear has normal hearing (above 10dB) in the region between 1500 Hz and 4000 Hz. This is my salvation in speech situations. But there is quite a lot of hearing loss around that. Nevertheless, my pitch perception in that ear is tolerable.

One other thing to notice is that the lower frequencies show a marked decline in both ears. This is typical of Ménière’s Disease, where the bass disappears first. By contrast, in age-related hearing loss (presbycusis) the high frequencies deteriorate first, which is why so many hearing aids concentrate on the high end.

First efforts

Now you can see why the next step in preparing for the instrument was so daunting and has taken me many months of struggle to figure out. I could no longer rely on either my audiogram or my singing voice to help me understand my own pitch perception, because the rest of the piano keyboard is simply out of range. To make matters worse, every time I tried it was like working in a hall of endlessly reflecting mirrors. I would listen to my diplacusis with my diplacusis… it was very uncomfortable and very tiring.

So with considerable effort, I worked on trying to understand my own hearing by feeling my way with trial and error. Gradually a number of key features emerged:

  1. There is an octave between F#5 (~740Hz) and F#6 (~1480Hz) where there is no diplacusis at all. In other words, I hear a piano just like a normal piano, as anyone else would, albeit with greatly reduced hearing in one ear.
  2. In the range above that, the diplacusis gradually reappears, getting worse the higher up you go. However, since the piano sounds pretty metallic in that register anyway the effect is not as disturbing as you might expect.
  3. The range from C4 (~261Hz) down to F2 (~87Hz) is affected by random amounts of diplacusis as per the chart from the earlier study.
  4. Below E2 (~82Hz) this random diplacusis effect continues, but now a new phenomenon enters, presumably resulting from the general loss in low frequency hearing. The fundamental frequencies of each note and then the first and second partials, gradually disappear, leaving a thudding sound and a collection of higher overtone frequencies. This complex spectrum is then subject to precisely the same diplacusis that affects the higher register, resulting in a perceptible shift in spectrum but no discernible change in pitch.
  5. And this is, I think, a novel finding: every diplacusis-induced detuning is flat! This seems to contradict the received wisdom that diplacusis notes are sharp. I need to do more research into this.
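
The effect described in point 4 is the classic “missing fundamental”: a tone built only from upper partials still repeats at the fundamental’s period, which is the cue the brain uses to infer the absent bass note. A small numerical demonstration (the frequencies are illustrative, not my measured values):

```python
import numpy as np

fs = 6500   # sample rate chosen so 65 Hz lands exactly on an FFT bin
f0 = 65     # fundamental of a low note
t = np.arange(fs) / fs  # one second of samples

# Build a tone from partials 2..6 only: no energy at 65 Hz at all
tone = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(2, 7))

spectrum = np.abs(np.fft.rfft(tone))
# The 65 Hz bin is empty, yet the waveform still repeats every
# 1/65 s (100 samples), so the ear can reconstruct the bass pitch.
```

This is consistent with my experience that more overtone information from the hearing aids makes lost fundamentals easier to perceive.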

Given the difficulties of translating the above into any kind of instrument, I eventually had to admit defeat and seek help. This is where Craig Vear enters the picture and the account of our building session yesterday will be the subject of my next post.

Aural Diversity


Most music is made and reproduced on the assumption that all listeners hear in the same way. Psychologists generally write about aural perception as though it is a single standardised thing. Acousticians normally design the sonic environment using uniform measures. Musicologists typically discuss music as it is meant to be heard, not as it actually is heard.

The reality, of course, is that almost all people hear differently from one another. BS ISO 226:2003 is the standard for otological normality and is taken to be the hearing of an 18-25 year old. After this age, presbycusis (age-related hearing loss) usually sets in, at rates that vary from person to person. On top of this comes a range of other potential losses, from noise-induced hearing loss to sensorineural disorders, from genetic problems to losses caused by trauma or medication. In other words, every single person is likely to have at least some hearing loss after the age of 25 and very many people have significant hearing difficulties. I am willing to bet that a substantial number of 18-25 year olds also have hearing problems!

Given this state of affairs, it is surprising that more is not spoken about aural diversity. In an era when diversity is such a hot topic in so many aspects of society and life in general, why is aural diversity so neglected? My friend Professor John Levack Drever has written about it quite a lot, but otherwise there seems to be a dearth of discussion of the subject. There is plenty on disability, of course, which is great, but for those who would not classify as disabled but nevertheless are aurally diverse: not so much. This affects musicians as much as anyone else. I am aware of many musicians and composers (myself included) who struggle with their hearing, but who nevertheless continue to make music that sounds as it should to “normal” ears. Perhaps it is time that we started to reflect more honestly on our own limitations and present these in our music?

I certainly find myself at a compositional crossroads. If I continue to create normal music, I will have to revert to writing dots on paper because I can no longer hear digital sound accurately enough. At least my aural imagination is intact. If, on the other hand, I want my music to reflect my own experiences, then I have to start engaging with my aural limitations by introducing into my sound world those elements that I actually hear (including such disturbing things as diplacusis and tinnitus). How to do this yet still create beautiful music is a real challenge.

In the meantime, I can envisage a series of musical events that celebrate aural diversity. Surely there are composers and musicians out there (including those with normal hearing!) who would wish to make music that reflects on or addresses itself to a range of hearing types? Perhaps this opens up a new possibility of bespoke music that is more than just the result of users fiddling with EQ and is intrinsically designed for the individual listener’s hearing abilities.

Hearing (my) hearing: pitch perception

Here is an attempt to model the way musical pitches sound to me nowadays. I hear two notes in place of the usual one. This phenomenon is called diplacusis. It is one of the many consequences of the hearing loss resulting from my Ménière’s Disease.

In future posts I will explore how I perceive timbre, localisation, idiom, etc. The changes are quite profound.

Digital technology is enabling us to do something that was previously impossible: to present to people with ‘normal’ hearing how someone like myself with hearing loss actually hears. It reminds me of that Marcel Duchamp note in the Box of 1914: “One can look at seeing, but one cannot hear hearing”. We are now in a position where this is no longer true!

Marcel Duchamp: Box of 1914.

 The following experiment is quite raw, but nevertheless gives a pretty good idea of how the diplacusis works, at least within my singing range. Bear in mind that the hearing loss is severe in my right ear and mild in my left. The experiment ignores tinnitus, which also intrudes.

First I played a piano note at a given frequency (rounded to a whole number). I checked with a fine pitch meter that the tuning was correct before proceeding.

I then blocked my right ear and sang the note I heard, checking against the pitch meter. The left ear (my ‘good’ ear) gives generally very accurate pitch, with a few slight deviations towards the lower and upper ends of my singing range.

I then blocked my left ear and performed the same exercise using my right. The following chart shows the pitches and the differences. Pitch was not the only difference: for example, the perceived amplitude was considerably softer from 138Hz downwards and fell away steadily. This is a typical hearing loss pattern in Ménière’s.

| Note | Frequency in Hz (rounded) | Left ear difference (Hz) | Left ear perception (Hz) | Right ear perception (Hz) | Right ear difference (Hz) |
|---|---|---|---|---|---|
| F#2/Gb2 | 92 | -1 | 91 | 86 | -6 |
| G#2/Ab2 | 103 | -1 | 102 | 100 | -2 |
| A#2/Bb2 | 116 | -2 | 114 | 110 | -4 |
| C#3/Db3 | 138 | 0 | 138 | 132 | -6 |
| D#3/Eb3 | 156 | 0 | 156 | 151 | -4 |
| F#3/Gb3 | 185 | 0 | 185 | 180 | -5 |
| G#3/Ab3 | 207 | 0 | 207 | 204 | -3 |
| A#3/Bb3 | 233 | 0 | 233 | 229 | -2 |

What we can see quite clearly from this is that my diplacusis is active at all frequencies, but also variable. For some pitches it is more than a semitone; for others, rather less. I hear these two pitches combined, with the out-of-tune one softer than the in-tune one.

What does this sound like? Here is my attempt to emulate this using appropriately detuned piano sounds. The ‘in tune’ note is louder than the ‘out of tune’ note, to reflect the hearing loss. The combination is pretty accurate, though, and by listening to this with my bad ear blocked I can get a better idea of what a normally hearing person would hear. There is a scale first, then the Bach Prelude No. 1:
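
The emulation can be sketched with sine tones in place of the detuned piano samples. This is a simplification of the actual recording, which uses piano sounds, but it shows the structure: an in-tune left channel and a detuned, softer right channel.

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs  # one second

def diplacusis_note(f_true, f_heard_right, right_level=0.4):
    """Stereo note: left channel in tune at full level, right channel
    detuned and softer, mimicking the weaker, flatter right ear."""
    left = np.sin(2 * np.pi * f_true * t)
    right = right_level * np.sin(2 * np.pi * f_heard_right * t)
    return np.stack([left, right], axis=1)  # shape (samples, 2)

# F#2 from the chart: 92 Hz in the left ear, 86 Hz in the right
note = diplacusis_note(92, 86)
```

Writing `note` to a stereo file and blocking one ear at a time gives a crude version of the demonstration described above.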

I then tried the right ear only, wearing my new GNResound hearing aid. What is fascinating is that this does not remove the diplacusis, but it does reduce it and, strangely, makes it into a smooth deviation, getting larger as the pitch descends.

I’m not entirely sure what to make of all this, but it is interesting!

Ménière’s Disease and me

In 2009, I was diagnosed with Ménière’s Disease. MD is an incurable condition that combines four distressing symptoms: vertigo, hearing loss, tinnitus and aural fullness. My MD is bilateral (affects both ears) and is particularly virulent. It has been treated with a chemical labyrinthectomy, which intentionally destroyed the balance function in my right ear, and a range of drugs and dietary modifications. It has also led to severe hearing loss and varying levels of tinnitus. This has had consequences for both my personal and professional life and especially my relationship with music.

The aim of this document is to describe the history and development of my condition. I have to admit that I have kept it mostly secret for many years until now. MD is an invisible disability that is hard for non-sufferers to understand. A person with MD can seem to be either drunk or incapable. The medical profession is often similarly baffled. I was told more than once by doctors that MD is “over-diagnosed” and that my symptoms were more than likely something else. In fact, I have the classic MD profile which has evolved over time in exactly the way that those who know about the condition would have predicted.

My secrecy was partly a matter of personal pride (not wanting to be perceived as weak) and partly a matter of professional fear. The latter arose from the high-powered position I held at De Montfort University and a strong suspicion that some of my colleagues and managers would take a dim view of a Professor of Music and Director of the Institute Of Creative Technologies with these shortcomings. To what extent my fears were justified I will never know, but I am immensely grateful to my dear wife Louise, my friend Simon Atkinson, and my two personal assistants during this period, Rebekah Harriman and Jos Atkin. They provided the personal support and professional cover without which I would probably have given up work altogether. I also thank my consultant, Professor Peter Rea, of the Leicester Balance Clinic, whose interventions have restored my life to some kind of normality, and Dr Ian Cross, a GP who made the referral to Professor Rea’s clinic in the first place.

I have decided to write this document now following my attendance at the ‘Hearing Aids for Music’ conference at the University of Leeds, September 14th/15th 2017. This inspiring event opened my eyes to the possibility that one could admit to these things in public and still maintain a professional career of distinction. In fact, I have been opening up to people about my MD more often in the past few years anyway, because the professional environment at Bath Spa University is rather more sympathetic than I found it at De Montfort. Nevertheless, the majority of people I know are unaware that I have this disability, and I think it only right now to present a full account of it, both for their benefit and for the benefit of any other MD sufferers who may be reading this. I found myself desperately searching for other people with a similar condition in the early days, and derived some comfort from the knowledge that I was not alone. I hope this will help other people, and I am always keen to hear from fellow-sufferers.

About Ménière’s Disease

MD is named after Prosper Ménière, the French doctor who first identified the condition in 1861. For those who know me and my work on ’Pataphysics, it is quite ironically appropriate that I should have an obscure French condition that affects the only region of the body that contains a spiral (the cochlea)! For those mystified by that comment, my book ’Pataphysics: A Useless Guide will provide an explanation.

MD is a disorder of the inner ear, which is shown in brown in the next illustration:

Since this contains balance functions and hearing mechanisms, both of these are affected. Nobody knows what causes MD, nor how it builds up once it has begun. However, all MD sufferers have endolymphatic hydrops, which is an excess of fluid in the semicircular canals. The fluid flows throughout the inner ear and contains various salts or electrolytes which are critical to the balance function in particular. An excess of this fluid causes a feeling of fullness in the ears and episodes of vertigo, as well as damage to the tiny hairs (cilia) that enable hearing.

MD affects either one or both ears. I am bilateral but, mercifully, the left ear is less affected than the right. MD varies from person to person and even from day to day or hour to hour. The condition always progresses over time, but its severity varies. There is consequently a “ladder” of treatment that is deployed depending on patient need. I have climbed this ladder all the way to the top.

The first step is diet modification. Salt is a major trigger for MD symptoms, so the aim is to reduce salt intake to under 2.5 grams per day (the recommended amount for a normal adult is 6 grams). There are then various drugs that may be taken to relieve symptoms, including diuretics, betahistine and anti-nausea medicines. Early surgical interventions may include the fitting of a grommet to relieve fluid pressure in the inner ear. Transtympanic micropressure pulses may be helpful, and can be self-administered using a machine. Alternative therapies include acupuncture and herbal remedies, massage and meditation techniques. With every one of these, I experienced some temporary benefit, but not sufficient to substantially alter the progress of the disease.

I didn’t have steroid treatment, but in recent years this has become a preferred next step, with steroids being injected or soaked into the ear. I also did not have a saccus decompression operation, which involves surgically releasing fluid from the inner ear. I have heard mixed reports about how effective this might be. Instead, the severity of my condition meant that I went on to the top rung of the ladder with a labyrinthectomy. The idea of this is to remove the labyrinth of the inner ear so that there is nothing for the MD to “work with” in order to create vertigo. It does not cure MD, but it does put a stop to the debilitating dizziness. Unfortunately, it also destroys your balance function, so it is an extreme measure.

There are two ways of performing a labyrinthectomy: physical removal via surgery, or chemical removal with drugs. In my case, I had a series of gentamicin injections into the inner part of my right ear. Once that was completed and I had recovered sufficiently, which took about a year, I needed no more treatment for vertigo. Even so, I do still sometimes get short-lived dizzy episodes. But overall, my focus has changed to managing my other symptoms of hearing loss and tinnitus. Aural fullness has also largely dissipated, thanks to the gentamicin but also the grommet which I had fitted in 2011. I am now deemed to be at a stage called ‘burn out’, when the MD is still active but has little or no effect on balance. This is just as well: if my left ear were to worsen, then a further labyrinthectomy would be impossible because I would have no balance at all. So, there is little or no further treatment available and my hearing continues to deteriorate. I still maintain a low salt diet, but otherwise I just monitor my progress with my consultant, while trying to intervene with hearing aids and psychological adjustment.

My history with Ménière’s Disease

One of the worst aspects of MD is uncertainty. After each attack, I would try to explain why it had happened, blaming food, stress, my prescriptions, etc. In the end, none of these explanations was convincing, and I came to the conclusion that the disease just does what it does and there is no particular rhyme or reason. But it always meant being uncertain whether an attack would occur that day, whether the hearing would get worse, whether this or that treatment is right at a given time. The uncertainty continues to this day, since the condition never goes away.

I first started to get vertigo attacks in about 2007. I don’t remember the first one, but I do remember a couple of early ones: one in my studio which completely knocked me over and sent the world spinning around wildly; and another in a café in central Leicester. This occurred just after I had received some Chinese medicine treatment, involving acupuncture and herbs. Naturally, I blamed the treatment, but in retrospect I know that had nothing to do with it. However, it is significant that I had been seeking treatment in the first case, because I was already aware that there was some kind of problem but did not know what it might be.

At this stage, I was not aware of any hearing loss or tinnitus and, like so many people, put the vertigo down to tiredness, or stress, or a virus, or anything else I could find to blame. The months went by and the vertigo became steadily more frequent and more violent and now tinnitus and hearing loss started to become apparent to me. As a composer and Professor of Music this was, of course, very alarming indeed. Being a researcher, I started to investigate my symptoms myself. It became increasingly clear to me that Ménière’s Disease was a real possibility, so I sought advice on that basis, and experienced the scepticism of the medical profession. It was only when I was referred to Professor Rea’s practice in Leicester that I finally had confirmation that all four of the necessary symptoms for MD were present in my case.

So, I began climbing the treatment ladder with a low-salt diet. This was an interesting challenge, because the amount of salt that is added to food is unbelievable! The situation has improved in recent years, but back in 2009 it was almost impossible to find meals that did not include large quantities of additional salt. I had to stop eating curries, which was heart-rending, although later on I was able to find one or two curry restaurants in Leicester which were willing to prepare me a salt-free meal. Certain other foods seemed to trigger vertigo attacks, such as caffeine (which I have not taken since 2010), bread and, strangely, rice, to which I seemed to be allergic for a time. These days, I can eat rice without a problem, but back then it would cause an instant attack.

It is hard to describe a full vertigo attack. There is some advance warning, as the ears fill with fluid, the hearing plunges to muffled, and the tinnitus increases to a screech. This preliminary period could last anything from a couple of minutes to half an hour, although it tends to get shorter as the disease progresses. After that, the rotation begins quickly and sustains itself for up to 5 hours. The sensation is of the whole world spinning around you. This is accompanied by violent vomiting. I would be unable to move and could not stand even the slightest motion. I would stare at a fixed point for hours, trying to stop the spinning. My eyes were affected, a bit like in migraine (there is some overlap between MD and migraine), with any bright lights or striations causing distress. I would hold a piece of white card in front of my eyes so that there was nothing to look at. I also could not tolerate noise, or any sound at all.

Some of the worst attacks happened while out and about. Since I do not drive (thank goodness!) it was my darling wife Louise who would be called and would have to come and rescue me. I would go home in the back of the car being violently ill, and then stagger into the house hoping the neighbours would not see me. On one occasion, I was attending a lecture at the Leicester Literary and Philosophical Society at New Walk Museum. Car access is not great there. The lecture was on birds, and I was very interested, so when I felt an attack coming on I stayed in the hall longer than I should have done. The result was that I staggered out in the middle of the lecture and then down the street towards the railway station, texting Louise on the way. I reached the station and had to stand in the pick-up area, swaying perilously while staring at the ground. Anyone who saw me that night would have assumed I was drunk.

On other occasions, an attack would happen while I was at work. I would retreat to my office while my PA held the fort and refused to allow people in to see me. Meanwhile, Louise would be called again to come and surreptitiously take me away. I was determined not to reveal my condition, because the Institute Of Creative Technologies was at an early and fragile stage of development and any sign of weakness on my part could have damaged both my and its future. I’m pleased to say that it still flourishes to this day.

These are just a few examples of the effects of an attack. I had so many that I lost count. As you can tell, having a supportive partner is a wonderful thing. Louise did what was required, but she also would not allow me to sink into self-pity or give up my work. This level-headed approach was vitally important, to keep the thing in perspective and maintain a positive mental attitude. As a result, I did not become too obsessed with what was happening and managed to cope fairly well with the psychological effects of losing my hearing, and the ever-increasing tinnitus.

An attack would usually end with my falling asleep, which was a blessed relief. I would wake up when the vertigo had stopped, but sometimes it continued after I awoke. Either way, the end of vertigo would lead on to “brain fog”, a ghastly condition in which the brain seems to be wrapped in cotton wool and refuses to function adequately. It would also be accompanied by screeching tinnitus. This would last several hours, but always cleared in the end, at which point I felt fine and would embrace life and work again with gusto, trying to pack in as much as possible before the next attack.

The treatment ladder led on to SERC, a betahistine that I then took for ten years in varying doses. It certainly does help, by dilating the blood vessels in the inner ear and increasing permeability. It also seems to be a completely benign drug in other respects. Even so, it was insufficient to treat the condition fully, so I then had a grommet fitted in 2011, to relieve aural fullness. This is a common surgical procedure with few risks or side effects. Mine was a T-tube grommet, which stayed in place until I had it removed in 2016. The reason for its removal was that I was getting repeated ear infections, as the eardrum tried to reject the foreign body. Having taken it out, I now have a permanent hole in my eardrum, which has the disadvantage that I must wear protection in the shower or when swimming, but the great advantage that I don’t experience any ear pressure problems when flying!

For a time, I had some success with a Meniett device. This is a machine that transmits acoustic pulses through the eardrum via a hand-held tube. The idea is that it stimulates circulation of fluid in the inner ear and so slows the build-up of the hydrops. This was quite pleasantly comforting, and did relieve symptoms, but in the end did not do so in a sustainable manner.

Having MD is expensive! I was able to have private treatment and I paid for this to happen as quickly as possible. Since the condition is not life-threatening, it tends to get relegated down the priority list in the NHS. I took the view that treating my condition was an investment in the future, and so it has proved, but this option is not available for everyone. I spent a lot of time on the MDUK message board, sympathising with and trying to encourage others. I know from the inside how destructive of lives and jobs this thing can be.

After about three years of worsening symptoms, Professor Rea finally advised gentamicin treatment. This involved a series of injections of this powerful antibiotic into my inner right ear, which chemically destroyed my balance function. At first, this made things worse, as I could barely walk down the road. I tried Vestibular Rehabilitation Therapy, but the exercises just triggered vertigo attacks, so I had to stop. But gradually things improved to the point where life was, to all intents and purposes, normal again. I now balance with my eyes, which means that if I shut them and walk I fall over. I also hate the dark, because I cannot see to balance properly, but fortunately there is so much light pollution in modern life that I do not encounter a problem very often.

Most of the time now, I do not have vertigo at all. I have also opened up a bit more about the condition to various friends. Occasionally I become aware that the MD is still doing its thing and, if I still had a balance function, I should be spinning. But even this is ignorable. I have had some mild attacks, for example when my medication for cholesterol was changed. These seem to be the result of the salts that modern pills are ‘cut’ with, any change in which affects my whole system. Once I went back to my previous prescription, everything was normal again. Sometimes, there is no obvious cause and I have to lie down and sleep for an hour or so, after which I am fine again.

Hearing Loss and Tinnitus

The most powerful consequences of MD for me today are hearing loss and tinnitus, rather than vertigo and aural fullness. During the vertigo phase, I was somewhat less bothered about these because, frankly, nothing is worse than vertigo. But these days they are increasingly important to me.

Tinnitus is reaching epidemic proportions as a generation has been exposing itself to prolonged loud noise, both in daily life and via amplified music. I have always been aware of the dangers and taken precautions; indeed, I wrote a warning section about this in my book The Digital Musician. It is somewhat ironic, therefore, that I suffer from it today. Tinnitus seems to be a cognitive problem: the brain tries to hear sounds it cannot hear and consequently generates noise in compensation. But its true pathophysiology remains obscure. It can be a most debilitating condition, especially for musicians, and has frequently resulted in depression or even suicide.

Even though my tinnitus is at times very loud, I have always managed to “hear past it” and am not psychologically troubled by it. I will never know silence again, but I accept this as part of my life now. The tinnitus, like everything else in MD, fluctuates wildly in both intensity and pitch, and is worse in my right ear than my left ear. At the time of writing, I have a fluctuating sizzle sound in my left ear and a large whooshing in my right that sounds like a distant aeroplane, combined with a small collection of continuous high pitches. However, this will doubtless have changed by tonight.

To get to sleep, I find human speech is normally sufficient to distract me from the tinnitus, so I listen to the radio, or podcasts, or audiobooks, using an under-the-pillow loudspeaker. I have also on occasions used an app such as WhiteNoise or the Relief app that comes with my hearing aid. I turn to these when the tinnitus is especially loud, because the frequency masking techniques they use are more effective at cancelling out the tinnitus noises.

Hearing loss is a more challenging problem. My right ear has severe loss, and my left ear mild. This does mean that I can function apparently normally in most situations, but there are nevertheless some limitations which have to be overcome. The hearing aid in my right ear assists the relatively good hearing in my left ear, adding up to a viable hearing system which lets me engage effectively in one-to-one conversation as long as I can see the other person to lip read, and have my left side turned towards them. Group conversation is another matter and is generally extremely challenging. People wonder why I am constantly leaning in towards them, or manoeuvring around them to get as much sound as possible into my ‘good’ ear.

The profile of the hearing loss is bizarre, with the low frequencies having gone first (this is typical of MD) and a smorgasbord of peaks and troughs in other frequency bands. Furthermore, this changes on a daily basis and even in real time in response to external stimuli. Therefore, it is quite different from age-related hearing loss, or indeed the loss of someone who is profoundly deaf.

This has naturally had huge consequences for my music. I decided to stop making live music altogether when it became apparent that I could not perform adequately. However, that decision had much to do with vertigo at the time, so it may yet change. My composition has moved more or less exclusively to digital work, where I can control levels effectively and do not cause problems for other musicians. However, this is also not entirely satisfactory. It removes the social dimension of music, which is so important. Also, it still presents real challenges to my ears. Electroacoustic composition, for example, relies on fine and discriminating listening, and is invariably linked to a minimum of two channels and often as many as eight. I have found myself somewhat alienated from this form of music, which I have done so much to create and promote, by its intolerance of anything less than perfect hearing.

Somewhat masochistically, I have worked on some directional compositions and analyses (trying to avoid showing weakness again!) and have ended up relying on PhD students to be my ‘ears’ and confirm that what I imagine will be heard is accurate. What I actually hear is distressingly weird, with individual tones splitting into multiple frequencies like a ring modulator, unwanted sonic artefacts coming either from tinnitus or from the ear itself, very limited perception of dynamic range, no ability to localise, and a general tendency to hear all pitches as sharp. My ‘good’ ear saves me from total disaster, but even this cannot be trusted completely. In earlier years, I was renowned for having accurate ‘ears’, but now I have to face the fact that this is no longer the case. This is hard to accept.

The situation is equally bad with regard to speech. I really struggle in any conversation where there is any kind of background noise. On very many occasions, people have spoken to me and I have not heard them. I am sure that I have also said the wrong thing many times in situations where I could not hear someone properly and chose to pretend that I could, rather than constantly saying ‘I beg your pardon’. This is really rude on my part, and I have tried to adopt a policy of explaining the problem to people first, but sometimes this is simply not possible. My job involves talking to people all the time, so many conversations happen “on the fly” and in uncontrolled environments. Meetings present particular challenges, especially in large rooms with prominent echoes. I am pretty good at lip-reading. I am largely self-taught but did attend some classes at DeafPlus in Bath and also used the LipReader training software by David Smith. And I have my hearing aid (of which more in a moment). Even so, I am afraid I miss things. Women’s voices, which are softer, are often a big problem.

I got my first hearing aid back in 2012, under the excellent advice of Claire Marshall, my audiologist at the Leicester Hearing Centre. This was a Siemens Tek, which at the time was the bee’s knees. It was programmable via a small handheld device and offered special programmes for different situations (outdoors, restaurant, music, etc). It did help quite a bit with these, but was mostly poor for music. Hearing aids are engineered primarily for speech. Music tends to present a much bigger challenge, with wider dynamic and frequency ranges, etc. The compression and noise reduction functions in digital hearing aids actually work against the enjoyment of music, or at least live music. The Tek could not distinguish, for example, between a flute and feedback, so would suppress them both just the same.

For a time, I moved to an NHS hearing aid which is free (or nearly free). Despite the proud boasts of the NHS audiologist that this device was every bit as good as anything I could get privately, it was not very well suited to an MD sufferer. This is because NHS hearing aids are designed for people with age-related hearing loss, and so boost the treble frequencies and tend to ignore the lower end. Since the worst of my hearing loss is the other way around, this resulted in a highly tinny and sibilant sound which was ok for speech but completely hopeless for music. So, in both instances, I would usually take out my hearing aid when making music.

My most recent aid has coincided with a further decline in my hearing, which occurred in late 2016/early 2017. The new HA is a GNResound3D. I am still getting to grips with this system, but it seems to offer better possibilities for music. The speech functions are excellent, and the whole thing is controlled from my iPhone via an app which allows me to adjust directional focus and edit broad frequency bands. It also provides a direct link to my audiologist. I have supplemented this with some free software called EQHearAid, which offers a full graphic equaliser that can be adjusted on the fly. This is great because my hearing changes so much on a daily basis. There are still problems with holes appearing as sound transitions from one frequency band to another, and bass remains an issue because the ear mould I wear has a ventilation shaft which immediately reduces the capacity for low frequency management. Nevertheless, this HA is a great improvement on anything I’ve encountered so far.
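The actual processing inside the hearing aid and the EQHearAid app is of course proprietary, but the general idea of an adjustable graphic equaliser can be sketched as follows: scale each region of the spectrum by a per-band gain, chosen to compensate the day's hearing profile. The band centres and gain values here are made up for illustration, not a clinical prescription.

```python
import numpy as np

SR = 44100  # sample rate in Hz

# Hypothetical daily gain profile (dB) per octave-band centre (Hz):
# boosting the low end, where the MD loss is worst. Illustrative only.
band_gains_db = {125: 12, 250: 9, 500: 6, 1000: 3, 2000: 0, 4000: -2, 8000: -4}

def apply_graphic_eq(samples, band_gains_db):
    """Crude offline graphic EQ: scale each FFT bin by the gain of the
    nearest band centre (nearest on a log-frequency scale)."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1 / SR)
    centres = np.array(sorted(band_gains_db))
    gains = np.array([band_gains_db[c] for c in centres], dtype=float)
    # index of the nearest band centre for each frequency bin
    idx = np.abs(
        np.log2(np.maximum(freqs, 1.0)[:, None] / centres[None, :])
    ).argmin(axis=1)
    spectrum *= 10 ** (gains[idx] / 20)  # dB -> linear amplitude
    return np.fft.irfft(spectrum, n=len(samples))
```

A real hearing aid does this continuously with low latency rather than offline, but the sketch shows why a user-editable gain profile helps when the loss itself changes from day to day.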

Living with Ménière’s Disease

To those who have read this far: thank you and well done! I don’t spend my days lamenting and suffering with MD. It is just a condition that I live with and, since the vertigo has more or less ended, one that does not intrude too much on daily life. Yet there is still plenty to grapple with in the future. In particular, I feel the need to continue to address my hearing issues. In the first instance, I want to be more open and honest about the problem, which is partly why I have written this document. But I also want to try to make a contribution to improving life for other people. For that reason, I am developing a research project that will focus on music and MD. It will involve a cross-disciplinary team including clinicians, audiologists, psychologists and musicians. I hope to work with a hearing aid manufacturer too. This is an unresearched field at present, and one that would greatly benefit from increased scientific understanding.

To anyone who has recently been diagnosed with MD, I would say that it is not the end of the world (although at times it might seem so). It will however have a transformative effect on your life, so finding out as much as you can about it is important. There are online resources, but do check their reliability because there is also quite a lot of rubbish on the internet. Various purported “cures” are not to be trusted (and I have tried one or two of them). Join the Ménière’s Society. Every MD sufferer is different and has a different experience, but treatments exist for all levels, as my story demonstrates. In the past year I have travelled abroad frequently to deliver papers at conferences, including trips to Australia, Poland, Belgium and Canada. Life can return to something like normality!

One strange fact is that I now live aboard a narrowboat. During the vertigo phase, I certainly could not have done this (even though I wanted to), but now I am happily living in a house that rocks slightly. Whether it is because I have no balance function on one side, or whether it is because my brain has to work harder at balancing me on a boat, I find it more stable than being on dry land. MD is full of strange paradoxes like this. No doubt, if serious vertigo ever returned I would have to get back to dry land, but for now I am more than happy in my ‘mobile’ home!

Andrew Hugill

September 2017