Creating a visual language for the diplacusis piano

In previous posts I have discussed the construction of a “diplacusis piano”, a digital instrument that accurately reproduces what I actually hear. Diplacusis is a phenomenon in which the same note is heard at two different pitches, one in each ear. In my case, the left ear is mostly in tune, whereas the right ear is mostly out of tune, by fairly random amounts.

The problem with composing for the resulting instrument is twofold: firstly, because of my hearing loss I cannot hear the (quiet) sounds it produces very well; secondly, what I do hear I hear with diplacusis, so diplacusis on diplacusis!

How then to compose for this instrument, given that I have only a poor idea of what a person with normal hearing would hear? My solution is to develop a visual language based on the spectrograms of each note. I have been steadily learning about the character of each spectrogram as I go.

Here are some stills of most of the keyboard. The image quality has been reduced for speed of upload, but they are clear enough for you to be able to see how they vary. It’s really intriguing. My idea now is to start to connect together the various overtones to begin to create some kind of “harmony”. You’ll see that I have put gridlines on each image to help with this.

These are static images (generated with Pierre Couprie’s wonderful EAnalysis software). In the live performance, I will work with spectrograms that continuously evolve over time. This, I hope, will act both as a kind of score but also, for listeners who have even less hearing than myself, a visual version of the music that can be enjoyed without necessarily hearing everything.
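For readers curious about what these images encode: a spectrogram is simply the magnitude of a short-time Fourier transform, plotted with frequency on one axis and time on the other. Here is a minimal Python sketch of the idea; the function and the slightly detuned pair of test frequencies are my own illustration, not EAnalysis code.

```python
import numpy as np

def spectrogram(x: np.ndarray, n_fft: int = 2048, hop: int = 512) -> np.ndarray:
    """Magnitude short-time Fourier transform: one row per frequency bin,
    one column per time frame -- the raw data behind a spectrogram image."""
    window = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * window for i in range(0, len(x) - n_fft, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T

# A hypothetical diplacusis pair: C4 (~261.6 Hz) in one ear against a
# flatter ~255 Hz in the other, summed into a single test signal.
sr = 22050
t = np.arange(0, 1.0, 1 / sr)
spec = spectrogram(np.sin(2 * np.pi * 261.63 * t) + np.sin(2 * np.pi * 255.0 * t))
```

Connecting overtones between notes, as described above, then amounts to tracing ridges between such arrays.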

So, here is a selection of the keyboard, just to give you an idea:

And here are just two notes for comparison at higher quality. You can see how different they are in terms of both structure and behaviour over time. This gives me a starting point for composition.

C4 (middle C)
C5

Building the “Diplacusis Piano”, Part 3/3: Making Music!

In the last two posts (here and here) I have described the process of building a digital “piano” that reproduces my diplacusis. Having constructed the instrument with the help of Professor Craig Vear, I have begun to muse on the creative possibilities that this has revealed.

It is immediately clear that this is not really a piano at all, despite having piano sounds as its raw material. If I play a common chord, or attempt to play some classical piano music, all one hears is an out-of-tune piano. It’s a bit like a honky-tonk, but worse – some kind of abandoned instrument. Interestingly, the brain filters the “rubbish” out of the signal, and the out-of-tuneness quickly recedes into something approaching a normal piano.

So, to avoid sounding like I’m just trying to write piano music for a bad instrument, I must find a new way of thinking about composing for this diplacusis piano. This echoes my experience with diplacusis and hearing loss generally. I need to find new ways of listening if I am to appreciate and enjoy music now. My aim is to create something beautiful, despite the supposed limitations imposed by my condition.

Craig was keen to describe how each note, each adjusted sample, made a different sonic journey lasting 10 seconds. What he could hear was a fascinating mixture of rhythmical beats, emerging harmonics, clusters of partials, percussive noise, all evolving over time. Every single note has its own character, which he was able to describe to me in some detail, waving his arms expressively as he did so. So this is not a piano, but rather an 88-note composition with a total duration of just under 15 minutes!

The problem is, of course, that I cannot hear them! To me, each sample lasts about 3 seconds, and I do not trust what I hear even within that time frame. So, how can I possibly write music for this instrument if I cannot hear it properly?

Once again, new digital technologies come to my aid. Firstly, there are my wonderful GN ReSound LiNX Quattro hearing aids. During the building of the instrument, I removed the hearing aids, so as to capture my diplacusis as accurately as possible. Now, by reinserting them, I can gain a much better impression of the sounds of the instrument. I can hear them for longer and understand the complex shifting interactions between the higher partials. However, the hearing aids alone are insufficient, especially in the lower registers. Even with my unvented mould, which prevents sound escaping from my right ear, the low-end response is not enough.

As we worked on the instrument, we used a spectrogram to understand what was happening in each sample. This was fascinating, because it conveyed rich information about each note’s “story”, showing the strange rhythmic pulsations that arise from beats, the emergence and withdrawal of various overtones, the intensity of different registers, and so on.

So, my way of composing is becoming clear: I must familiarise myself with the story that each of my 88 mini compositions tells. Then I can string these together in ways which create a convincing musical narrative. There may be many such narratives – that remains to be seen – but each will have its own unique and engaging storyline that listeners can perceive.

To help them in this, I plan to add a video component to the performance, showing the spectrograms as they change, any musical descriptions (in text) or notations that are relevant, and perhaps a more imaginative interpretative layer. Multiple windows on a single screen, conveying the story of the piece.

This will help people in the Aural Diversity concert (where this will be premiered) whose hearing diverges from my own. They will be able to experience the composition in several ways at once. My performance will not much resemble a traditional piano recital: the keys on the keyboard are merely triggers for sonic navigations to begin. But I hope it will prove engaging, as I convey the emotional nature of the discoveries described in these posts and combine that with an informative and stimulating visual display.

Building the “Diplacusis Piano”, Part 2/3: In the studio

In the previous post I described the background to this project to construct a digital piano that renders my diplacusis audible to others. This post describes my studio session with Craig Vear, during which we assembled the entire instrument.

We worked in the Courtyard Studio at De Montfort University, which was the very first space I constructed when I started up the Music Technology programme back in 1997. Craig Vear is a former student of mine who is now a Professor. I’ve known him from the days of the BA Performing Arts (Music) degree at Leicester Polytechnic, where I started my academic career in 1986. It seems that past investments are repaying me handsomely! Here’s Craig in the studio, attempting to describe to me how one of the notes unfolds:

First we created middle C (C4) using Bosendorfer samples. This was something I had already done in my previous attempt, but the difference this time is that Craig’s ears were able to hear the interesting journey the difference tones take as the edited and filtered sample unfolds. This is the first clue about the creative possibilities that will subsequently emerge.

We matched the extent of my hearing loss in the right channel, in particular, and panned the stereo channels hard left and hard right. We introduced some filters to take out the lower frequencies as appropriate (it gets much more extreme in the lower registers) and some high ones too, using my audiogram as a guide. Finally, we detuned the samples. In most cases this was an adjustment only to the right channel, but sometimes it also entailed adjusting the left. Detuning meant converting frequency information in Hertz into cents (i.e. hundredths of a semitone). It’s a bit hard to make out in this photo, but the two high screens show an online hertz/cents converter on the left and my original diplacusis chart on the right. The desktop screens show the samples on the left and the filters and tuning information on the right.
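The Hertz-to-cents conversion we used is straightforward logarithmic arithmetic: a cent is one hundredth of an equal-tempered semitone, so the offset between two frequencies is 1200 times the base-2 logarithm of their ratio. A quick sketch in Python; the example frequencies are illustrative, not values from my chart.

```python
import math

def cents_offset(f_heard: float, f_reference: float) -> float:
    """Detuning of f_heard relative to f_reference, in cents
    (100 cents = one semitone, 1200 cents = one octave)."""
    return 1200 * math.log2(f_heard / f_reference)

# Illustrative values only: a C4 (~261.63 Hz) heard at 255 Hz
# comes out roughly 44 cents flat.
offset = cents_offset(255.0, 261.63)
```

A negative result means the heard pitch is flat of the reference, which (as discussed in Part 1) is what my diplacusis consistently produces.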

I had already decided that none of the sounds will rise above piano (i.e. soft). This is because my hyperacusis also means that I find any loud sounds distressing nowadays. Having tried to play a conventional piano recently, I realised that the mechanical sound of hammers hitting strings is too painful for me, regardless of the diplacusis. So this will be a soft and gentle instrument.

So, to give an idea what this sounds like, here is the original sample plus its “diplacusis” version:

Untreated C4
Diplacusis-adjusted C4

We repeated this process across the entire 88-note range of the piano, following the findings described in the previous post. Here are some more C-diplacusis notes, to give an idea of the sheer range and variety of sounds that resulted:

C1
C2
C3
C5
C6 (N.B. – this is unaffected by diplacusis)
C7
C8

The final step in the building process is to create an instrument in Logic (my sequencer of choice) using the EXS24 sampler. This maps the various samples across the whole instrument. In the range that I had specified using my singing method, we made individual samples for each note. In the other ranges we transposed samples up or down across a minor 3rd.
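The arithmetic behind that transposition is just equal-tempered ratio scaling: shifting a sample by n semitones means playing it back at 2^(n/12) times its original rate. A sketch of the idea; the function and zone layout are my own illustration, not the sampler’s internals.

```python
def playback_rate(semitones: int) -> float:
    """Rate multiplier that transposes a sample by the given number of
    equal-tempered semitones (one semitone = a ratio of 2**(1/12))."""
    return 2.0 ** (semitones / 12.0)

# A root sample stretched across a minor third in each direction,
# roughly as a sampler zone might map it (offsets in semitones):
zone = {offset: playback_rate(offset) for offset in range(-3, 4)}
```

Transposing a diplacusis-adjusted sample this way shifts its whole detuned spectrum together, which is why individual samples were preferred in the charted range.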

Building the “Diplacusis Piano”, Part 1/3: Background

Introduction

In a previous post I described my struggles with diplacusis and my intention to build a “piano” that could reproduce the sounds that I actually hear for the benefit (?) of others. This series of posts will document the progress I have made so far and the exciting compositional possibilities that are opening up as a result.

Diplacusis is a disturbing phenomenon in which the two ears hear a given musical note at two different pitches. It is yet one more from the smorgasbord of symptoms associated with Ménière’s Disease (see this post for a detailed account of my Ménière’s experiences), alongside vertigo, hearing loss, tinnitus and aural fullness.

I decided to try to build a musical instrument that would convey to others what this sounds like. I wanted this to offer me a creative opportunity to make some beautiful music. What it is in fact providing is not just that, but a whole new direction for my composition.

This post is a detailed account of the first steps in building this instrument. It is necessarily a digital instrument: there is no way this could be done using traditional technologies. I have been greatly helped by my GNResound Linx Quattro hearing aids and by my friend, the composer and Professor Craig Vear, who provided not just technical fluency in the studio and an otologically “normal” pair of ears, but also the ability to describe each sound to me as it emerged from this new instrument.

Starting Points

I decided to start with a piano simply because that is the instrument I used to play back in the days when I regularly made music. Piano sounds also have a pleasing decay which I instinctively felt would work well with this phenomenon. Nobody wants to listen to sustained diplacusis!

In my previous scientific study of my own diplacusis, I mapped the differences in pitch across my own singing range by laboriously stopping the good ear, singing the pitch I heard, measuring its frequency in Hertz, and comparing that with the correct pitch. This gave me a verified chart from F#2 (~92Hz) to C4 (~261Hz). To understand what comes next, you need to see my audiogram:

Andrew Hugill’s audiogram (July 2017)

This one is a little bit out of date, but my hearing has not changed much since then. Observe that (as is usual in audiology) the right and left ears are reversed in the image. You will also notice that audiology takes no interest in frequencies below 125Hz or above 8kHz. This is because audiology is mainly interested in speech and, frustratingly, takes little account of music.

Anyway, you will see quite clearly that my right ear (in red) is way below my left ear. This is what severe hearing loss looks like. My left ear has normal hearing (above 10dB) in the region between 1500 Hz and 4000 Hz. This is my salvation in speech situations. But there is quite a lot of hearing loss around that. Nevertheless, my pitch perception in that ear is tolerable.

One other thing to notice is that the lower frequencies show a marked decline in both ears. This is typical of Ménière’s Disease, where the bass disappears first. By contrast, in age-related hearing loss (presbycusis) the high frequencies deteriorate first, which is why so many hearing aids concentrate on the high end.

First efforts

Now you can see why the next step in preparing for the instrument was so daunting and has taken me many months of struggle to figure out. I could no longer rely on either my audiogram or my singing voice to help me understand my own pitch perception, because the rest of the piano keyboard is simply out of range. To make matters worse, every time I tried it was like working in a hall of endlessly reflecting mirrors. I would listen to my diplacusis with my diplacusis… it was very uncomfortable and very tiring.

So with considerable effort, I worked on trying to understand my own hearing by feeling my way with trial and error. Gradually a number of key features emerged:

  1. There is an octave between F#5 (~740Hz) and F#6 (~1480Hz) where there is no diplacusis at all. In other words, I hear a piano just like a normal piano, as anyone else would, albeit with greatly reduced hearing in one ear.
  2. In the range above that, the diplacusis gradually reappears, getting worse the higher up you go. However, since the piano sounds pretty metallic in that register anyway, the effect is not as disturbing as you might expect.
  3. The range from C4 (~261Hz) down to F2 (~87Hz) is affected by random amounts of diplacusis as per the chart from the earlier study.
  4. Below E2 (~82Hz) this random diplacusis effect continues, but a new phenomenon also enters, presumably resulting from the general loss of low-frequency hearing. The fundamental frequency of each note, and then the first and second partials, gradually disappear, leaving a thudding sound and a collection of higher overtones. This complex spectrum is then subject to precisely the same diplacusis that affects the higher registers, resulting in a perceptible shift in spectrum but no discernible change in pitch.
  5. And this is, I think, a novel finding: every diplacusis-induced detuning is flat! This seems to contradict the received wisdom that diplacusis notes are sharp. I need to do more research into this.
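The findings above amount to a rough mapping from piano key to perceptual behaviour. A sketch in Python, using MIDI note numbers for the boundaries; the labels, and the treatment of the range between C4 and F#5 (which the findings do not chart in detail), are my own glosses.

```python
def diplacusis_region(midi_note: int) -> str:
    """Rough perceptual region for a piano key, per the findings list.
    MIDI numbers: E2 = 40, C4 = 60, F#5 = 78, F#6 = 90."""
    if midi_note < 40:        # below E2: fundamentals and low partials vanish
        return "spectral shift, no clear pitch change"
    if midi_note <= 60:       # roughly F2 up to C4: the charted range
        return "random flat detuning"
    if midi_note < 78:        # above C4, below F#5: not charted in detail
        return "diplacusis (uncharted)"
    if midi_note <= 90:       # the clean octave F#5 to F#6
        return "no diplacusis"
    return "diplacusis, worsening with height"   # above F#6
```

This is only a classifier, of course; the actual detuning amounts in the charted range remain the random, per-note values from the earlier study.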

Given the difficulties of translating the above into any kind of instrument, I eventually had to admit defeat and seek help. This is where Craig Vear enters the picture and the account of our building session yesterday will be the subject of my next post.

‘The Digital Musician’: Third Edition

The third edition of ‘The Digital Musician’ has now been published! See the publisher’s website and here also is the link to the book’s own website.

This third edition has been updated to reflect developments in an ever-changing musical landscape—most notably the proliferation of mobile technologies—covering topics such as collaborative composition, virtual reality, data sonification and digital scores, while encouraging readers to adapt to continuous technological changes. It includes:

  • Additional case studies, with new interviews exclusive to the third edition
  • Revised chapter structure with an emphasis on student focus and understanding, featuring additional and expanded chapters
  • Reinstatement of selected and updated first edition topics, including mixing, mastering and microphones
  • Companion website featuring case study interviews, a historical listening list, bibliography and many additional projects.

Return to Looe Island

Back in 1995, I spent several exhilarating and highly creative months living on Looe (or St. George’s) Island and in Looe itself, where I composed Island Symphony (the story is told here). I also wrote ‘Les Origines humaines’ during the same period.

Island Symphony was written at the request of Babs and Evelyn (Attie) Atkins, who owned the island at that time. They invited me to live in Smuggler’s Cottage while I created the piece. They wanted a proper Symphony, with an orchestra, but this could never be played on the island (not enough room!), so I made the work using orchestral samples, mixed with synthesised and recorded sounds. I also used the internet to gather sounds (this was before the World Wide Web had really taken hold!). It was a bit like gathering virtual driftwood.

Revisiting the island today has therefore been quite an emotional experience, filled with memories of the place and the late sisters who were its spirit. I am delighted to report that the Cornwall Wildlife Trust, in the persons of John and Claire Ross, have done a brilliant job of making the island into exactly what Babs and Attie wanted: a nature reserve.

Here are some of the photos I took during today’s visit:

East Looe Quayside

The island from the boat

Disembarkation

View up the path from the beach

View back across to Hannafore and Looe

Smuggler’s Cottage

John was kind enough to show me inside my old dwelling. It was very damp when I lived there. They’ve now had to take up the floor completely and are trying to make it habitable once more.

John Ross

These next three photos show the location of Babs’ grave and memorial stone. Despite the sadness, it is good to know she is at rest on her beloved island.

View up to the door of the old craft centre, now a private dwelling.

Island House, also privately owned.

After all this time, I have still never seen inside the house!

The new craft centre, next to the generator shed.

To my great surprise, they are still selling CDs of Island Symphony! A snip at £5.

Island Symphony!

Claire, with a bottle of island apple juice.

Moon raker returning to pick us up.

Farewell, Looe Island…until the next time.

After such an enjoyable but emotional trip, there was only one place to go for lunch: the Salutation Inn. Also full of memories: Dick Butters sat at the bar; the long games of chess with Peter Warden…

Reflecting on the whole experience, I would like to return one day and make another Island Symphony. This one would eschew the orchestra and concentrate instead on field recordings. The use of the internet would change too: it would become the location of the piece. The new Island Symphony would be an ever-evolving web installation, a site that is always there and can be visited at any time, just like the island itself.

In the meantime, here is the Virtual Tour I made back in 1996.

Aural Diversity


Most music is made and reproduced on the assumption that all listeners hear in the same way. Psychologists generally write about aural perception as though it were a single standardised thing. Acousticians normally design the sonic environment using uniform measures. Musicologists typically discuss music as it is meant to be heard, not as it actually is heard.

The reality, of course, is that almost all people hear differently from one another. BS ISO 226:2003, the standard for otological normality, takes as its reference the hearing of an 18-25 year old. After this age, presbycusis (age-related hearing loss) usually sets in, at rates that vary from person to person. On top of this comes a range of other potential losses, from noise-induced hearing loss to sensorineural disorders, from genetic problems to losses caused by trauma or medication. In other words, every single person is likely to have at least some hearing loss after the age of 25, and very many people have significant hearing difficulties. I am willing to bet that a substantial number of 18-25 year olds also have hearing problems!

Given this state of affairs, it is surprising that more is not spoken about aural diversity. In an era when diversity is such a hot topic in so many aspects of society and life in general, why is aural diversity so neglected? My friend Professor John Levack Drever has written about it quite a lot, but otherwise there seems to be a dearth of discussion of the subject. There is plenty on disability, of course, which is great, but for those who would not be classed as disabled but are nevertheless aurally diverse: not so much. This affects musicians as much as anyone else. I am aware of many musicians and composers (myself included) who struggle with their hearing, but who nevertheless continue to make music that sounds as it should to “normal” ears. Perhaps it is time that we started to reflect more honestly on our own limitations and present these in our music?

I certainly find myself at a compositional crossroads. If I continue to create normal music, I will have to revert to writing dots on paper because I can no longer hear digital sound accurately enough. At least my aural imagination is intact. If, on the other hand, I want my music to reflect my own experiences, then I have to start engaging with my aural limitations by introducing into my sound world those elements that I actually hear (including such disturbing things as diplacusis and tinnitus). How to do this yet still create beautiful music is a real challenge.

In the meantime, I can envisage a series of musical events that celebrate aural diversity. Surely there are composers and musicians out there (including those with normal hearing!) who would wish to make music that reflects on or addresses itself to a range of hearing types? Perhaps this opens up a new possibility of bespoke music that is more than just the result of users fiddling with EQ and is intrinsically designed for the individual listener’s hearing abilities.

The Winds are complete

After five days in the studio, and quite a lot of advance preparation on the boat, the winds for Movement 1 have been completed. Each wind comprises a collection of sound files which add up to its character. The sounds include natural/environmental recordings, instrumental and synthetic timbres. All of them are treated in some way, at the very least embedding directionality as described in previous posts, but in some cases spectrally treated and processed.

The bass clarinettist behaves like a kind of weather vane in the performance. He will face a given wind and play from a menu (yet to be composed) of material in response to the sounds that emanate from that direction. When he tires of a particular wind, he will turn to face another direction.

Meanwhile, the computer will trigger anything between one and the maximum number of sound files available in a given wind folder. The triggering will occur randomly within a 30 second window. Since some of the sound files last more than a minute, it is likely that the ‘tail’ of one wind will still be playing when a new one is faced. This should add richness to the musical experience.
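As a sketch, the triggering logic described here (pick between one and all of a wind’s files, each starting at a random point within a 30-second window) might look like this in Python; the file names are invented, and this is an illustration of the behaviour rather than the actual performance patch.

```python
import random

def schedule_wind(sound_files, window_s=30.0):
    """Choose between one and all of the wind's files, assign each a
    random start time within the trigger window, and return the
    (start_time, file) pairs sorted by start time."""
    count = random.randint(1, len(sound_files))
    chosen = random.sample(sound_files, count)
    return sorted((random.uniform(0.0, window_s), f) for f in chosen)

# Invented example contents for a wind folder:
north_west = ["air_noise_1.wav", "key_clicks.wav", "breath_tones.wav"]
plan = schedule_wind(north_west)
```

Because the files themselves can outlast the window, successive calls naturally overlap, producing the lingering ‘tails’ described above.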

The material in the wind folders is unified according to the timbral map given in a previous post, i.e. by shape, timbres, pitch centre (where appropriate), gesture, envelope, etc. Even so, there is a lot of diversity. It will be a blowy and slightly chaotic piece, just like the experience of standing on Kelston Round Hill!

The Movement will begin with the North West wind, which consists entirely of unpitched sounds from the computer and from the bass clarinettist, who makes various air noises and clicks. After that, the shape of the composition is determined on the fly by the performers (Roger Heaton and myself, in this case).

Timbral Map

Planning of Movement 1 continues by trying to map the hill timbrally. I have devised the table below, showing how the various winds are translated into timbres for clarinet and the loudspeaker orchestra. Some of the latter sounds are electronic, some natural recordings, some instrumental samples. Pretty soon now the real work of actually making the sounds will begin!


Wind: N
Pitch: A
Character: compressing – stretching
Time delimited? yes
Phrase 1: Compressing: discontinuous and erratic.
Phrase 2: Stretching out: globally uniform
Semantics: First there is a feeling of compression (as if we were pressing hard against an obstacle); then the barrier is suddenly overcome, all resistance gives way and the power is released. It is a sudden change from localized energy to scattered energy.
Timbres: Aeolian sounds, wind and brass instruments, plus electronic compression
Bass Clarinet: rapid staccatissimo, then tenuto

Wind: NE
Pitch: E
Character: moving forward, propulsion
Time delimited? yes
Phrase 1: first phase is quite a sustained fulcrum: a prolonged or homogeneous sound or slow iteration, globally uniform
Phrase 2: a brief acceleration of intensity, pitch, or any other morphological trait
Phrase 3: a typical resonance or silence
Semantics: We feel the application of a force to a steady state, resulting in an accelerated movement. Projection from a starting point.
Timbres: bowed tam-tam, bowed gamelan, bowed vibes, piano clusters, drone sound, resonance is spatial
Bass Clarinet: sustain – acceleration – resonance

Wind: E
Pitch: B
Character: waves, braking
Time delimited? no
Phrase 1: slow repetition of an increasing then decreasing sound motif. The shape of the profile can concern different morphological criteria (mass, dynamics, grain, etc.)
Semantics: Each cycle conveys the feeling of being pushed forward, and then driven back until the end. We get the impression that we are stagnating through this unit although we feel motion within each cycle.
Timbres: wind in trees, granular synthesis, additive/subtractive synthesis, filtering
Bass Clarinet: rhythmic articulations, vibrato

Wind: SE
Pitch: F
Character: divergent, chaotic
Time delimited? no
Phrases: ad lib.
Semantics: No description required. The title is self-explanatory.
Timbres: birdsong, natural rustling, strings
Bass Clarinet: multiphonics, microtones, extended techniques, slap tongue

Wind: S
Pitch: C
Character: endless trajectory, heaviness, in suspension
Time delimited? no
Phrase 1: a linear and usually slow evolution of a sound parameter
Semantics: The process must be oriented in a direction (for example, upwards or downwards) and yet it seems never to finish. The sound phenomenon must be long enough to be perceived as a process and not an ephemeral event.
Timbres: spectral processing, time stretching, Shepard tone, distant aeroplane
Bass Clarinet: circular breathing, glissando, crescendo

Wind: SW
Pitch: G
Character: spinning, stationary, obsessive
Time delimited? no
Phrase 1: a parameter (pitch, timbre) is driven by a quick cyclic repetition along with a thrust in each cycle, or with a quick and possibly varied repetition of a pulsed element.
Semantics: We feel constrained by a mechanical process in which we cannot seem to act. We have the feeling of an object spinning on itself or in space.
Timbres: mechanical sounds, throbbing, music box, piano
Bass Clarinet: Double/triple tonguing

Wind: W
Pitch: D
Character: floating, falling, fading away
Time delimited? no
Phrase 1: a sound parameter (pitch, dynamic, etc) floats and then falls away.
Semantics: We feel sustained for a considerable period of time before drifting away, either falling or fading.
Timbres: decay instruments (prepared piano, harp, celesta, vibraphone, gong, etc.) that decay at wrong or unusual rates, or have pitch shift
Bass Clarinet: glissandi, decrescendi

Wind: NW
Pitch: always unpitched
Character: suspending-questioning, wanting to start
Time delimited? no
Phrase 1: ad lib.
Phrase 2: ad lib.
Phrase 3: ad lib.
Semantics: no description necessary
Timbres: unpitched percussion, acoustic effects, wind sounds
Bass Clarinet: Breath sounds, key clicks, etc.

An invitation to contribute to the Kelston Roundhill Symphony!

Movement 4 of the Kelston Roundhill Symphony is to be called “People and Buildings”. People can upload sounds to a website, like some kind of aural patchwork quilt. Any sounds may be used, but obviously it would be good to link them to themes of the symphony and the round hill. I will then put together a piece using these sounds, to create what will hopefully be a vigorous and rousing end to the symphony. During the composition process, people can comment and make suggestions via the message board on the site.

If you wish to take part, please do make a contribution here: http://andrewhugill.com/kelston-four/