In the last two posts (here and here) I have described the process of building a digital “piano” that reproduces my diplacusis. Having constructed the instrument with the help of Professor Craig Vear, I have begun to muse on the creative possibilities that this has revealed.
It is immediately clear that this is not really a piano at all, despite having piano sounds as its raw material. If I play a common chord, or attempt to play some classical piano music, all one hears is an out-of-tune piano. It’s a bit like a honky-tonk but worse – some kind of abandoned instrument. Interestingly, the brain filters out the “rubbish” from the signal, and the out-of-tuneness quickly recedes until it sounds like a normal piano again.
So, to avoid sounding like I’m just trying to write piano music for a bad instrument, I must find a new way of thinking about composing for this diplacusis piano. This echoes my experience with diplacusis and hearing loss generally. I need to find new ways of listening if I am to appreciate and enjoy music now. My aim is to create something beautiful, despite the supposed limitations imposed by my condition.
Craig was keen to describe how each note, each adjusted sample, traced a different sonic journey lasting 10 seconds. What he could hear was a fascinating mixture of rhythmical beats, emerging harmonics, clusters of partials and percussive noise, all evolving over time. Every single note has its own character, which he was able to describe to me in some detail, waving his arms expressively as he did so. So this is not a piano, but rather an 88-note composition with a total duration of just under 15 minutes (88 notes × 10 seconds comes to 14 minutes 40 seconds)!
The problem is, of course, that I cannot hear them! To me, each sample lasts about 3 seconds, and I do not trust what I hear even within that time frame. So, how can I possibly write music for this instrument if I cannot hear it properly?
Once again, new digital technologies come to my aid. Firstly, there are my wonderful GNResound Linx Quattro hearing aids. During the building of the instrument, I removed the hearing aids so as to capture my diplacusis as accurately as possible. Now, by reinserting them, I can gain a much better impression of the instrument’s sounds. I can hear them for longer and understand the complex shifting interactions between the higher partials. However, the hearing aids alone are insufficient, especially in the lower registers. Even with my unvented mould, which prevents sound escaping from my right ear, the low-end response is not enough.
As we worked on the instrument, we used a spectrogram to understand what was happening in each sample. This was fascinating, because it conveyed rich information about each note’s “story”, showing the strange rhythmic pulsations that arise from beats, the emergence and withdrawal of various overtones, the intensity of different registers, and so on.
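For readers curious how such a note “story” can be read from a recording, here is a minimal sketch of computing a spectrogram with Python’s SciPy. This is not the actual tool we used, and the sample here is synthetic: two partials detuned by 4 Hz, a hypothetical stand-in for one of the adjusted piano samples, chosen because the detuning produces exactly the kind of rhythmic beating described above.

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic 10-second "note": two partials at 440 Hz and 444 Hz.
# Their 4 Hz difference creates an audible beat, a stand-in for the
# beating that arises between the two detuned samples in each note.
fs = 22050                      # sample rate in Hz
t = np.arange(0, 10, 1 / fs)    # 10 seconds of time points
note = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 444 * t)
note *= np.exp(-t / 4)          # simple piano-like decay envelope

# Short-time Fourier analysis: frequency bins, time frames, and the
# power in each (frequency, time) cell of the spectrogram.
freqs, times, power = spectrogram(note, fs=fs, nperseg=2048, noverlap=1024)

# Summing power in the band around the partials gives the note's
# amplitude "story" over time: the 4 Hz beat appears as a pulsation.
band = power[(freqs > 400) & (freqs < 480), :].sum(axis=0)
```

Plotting `power` on a log scale (for example with `matplotlib.pyplot.pcolormesh`) gives the familiar spectrogram picture; `band` traces the rise and fall within a single frequency region over the life of the note.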
So, my way of composing is becoming clear: I must familiarise myself with the story that each of my 88 mini compositions tells. Then I can string these together in ways that create a convincing musical narrative. There may be many such narratives – that remains to be seen – but each will have its own unique and engaging storyline that listeners can perceive.
To help them in this, I plan to add a video component to the performance, showing the spectrograms as they change, any musical descriptions (in text) or notations that are relevant, and perhaps a more imaginative interpretative layer. Multiple windows on a single screen, conveying the story of the piece.
This will help people in the Aural Diversity concert (where this will be premiered) whose hearing diverges from my own. They will be able to experience the composition in several ways at once. My performance will not much resemble a traditional piano recital: the keys on the keyboard are merely triggers that set sonic navigations in motion. But I hope it will prove engaging, as I convey the emotional nature of the discoveries described in these posts and combine that with an informative and stimulating visual display.