In previous posts I have discussed the construction of a “diplacusis piano”, a digital instrument that accurately reproduces what I actually hear. Diplacusis is a phenomenon in which a single tone is heard as two different pitches, one in each ear. In my case, the left ear is mostly in tune, whereas the right ear is mostly out of tune, by fairly random amounts.
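If you would like to get a rough feel for the effect, here is a minimal sketch that synthesises a stereo tone with a slightly different pitch in each channel. To be clear, this is not the actual instrument, whose detunings were measured note by note; the quarter-tone detuning below is an arbitrary, illustrative value.

```python
import numpy as np

def diplacusis_tone(freq_left, freq_right, duration=1.0, sr=44100):
    """Return a stereo buffer with a different pitch in each channel."""
    t = np.arange(int(duration * sr)) / sr
    left = np.sin(2 * np.pi * freq_left * t)
    right = np.sin(2 * np.pi * freq_right * t)
    return np.stack([left, right], axis=1)  # shape: (samples, 2)

# A440 in the "good" left ear; the right ear hears it roughly a
# quarter-tone sharp (an invented amount, purely for illustration).
tone = diplacusis_tone(440.0, 440.0 * 2 ** (0.5 / 12))
```

Written to a stereo file and played over headphones, the two channels beat against each other in a way that (very crudely) hints at what one mistuned ear does to a unison.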
The problem with composing for the resulting instrument is twofold: firstly, because of my hearing loss I cannot hear the (quiet) sounds it produces very well; secondly, what I do hear I hear with diplacusis, so diplacusis on diplacusis!
How then to compose for this instrument, given that I have only a poor idea of what a person with normal hearing would hear? My solution is to develop a visual language based on the spectrograms of each note. I have been steadily learning about the character of each spectrogram as I go.
Here are some stills of most of the keyboard. The image quality has been reduced for speed of upload, but they are clear enough for you to see how they vary. It’s really intriguing. My idea now is to start connecting the various overtones to begin to create some kind of “harmony”. You’ll see that I have put gridlines on each image to help with this.
These are static images (generated with Pierre Couprie’s wonderful EAnalysis software). In the live performance, I will work with spectrograms that continuously evolve over time. This, I hope, will act both as a kind of score and, for listeners who have even less hearing than I do, as a visual version of the music that can be enjoyed without necessarily hearing everything.
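The spectrograms here were made in EAnalysis, but the underlying computation is quite standard. For the curious, here is a rough sketch of how such an image is derived, using a synthetic “note” built from a few invented partials and a decay envelope rather than a real piano sample:

```python
import numpy as np
from scipy.signal import spectrogram

sr = 44100
t = np.arange(sr * 2) / sr  # two seconds of audio

# A toy piano-like note on A220: a fundamental plus a few overtones
# (the partial amplitudes are made up for illustration).
note = sum(a * np.sin(2 * np.pi * n * 220 * t)
           for n, a in [(1, 1.0), (2, 0.5), (3, 0.3), (4, 0.15)])
note *= np.exp(-1.5 * t)  # overall decay envelope

freqs, times, S = spectrogram(note, fs=sr, nperseg=4096)
# S[i, j] is the energy at frequency freqs[i] in time window times[j];
# plotting 10*log10(S) against freqs and times gives the familiar image,
# with each overtone appearing as a horizontal stripe that fades over time.
```

In a real analysis the horizontal stripes (the overtones) are exactly what the gridlines in the images above are there to help connect from note to note.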
So, here is a selection of the keyboard, just to give you an idea:
And here are just two notes for comparison at higher quality. You can see how different they are in terms of both structure and behaviour over time. This gives me a starting point for composition.