Pierre Couprie holds a Ph.D. in musicology and is an associate professor in digital pedagogy and computer music, and a researcher at Paris-Sorbonne University (Institute for Research in Musicology). His research fields are the musical analysis and representation of electroacoustic music and the development of tools for research and musical performance. He is also a member of the steering committee of the French Society for Musical Analysis (SFAM). He has collaborated with the MTIRC at De Montfort University on musical analysis projects since 2004. In 2015, he won the Qwartz Max Mathews Prize for technological innovation. He is an improviser in The Phonogénistes and the National Electroacoustic Orchestra (ONE). Personal website: http://www.pierrecouprie.fr
Why do you make music?
It is difficult to answer that question. The main reason is probably that I love creation in music. When I started to study music (at 10 or 11 years old), I also started to compose; it was natural for me. After several years, I discovered that playing and composing music are not the same activity for many musicians. More recently, I also discovered that the practice of creation is at the heart of all my activities. For me, creating music in an electroacoustic studio, composing a score, developing a piece of software, or inventing any kind of structure all belong to creation. There are many links between these activities. So I make music because I love it, but also because I love to develop and create new ideas.
What music do you make?
At the moment, I am focusing my musical creation on “improvised electroacoustic music”. Working alone in a studio or at my desk to write a score is now difficult for me, because I love to play on stage. Improvisation is therefore the best way to compose and perform at the same time. I like what Michel Waisvisz said in 1990: “the term of ‘electronic music composer’ implies being a performer as well; you cannot sit behind a desk and write electronic timbral music without hearing it”. Moreover, I think the practice of improvisation better matches the gradual shift from composer (of written music) to performer that has been occurring in contemporary music since the birth of free jazz in the 1960s.
How do you make music?
I make music through the development of my digital instrument. This instrument is a kind of chimera mixing acoustic, analogue, and digital technologies, and it mutates over time. Recently, Thor Magnusson proposed the term “musical organics” to describe the study of new digital instruments. This idea of an instrument as a living organism is very interesting. My first instrument for improvisation was an oboe (the instrument I learned as a student), a microphone, an effects box (Lexicon MPX100), and pedals to control it. Later, I replaced the oboe with a tenor recorder and/or a didgeridoo. It was an augmented instrument with the sound quality (and possibilities) of an acoustic instrument and great possibilities of electronic transformation. After this first step, I exchanged the effects box for a laptop and a sound card. I used Max and Ableton Live to develop electronic transformations, especially the possibility of recording or using reinjection techniques during the performance. The third step was to exchange the acoustic instrument for sensors. I tried many systems and finally settled on a very simple two-axis sensor on my hand, to control intensity plus one effect parameter, and an iPad Pro to control the other parameters with one- or two-axis sliders (through Mira connected to Max). All sounds come from recordings I have made or extracts from audio CDs (like a DJ). At the moment, my instrument is evolving into a hybrid synthesizer through the integration of a Moog Mother-32 and other semi-modular devices for sound generation. One of the most important things is to have an interface to control all these devices with varied types of gesture. Even though I use a semi-modular synthesizer, I do not want to create only (pseudo-)repetitive structures of pitches or rhythms, as in many performances today, but also to create varied spectromorphologies. On the computer, I created a modular Max patch to play with various generators (sample player, granular synthesizer, etc.) and transformations (complex delay, amplitude modulation, filters, etc.). During the performance, I control this patch with the sensor and the iPad.
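The architecture described above — generator and transformation modules whose parameters are driven by gesture data — could be sketched in pseudo-code. This is only an illustrative sketch, not Couprie's actual Max patch; the module and parameter names are invented for the example:

```python
# Illustrative sketch (hypothetical, not the actual Max patch): a modular
# set of generators and transformations whose parameters are driven by
# incoming gesture data, e.g. from a two-axis hand sensor or iPad sliders.

class Module:
    def __init__(self, name, params):
        self.name = name
        self.params = dict(params)  # parameter name -> normalized value 0..1

    def set_param(self, key, value):
        # Clamp raw sensor data to the normalized parameter range.
        self.params[key] = max(0.0, min(1.0, value))

class Patch:
    """A modular patch: generators feeding transformations in parallel."""
    def __init__(self):
        self.generators = [Module("granular", {"density": 0.5, "grain_size": 0.3})]
        self.transforms = [Module("delay", {"feedback": 0.4}),
                           Module("filter", {"cutoff": 0.7})]

    def route_gesture(self, module_name, param, value):
        # Route one gesture dimension to one module parameter.
        for m in self.generators + self.transforms:
            if m.name == module_name:
                m.set_param(param, value)

patch = Patch()
patch.route_gesture("delay", "feedback", 1.4)  # sensor overshoot is clamped
```

The clamping step matters in practice: raw sensor data can exceed the expected range, and each module defends its own parameter space.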
Is any of your sound-based work not ‘music’ as such and, if not, what is it?
I do not think in those terms: you can apply any label you want, such as “music” or “sound art” or anything else. What is essential is “art”. I used to create art when I composed music; now I create art when I improvise. Of course, all of these creations belong to a form of “sound-based music”, because I always focus my work, and the evolution of my instrument, on sound (as opposed to “pitch music”) and its timbre.
How do you describe yourself (e.g. are you a performer, a composer, a technologist, an engineer, some combination of these or, indeed, something else) and why?
Maybe the answer lies in my answer to the first question. I am all of these. For me, it is very exciting to practise music, develop my electronic instrument, and create software to analyse my performances, and at the same time to teach and present this music.
What is the cultural context for your work - how are you influenced by music from other cultures or the other arts?
I have a very classical background. I used to play oboe in a university orchestra, study the history of music, and so on. When I was a student at the Bordeaux Conservatory, we had the opportunity to attend all the final opera rehearsals and all concerts for free. So from 16 to 23 years old, I discovered a large part of Western music (early and contemporary music). Afterwards, I developed a strong interest in jazz (especially free jazz) and non-European music. Jazz was a great opening to other musicians and to what they produce in terms of musical structures. They found different ways to create a musical form or to integrate old traditions (classical music generally uses only quotation and rarely merges different musical discourses, except for composers like Bartók, Mahler, or Berio). In the same way, Indian musicians propose a very different way of organizing time and create very large musical forms.
What skills and attributes do you consider to be essential for you as a musician?
My classical musical studies have been essential for me. Being a musician is a long, hard career, and being an electronic musician is too. Classical skills are very useful for the electronic musician: you can use many of them. For example, with counterpoint or orchestration you learn how to arrange pitches and melodies so that they sound good, and you can transpose this to complex sounds and their spectra; mixing practice is close to orchestration. How to elaborate a complex form is also a difficult problem for young composers, and composers of the past solved many of these problems; you just have to transpose their ideas to your material and your musical language. Of course, classical studies give you a good grounding in all Western music, which you can access by studying scores. So I think it is very important to analyse a lot of music, to learn from past composers (and, of course, from contemporary composers). In the field of improvisation, it is the same thing. There are many brilliant improvisers today; they have brought improvised music to the same level of complexity as written music (linked to the previous idea of the shift from composer to performer) but with more freedom. For me, it is very important to listen to them and to learn from them. Finally, many artists have been inspired by scientific fields. I think that part of an artist’s genius comes from the ability to transfer structures, ideas, forms, etc. from these fields into their work.
What forms of notation should a digital musician know and why?
I think digital musicians need to be open to all kinds of notation (and transcription): Western notation, but also early music notation and graphical notations. All of them can be interesting, and of course it is about being ready to play any kind of music. But also, because musical notation is not only an external technique that allows you to play music, notations also contain the artist’s thoughts. Each artist uses a personal notation (score, graphics, grid, etc.), and studying it is also a way of understanding their music. We created the TENOR conference (http://tenor-conference.org) because there is a kind of “Renaissance” in contemporary musical notation. Artists are imagining new ways to notate music through various mediums (screens, sensor systems), supports (paper, wood, etc.), and techniques (from dynamic notation to situational scores), and with the use of computers. They are the heirs of Cage, Tudor, Cardew, Crumb, etc., but with digital techniques.
How do you analyse digital music?
Digital music analysis is complex because you need to study complex sounds and their organization in an artistic work. The first step is to train your ear to analyse the components of these sounds; this is not natural, and your culture does not prepare you for it. The second step is to understand how composers and musicians use these sounds and organize them to create music. You need to study sound studio techniques and performers’ musical gestures. A good way is to try to imitate them in order to understand how they were made (by creating simulations). The third step, maybe the most important, is to find your own way of analysing this kind of music, because analysis is not a purely theoretical science but also a practical activity whose main goal is to enhance your skills. This is what I am trying to achieve with my research on the analysis and representation of electroacoustic music. Because works and performances are only available through their master recordings, you have to deconstruct the artists’ musical processes and imagine how they made them. With electroacoustic works or performances, this is very complex because you are not dealing only with pitches, durations, and instrumental timbres, but with all possible sounds and any kind of transformation. You can also transpose methods from other research fields (acoustics, physics, mathematics, etc.), or imagine more aesthetic approaches (like François Bayle’s sound-images). Creating graphical or sound representations is a good analytical starting point for electroacoustic music, and I am developing an analytical method based on these representations. Transposing what you hear into shapes forces you to create relations between graphical parameters and the sound parameters of musical structures. By doing this, you learn how to listen to and analyse this music. You can also use musical theories from composers (like Smalley’s spectromorphology, Roy’s functions, or the typomorphology of Schaeffer and Thoresen) to analyse and segment sound objects during the realization of a graphical representation.
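The idea of relating sound parameters to graphical parameters can be illustrated with a minimal sketch. This is a hypothetical example, not Couprie's actual method: the particular mappings (amplitude to height, spectral centroid to brightness, duration to width) and all numeric scales are invented for illustration:

```python
# Illustrative sketch (hypothetical mapping, not the author's method):
# transposing three sound parameters of a segmented sound object into
# three graphical parameters of a shape on a timeline representation.
import math

def map_sound_to_shape(amplitude, centroid_hz, duration_s):
    """amplitude (0..1)      -> vertical size of the shape (pixels)
       centroid_hz           -> brightness of its colour (0..255, log scale)
       duration_s            -> horizontal extent (50 px per second)"""
    height = round(amplitude * 100)
    # Logarithmic mapping of spectral centroid over the audible range.
    lo, hi = math.log(20), math.log(20000)
    brightness = round(255 * (math.log(centroid_hz) - lo) / (hi - lo))
    width = round(duration_s * 50)
    return {"height": height, "brightness": brightness, "width": width}

shape = map_sound_to_shape(amplitude=0.8, centroid_hz=2000.0, duration_s=3.0)
```

The point of such a mapping is the analytical discipline it imposes: deciding which graphical parameter stands for which sound parameter forces the analyst to listen for, and commit to, specific features of each sound object.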
How do you approach controllers in digital music-making?
For me, controllers are like the mouse for a musician. With a mouse, you can control only two parameters (maybe more, but you need to create a complex mapping), and this is not enough to play music. You need to control more parameters, just as a classical musician controls all the sound parameters of an acoustic instrument. Controllers are extensions of the computer that give real-time access to many sound parameters. This is essential if you want to use and create rich musical gestures. With controllers, you also need to develop a mapping (which gesture is linked to which sound parameter or parameters), and this is not a simple one-to-one link: many parts of the mapping are complex. I develop this part with Max (alone or inside Live with Max for Live). For many researchers, mapping is a complex system of links with several layers (from meta-links to specialized links). I work with mapping in a different way: I create it through a modular system of linking. This means two important things. First, the mapping is realized not through layers of links but through channels of links running in parallel; for me, this type of mapping is easier to manage during the performance. Second, the mapping can be changed during the performance; it evolves, including into configurations that were not considered at first. There is another important thing that controllers allow. With computer music, if you want to use complex gestures (like human gestures), you need to add random or variable parameters. With a controller, this is not necessary, because the irregular (lively) evolution of sound parameters is given by the capture of your gesture.
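The two properties described above — parallel channels rather than layers, and reconfigurability during the performance — can be sketched as follows. This is a hypothetical illustration, not Couprie's Max implementation; the gesture and parameter names are invented:

```python
# Illustrative sketch (hypothetical, not the actual Max implementation):
# a mapping built from parallel channels of links, each routing one gesture
# dimension to one or more sound parameters, and reconfigurable on the fly.

class Channel:
    """One link in the mapping: gesture dimension -> sound parameter(s)."""
    def __init__(self, source, targets, scale=1.0):
        self.source = source      # e.g. "hand_x" from a 2-axis sensor
        self.targets = targets    # e.g. ["delay.feedback"]
        self.scale = scale

    def apply(self, gesture, params):
        value = gesture.get(self.source, 0.0) * self.scale
        for t in self.targets:
            params[t] = value

class Mapping:
    def __init__(self):
        self.channels = []        # channels run in parallel, not in layers

    def update(self, gesture):
        params = {}
        for ch in self.channels:
            ch.apply(gesture, params)
        return params

m = Mapping()
m.channels.append(Channel("hand_x", ["master.intensity"]))
m.channels.append(Channel("hand_y", ["delay.feedback"], scale=0.5))
out = m.update({"hand_x": 0.8, "hand_y": 0.6})
# Reconfigured during the performance: hand_y rerouted to a filter cutoff.
m.channels[1] = Channel("hand_y", ["filter.cutoff"], scale=1.0)
out2 = m.update({"hand_x": 0.8, "hand_y": 0.6})
```

Because each channel is an independent object in a flat list, a channel can be replaced or retargeted mid-performance without touching the others — which is the practical advantage claimed for parallel channels over layered links.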
Do you have any other useful or relevant things to say about being a musician in the digital age?