

Interview with Robert Normandeau

The following interview with 2008 Toronto Electroacoustic Symposium Keynote Speaker Robert Normandeau was conducted by David Ogborn via email, following an in-person interview at a concert of Normandeau’s music presented in Toronto as part of New Adventures in Sound Arts’ annual Sound Travels festival in 2008. While the email format certainly allowed for a more expansive discussion of some points, this second interview nonetheless recreates the course and sense of the original, live interview.

Robert Normandeau holds an MMus (1988) and a DMus (1992) in Composition from Université de Montréal. He is a founding member of the Canadian Electroacoustic Community and of the concert society Réseaux (1991). Prize-winner of the Bourges, Fribourg, Luigi-Russolo, Musica Nova, Noroit-Léonce Petitot, Phonurgia-Nova, Stockholm and Ars Electronica (Golden Nica, 1996) international competitions, his work figures on many compact discs, including six solo discs: Lieux inouïs, Tangram, Figures, Clair de terre and the DVD Puzzles, published by empreintes DIGITALes, and Sonars, published by Rephlex (England). He was awarded two Opus Prizes from the Conseil québécois de la musique in 1999, “Composer of the Year” and “Record of the Year in Contemporary Music” (Figures on empreintes DIGITALes), as well as the Masque 2001 for Malina and the Masque 2005 for La cloche de verre, awarded for the best music composed for a theatre play by the Académie québécoise du théâtre. He has been Professor in Electroacoustic Composition at Université de Montréal since 1999.

[David Ogborn] Tonight’s concert has a retrospective character, beginning with pieces that go back some 20 years, pieces that were created towards the end of the 1980s. This was a busy time — alongside these important pieces, you were finishing graduate studies, participating in the founding of the CEC… Could you start by telling us about those times?

[Robert Normandeau] The end of the 1980s was a truly fantastic time. On a personal level, I completed my Masters in 1988 and had started a doctorate at the Université de Montréal (UdeM), which had actually created the program specifically for me — until then a master’s degree had been considered the end of studies in electroacoustic music. As far as I know, the UdeM was the first university in Canada to offer a doctorate in electroacoustic music — in acousmatic music, really, as no scores were produced during my studies.

I also had the chance to gain international recognition by submitting works to international competitions like Bourges (in 1986 and 1988), and this helped me a lot in terms of the Montréal music scene. I was born in Québec City and had moved to Montréal in 1984.

At the social level, the Canadian Electroacoustic Community (CEC) was founded in 1986 and became the “official” representation for the whole community at the “2001-14” conference in Montréal. In that same year, I became involved with ACREQ [Association pour la création et la recherche électroacoustiques du Québec]. ACREQ had been founded in 1978, in Montréal, essentially to produce concerts of its members’ music, but starting in the mid-1980s it became really active. I produced a series called “Clair de terre” at the Montréal planetarium dedicated to acousmatic music. It was a monthly series that lasted for four years, for a total of 21 concerts.

At the UdeM there was a whole bunch of composers at the master’s level, most of whom are still very active composers in the field. It was also the beginning of the recognition of electroacoustic music at a Canadian level, with people at the Canada Council for the Arts starting to consider electroacoustic composers on various juries. It was probably the first time that the officer for the contemporary music program was an electroacoustic composer (Jean Piché, from around 1987 to 1990, I think).

How would you characterize that time (the late 1980s) creatively?

I think that everything was open. At least for Montréal-based composers. There weren’t “æsthetic” barriers, like in Europe — especially in France, but also in the UK as I discovered later — probably because of the lack of roots here in the culture of electroacoustic music. This was an advantage and a disadvantage at the same time: An advantage because it allowed the Montréal composers to be quite adventurous, a disadvantage because they didn’t realize why they were so adventurous!

So after a while, they were lost. And very few of them became conscious of the fantastic potential that had been developed. One of the most important things at that time was the idea of composing works in the genre “cinema for the ear”. Montréal-based composers were very interested in that, and composed many very strong works.

What were some of the discoveries and approaches, whether to sound creation or to the “performance” of acousmatic music, that accompanied this intense whirl of organizational and creative activity?

Contrary to what happened in Europe, there was no censorship here. Rhythms, beats, multimedia, video-music, music for dance, music for installations, music for films… almost every genre became part of the musical practices of Québec composers.

Could you explain in more detail what you mean by “cinema for the ear”?

The idea of “cinema for the ear” had been in the air for quite a long time, even if its paternity was never established. I used the term in my doctoral thesis to describe a genre or practice in electroacoustic music where the meaning is as important as the sound. Indeed, unlike instrumental music, electroacoustic music can use sounds that have a meaning: the sound of a car, a train, or children who play in the courtyard.

Of course, I can transform these sounds in such a way that they become unrecognizable. But I can also use them because they mean something. And I can tell a story with my sounds. Of course, music is not literature, and sounds are not words, so it would be difficult to tell a precise story, but it is possible to generate meaning in the listeners’ imagination. The sound of a train will trigger the imagination of the listener in such a way that they are reminded of their own train — and not a train that can be viewed by everybody, like in a film. It is like reading a novel, where everybody imagines their own landscapes and characters.

This idea is also related to a surrealistic point of view. The representation of something is not the thing itself. The famous Belgian painter Magritte illustrated this with a drawing representing a pipe: “Ceci n’est pas une pipe [This is not a pipe]”. Which is true! Try to smoke it! So the sound of a train is not a train, it is the sound of a train.

There is a misunderstanding about the concept of “cinema for the ear”. Some have tried to use it to describe electroacoustic music in general. But this is a mistake — only a certain way of composing can be described like that. Most electroacoustic music is abstract music, like instrumental music. People are composing with sounds, textures, timbres, etc., that are valid in themselves but do not mean anything.

Tell us about the first piece on this evening’s concert, Rumeurs (Place de Ransbeck).

It is fascinating that, in electroacoustic music, the meaning of the sounds can be kept. Composers are able to play with these meanings, which is something very difficult to achieve with instrumental music.

I did many recordings in the neighbourhood of the studio Métamorphoses d'Orphée, at the Place de Ransbeck (in Belgium), and I tried to transform them in such a way that they were constantly crossing the border that separates meaning and abstraction. So it is not a piece that uses soundscapes in the same sense as Luc Ferrari did in the 1970s, where the goal was to make the listeners believe that they are “there”.

It is obvious in my work that the Place de Ransbeck is fake! But anyone can relate to the sounds, like when you read a novel: when you hear the sound of a car or of a bird, it becomes your own car and your own bird — you imagine it according to your own experience. This is unlike the visual experience, where the car is “that” car and only that one. The acousmatic medium is much more stimulating for the imagination.

In performing that piece, you were very active at the mixer. Can you say a few words about this practice of diffusion? How does it work and what are you trying to achieve in the space?

The general idea is to enhance the musical elements already contained in a work.

The first thing a performer would do is to “project” the space: if the composer has put some intimate sound elements in his work, the performer will put these elements on the speakers that are closest to the audience. Conversely, if the space in the work sounds like a cathedral, then the performer will use more and bigger speakers, to give the listeners the sensation of a large space.

Also, you want to perform the movements of the music. So if a sound comes from afar and then moves closer to you in the stereo version, the performer will use some distant speakers at the beginning and then progressively place the sound in speakers that are closer to the audience. And vice versa.

Finally, a performer can do a sort of remixing of some passages of a work. This is a delicate issue, since no one likes to have their piece remixed in a concert! But within some passages, it is possible to accompany the inner movements of a work. For example: if a composer has put a very dynamic passage in a work, then the performer can project that passage very dynamically by moving a group of faders very rapidly. The sound diffusion then complements the music as it exists on fixed media.

Sound diffusion is a real performance practice. There is a diffusion competition in Belgium where I have seen some performers do a great performance of some works. But I have seen the opposite as well, since it is also possible to destroy a work with a bad performance.

With the next piece on the program, Mémoires vives, it seems we are moving out of the era of making music by manipulating tape and towards the realm of computer sound transformations. What was the impact of this change?

Mémoires vives was the first piece I composed with digital means.

At the time, there was no digital audio workstation available for an individual at a reasonable price. The only thing that existed was the Sound Tools I by Digidesign, the first audio workstation, on a Mac. But it was quite expensive and worked only in stereo. My studio at the time was made of a computer (Mac Plus), a sampler (Akai S-1000, the first stereo CD-quality sampler on the market) and a stereo DAT recorder (Sony TCD-D10-Pro II).

I was still working at the electroacoustic studio of the Faculté de musique at the Université de Montréal, so the piece was mixed in a 24-track recording studio, using a Neve mixing board and an Otari 24-track analog tape recorder. It was the first piece I made using SMPTE code to synchronize the tape and the computer.

So the only digital device was really the sampler. On that first version of my Akai, I had 2 MB (that’s right: 2 MB…) of memory, which, as you can imagine, was a big constraint. But at the same time, it forced me to work with sounds of very short duration and because of that, I had to compose music made out of small units. That is something instrumental composers are familiar with, but it was something new for me, and I had to develop a new approach to composition.

I think the first version of the work was a catastrophe! Well, maybe not, but it was enough to put me in a situation where, four months after the premiere, I came back to the studio and erased the 2-inch tape on which it was recorded. The tapes were so expensive that it was not possible to keep that one and buy another!

So I started the composition again, almost from scratch. Almost. Because of the computer and the sampler, the sounds and the gestures of the work were still available, stored in memory, which was really revolutionary. Until then, only tape was able to keep the traces of the musical content of a work. With the computer everything changed and now it was possible for me to decide to erase the tape, knowing that I would quite easily be able to redo what was recorded on it. This is something I have done many times since then.

The piece is made out of various requiems composed in different eras: from Gregorian chant to the most recent requiems by Ligeti and Chion. Because the sound sources were digital (from CD), it was possible to do many different manipulations and transformations without fear of adding too much tape hiss or noise. So the digital means and the digital sources changed a lot of things in the way we interacted with the sound material.

It seems to me that there is some connection between the idea of “memory” and the decision to base the piece on various requiems. What was the attraction to the requiem sources?

The title is actually a play on words. In French, “mémoires vives” refers to the RAM (Random Access Memory) of a computer. So the title refers to memory, represented by the Requiem, and to the fact that the piece was done using digital means, RAM.

The requiem genre has always attracted me. I have a whole collection of requiems from past centuries. It’s a form that was always there. It is a celebration of death, or an homage to something that is worrying for humankind. Probably it is a way for composers to celebrate spirituality, even for contemporary composers who are not necessarily involved with any sort of religion. These pieces are always deep, and upsetting.

So this was a time when the technological tools at the disposal of electroacoustic composers were changing. At the same time as this opened up possibilities that did not really exist in the analog domain, there must have been problems and challenges — and not only of a technical nature but also of an æsthetic or social nature — that came with the new technology. What were some of the obstacles at the end of the 1980s — not necessarily only as you faced them, but as others faced them as well?

When a new technology appears, the main danger for composers is using the presets! This has always happened. Remember the first analog sequencers, the first digital delays, the first FM synthesizers, the first digital reverbs. How many works then used square-wave dance-like music, the multiple repetition of a sound source, the bell-type sound of FM synthesis or the infinite reverb? And how many of these works do we remember today? How many FM pieces are still played today and considered part of the repertoire?

One of the main obstacles facing a new technology is that we carry our former behaviour over to it. While a new technology requires new attitudes from composers, artists instead continue to react the same way as in the past. When cinema was created, the first fiction films were purely and simply filmed theatre, with no camera movement, no zooms, no travelling shots, no close-ups.

When the new digital means came into place, the models used were inspired by analog means. And today our digital sequencer still reproduces the analog mixing board, even though the new digital means are a lot more powerful than that! Composers always have to be vigilant about this.

With the next work on the program, Spleen, we are moving into a realm of more direct, specific, individual human (and even theatrical) sounds. How did you come to conceive and begin the work?

I started a cycle of works based on the use of onomatopoeia, with a short work, Bédé [i.e. B.D. = bande dessinée = comics], in 1990. I then found that the sound material had more interesting potential than had been used in that short work, so I decided to make a longer work with the same sound material. The onomatopoeias were listed and described in a book called Dictionnaire des bruits (Dictionary of Noises), a repertoire of the onomatopoeias used in comics. The new work was called Éclats de voix, which in French is a play on words: it means both “fragments of voice” and “shout”. The onomatopoeias were pronounced by a young girl.

After the completion of that work, I realized that it was the first time for me, as well as in the history of electroacoustic music, that we were able not only to keep a trace of our recorded and treated sounds, but also to keep the gestures that were used to make them. I had a sampler with CD-quality sound (Akai S-1000) controlled by MIDI software (Master Tracks Pro), and the music was recorded on an analog 16-track recorder. Because of this setup, it was possible to keep a trace of everything that was done in the studio.

And so I decided to compose a cycle based on the same timeline as Éclats de voix. I removed the sounds used in that piece and I recorded the same (or almost the same) onomatopoeias, but with a group of four teenage boys. As one can imagine, the energy was not the same! I composed a new work called Spleen, based on the same timeline, but with different sound material. Then two years later I did the same thing again with Le renard et la rose, with adult voices. And ten years later I completed the cycle, with the voices of older people, in the fourth and final piece of the cycle, Palimpseste.

One of the characteristics of this cycle is the use of pulses and rhythms. The use of rhythm is not so obvious in Éclats de voix, but in Spleen, because the boys were so much more energetic and rhythmic in the studio, I decided to push the boundaries a little bit: the sound is raw, the rhythms are more evident, more “in the face”. In Le renard et la rose, the boundaries are pushed further again, with minimal sound treatments.

Did you have a sense that by introducing such rhythms you were working in a way that was closer to a (loosely conceived) musical mainstream?

I think that rhythms were always part of the music. Actually, the pulse is the music. Before anything, before melody, before harmony, there was rhythm. For various reasons, in the era when musique concrète and electronic music were “invented”, it was necessary for people like Schaeffer and Stockhausen to move away from the traditional values of instrumental music. But it didn’t stay like this for long. If you listen to some of Henry’s work of the 1960s, or to some of the first minimalist electroacoustic works by Reich composed in the same decade, you’ll hear that rhythms or pulses are there.

Maybe it is my background as a double bass player, but it has become difficult for me to avoid the presence of rhythm in my music. I think it is vital to music. That said, I think that it is necessary to use rhythm with imagination — it isn’t necessary to have a bass drum at 120 beats per minute in every work!

Whether the use of rhythms made my music closer to a “musical mainstream”… well, I would have liked it to be the case! Then I would have gotten rich…

More seriously, I remember that when I composed Éclats de voix, the first piece in which I introduced a regular pulse, I was afraid to show the work to some of my friends and colleagues. But Francis Dhomont, my supervisor at the time, told me that it was the best and the most personal piece I had ever done! And the piece was awarded the first prize at the Noroit competition, directed by François Bayle and produced by the GRM, which is not exactly known for its acquaintance with pop music! On the other hand, I remember one of my colleagues in the UK (no name…) saying to me that he wouldn’t have allowed one of his students to do such a thing!

I don’t think that my use of rhythm is considered to be very drastic by the techno people. But at the same time, Richard D. James (alias Aphex Twin) asked to publish some of my works on his own label, Rephlex, in 2001, including two of the Onomatopoeia cycle of pieces. This was certainly not because of the fantastic quality of the timbre!

I think we are at a stage in the history of electroacoustic music where the genre has reached maturity. We do not have to issue a daily statement declaring how different and original we are from the rest of the music on the planet! Rhythm is part of life and music, and it can be used by us too!

Since you mention the present stage of electroacoustic music, perhaps you could say more about the present moment specifically? Before we get into the last two works on the program, could you share your observations on the most recent developments, challenges, and perhaps your experience with students too?

The younger generation, let’s say starting in 2000, has nothing to do with the previous generations.

If we start from the beginning, in 1948, I think that there was no real gap between the successive generations in electroacoustic music. I studied with Francis Dhomont, who knew Pierre Schaeffer personally — and I don’t suffer from that! No new generation claimed the necessity of demolishing the past and rebuilding from the ground up.

Except around 2000. The emergence of techno music and the democratization of the personal computer made this generation of composers feel that they didn’t have to relate to the classics of this music. Almost none of our students during those years, at the beginning of the new millennium, completed their degrees. They came to the university to get some tools and some practical knowledge. With that out of the way, they just left to be on their own and make a living, outside the standard paths that previous generations had followed.

Radio-Canada and CBC in Canada, as well as other national broadcasters abroad, stopped supporting experimental music. Arts councils started to struggle with budget cuts, and at the same time the underground experimental scene was really emerging. They had the possibility of making incidental music for theatre, dance, film, television and advertising. And they were pretty conscious that there wasn’t, and still isn’t, any possibility for a freelance composer to make a living out of concert music in the “classical” music scene. Even very well-known composers weren’t able to do that anymore…

So they felt that they were inventing a new sound world, that they were completely independent from the past generations, and that there was no need to know anything about the history of electroacoustic music. By doing so, some of them did actually invent a new language that is really exciting and innovative — I think about people like Aphex Twin, Matmos, Pan Sonic, Ryoji Ikeda, etc. At the same time many of them just reinvented the wheel. Glitch, minimalism, microsound, noise — all these new labels for the electronica scene have roots in the history of electroacoustic music. But this generation didn’t care about that.

Over the last two years, it seems that feeling about the past is undergoing a change. The new students have an independence in terms of the kind of music they listen to, but at the same time they stay more open to the past. Even if they won’t listen to Bayle, Parmegiani, Harrison, Gobeil or Dolden with their iPod, they agree that these composers were, at some point, important and that it is important for them to know a bit about them.

They are also more critical of the techno scene, which seems to be suffering from a bit of exhaustion. This is my own feeling after attending some shows at Elektra and Mutek in Montréal, but it is also something I have read many times over the last few years in articles on web sites dedicated to this music (not sites dedicated to electroacoustic music, which wouldn’t have credibility there…).

I think essentially, after the new generation had explored some new paths in sound arts, like glitch and microsound, they came to a point, the same point as us, where questions about language emerge, and no one has any clear answers. What is music? Is it possible to make music without melodies and rhythms? How far can we go in experimental music? How far can human perception follow these experiments? What do we have to say as artists? These are still questions that sincere artists will have to face and try to answer as honestly as possible.

Well, since you raise it, I would ask you: is it possible to make music without melodies and rhythms?

I have no problem with music without melodies and rhythms. And one can think of a lot of electroacoustic music where these values are absent. In more general terms, I would say that music without melodies is not really a problem, but music without harmony is another matter. Of course, by harmony I mean something very general: sounds have to be “harmonized”. They have to be put together in a way that they fit together, whatever that means. They could be consonant or dissonant, but composers need to be conscious of this.

I so often hear bad harmonies, in electroacoustic music as well as in instrumental music, especially instrumental music with percussion. The percussion instruments are too often taken for granted, and composers forget to consider their timbre. And then one can hear very strange timbre combinations between the percussion and the other instruments. No one pays attention to it, or they do only rarely, but timbre harmony is very important.

Melodies? I think they should be reserved for pop music… But don’t misunderstand me, I like pop music! And I like melodies. But it is not part of my musical language and I feel no necessity to integrate elements like that in my works. And I feel no necessity to hear them in the music of others, either.

Concretely, what kind of things do you do to ensure timbre harmony in composing? Is it a matter primarily of “selection”, of rejecting bad combinations? Or is it also a matter of changing sounds that don’t quite fit to make them fit?

Both. Of course, harmony should be considered here in a very general sense. It is not a functional harmony like in tonal music, for example. I would say it is more related to the tuning of the timbre. The first step is to try to find sounds that are related to each other. It could be because they are located in different registers and so are complementary, or maybe they are in the same register but with different colours so that their combination enhances the complexity of the polyphony. And if they don’t exactly fit together, then spectral analysis and timbre modification helps me make the appropriate corrections.

In recent years, I have tended to use as few sound sources as possible for a given work. My most recent work, Pluies noires, for instance, was made from two recording sessions with a baritone saxophone. This way a certain homogeneity is ensured, and there is less work later on!

The next piece (Hamlet Machine with Actors) has a deep connection to theatre. Can you talk about how you came to work on this piece specifically, but also more generally, about your experiences with electroacoustic music in the theatre?

Hamlet-Machine is a play written by Heiner Müller. It is considered to be his testament. Originally the script ran to 250 pages, but after 25 years of writing, the final play consisted of only 12 pages! On a first reading you don’t understand anything, especially if you are not European. But after a while, and after much research and many discussions, the sense comes to the surface — we spent two years preparing that show.

Generally speaking my relationship to the theatre varies from one show to the next. I have worked many times with Brigitte Haentjens (12 shows in 12 years) and our relationship is one of deep confidence in each other. She has always given me a lot of freedom in composing the music.

For Hamlet-Machine, the rehearsals took place in a former factory. It was in a very noisy, industrial neighbourhood, and the wooden floor of the room was really squeaky! The rehearsal process was divided into two time periods, the first of which took place in the Summer. Because of this, the windows of the building were left open and the noises from outside invaded the space and added to the squeaking floor. The piece was almost a choreography, so the actors were moving a lot, especially in groups (there were eight actors). I made many recordings of the environment during this first period.

Then we moved to another location for the final period of rehearsals and the show. It wasn’t a theatre but rather the Alliance Française building in Montréal, a very silent place with no industry around and a silent, wooden floor! I had already made some musical themes, with electronic sounds for the first time in quite a while, and I had the idea of adding in the sound environment of the first rehearsal space. And then everything made sense, and everybody agreed that the industrial flavour had been missing from the electronic sounds.

For the concert version, as the title suggests — Hamlet-Machine with Actors — I have added another layer of sounds, the ones produced by the actors during rehearsals and during the show. I’m actually considering making a new version that would be entitled Hamlet-Machine without Actors. Maybe this Summer…

Did you find that working in theatre changed the way you think about electroacoustic music in general? And do you have any advice for electroacoustic composers aspiring or beginning to work with theatre?

Acousmatic music is the art of the solitary, like that of the painter or the writer. Once the work is done, it is done. There will be no performance, no reading, no good or bad representation. And the artist is always alone and makes all the decisions. In theatre, it is a completely different approach. The music is part of something else — a multimedia show in the best case, incidental music in the worst!

The way people work in theatre is very different than in music. In music, performers are paid by the hour. They learn (or not…) their part, they arrive at the scheduled hour and they leave on time. Most of the time, there is no interest in the project and they are not there to participate. Rehearsals are kept to a minimum. In theatre, it is completely different. Actors are paid for a minimum of 150 hours of rehearsal (in Québec). There is no such thing as a score, so every production can be different from the previous one. This is different from music, where we know that one performer plays a little bit differently than another, but essentially they play the same music. In theatre, a production of a Shakespeare play can be really different from another production of the same play. So the rehearsal period is quite intense and fascinating for a composer.

The ability of theatre people to put in question every aspect of their work is incredible. For an acousmatic composer, who shows their work only when it is complete and who has very little interaction with other artists while composing, it is quite a different process. One has to show everybody the sketches of the music — I was quite frightened the first time, and still am. And one has to adapt to the show and to the evolution of the show.

When I first started to work in theatre, in 1993, I went to the rehearsals at the very beginning and, as soon as possible, came back to my studio to start composing. At that time everything was done on tape, so every little change was a lot of work. I think I composed the music for that first show four times over, because everybody changes their mind so many times in a theatre production!

Now I’m more clever about it. I come to rehearsals as often as possible, but I only take notes. I produce sound material, but I don’t produce the music itself until I’ve seen the first run-through. And I film it, so that I can put it into my computer and test the music against it, as for a film. It’s a lot easier this way. I can make very precise proposals to the stage director because it is easy to see where the music fits and how it sounds with the voices.

Stage directors usually have no imagination concerning the volume of the music. If you listen to the music in the studio at quite a loud level, this is the first comment you’ll receive: it is too loud!!! So if you put the proper level in the filmed version, there will be no problem and you can adjust the level in the theatre, in the real situation.

Also, music for the theatre is not concert music. The music shouldn’t be too dense in theatre, otherwise it will overwhelm the play. Most of the time, especially if the music is intended to be played over the voices, drone-type music is enough. Apart from that, it really depends on the stage director. Some of them like your music, some prefer the music they imagine! Including pastiches! Personally, I have always refused to do pastiche music, and I have never had to do any.

It should be very clear between you and the stage director, from the beginning, what your role is. Most of the time, I have refused to do the sound effects for a production. This has to be clarified, because we are often confused with sound engineers or Foley artists, since we also work with a computer. It depends on the size of the production, of course. Smaller productions don’t have enough money to pay both a composer and a sound artist. But bigger ones should. Unless you want both jobs!

My relationship with theatre has forced me to imagine kinds of music and musical elements that I would probably never have conceived on my own. You have input from other people, from a different art form, and that puts you in a situation where you have to do some research and define the role of the music and its place in the show. So you compose differently for theatre.

I had the chance to work with stage directors who were open-minded about electroacoustic music, and because of that, I have always started a theatre project the same way I would a work of concert music. I mean that in both cases I take it just as seriously. Often the theatre music is more transparent, but it only takes a short while before I recombine the elements in such a way that they become a concert piece. Of my 12 collaborations with Brigitte Haentjens, 9 became concert pieces.

Could you give any specific examples of musical elements imagined only, or primarily, because of the experience of working in the theatre?

Sound sources. For whatever reason, sound sources are a big concern in theatre plays. Well, for me. It’s a mystery. Brigitte Haentjens asked me to compose the music for a theatre adaptation of a novel, Malina, by Ingeborg Bachmann, the famous Austrian writer. Brigitte was making the theatre adaptation herself, which put her in such a state of mind that it was almost impossible to deal with her… for a while. And I had this very strange idea to place the sound of the shakuhachi, the Japanese sacred instrument, at the centre of the play. This was only our second collaboration.

When she came to my studio, I presented her with something like 30 different sound themes… She replied, after listening to them, that they were all fantastic! But she couldn’t say anything about how to use them in the show. Then I came to the rehearsal room, filmed everything, and made a complex edit of the whole structure of the show — in one hour and fifteen minutes of theatre, there was music from the beginning to the very end. The music was awarded the prize for best music for a theatre production by the Académie québécoise du théâtre.

Years later, we were working on the theatre adaptation of The Bell Jar, by Sylvia Plath, and I had the idea of using the sound of a harp. The music was a very simple mix of harp sounds and rubbed glass sounds. I never made a concert piece of that music, but it was the play that was presented most often, and I was awarded the prize for best music for a theatre production (by the Académie québécoise du théâtre) a second time. I have to say that this production was the best I have ever experienced. It was as if everything was perfect: everybody agreed on everything and everybody was working in the same direction.

In your keynote lecture at the 2008 Toronto Electroacoustic Symposium you spoke about the next (and final) work on the program, StrinGdberg, as an example of “timbre spatialization”. What does this mean and how does it come about in this piece?

In the instrumental music of the 1960s, composers explored spatialization, creating works that assigned performers to different locations in the concert hall (Stockhausen’s Gruppen or Carré). However, these works were limited by the timbre of the instruments: the violin on the left side will always sound like a violin on the left. The sound and the projection source are linked together. What is specific to the acousmatic medium, however, is its virtuality: the sound and the projection source are not linked.

A speaker can project any kind of timbre. Today, with the appropriate software, all these sounds can be located at any point between any group of speakers. What is unique in electroacoustic music is the possibility to fragment sound spectra amongst a network of speakers. When a violin is played, the entire spectrum of the instrument sounds, whereas with multichannel electroacoustic music, timbre can be distributed over all virtual points available in the defined space.

This is what I call timbre spatialization: the full spectrum of a sound is recombined only virtually in the space of the concert hall. Each point represents only a part of the ensemble. Space is not a conception added at the end of the composition process — as frequently happens with today’s multitrack software — but a real, composed spatialization. This is a musical parameter exclusive to acousmatic music.
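[Editor’s note: as a rough illustration only — not Normandeau’s own tools or method — the core idea of timbre spatialization can be sketched in a few lines of Python. A mono signal’s spectrum is split into contiguous frequency bands, one per loudspeaker channel, so that the full timbre is recombined only acoustically in the hall. The function name and the equal-width band scheme are hypothetical simplifications.]

```python
import numpy as np

def timbre_spatialize(signal, n_channels):
    """Split a mono signal's spectrum into contiguous bands,
    one band per loudspeaker channel. Each channel carries
    only a fragment of the timbre; the complete spectrum is
    heard only where the channels sum (i.e., in the room)."""
    spectrum = np.fft.rfft(signal)
    # Equal-width band edges across all FFT bins (a simplification;
    # perceptual or spectral-envelope bands would be more musical).
    edges = np.linspace(0, len(spectrum), n_channels + 1, dtype=int)
    channels = []
    for i in range(n_channels):
        band = np.zeros_like(spectrum)
        band[edges[i]:edges[i + 1]] = spectrum[edges[i]:edges[i + 1]]
        channels.append(np.fft.irfft(band, n=len(signal)))
    return np.stack(channels)  # shape: (n_channels, n_samples)

# Example: a tone with two partials, distributed over 8 channels.
t = np.linspace(0, 1, 44100, endpoint=False)
tone = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 3520 * t)
out = timbre_spatialize(tone, 8)
```

Because the bands partition the spectrum exactly, summing all channels reconstructs the original signal — the “virtual recombination” happens in the air of the hall rather than inside any single loudspeaker.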

Coming full circle, the earlier pieces on tonight’s program were diffused, while the later pieces (including StrinGdberg) were composed spatializations, existing in a more or less fixed form. What are your thoughts on what is gained and what is lost when spatializations are composed in advance? Is “classical” diffusion to many stereo pairs an important approach for new work too, or is it just for the performance of works from an earlier time (when multi-channel monitoring was less available)?

It depends on the project. The final mixing step of a work is more or less the worst experience for me. This is when you declare that the work is finished, and you make the final decisions. When you ask yourself, why is this mix better than that one? And when you aren’t able to answer that question…

I think my decision to make open, multichannel works comes precisely from this. I need to be able to change my mind at the very last moment, in the concert hall, surrounded by the real acoustics of the hall and the interaction with the audience. And at the same time, I know that some works need a perfect balance in the studio, so that when you go to the concert hall, there are very few details to refine. Almost everything is there, just as for a film — filmmakers don’t come to every cinema to make adjustments! So I’m torn between these two situations…

What are you working on now and when/where can we hear it?

I have two commissions at the moment. The first is from Bangor University in Wales. I was there last Fall to work in their studio, and the premiere should take place sometime next Fall. It isn’t completely decided yet. The second is an “erotic” work for the Musica Viva festival in Lisboa (Portugal) next September. It is a crazy idea we had last Summer with the organisers of the festival (Paula and Miguel Azguime). We realized that there aren’t many erotic electroacoustic pieces, so they commissioned me and some other composers to create pieces for an evening concert on that theme. It will be hot!

I also have to compose music for a play next year, Huis clos by Jean-Paul Sartre, with stage direction by Lorraine Pintal, which will be presented in March 2010 at the Théâtre du Nouveau Monde in Montréal.

At the university level, we just got a major research grant from the SSHRC to explore immersive multimedia environments in digital music and video. It is a three-year project that will take place in our 36-speaker dome. Quite busy!…
