Social top

Interview with Bill Buxton, Canadian Composer and Researcher

Privileging the human aspect of electronic and computer music systems

Canadian composer, producer and broadcaster Norma Beecroft has compiled 23 transcribed interviews with prominent electroacoustic composers from the 1960s and 1970s in an eBook distributed by the Canadian Music Centre (CMC) in 2015. The book explores a variety of themes, including the evolution of technologies from magnetic tape to the computer, the establishment of prominent electronic music studios in Europe and North America, and the unique perspectives and motivations of towering figures in 20th-century music. This unique collection features discussions with composers such as Pierre Schaeffer, Iannis Xenakis, John Cage, Vladimir Ussachevsky, Karlheinz Stockhausen and Hugh Davies. Beecroft also includes several interviews with Canadian colleagues: Gustav Ciamaga, Bill Buxton, James Montgomery, Barry Truax and Bengt Hambraeus. Beecroft’s well-informed and thoughtful approach to each interview allows the featured composers to elaborate on their own work as well as on the status of electronic music at the time. Conversations with Post World War II Pioneers of Electronic Music is a must-have for music enthusiasts and electronic music practitioners. 1[1. Visit the CMC website for a preview and excerpts of the book, and information on getting your own copy.]

In her interviews with Bill Buxton and Gustav Ciamaga, made in the late 1970s, Beecroft discusses Hugh Le Caine’s visionary electronic music instrument designs, and she has kindly allowed us to republish these interviews in this issue of eContact! celebrating Le Caine. Minor changes have been made in the text transcripts in view of their online publication. These and other interviews from the eBook are also available as a series of podcasts on the CMC website.

In 2005, William Arthur Stewart “Bill” Buxton (b. 1949 in Edmonton, Alberta) was appointed Principal Researcher at Microsoft Research, the culmination of a very distinguished career as one of the pioneers in the field of human-computer interaction. Bill received his bachelor’s degree in music from Queen’s University in 1973, and his master’s in Computer Science from the University of Toronto five years later. In between his formal Canadian studies, he spent two years working and teaching at the Institute for Sonology in Utrecht (Holland). This grounding in music and science first led him into the designing of digital musical instruments, and his subsequent life, gleaned from his many writings, teaching and research, has proven him to be a relentless advocate for innovation, design and — especially — the appropriate consideration of human values, capacity and culture in the conception, implementation and use of new products and technologies. Bill Buxton has been the recipient of four honorary degrees, is a regular columnist, was chief scientist at Alias Wavefront and SGI Inc., and in 2008 received the SIGCHI Lifetime Achievement Award for his many contributions to the human-computer interaction field.

Queen’s University and Hugh Le Caine’s Instruments

[Norma Beecroft] Let’s go back to Queen’s University. That was when you got involved with computers in the first place. Was that an adjunct of music?

[Bill Buxton] The contact with both computers and electronics and so on was directly from music, from my association first with David Keane at Queen’s [University in Kingston], and secondly with István Anhalt. With David Keane we started working, developing an electronic music studio at Queen’s; there had not been one there previously. And this began in about 1971. In ’72 and ’73, I started going up to Ottawa. I went up two or three times to do a study for a film soundtrack using the computer music system developed there by Ken Pulfer. I had been encouraged to do that by István Anhalt.

When I left Queen’s, I went to Utrecht; it slanted my viewpoint in what I consider, in retrospect, a very lucky direction. That experience was a very good influence and saved me from wandering around in what I think are not the most fruitful directions. I was very lucky in the situation of helping to develop the studio at Queen’s because you got insight from three different levels. One, just as a user of the studio, and secondly, as somebody who had to design and maintain the equipment, and thirdly, from the point of view of being someone who had to teach other people how to use the system. And there is no faster way to learn about the inefficiencies of design and so on, of a particular piece of apparatus or equipment, than to try and explain to somebody else how to use it. The ideal system is one which takes no explanation, it’s self-explanatory. And one would hope that technological tools for music would evolve to the point where that were the case.

Well, the Queen’s studio was a computerized studio?

No. The Queen’s studio was just an analogue studio. I don’t distinguish in a sense, I just think they’re both just different levels of electronic technology to serve the same ends, and they are both for producing electronically generated sounds. Now, the way I look at it in this context is that an analogue synthesizer primarily just gives you a different way of describing or looking at sounds than, say, is available using a computer music system. In many senses, the repertoire of sounds that are available is quite equivalent. In fact, many of the computer music systems simply emulate analogue studios using computer programmes, and you work with them much in the same manner. I don’t think this is perhaps the most ideal way of approaching the problem. Working in the analogue studio you become frustrated with the means of describing musical phenomena. Now, moving from that, having [had] experience with the computer music system developed by Pulfer, you realized that this type of technology had potential in overcoming this problem, if used correctly. And I think that Pulfer’s system went a long way [towards] making it much easier for a musician to communicate his ideas to technology and getting musical results out. Everything I’ve done since then has largely been to try and take the whole thing a step further so that more and more when you’re using, say, a computer or an electronic device for making music you can address yourself to the sound and so on and the structures in a manner which reflects how you perceive or hear the sounds, rather than just some arbitrary electronic device which generates them.

The situation you are describing in Ottawa, which was developed by Ken Pulfer, could you describe just exactly what that system was?

The main thing is in music you have the question of how you describe the actual sounds, your palette of timbres. Now, with Pulfer’s system, you had a clear-cut way of orchestrating your sounds and waveforms. You did not have to turn knobs or dials. You could graphically, using a TV-type screen and a device enabling you to sketch freehand, evolve the sound characteristics of a composition interactively and hear the results right away. In a traditional electronic music studio, and in many computer music systems, the situation is such that you have to spend a lot of time twiddling knobs, whether they be real knobs or computer programmes that look like knobs, to get the results you want. And then, you have to repeat this for every sound. If you have a sequence of very contrasting sounds, it is very difficult, especially in an analogue studio, to get the rapid change because there are so many knobs to be twiddled. In a system like Pulfer’s, this was not the case; it was all stored. The computer is very good for doing that sort of operation. And secondly, if you are organizing a sequence of events in time — which is basically all music is, a sequence of sonic events along a time line — then Pulfer’s system provided an editor to enable you to sort notes; it enabled you to edit your notes, when they started and ended, and sorted them out in time, and it was very straightforward and clear. And again, you could contrast that with an electronic music studio where quite often things on a timeline are controlled by sequencers, which are controlled by setting up knobs and dials, which is very cumbersome. You cannot see at a glance what the structure of a piece of music is because all you see is a set of knob settings, whereas in Pulfer’s system you could have a graphic display on a television screen of the actual notes just as you would when looking at a piece of music, a score.
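The timeline editing Buxton describes can be illustrated with a small sketch. This is not Pulfer’s actual code (which is not documented here), just the underlying idea: note events stored as start/end/pitch records that can be edited and kept sorted in time, so the structure of the piece stays visible at a glance, much like a score.

```python
from dataclasses import dataclass


@dataclass
class Note:
    """One sonic event on the timeline: start/end in seconds, pitch as a MIDI-style number."""
    start: float
    end: float
    pitch: int


def sorted_timeline(notes):
    """Return the notes ordered by start time (ties broken by pitch), as a score display would show them."""
    return sorted(notes, key=lambda n: (n.start, n.pitch))


def shift_note(note, offset):
    """Move a note along the timeline without changing its duration."""
    return Note(note.start + offset, note.end + offset, note.pitch)


# A tiny three-note "score", entered out of order, then sorted for display:
score = [Note(2.0, 3.0, 60), Note(0.0, 1.5, 64), Note(1.0, 2.0, 67)]
score = sorted_timeline(score)
print([n.start for n in score])  # earliest event first: [0.0, 1.0, 2.0]
```

The point of the sketch is only that stored, editable event records make a sequence of contrasting sounds trivial to rearrange, where an analogue sequencer would require re-setting banks of knobs.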

I think [Le Caine] would be just as famous as Robert Moog, inventor of the Moog synthesizer, had he not worked for the Canadian government.

What was his system actually called?

The language was called Music Comp, and it was basically just called the NRC Computer Music Project. It was done at the National Research Council of Canada.

And where does Hugh Le Caine fit in?

Hugh Le Caine didn’t fit particularly into the Computer Music Project at all. He was working just around the corner in the same building, more on analogue devices. He developed techniques for working new types of sound synthesizers and so on. But he was more concerned with things for live performance. One of the things in which I had the most involvement with Le Caine was the Sackbut, a keyboard instrument which was much like a small synthesizer. [The Sackbut] had some very interesting characteristics and was very good for live performance. There were a couple of prototypes made, one of which we had at the studio in Queen’s for the better part of a year, and I had done some pieces using that just for experience. Le Caine had a very good way of working. He would build a couple of prototypes of these devices and then they would come to various people, such as the studio at U of T and McGill and Queen’s, and he would get feedback: we’d use them in the studio and then talk to Le Caine about what we felt about them, if there were any changes that we felt were necessary, and so on. That was a very good relationship.

He is the prototype of the type of engineer — which was all too rare in the early days and becoming more common now — who was very good in his particular domain, but also had a very good method of communicating with the potential users of that apparatus which he was developing. Through his interaction with Anhalt, both at McGill and Queen’s, and with Gustav Ciamaga 2[2. The author’s interview with Gustav Ciamaga, “What Happened to Music I?,” is also published in this issue of eContact!] here at the University of Toronto, he developed a lot of devices. In one book, he has been described as one of the heroes of electronic music. I think he would be just as famous as Robert Moog, inventor of the Moog synthesizer, had he not worked for the Canadian government.

Just recently I was talking to Otto Luening, and we think of Otto Luening as being one of the pioneers of electronic music, but he said: “If there is any pioneer, it would be Hugh Le Caine.”

Absolutely. Hugh was a very good musician in his own right as well. Generally when he built a device, he also wrote a composition which described that device. One case in point is the variable-speed tape recorder [he used to compose] his piece Dripsody. He simply took one drop of water and, by changing the speed using this particular tape recorder, made a complete composition.

Can you describe any other of his so-called inventions that you came personally in contact with?

Well, some of them would include this variable speed tape recorder, which was basically a tape recorder on which you could play either reel-to-reel tapes or tape loops. It had a keyboard on it, much like a piano keyboard, and you could simply play the keyboard and it would change the speed of the tape in accordance with which key you pressed. 3[3. Kevin Austin describes the Special Purpose Tape Recorder, more commonly known as the Multi-track, in his contribution to this issue of eContact!, “Le Caine, Mirrored Through Memory.”] Now, anybody who has played with a tape recorder knows that if you change the speed of the tape going past the heads, you also change the pitch of the sound recorded on the tape. And that is exactly what this device was meant to exploit. There were also filters on it, so you could adjust the timbre at the same time as the pitch structure. And there was another little keyboard so you could engage and disengage several tapes. As you are playing, changing the speed using one keyboard, you could select which of several tapes were in fact engaged to the heads, so you could play chords, and so on.
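The speed-to-pitch relationship Buxton describes is simple to state: doubling the tape speed raises the recorded pitch by an octave. A minimal sketch of the arithmetic, assuming an equal-tempered key-to-speed mapping (an illustrative assumption, not Le Caine’s actual calibration):

```python
def playback_speed(semitones_above_reference: int) -> float:
    """Tape-speed ratio that shifts the recorded pitch by the given number
    of equal-tempered semitones. A ratio of 2.0 raises pitch one octave."""
    return 2.0 ** (semitones_above_reference / 12.0)


def shifted_frequency(original_hz: float, semitones: int) -> float:
    """Pitch heard when the tape runs at the speed selected by the keyboard."""
    return original_hz * playback_speed(semitones)


# One octave up: the tape runs twice as fast, so 440 Hz is heard as 880 Hz.
print(playback_speed(12))            # 2.0
print(shifted_frequency(440.0, 12))  # 880.0
```

Note that on the real instrument timbre shifts along with pitch (every partial is scaled by the same ratio), which is why the Multi-track also carried filters to adjust timbre separately.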

Le Caine had spent a lot of time trying to develop devices which primarily were touch-sensitive, so that you could basically touch things and rub your hands on them in different pressure-sensitive and position-sensitive ways, so where you touched it, how hard you touched it and how much of it you touched, all of these things would affect the sound that was produced. The Sonde, which is one type of generator that he built, was controlled this way. He built other devices which were also unique and fascinating and which, especially now in retrospect, showed an amazing imagination. One of [these] was controlled by photo resistors, or what we call “electric eyes”. The point about that was that you could now draw your music — you would get transparent sheets of acetate-type material, and you could sketch a pattern, and then you would pull this over top of the bank of light cells, and depending on which ones had been blanked out and which had not, oscillators would play [your music]. You could go directly from your graphic score to sound. 4[4. István Anhalt, interviewed by James Montgomery and Gayle Young, describes this instrument, the Spectrogram, in greater detail in “Being Allowed to Make ‘Mistakes’ While Composing Electronic Music With Hugh Le Caine’s Instruments,” also in this issue of eContact!]
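The photocell instrument works like a step sequencer read by light: each cell of the drawn pattern either passes or blocks light to a photoresistor that gates one oscillator. A minimal sketch of that idea, assuming a grid layout and pitch assignments that are purely illustrative (Buxton does not specify whether a darkened or a clear cell sounds the oscillator, so "dark means on" is an assumption here):

```python
def read_graphic_score(mask, pitches):
    """mask: one row per photocell/oscillator, one column per time step;
    True marks a cell the acetate drawing has darkened, taken here (an
    assumption) to mean the corresponding oscillator sounds.
    Returns, for each time step, the list of pitches that play."""
    steps = len(mask[0])
    return [[pitches[row] for row in range(len(mask)) if mask[row][col]]
            for col in range(steps)]


# Three oscillators, four time steps; a hand-drawn pattern as a boolean grid.
pattern = [
    [True,  False, True,  False],   # oscillator tuned to 220 Hz
    [False, True,  True,  False],   # 330 Hz
    [False, False, False, True],    # 440 Hz
]
print(read_graphic_score(pattern, [220, 330, 440]))
# [[220], [330], [220, 330], [440]]
```

Pulling the acetate sheet across the cell bank corresponds to stepping through the columns in time, which is what lets the drawn score become sound directly.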

Some of these devices, of course, are interesting only in concept, in that they showed what were not fruitful ways to approach the problem of communication between the man and the machine, but they were experiments which had to be done in order to find out just how well that approach to the problem worked. And he certainly was the greatest pioneer, in my opinion, of trying to get this potential of technology available to the creative artist.

What’s happening to all of these inventions and devices? I understand that some of them are being dismantled.

Well, some are still in use in studios. If any are being dismantled, that would definitely be a crime against nature, so to speak. If Canada ever had an equivalent of the Smithsonian Institution, that’s exactly where they should be.

“Say what you want and learn how to describe it, and I will build it for you.” (Hugh Le Caine)

Just one more question about Hugh Le Caine and yourself. Did he directly assist you in your interest in your career?

Well, put it this way. For every hour that I spent working on the computer system at NRC, I think I spent another hour in conversation with Le Caine. You have to realize that at that point in time, I was technologically very naïve. My background was in music, and I was rather lucky in my career that I had people like him at critical times to speak to and to give some sort of maturity to the direction that I would pursue following that. Le Caine did a great deal for me in giving me, in a very succinct manner, which he was very good at, an insight into what one could expect and what one could hope for. He was the first person who said, “Really, all you have to do is say what you want and learn how to describe it, and I will build it for you.” Even though he’s not available now to do that for me, I’m still following that same axiom. I’m thinking more not what can I build, but what do I really want to build, and then, just pursuing that and finding out how to do it, and then going ahead and doing it; if it’s important enough it gets done. It’s mostly just a frame of mind. Do you want it? What is it? And then find out how to get it. And my experience in the domain of computer science and engineering has been that the scientific element is only too willing to cooperate and work, as soon as you can overcome the communication problems: so that when you say what you want, what they understand you to have said is what in fact you meant. Any success, for example, here in Toronto that I have had, has not been my success but rather a group success of a lot of people who were very interested in achieving the same ends.

Integration of the Human Aspect in Electronic Music Instruments

Can you describe what you are developing at present and the reasons for your interest in the development?

The main thing that we’re doing here in simplest terms would be trying to produce what is the logical extension of the system that started with Pulfer in Ottawa, and we are trying to develop a facility which is a congenial environment for composers to work in, and be able to exploit the potential of high technology in pursuing musical endeavour. What we have is a small computer with a special sound synthesizer we have developed, which enables several voices of complex sound to be generated immediately — so a composer can work interactively — and to develop the use of graphics as a form of notation, not only for scores but also for the actual sounds which appear in those scores. I’m interested simply from the point of view: I’m a composer and this is a dream system; there are things I would like to express musically [and] I feel this is the best medium through which to express them. I hope that several people will feel likewise.

The second point is that there are several problems about music theory which such a system brings out, and this in fact is our prime mandate in terms of our funding and so on in developing a system. That is, if you consider music theory, it tells you very little about the actual act of composition, and we would like to develop a notion of music theory which is a description of the actual process of composition, that gives some idea of the cognitive types of processes that a composer follows or uses in writing music. This we think we can partially accomplish by developing an interactive system for composition where we provide basically the tools for composition, and the smoothness with which a composer interacts with these tools is some indication as to how adequate the tools are. Now to provide such tools means that you have to say what is the action or the task that he wants to solve, and then what is the best tool; the provision of the tool in fact implies that you know what task is going to be undertaken with that tool. And so what we’re trying to do is analyze the process of music composition into a series of subtasks, and then provide tools for the solving of those tasks. The success or the interaction with composers will then be indicative of the correctness or incorrectness of our breakdown into the subtasks, and so on.

Now, there is one thing that is very clear about this, completely outside of the domain of music: the fact that music composition is a non-trivial activity, as both of us know, and the thing is, if we are successful, we will have shown that a computer-naïve user — that is, a composer — can come and make non-trivial use of the technology for very complex problem solving. Now, basically what we would like to be able to show or state: if we can do this for music, then there is no excuse or reason why it cannot be done for other fields as well, both [within] the arts and without. You don’t want the person who makes use of these tools to have to necessarily be an MSc in computer science. It should not be necessary, and I do not believe that it is necessary.

I must say that performance is something that we consider part of the compositional process. I somehow subscribe to the notion that Cage and many others have stated that a composition is not finished until it’s performed.

We are using the devices available with technology right now to develop our system in such a compact physical unit that it is portable — you could take it on stage and realize your compositions in a concert-type situation. There is somehow something intrinsically appealing about somebody coming on stage with a computer and having it serve him for some sort of humanistic endeavour. It somehow is an example of something which is contrary to most people’s conception of what computers are for. Generally, right now I think most people feel threatened and/or controlled, manipulated by computers and such technology, and [have] not even begun to think that alternatives do exist.

You’re referring to “we”…

Yes, it’s not the royal “we”, but rather the collective “we”, which is officially known as the Structured Sound Synthesis Project, at the University of Toronto. It involves an interdisciplinary team of researchers and artists, which includes Gustav Ciamaga at the Faculty of Music 5[5. In Norma Beecroft’s interview with Gustav Ciamaga, also in this issue of eContact!, Ciamaga discusses “Hugh Le Caine’s Visionary Electronic Music Instrument Designs.”], and professors Leslie Mezei and Ron Decker, who are both involved in computer science (in particular, computer graphics and animation), and finally Casey Smith, who is Chairman of the Department of Electrical Engineering at the University of Toronto. Besides them, there is a large group of graduate students and undergraduate students who have done a great deal in contributing to the project’s success.

We currently have funding from the Canada Council, not from the Arts but rather from the Humanities and Social Sciences Division. It is not huge, but it has certainly been adequate to show significant results, and as a result we are hoping that it will be continued. We also now have applications pending with the National Research Council, because we believe we’ve established enough of the engineering implications of our work to warrant some capital investment in equipment to facilitate our developing a device as a standalone piece of apparatus. This will have benefits all the way around, both for music and engineering. It will mean that we can have a system for users, and a system for research and development, and a system for concerts, and so on.

I believe that there is an idiomatic form for writing for technology.

But this was basically your concept?

I had been working at Utrecht for a couple of years both teaching and studying at the Institute for Sonology, and had been in correspondence with Les Mezei largely through some computer graphics that I had been doing. He suggested that I come to Toronto to try and fill in some gaps in my background in computer science, which would enable me to realize some of these ideas that I had been discussing with him, and at the same time, we could start putting together the team and get the funding arranged. So I spent a year and a half as a graduate student in computer science, and at the same time writing numerous grant applications, one of which was [finally] successful. You know, to be frank, I would say my music has suffered a great deal from this. It has been a full-time occupation, and now, I can see the light at the end of the cave, so to speak.

Is this a unique project in your estimation?

I’ve certainly taken a close look at what other people have been doing. I’ve also spent a lot of time just visiting various studios. To answer your question directly, there are a lot of projects going on which in some ways parallel what we’re doing. I think what our group is trying to do is to benefit as much as we can from their work. A lot of our ideas are not original, they’re rather a synthesis of things other people have done, and we package them into a format whose composite effect, we think, is something new and very worthwhile. Now the differences between our system and many others are primarily in the domain of how the composer interacts with the system. When it comes down to it, that is, in my opinion, the most important part. I want to change the situation where, currently, if you polled the composers in Toronto alone, for example, and said, “Do you think that a computer music facility should be developed and is even desirable from a musical point of view?”, I think you would probably come up with an overwhelming negative response at this particular time in history. I do not believe that that response would be the best. I think there are certain things that can be done with this type of system which are very musical and very worthwhile and desirable. I would like to be able to make a system which was so convincing that, after its first few outings, [it] would make people reconsider what they thought about electronic music and reconsider what they thought about computer music, in particular. The human aspect has to be very obvious, to dispel that preconception.

I think there is a big problem right now with music and technology in that people have spent the bulk of their effort in trying to do the same things which were done right from the beginning, but do them in a more efficient manner, rather than spend the time reassessing what it is they are trying to do in the first place. I believe that there is an idiomatic form for writing for technology, and just as you do not write for a string quartet like you write for a vocal ensemble, nor should you try to write for the computer the same way you write for a symphony orchestra. The main point then becomes to discover what is idiomatic, and the idiom has definitely not been fully explored or defined, nor could it have been in a short period of time.

It’s towards that that we’re working, rather than to be continually just working for the goals of last year. I think that’s largely a problem that can only be considered when you have enough confidence in your team, confidence in engineering, musical and computer science abilities alike, and I think our team is very lucky in that regard that we are not top heavy in one field to the detriment of another. The prime thing here is that we have the active involvement of a large number of composers, people such as students from the Faculty of Music as well as the Canadian Electronic Ensemble, who are a group of very intelligent, well-cultured people [with] good insight into technology as well as into the arts. It’s people like that who are going to make the project successful.

The only other person I happen to know who has spoken out loud on that kind of thing is Xenakis.

I find it sort of encouraging that Xenakis has stated some ideas which are very much in agreement with my own in terms of motivation, and he comes up with a very different approach to solving those problems. I find that very healthy, and again, that’s another group with which we have contact, and I hope the interaction will prove beneficial for both parties, through our individual accomplishments and failures alike.

Why would you say we are in a period of reassessment?

Well, it’s been twenty years since the first experiences with computer music and people are starting to take a look at the repertoire and bibliography of all articles about the problems and so on. The bibliography of articles weighs seventeen and a half ounces, and the discography weighs two and a half ounces, and it just goes to show you how much talking and scientific work has been going on, but really how little music has resulted. I think people are starting to look now and say, “Well, in the next twenty years are the relative weights of the ensuing two documents going to be reversed, so that we have an awful lot of music to show?” And that is the primary factor in determining the success or failure of systems.

Now, I think what’s happened in terms of the scientific community is that, largely as a spinoff from the United States Space programme, there has been a very big change in types of circuits that are available. I’m speaking about large-scale integration which has brought to you your pocket calculator with all kinds of trigonometric functions, and so on, for fifteen dollars, whereas a couple of years ago it was around a couple of hundred. People are now confronted with that type of technology every day and we’re becoming quite comfortable with it. And with that is coming the fact that musicians have had a long period of time working in electronic music studios, so they’re becoming comfortable with technology and are starting to see these seeming miracles happening as a result of microcomputers and large-scale integration. And they’re starting to be able to say, “Well, if all of these things can be done, maybe something can be done for me, but if they can be, what would it be?” So they are starting to think about what that is. And once they go that far, they are actually finding people who can do those things for them. That’s very much the case for me. It’s interesting. I’m speaking now about people who are involved in developing systems rather than necessarily the users of those systems.

I suppose the implication of what I am saying is that the discrepancy is not so wide anymore. The user of the system is becoming far more able to describe what he would want, or modifications of his current world, to an engineer who is capable of changing it for him. The problems of yesteryear, so to speak, have been largely that the musician may have even known what he wanted but he had no vocabulary or no ability to describe what he wanted to an engineer, and the engineer really had no vocabulary, or willingness, or I’m not sure what, to really want to be understood, or to understand what was being asked of him. And again, that was why Hugh Le Caine was such a special person; he was very much an exception to that seeming rule. We have a stronger idea now as to what are reasonable expectations and I think ten years ago we did not. We would not know when we asked for something if it was an easy matter or we were asking for the sky. And now I think that people have a much better idea about the relative implications in terms of time, money, effort in realizing something that they might ask for, and what the possibilities of the technology are. And that’s what the crux of the matter is.

I’m of the impression, certainly from a lot of the talk I have had in the last year or so, that Canada has a superabundance of people involved in musical technological developments. Am I wrong, and if I am not wrong, then how do you explain that particular predilection to technology in this country?

First of all, I don’t think you’re wrong at all. I think that, perhaps per capita, there may not be a significantly larger number of people in Canada involved in electronic music [composition] than, say, in the United States, but I would say there certainly [have been], in my opinion, in terms of total numbers, very significant innovations. I have no idea why. Maybe we’re a nation of tinkerers. This type of work has not been very well supported financially in Canada, electronic music and so on. Now, there was a driving ambition to get involved in the field, not a great deal of support, and therefore, if you wanted to work, you had to be an innovator. And perhaps much of the innovation has resulted by virtue of the fact that nobody gave us the money to walk down to the store and buy what we wanted; we had to invent it ourselves. You had to start thinking creatively. I think in the field of computer music this is absolutely true, if you look at, first of all, the system of Ken Pulfer in Ottawa, that has direct implications in what’s going on right now. One of the most sophisticated studios for computer music in the world today is at MIT, it’s the one developed by Barry Vercoe. In his written description of the studio and its motivations and so on, he gives a great deal of credit to the work of Ken Pulfer’s system, and in fact the system of Vercoe is very much modeled on Pulfer’s system.

Secondly, the work of Jim Gabura, when he was here at the University of Toronto, was extremely significant. His Piper system was one of the first of what we call “hybrid systems”, where somebody had the idea that the sound could be generated by something other than the computer, and the computer could just be used to control that equipment. What that approach meant, in terms of implications, was that you could start to work in an interactive manner. It takes a long time to generate sounds by computer using the traditional digital synthesis techniques, but with Gabura’s system, since it took far less computation to control some oscillators, he could generate the control information right away, and so you could hear the results of your work immediately.

Thirdly, the POD system of Barry Truax has been very important in the last few years in demonstrating how small computers can do an awful lot to satisfy a musician, whereas previously, computer music systems had been almost exclusively in the world of large, expensive computers.

The cases I mentioned were simply in the world of computer music. If you just look at analogue, Hugh Le Caine again is in the same league. He is perhaps the king of the Canadian castle, so to speak. That’s very good; I think having had contact with people like that gives you very high standards to shoot for. I hope that what we’re doing at the University of Toronto has an equal amount of impact as, say, the systems of Pulfer, or Truax, or Gabura. We have high expectations and I hope they prove worthwhile.

Again, to sort of reiterate, the current situation is such that a very small percentage of the musical talent in the world today is involved in music and technology, and that leads me to believe that there is little wonder that we have not advanced further in past years, musically. I really hope that the day will come when computer music, music made with technology, is just accepted as another form of music. I won’t be able to sell records or get concerts simply because I am using a computer, but rather because I happen to write good music.

Right now, even with myself, I acknowledge — and am not particularly happy with the situation — that any reputation I have is far more based on what I have done technologically and scientifically than on any music that I have written. That’s fine and dandy, but I certainly hope that state of affairs changes. So, there’s a very sharp distinction made between the music that I have done and the scientific work I have done. You can try this out on other people. Just mention a composer, and ninety percent of the time they will be able to tell you what equipment he uses and what he has done in terms of technology, but ask them to name one composition, or one piece they’ve heard, and they won’t be able to tell you one piece of music. That’s a sorry state of affairs, in a sense, and I think it’s indicative of the state of the art. And it is exactly that that I hope we are able to overcome.

It is obvious to me that Bill Buxton is completely devoted and dedicated to technology. Do you have any scepticism about the use of technological tools in music at all?

Very much so, actually. In fact, it’s rather frightening. First of all, I sometimes just react and say, “But I don’t like electronic music.” My attitude is: I’m not happy with the state of electronic music right now. Now, I have two alternatives: one is to just go back to instrumental writing and turn my back on it; the other is to say, “Is there something wrong, and do I see a solution?” I can’t guarantee I see a solution, but I think if there is one, the route that I am taking is going to lead me at least part way there. I mean, I can question it, but I want to be convinced before I give up. Secondly, I go back and forth with very strong reservations about the implications of using technology. Forget the purely musical ones; on the outside, you’re very much bound to studios or facilities, at least you still are in the current world. It’s not like you can go off into your cabin in the woods à la Murray Schafer and write music, and then surface for your performances and so on. That might become possible if the technology changes, but you will never be really free, even when people have their own computers.

The amount of time it would take to maintain a studio, and to gain the knowledge to keep doing that, would, I think, seriously detract from your musical abilities. I have very big questions, and it frightens me very often how strong my commitment is, given the doubts that I have about it. But on the other hand, perhaps that’s a healthy attitude: if I had the strong commitment and no doubts, then perhaps I would be less careful about the route that I personally took.

I think it’s an unanswered question right now what the merits will be. I think we will find out in the next five years; if we haven’t accomplished any more in the next five years than we have in the last twenty, then I think that we have hit a dead end and that the original euphoria about this new, fantastic world of sound was perhaps all a mistake. But I don’t think that the problems with music and technology are any greater than in instrumental music right now. I think there’s clearly an equal groping going on right now in more conventional instrumental music, and therefore, if I feel doubtful or sometimes have questions about the musicality and so on, I don’t think I’m alone. I think that’s something that’s shared by probably every composer in the world today who’s not actually writing movie music.

So there’s a crisis in music, period?

I think so. I don’t find the sociological crisis of working with electronic music, and music and technology, any harder to cope with than the sociological implications of me working with a symphony orchestra. In fact, I would far rather, from a sociological point of view, work with the technology than with symphonies and large ensembles like that. I would like what I am doing musically to have some sense of reality to people outside of my own society in Canada. What about colleagues whom I correspond with and work with periodically in countries such as Uruguay and so on, where this type of high technology is simply not available? I try to look at what I am doing and say, “Can this maintain relevance musically with people from that sort of situation?” We have gotten so carried away with the bigness or the greatness or the innovative qualities, especially in technological music, that we’ve lost sight of the quality of communication that heeds no barriers, which music has traditionally had. And barriers are put there for non-musical reasons. I think it’s very important that we keep the music where the music belongs, and see the technological developments and so on as extraneous to the music, and have any developments or innovations that I might make musically have some impact and relevance to people who have nothing to do with technology.

As you know, having worked together in Utrecht, for example, the most critical time, in my opinion, is the first ten minutes when you come to the system. The first impression is very strong. Two things are prerequisites to the success of a system, in my opinion. One is that the music coming out of the lab or studio is of sufficient quality to catch the attention of the musical public, which includes composers and the regular audience. Secondly, once you’ve got a composer who’s taken the bait from the musical output and said, “Hmm, something can be done there,” then when they come, they can be made to feel at home right from the start. That doesn’t mean that they have to know everything about it, but that they feel comfortable, and they can say: not only does it make music, in the sense that certain compositional problems that I am concerned with can be solved, or worked on efficiently, but also, I feel comfortable here and I could work in this environment. And therefore the accessibility of the system, which is one of our prime concerns, has to be considered in the physical, economic and cognitive senses. I guess that is what we are working for. Motivation is very difficult to maintain, and you have to have some sort of monomania to continually motivate yourself over this sustained period of time to get something accomplished. And you pay a very high price for that.