


Voice and Live Electronics

An Historical Perspective

The core of this article constituted three major sections in the author's Doctoral thesis, “Live and Interactive Electronic Vocal Compositions: Trends and Techniques for the Art of Performance” (Montréal: McGill University, January 2007). It has been revised for publication in eContact! 10.4.

Introduction
1. Historical Overview: 1900–1970
2. Development of Computer-Based Live Electronics
3. Trends in Computer-Based Live Electronics (1990–Present)
4. Contributors to Live and Interactive Electronic Vocal Music
5. A New Paradigm
Notes | Bibliography | Author Biography

Introduction

This article presents an historical overview of computer-based live and interactive electronic vocal music. Live performance-based electronic music designates a composition in which the voice, instruments and/or electronic sounds are processed in real time. By surveying important developments, æsthetics and figures, this document aims to give readers a global perspective of the field, in the hope of encouraging interest in the performance and commissioning of works in this genre. What follows is a discussion of electronic music created within a classical and/or academic milieu. An examination of “other” electronic music (i.e. laptop music, fusion, modular music, future jazz, intelligent dance music, post dance, post disco, post rave) will be saved for future articles.

1. Historical Overview: 1900–1970

The twentieth century heralded a dawn of intellectual, technological, socioeconomic and cultural progress, and the Italian art movement Futurism apotheosized these transfigured ideals. The Futurists outlined their doctrine through the manifestos of poet Filippo Tommaso Marinetti, painter and sculptor Umberto Boccioni, and composer and painter Luigi Russolo. These authors spiritedly rejected the political and artistic traditions of the Romantic era in favour of their vision for the new century, epitomized by a zeal for technology, speed, and the willful triumph of man over nature. The Futurist æsthetic spurned the grandiose, bombastic traditions of Romanticism and became elemental in laying the foundation from which electronic music could evolve. In 1913, Russolo published L’arte dei rumori (The Art of Noises), decreeing the validity of “noise” as a musical/compositional feature, stating:

Musical sound is too limited in qualitative variety of timbre. The most complicated of orchestras reduce themselves to four or five classes of instruments differing in timbre: instruments played with the bow, plucked instruments, brass-winds, wood-winds and percussion instruments… We must break out of this narrow circle of pure musical sounds and conquer the infinite variety of noise sounds. (Russolo 1996, 37)

Early Electronic Instruments

The 1920s saw the invention of several electronic instruments remarkably still in use today. (1) In 1920, Russian inventor Lev Termen (1896–1993) developed the theremin (originally called the aetherphone), a lyrical instrument comprising two proximity sensors: a thin vertical rod that controlled pitch, and a horizontal loop that controlled volume. Classical works for voice and theremin include: Ecuatorial (1934) by Edgard Varèse; Petite Pièce Aléatoire (1966) by Jorge Antunes; Romance (1985) and In Whims of the Wind (1994) by professional thereminist Lydia Kavina; Traum Kanone (Dream Gun), from the opera The Birth of George (1996) by David Simons and Lisa Karrer; and Virtual Percussion Trio (1999) by David Simons. Incorporation of the theremin remains a fashionable trend, and has been used by artists as diverse as Pink Floyd, Phish, The Pixies, Nine Inch Nails, Muse, Wolf Parade, Massive Attack, The Decemberists, The Mars Volta, Leprechaun Catering, Meat Beat Manifesto, atelier Theremin and avant-garde jazz trio Medeski, Martin & Wood.

In 1928, cellist and music pedagogue Maurice Martenot (1898–1980) developed the ondes martenot. His original instrument consisted of a vertically placed ring on a ribbon track; the ring, pulled by the right hand, allowed the performer to control pitches, while the player’s left hand controlled loudness and timbre. Martenot later added a keyboard that could be used separately or in conjunction with the ribbon (Chadabe 1997). Vocal works with ondes martenot include: Le visage nuptial (1946) by Pierre Boulez; Uaxuctum (1966) by Giacinto Scelsi; Saint François d’Assise (1975–83) by Olivier Messiaen; Nightmare (1994) by Lindsay Cooper and Abdulah; and Mare Teno (2000) by Michel Redolfi (b. 1951). Jonny Greenwood of the English band Radiohead popularized the ondes martenot with his use of the instrument on several albums, including Kid A (2000), Amnesiac (2001), Hail to the Thief (2003) and In Rainbows (2007). [2]

Tape Music

In 1937, composer and musical philosopher John Cage (1912–1992) boldly stated his credo on the future of music:

I believe that the use of noise to make music will continue and increase until we reach a music produced through the aid of electrical instruments which will make available for musical purposes any and all sounds that can be heard. (Cage 1966, 3)

Once the aftermath of World War II settled, Cage’s vision would come to fruition, as a post-war climate infused with rapid technological advancements and economic expansion encouraged the development of new sound techniques and compositional methods. Circa 1950, two important electronic studios were established in Europe to foster research in electronic music: the French Groupe de Recherche de Musique Concrète (later renamed the Groupe de Recherches Musicales) in Paris, and the electronic music studio of the Nordwestdeutscher Rundfunk (Northwest German Broadcasting) in Cologne. The Parisian group, founded by Pierre Schaeffer (1910–1995), developed the technique of musique concrète, a compositional medium (predating modern sampling) by which the composer cuts, splices and manipulates recordings of sounds found in the “natural” environment — these sounds can be organic (birds, water flowing) or manmade (cars, machines). Schaeffer worked closely with sound engineer Jacques Poullin and composer Pierre Henry to realize his vision of a new soundscape, and in 1952 published a twenty-five-point treatise outlining his definition of sound source possibilities entitled Esquisse d’un solfège concret.

On the other side of the musical spectrum, the Cologne-based Nordwestdeutscher Rundfunk studio developed elektronische musik, a medium using electronic sources to create sound (e.g. sine wave oscillators, white noise generators). A cooperative scientific venture, the Cologne studio counted Herbert Eimert, Robert Beyer, Gottfried Michael Koenig and Karlheinz Stockhausen among its early members. Though the Paris and Cologne studios remained sharply (and bitterly) divided in their æsthetics and methods, elements of both ideologies were eventually brought together in Stockhausen’s seminal work Gesang der Jünglinge. Completed in 1956, Gesang der Jünglinge coupled recordings of a young boy’s voice with electronically synthesized sounds, effectively fusing the differing camps of musique concrète and elektronische musik.

In 1955, Italian composers Luciano Berio and Bruno Maderna formed a new school of electronic composition in Milan called the Studio di Fonologia Musicale della RAI (Radio Audizioni Italiane). The following year Berio wrote:

Thus far the pursuit of the other Studios has been classified in terms of musique concrète and “electronic music” which have become debatable definitions from today’s armchair perspective since they seem to have been coined partly from retarded-futuristic pioneerism, partly to be “dissociated from the rabble” and partly from a simple and legitimate desire to identify the objects of our daily discourse. In the long run, what really counts is the approach itself in its purest conception: it establishes an element of continuity in the general picture of our musical culture and is not to be identified only with its technical means but also with the inner motivation of our musical evolution. (Manning 2004, 68–9)

Berio and Maderna experimented extensively with the process of manipulating recorded speech, often by deconstructing spoken text into a series of phonemes; Berio’s Thema: Omaggio a Joyce (1958) became the first well-known tape piece to utilize the female voice as a source of compositional material (Bosma 2003). In this work, Berio altered the recorded sounds of virtuosic contemporary vocalist Cathy Berberian by fragmenting, overlaying, and filtering her vocal timbre (Manning 2004). Berio used these text-sound compositional methods to further deconstruct Berberian’s voice in Visage, composed in 1961.

These early electronic tape compositions resulted in a fixed artifact played through a set of speakers in a concert hall. Towards the late 1950s, however, composers came to acknowledge the appeal of a human presence onstage, and began creating works integrating live performance with tape music. Early compositions in this medium include: Milton Babbitt’s Vision and Prayer (1961) and Philomel (1964); Luigi Nono’s La Fabbrica Illuminata (1964); Jean-Claude Risset’s Inharmonique (1977), L’Autre Face (1983), and Invisible (1996); Charles Dodge’s The Waves (1984); Simon Emmerson’s Time Past IV (1984) and Recollections (1985); Joel Chadabe’s Several Views of an Elusive Lady (1985); Trevor Wishart’s Vox Cycle (1982–89); Jonathan Harvey’s Nachtlied (1984); and Canadian composer Barry Truax’s She, a Solo (1973), Trigon (1974–75), Love Songs (1979), a series of four electroacoustic musical theater works titled Powers of Two (1995–99), Thou and I (2003) and Orpheus Ascending (2006).

Julieanne Klein (soprano) and Kristie Ibrahim (percussion) performing Agauë: A Musical Drama (2003–04), for soprano, percussion, and hexaphonic audio, co-written by Emily Hall and Niklas Kambeitz. Text freely adapted from Euripides’ tragedy The Bacchae. Recording of the première on 11 August 2004, Up To Your Ears New Music Festival, Montreal. Charles Gagnon, sound engineer. For more information, see Emily Hall’s website.

Early Live Electronics (Analog)

During the post-WWII years, composers began experimenting with the manipulation of electronic sounds within a live performance. Sound engineer Jacques Poullin worked with Schaeffer to explore real-time spatialization with his potentiomètre d’espace, a four-channel sound diffusion system for recorded music developed in 1951. Poullin’s system utilized a multi-track tape machine, routing each of four tracks directly to a separate speaker, as a performer manipulated a coil of wire between four large circular elements to control the spatial trajectory of a fifth audio channel in real time.

Live processing of instrumental sounds paralleled the commercialization of the electric guitar in the early 1950s; these initial processing possibilities (bass and treble control, reverb and echo, delay, looping, tremolo, wah-wah pedal, phasing and flanging) were eventually used on the voice and other acoustic instruments (Manning 2004). Two of the earliest examples of live vocal processing are Luigi Nono’s A floresta é jovem e cheja de vida (1965–66) for soprano, voices, clarinet and copper plate, and Karlheinz Stockhausen’s Mikrophonie II (1965), composed for choir, Hammond organ, and four ring modulators. In this piece, Stockhausen altered the acoustic sound of the choir in real time with ring modulation, a process by which two audio signals are multiplied together, producing output components at the sum and difference of their frequencies.
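To illustrate the arithmetic behind ring modulation (a sketch only, unrelated to Stockhausen’s analog hardware), the following Python/NumPy fragment multiplies two sinusoids and confirms that the output energy lies at the sum and difference frequencies; the 440 Hz and 110 Hz inputs are arbitrary stand-ins for a vocal signal and a modulator.

    # Ring modulation sketch (illustrative only): multiply two signals.
    # Sinusoids at f1 and f2 yield output components at f1+f2 and f1-f2.
    import numpy as np

    sr = 44100                               # sample rate in Hz
    t = np.arange(sr) / sr                   # one second of sample times
    voice = np.sin(2 * np.pi * 440 * t)      # stand-in for the vocal input
    carrier = np.sin(2 * np.pi * 110 * t)    # modulating signal
    ring_mod = voice * carrier               # ring-modulated output

    # The spectrum now peaks at 550 Hz and 330 Hz (sum and difference).
    spectrum = np.abs(np.fft.rfft(ring_mod))
    freqs = np.fft.rfftfreq(len(ring_mod), 1 / sr)
    print(sorted(freqs[np.argsort(spectrum)[-2:]].tolist()))   # [330.0, 550.0]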

In the early 1960s, Italian engineer Paolo (Paul) Ketoff designed the Synket, a voltage-controlled synthesizer remarkable for its portability and robustness in live electronic performance. Built for American composer John Eaton (b. 1935), the Synket was promoted as a concert instrument, and is noted as the first portable synthesizer to be used onstage (Ibid.). Writing for virtuosic soprano Michiko Hirayama and the Synket, Eaton composed Songs for RPB (1965), Thoughts on Rilke (1966), Blind Man’s Cry (1968) and Mass (1970). As noted by Shahira Cos-Myshkin:

John Eaton… believed that human nuance was the element most lacking in electronic music, and shortly after Paolo had delivered the Syn-Ket to the American Academy, Eaton asked him to modify it as a performing instrument. Ketoff increased the range of sound production possibilities, but more importantly, he made control of the Syn-Ket more sensitive to a performer by making the keyboard respond to velocity, like a piano, and sideways motion, like a clavichord… By using varying degrees of velocity and sideways motion in real time, one could achieve something akin to the variability of color, pitch and volume in, say, singing… An early example of Eaton’s humanization of electronic music is Thoughts on Rilke, composed in 1966. As the vocal soloist, Michiko Hirayama teaches the machine to sing in the setting of the sestet of the poem. (Eaton 2007, 4)

Another early electronic device, the vocoder, became a popular electronic instrument used to process the voice. Developed for musical applications by Robert Moog and Wendy Carlos in 1970, the vocoder was an analog synthesizer that analyzed vocal input and produced altered or “robotic” sounds (differing from the phase vocoder, which is discussed later in this article). Early musical artists to use the analog vocoder include: Kraftwerk, The Alan Parsons Project, Pink Floyd, Electric Light Orchestra, Herbie Hancock, Midnight Star, Black Moth Super Rainbow, Trans Am and Patrick Cowley (Wikipedia, “Vocoder” entry [15 September 2008]).

2. Development of Computer-Based Live Electronics

In 1971 Intel developed the silicon-based microprocessor chip; subsequent improvements in miniaturization coupled with an exponential growth of processing power allowed computer hardware to vastly decrease in size, heralding rapid advancements towards real-time computer-based electronic music.

The design revolution signaled by the fabrication of circuits in silicon was to affect the development of computer music systems in [a]…fundamental way, as it facilitated the construction of custom-built [complex] devices devoted exclusively to audio applications. The efficiency gains achieved by using optimized hardware were of sufficient magnitude for the execution of a number of real-time synthesis and signal processing functions in real time… (Manning 2004, 222)

Within several years these developments yielded microcomputers for commercial use. Composers and researchers could now access affordable technology outside the confines of a university research laboratory.

Early Interactive Performance Systems

The first interactive performance system was developed in 1967 by computer music researcher Max Mathews. Working at Bell Laboratories, Mathews and his programming assistant, F. Richard Moore, developed GROOVE (Generated Real-time Output Operations on Voltage-controlled Equipment), a pioneering hybrid system for real-time digital control of analog synthesis. The GROOVE system allowed a ‘performer’ to influence a preprogrammed score — the synthesized ‘ensemble’ was directed utilizing a twenty-four-note keyboard, a three-dimensional joystick and four rotary knobs, enabling the performer to control dynamics, tempo and balance in real time (Manning 2004).

Though GROOVE was initially developed for research applications, several composers worked with the system, notably Joseph Olive, Emmanuel Ghent and Laurie Spiegel. In 1975, with the support and encouragement of Pierre Boulez (director of the newly established IRCAM), Mathews implemented more sophisticated software into the GROOVE system, leading to the creation of the Conductor program. This was an interactive system by which a performer utilized a physical controller (a handheld sensor mimicking the gesture of a conductor’s baton) to interpret a musical score programmed into the computer. In the early 1980s, Mathews further advanced his system with the development of the Sequential Drum and the Radio Baton, interactive control devices based on custom-designed sensors. Later expansions to Mathews’ GROOVE system foreshadowed future technological developments, as newer models incorporated a graphical interface several years ahead of the WIMPS design. (3)

Composer David Behrman (b. 1937) was an early innovator of interactive exploration. [4] A founding member of the Sonic Arts Union, Behrman toured with the prominent Merce Cunningham Dance Company from 1970–76. (5) When commissioned to write an interactive composition, Behrman produced Voice with Melody-Driven Electronics for extended vocal techniques guru Joan La Barbara. This was a simple interactive network consisting of pitch sensors that tracked the performer: when La Barbara sang certain pre-determined pitches, they triggered chord changes. Behrman subsequently composed Cello with Melody-Driven Electronics for David Gibson and Trumpet with Melody-Driven Electronics for Gordon Mumma.

Technological Advancements in the 1980s

The 1980s witnessed meteoric advancements in the development of computer technologies. The launch of the IBM Personal Computer in 1981 and the resultant commercialization of computers within a mass market signified a technological upswing. Developments in proprietary software paralleled the popularization of the PC, and composers, exposed to a new world of music editors, sequencers and notation programs, were afforded “unprecedented levels of control over the evolution and combination of sonic events” (Rowe 1993, 2). Coupled with significant progress in digital signal processing (DSP) implementations (e.g. filtering, pitch shifting, ring modulation, frequency modulation, amplitude modulation, chorusing, looping, nonlinear wave shaping, distortion, reverb, delay, spatialization), composers and engineers now had the tools to dramatically transform the nature of electronic composition.

IRCAM

The French research institute IRCAM (Institut de Recherche et Coordination Acoustique/Musique / Institute for Music/Acoustic Research and Coordination) opened in 1977 under the direction of Pierre Boulez at the Centre Pompidou in Paris.

Initially the project of one man, Pierre Boulez… the project to create IRCAM incarnated the utopian order to widen instrumentarium and rejuvenate musical language. In the late 1970s IRCAM offered the most advanced reflection into computer music in the world. (IRCAM website [15 September 2008])

In 1981, IRCAM premiered the 4X synthesizer, a powerful machine capable of sophisticated digital signal processing, direct sound synthesis and real-time spatialization. Developed by Giuseppe di Giugno, the 4X demonstrated its formidable capacity in Boulez’s Répons, a monumental, large-scale work composed for orchestra, soloists (two pianos, vibraphone, xylophone, Hungarian cimbalom, harp), computer and six loudspeakers. A commercial recording of Répons, with Boulez conducting the Ensemble Intercontemporain, is available in the Deutsche Grammophon 20/21 series; it received a Grammy in 2000 for best contemporary classical album.

MIDI

The rapid technological advancements of the 1980s encouraged a proliferation of musical hardware developments, yet the lack of compatibility between competing manufacturers’ products led to growing frustration among users. With the goal of establishing a universal standard for musical interaction between devices, the MIDI (Musical Instrument Digital Interface) protocol was established in 1983. The brainchild of Dave Smith (Sequential Circuits), Ikutaro Kakehashi (Roland) and Tom Oberheim, MIDI was a pioneering cooperative effort, uniting internationally competitive companies in the quest to create a global platform to assist in the commercialization of their individual products. However, the unification of vastly differing technologies necessitated compromise, and resulted in several MIDI limitations: primarily a low data transfer rate (limited bandwidth), unidirectional communication between devices, and a lack of control over subtle nuances of sound. Additionally, MIDI’s keyboard-oriented design made non-keyboard gestures difficult or impossible to represent. Regardless, MIDI became an important platform for developments in interactive performance systems; proprietary hardware devices could now be interconnected, facilitating the design of robust, custom-made portable interactive environments. Though the low data transfer rate limited the representation of complex musical structures, it nevertheless permitted real-time processing of musical information.
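To make these constraints concrete, the sketch below (a hypothetical helper, not part of any particular library) assembles a MIDI note-on message: a single status byte encoding the message type and channel, followed by two 7-bit data bytes, transmitted serially at 31,250 bits per second.

    # Hypothetical helper illustrating the three-byte MIDI note-on message:
    # status byte 0x9n (note-on, channel n = 0-15) plus two 7-bit data bytes.
    def note_on(channel: int, pitch: int, velocity: int) -> bytes:
        assert 0 <= channel <= 15 and 0 <= pitch <= 127 and 0 <= velocity <= 127
        return bytes([0x90 | channel, pitch, velocity])

    msg = note_on(channel=0, pitch=60, velocity=100)   # middle C, moderately loud
    print(msg.hex())                                   # '903c64'

    # At 31,250 bits/s (10 bits per byte on the wire), these three bytes alone
    # occupy nearly a millisecond -- one reason dense textures strain MIDI.
    print(3 * 10 / 31250)                              # 0.00096 seconds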

Developments in Interactive Capabilities: Score Following

Continued improvements in processing speed encouraged a more sophisticated exploration of interactive capabilities. The goal was to empower the computer with the ability to listen and respond to ongoing musical events; essentially, the computer had to be ‘taught’ to follow the performer. In 1983, Roger Dannenberg and Barry Vercoe independently succeeded in developing score following techniques, effectively removing one of the principal barriers to human-computer interaction. Dannenberg (working at Carnegie Mellon University) researched pitch tracking, the ability of the computer to follow and synchronize with a soloist. “For this purpose, he used a small transducer inserted in the mouthpiece of the trumpet to convert the acoustic signal into a digital function, passed to a real-time pitch analysis program. The results of this analysis process were then used to control an associated software-based accompaniment program, driving a MIDI synthesizer” (Manning 2004, 384). Dannenberg’s system was successfully demonstrated at the 1984 International Computer Music Conference in Paris, France. Separately, Barry Vercoe (affiliated with MIT and IRCAM) developed the Synthetic Performer with flutist Larry Beauregard. The Synthetic Performer utilized optical tracking of Beauregard’s flute fingerings supplemented with pitch tracking of the sounds generated, and demonstrated the ability to make musical decisions based on performer input (Chadabe 1997). The first major piece to implement Barry Vercoe’s research by successfully synchronizing a pre-composed electronic score with a live performer was Philippe Manoury’s Jupiter (1987), for solo flute and live electronics. (6) Attesting to its significance, Vercoe’s Synthetic Performer went on to win the Computer World / Smithsonian Award for Media Arts in 1992.
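The basic premise of score following can be suggested with a deliberately naive sketch in Python: detected pitches are compared against a stored score, and each match advances a pointer that can fire the corresponding electronic cue. This bears no relation to Dannenberg’s or Vercoe’s actual algorithms, which must also cope with wrong notes, omissions and tempo fluctuation.

    # Toy score follower (illustrative only): compare incoming detected pitches,
    # given as MIDI note numbers, with a stored score and advance a pointer.
    class ToyScoreFollower:
        def __init__(self, score_pitches):
            self.score = score_pitches       # expected pitches, in order
            self.position = 0                # index of the next expected note

        def on_detected_pitch(self, pitch):
            if self.position < len(self.score) and pitch == self.score[self.position]:
                self.position += 1
                self.trigger_cue(self.position)

        def trigger_cue(self, cue_number):
            # A real system would send this cue to the accompaniment/synthesis engine.
            print(f"score position {cue_number}: trigger electronics")

    follower = ToyScoreFollower([60, 62, 64, 65])      # C, D, E, F
    for detected in [60, 61, 62, 64]:                  # 61 is a tracking error, ignored
        follower.on_detected_pitch(detected)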

Early Interactive Software

The mid-1980s saw rapid growth in interactive systems, as the pioneering efforts of Barry Vercoe and Roger Dannenberg were quickly followed by the development of interactive compositional and performance software, including HMSL, M, Jam Factory, Interactor, Cypher, Kyma and Max. For the purpose of this article, only Max and its descendant, Max/MSP, will be discussed. Readers are directed to Joel Chadabe’s Electric Sound: The Past and Promise of Electronic Music (1997) and Peter Manning’s Electronic and Computer Music (2004) for a detailed historical outline of electronic software and hardware.

Max

Though the aforementioned software products were an important step in the evolution of interactive systems, arguably the most significant contribution to the global development of interactive music composition was the creation of Max by Miller Puckette in 1986. Puckette had previously worked with Barry Vercoe at MIT on the issue of score following, and was well versed in the challenges inherent in implementing a successful interactive music system. A graphical, object-oriented programming language designed for interactive composition (Winkler 1998), [7] Max was initially created to obtain greater control of real-time signal processing applications for IRCAM’s 4X synthesizer, in essence establishing the 4X as a massive MIDI control device to be utilized with the Macintosh platform. Max was later developed by David Zicarelli for commercial applications and released by Opcode Systems, Inc. (additional contributors included Cort Lippe, Lee Boynton and Zack Settel). The first piece to be composed using IRCAM’s Max/4X system was Philippe Manoury’s Pluton (1988), for solo piano and interactive electronics. Max was eventually adapted for the IRCAM Signal Processing Workstation (ISPW), a hardware platform for digital signal processing that greatly increased the processing power available to composers. In addition to including a library of signal processing objects, Max, running on a specially designed operating system entitled FTS (Faster Than Sound), was now equipped to generate and process audio signals directly, controlling sampling, oscillators, delay lines, filtering, harmonizers and pitch tracking. “The ISPW represented a flexible and powerful hardware environment, replacing the need for MIDI devices, with Max as a single unified ‘front end’ to control every aspect of music production” (Winkler 1998, 18). Use of this powerful software, however, remained limited to the ISPW environment; Zicarelli’s commercial version of Max, released in 1991, consequently made the environment available to composers outside IRCAM.

Max/MSP

MSP is an extension of the original Max environment, added in 1997. The combination of Max and MSP forms an object-oriented graphical programming environment used for interactive music and multimedia applications (Winkler 1998). An additional video processing subsystem, Jitter, was added to Max/MSP in 2003. “Jitter extends the Max/MSP programming environment to support real-time manipulation of video, 3D graphics and other data sets within a unified processing architecture.” (Cycling ’74 website [15 September 2008]) Both Max/MSP and Jitter are currently distributed by the San Francisco-based company Cycling ’74, and remain the standard software for interactive performance and instrument design, utilized in universities and compositional studios throughout the world. As stated on the Cycling ’74 website:

Max is the foundation on which we constructed our support for audio and visual media. MSP is a complete set of audio objects that work seamlessly within Max, and Jitter provides a robust architecture for video and matrix data processing. For historical reasons, we often refer to Max, MSP, and Jitter separately, but we develop them as a completely integrated environment. (Cycling ’74 website [15 September 2008])

Pre-dating the release of Max/MSP, Puckette created Pure Data (Pd), an open source programming language used for multimedia works and interactive computer music. Pd was designed to provide the indispensable features of the earlier Max and FTS while addressing various shortcomings of the original Max blueprint. Additionally, Pd offers several potent capabilities, particularly the ability to integrate video processing and 3-D graphics into the established audio synthesis and signal processing design, creating a unified multidisciplinary work environment.

3. Trends in Computer-Based Live Electronics (1990–Present)

The terms live and interactive are often (erroneously) used interchangeably, with no clear differentiation made between the two.

The terminology used in computer music is fluid and therefore often confusing… The terms “real-time music”, “interactive performance”, and even “artificial intelligence” have been used and misused so often they have been rendered virtually meaningless as they have morphed into academic jargon. (Belet 2003, 306)

Defining “Live” and “Interactive”

While the musical result of these genres often appears to be similar, the compositional process for each is distinct. The poetics of live electronics reflect a desire to extend human musical capability by transforming the performer’s sound with technology, while the essence of interactive music involves a collaborative, give-and-take relationship between performer and machine. The computer’s ability to respond to performer input in an interactive system implies a level of intelligence on the part of the computer not present in live electronics or music for fixed medium (the expression fixed medium is hereby used in lieu of tape to incorporate playback from CD or computer hard drive).

Live Electronics

Live electronics with live performer designates a composition where the instrumental sounds and/or electronics are processed in real time (the term real time delineates the computational speed by which computers receive and process data; a real-time operating system responds to input immediately, with minimal latency, or delay in processing). [8] Live electronics normally involve real-time control of signal processing parameters and/or changes in signal routing. Composers may utilize these elements to affect the sound of the voice using a variety of techniques such as filtering, pitch shifting (including harmonization, FFT analysis/resynthesis, and granulation), ring modulation, frequency modulation, amplitude modulation, chorusing, looping, nonlinear wave shaping, distortion, reverb, delay, spatialization, or any of a myriad of other digital effects.

One example of sophisticated live electronic transformation is phase vocoding, a digital signal processing (DSP) sound analysis and modification technique. The phase vocoder is a computational algorithm with two primary functions: it can be used to process sounds in real-time performance, commonly in combination with Max/MSP (Settel and Lippe 1995), or it can process recorded sounds with a standard software synthesis program (e.g. Csound). [9] Real-time phase vocoding became viable in the early 1990s, and it has since become a popular compositional tool. One extremely interesting capability of the phase vocoder is cross synthesis; because the analysis separates a signal’s spectral content from its evolution in time, it becomes easy to interject an alternative signal into the processing of the original signal (i.e. one could cross synthesize the signal of a singing voice with that of a guitar, effectively creating a “singing guitar”).
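A rough idea of one simple form of cross synthesis can be sketched in Python with an ordinary short-time Fourier transform (this is a toy illustration, not a full phase-vocoder implementation): the magnitude spectra of one signal are combined with the phase spectra of another before resynthesis. The voice and guitar arrays here are synthetic stand-ins for real recordings.

    # Toy cross synthesis in the STFT domain (not a full phase vocoder):
    # keep the magnitudes of one source and the phases of the other.
    import numpy as np
    from scipy.signal import stft, istft

    sr = 44100
    t = np.arange(sr) / sr
    voice = np.sin(2 * np.pi * 220 * t)                   # stand-in for a sung tone
    guitar = 0.5 * np.sign(np.sin(2 * np.pi * 110 * t))   # stand-in for a plucked tone

    _, _, V = stft(voice, fs=sr, nperseg=1024)
    _, _, G = stft(guitar, fs=sr, nperseg=1024)

    hybrid = np.abs(V) * np.exp(1j * np.angle(G))      # voice magnitudes, guitar phases
    _, cross = istft(hybrid, fs=sr, nperseg=1024)      # resynthesized "hybrid" signal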

Interactive Electronics

Interactive computer music is a sub-genre of live electronics, defined by Todd Winkler as:

…a music composition or improvisation where software interprets a live performance to affect music generated or modified by computers. Usually this involves a performer playing an instrument while a computer creates music that is in some way shaped by the performance. This is a broad definition that encompasses a wide range of techniques, from simple triggers of predetermined musical material, to highly interactive improvisational systems that change their behavior from one performance to the next. (Winkler 1998, 4)

Fundamentally, the defining characteristic of interactive electronic music is the active-reactive reciprocal relationship between performer and computer; essentially, the musical dialogue between man and machine. Using the voice as an example, the singer initiates this dialogue by generating an acoustic signal that is picked up by a microphone and converted into digital format. The computer interprets the sonic parameters of the signal (such as frequency or amplitude) using specialized software, and generates a response based on algorithmic parameters programmed by the composer; the result is then played back through speakers. Metaphorically (and literally), the computer is listening and responding to the actions of the performer in real time.
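Reduced to a schematic, this dialogue is an analysis-mapping-response loop. The sketch below is a hypothetical, greatly simplified example of such a loop (real systems built in environments such as Max/MSP perform far more refined analysis and mapping): for each buffer of input samples it estimates loudness and a rough fundamental frequency, then selects a response according to rules standing in for the composer’s algorithm.

    # Hypothetical analysis-and-response loop for an interactive system (sketch).
    import numpy as np

    def analyze(buffer, sr):
        """Estimate RMS amplitude and a crude fundamental via autocorrelation."""
        rms = float(np.sqrt(np.mean(buffer ** 2)))
        ac = np.correlate(buffer, buffer, mode="full")[len(buffer) - 1:]
        lag = np.argmax(ac[20:]) + 20        # ignore implausibly short lags
        return rms, sr / lag                 # (loudness, estimated frequency in Hz)

    def respond(rms, freq):
        """Composer-defined mapping from analysis data to an electronic response."""
        if rms < 0.01:
            return "near silence: fade out the electronics"
        if freq > 500:
            return f"high note ({freq:.0f} Hz): trigger a granular texture"
        return f"low note ({freq:.0f} Hz): transpose and delay the voice"

    sr = 44100
    buffer = 0.3 * np.sin(2 * np.pi * 330 * np.arange(1024) / sr)   # simulated input
    print(respond(*analyze(buffer, sr)))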

Levels of Interaction

Winkler identifies three possible levels of interaction — compositions can utilize one or several levels within the same piece. At the simplest level, the performer controls only one element, such as the triggering of sound files or establishing tempi. In Mauro Lanza’s Erba nera che cresci segno nero tu vivi (1999–2001), for example, the soprano has control over certain temporal elements of the piece. Though elements such as tempo and rubato are fixed, the singer manages the duration and intensity of the fermatas written in the score by triggering sound files (using a hand-held device or foot-pedal) to begin a new phrase. The structure of Erba nera is similar to that of a composition for fixed medium; all of the sound files are pre-composed, and the voice is not processed in real time. The differentiation lies in the temporal control afforded the performer, which allows for interpretive variation between performances. At this first level of interaction, the system may also alter signal processing based on triggers from the performer.

Julieanne Klein performing Mauro Lanza’s Erba nera che cresi segno nero tu vivi (excerpt). Live at the 2006 DafX Conference (Montréal, 18-20 September 2006). Pollack Concert Hall, McGill University. Erba nera is published by Ricordi.

At the second level of interaction, the computer listens and responds in real time in a quasi-intelligent interpretation of the performer’s input. In this instance, the sound and even structure of the composition can change dramatically from performance to performance, as the work is subject to an array of interpretive decisions made by the performer. Philippe Manoury’s En Écho (1993) was one of the first pieces composed for voice utilizing this level of interaction. Synchronizing the computer accompaniment and the vocal line with pitch tracking, Manoury’s electronics listen and analyze the frequencies output by the singer and match them to a pre-programmed score using score following techniques. As the performer advances through the piece, the computer adjusts its output according to the timing and interpretation of the singer, just as a piano accompanist would. En Écho follows a chamber music model; as there is no conductor leading the piece, each “performer” (the singer, two sound technicians and the computer) must continuously listen and react to each other in order for the composition to be musically effective.

Julieanne Klein performing selections from Philippe Manoury’s En Écho. 2 November 2006, Tanna Schulich Hall, McGill University, Montréal. En écho is published by Durand.

The final level of interaction is based on the properties of improvisation. The performer does not play from a pre-determined score, instead improvising melodically and rhythmically while the computer responds “intelligently” in kind. Jazz trombonist, composer and software developer George Lewis began exploring this degree of interaction in the late 1970s. Lewis is considered one of the early pioneers of the field, and is renowned for his interactive computer music software Voyager (1985–87). In performance, Voyager listened to Lewis’s trombone improvisation and generated a musical response determined by elements such as melody, harmony, rhythm, and ornamentation.

Aesthetic Considerations

There are a number of æsthetic considerations in comparing works for computer-based live electronics, interactive systems and fixed media. Works for fixed media imply a certain degree of inflexibility; the electronic part is pre-recorded, and remains unchanged once completed. This presents several practical performance concerns, as “the tape or other fixed electronic sound media is relentless and unforgiving as it simply plays on…” (Belet 2003, 306), leaving little room for rubato, dramatic pauses, or alterations in tempo. Additionally, the rigidity of this medium resists interpretative freedom, and repeat performances can seem static. Conversely, within the spectrum of interactive composition, the music maintains the ability to evolve and mature organically through time (Rowe 1999). The following discussion provides viewpoints of several key figures in the field of computer music on this issue.

Guy Garnett

Guy Garnett, Director of the Cultural Computing Program and Associate Professor of Music at the University of Illinois at Urbana-Champaign, argues for the importance and relevance of live performers to computer music composition in his article “The Aesthetics of Interactive Computer Music.” Garnett does not differentiate between live and interactive, instead defining interactive computer music as a sub-genre of what he calls “performance-oriented computer music”, ostensibly in contrast to purely algorithmic composition, acousmatic music or music for fixed medium.

The inclusion of an active performer in one way or another re-introduces into computer music elements that had been almost entirely removed from computer and electronic music of the recent past. The first group of these æsthetic elements is brought about by the re-emphasis on human performance and human cognition that comes from working with a live performer. It is a re-emphasis in the context of music as a whole, where the performance element has played a large role since the beginning of musical time. However, it is a new emphasis for computer music, which has tended toward abstraction and objectivity, often with disappointing results. (Garnett 2001, 25)

Garnett explores intrinsic musical and humanistic qualities live performers bring to computer music, including gestural nuance, physical and cognitive restraints, and inherent variability. Gestural nuance refers to interpretive subtleties such as rubato, phrasing, dynamic contrast and articulation, while physical and cognitive restraints delineate the performability of the music. “Constraining music to what is cognitively graspable, without confining it to what is already cognitively grasped, brings about a more realistic compositional attitude which in turn leads to more successful works” (Ibid., 26). Finally, inherent variability characterizes the work’s changeability over time. Unlike acousmatic or fixed medium compositions, live electronic music is subject to variation with each performance, enabling a continuous transformation of the work.

Since the work is not fixed, it is open to new interpretations, and therefore the possibility at least exists for the growth of the work over time or across cultural boundaries. The work can thus maintain a longer life and have a broader impact culturally, because it is able to change to meet changing æsthetic values. (Ibid., 27)

Garnett is currently composing a cyber opera entitled The Death of Virgil, based on the novel by Hermann Broch, which incorporates singers, instrumentalists and technology in a meditation on life, love and art (composer’s webpage [18 September 2008]).

Todd Winkler

Multimedia artist Todd Winkler discussed the symbiotic nature of interactive music in his book Composing Interactive Music: Techniques and Ideas Using Max.

Using the techniques of interactive composition, elements of a live performance can be used to impart a human musical sense to a machine, forming a bridge to the musical traditions of the past through the interpretation of expressive information. At the same time, the computer opens up new possibilities for musicians to expand their abilities beyond the physical limitations of their instrument. (Winkler 1998, 8)

Winkler also comments upon the relevance of the audience’s perception and understanding of the interactive process:

Live interactive music contains an element of magic, since the computer music responds “invisibly” to a performer. The drama is heightened when the roles of the computer and performer are clearly defined, and when the actions of one has an observable impact on the actions of another, although an overly simplistic approach will quickly wear thin. On the other hand, complex responses that are more indirectly influenced by a performer may produce highly successful musical results, but without some observable connection the dramatic relationship will be lost to the audience. (Ibid., 9)

Winkler’s work explores various ways that human actions can affect sound and images. He composes for interactive video installations, dance productions, and live computer-based electronic performance, and is Associate Professor of Music at Brown University (composer’s faculty webpage [15 September 2008]).

Robert Rowe

Robert Rowe, inventor of the interactive software Cypher, is currently Director of the Music Composition Program and Associate Director of the Music Technology Program at New York University. He further outlined difficulties inherent to the presentation of fixed media (tape) with live performer:

Works for performers and tape have been an expression of the desire to include human musicianship in computer music compositions. Coordination between the fixed realization of the tape and the variable, expressive performance of the human players, however, can become problematic. Such difficulties are more pronounced when improvisation becomes part of the discourse. And, as taped and performed realizations are juxtaposed, the disparity between levels of musicality evinced by the two often become untenable. (Rowe 1993, 5)

In support of interactive systems, Rowe states that this level of composition inspires the exploration of new technologies and opens up new compositional domains while simultaneously encouraging collaboration between humans and computers (Rowe 1999). He has written numerous compositions for performer and interactive system dating back to 1986, when he composed Hall of Mirrors for bass clarinet and the 4X real-time system. Rowe’s large-scale vocal work, The Technophobe and the Madman (2001), was composed for two singers, two pianos, bass, percussion and interactive music systems as a collaborative project between New York University and Rensselaer Polytechnic Institute (additional contributors to The Technophobe and the Madman included Nik Didkovsky, Tyrone Henderson and Neil Rolnick).

Jean-Claude Risset

At the other end of the æsthetic spectrum, Jean-Claude Risset, French composer and early pioneer of digital synthesis, spiritedly defends music for fixed medium. He describes the importance of the compositional process as one entailing time and space as requisite elements, and cautions against the prevailing modern day enthusiasm and commitment towards real-time systems:

Composition is not — or should not be — a real-time process. Musical notation applies time over space. It refers the reality of the music to a representation — the score — which is out of time. This representation suggested transformations that could not be conceived or performed in real-time — such as symmetries with respect to the pitch or the time axis used in counterpoint. Non real-time operation is necessary to free oneself of the arrow of time and its tyranny, of the dictates of haste, instancy, habits, [and] reflexes. (Risset 1999, 37)

Citing limitations in compositional complexity and a less flexible control of sonic parameters, Risset additionally underscores the problem of portability in real time composition. He observes that the continuous progression of technology resulting in new operating systems, upgraded software, and new modes of composition can contribute to an ephemeral quality of many real-time works. The effort involved in porting a piece onto a new operating system is considerable, and Risset notes a troubling tendency for composers to spend their energy producing new works rather than adapt older works to a new system. “This situation leaves no chance to develop traditions for performance or to let musical works become classics. It brings the risk of a perishable, memoriless electronic art” (Ibid., 35).

Regardless of the compositional benefits Risset ascribes to music created for fixed media, current trends remain focused on the production of works with live electronics. Ironically, fixed media pieces enjoy greater longevity, as they are relatively easy to reproduce, while live electronics compositions require continuous adaptation to new and updated software. Interactive pieces have proven to be the most difficult works to sustain through time and technological developments — it is common for an interactive work to receive only a limited number of performances.

4. Contributors to Live and Interactive Electronic Vocal Music

Composers

Since the early days of cutting and splicing tape, technological advances have continued to influence compositional techniques. Many composers who experimented with tape and electronic sounds naturally evolved their composition medium towards live electronics. This section will present a brief look at contributors to the field of live and interactive electronic vocal music, including composers, performers, and research centers. The task of ranking and determining important contributors to any field is clearly difficult, as the author’s discrimination risks offending omitted parties. What follows is intended to be a general overview of the topic; composers previously discussed in this article may not be mentioned below. Please note that any conspicuous oversights are unintentional! (10)

Luigi Nono

Italian composer Luigi Nono (1924–1990) is highly regarded as one of the eminent artists of the post-war European avant-garde, along with Stockhausen (b. 1928), Iannis Xenakis (1922–2001) and Pierre Boulez (b. 1925). One of the first composers to embrace live electronics as a tool for enhancing dramatic context, Nono created a substantial body of vocal repertoire in this genre. Though he maintained an early association with the Milan Studio di Fonologia Musicale della RAI (founded by Luciano Berio and Bruno Maderna), Nono’s most important live electronic vocal pieces were generated from his work in the 1980s at the Experimentalstudio der Heinrich-Strobel-Stiftung in Freiburg, Germany.

Nono utilized live electronics as a means to explore “mobile sound”. Gerard Pape provides a more detailed description of Nono’s usage of live electronic elements:

Nono used various real-time transformation devices to obtain mobile sounds by technological means. For example, he used harmonizers for obtaining micro-intervallic transpositions and retrogradations; the Halophon for programming various kinds of spatial movements over time; the digital delay for creating canon-like effects; band pass filters for selecting only certain portions of sound spectra. In addition, he used vocoders to modulate one sound by another and gates to control the onset of a sound. Nono used all these live electronic effects in various works in the 1980s in combination with a tremendous variety of new instrumental and vocal techniques to attain his goal of the mobile sound. (Pape 1999, 62)

As Nono’s compositions strongly reflected his desire to assimilate space and technology with music in live performance, his works generally involve intricate architectural setups. Nono’s belief was that a composition should be a living object, embodied in the moment of performance. In order to implement this ideology, he often sat at the mixing board during performances, controlling dynamics and spatialization in real time (unfortunately, the only technical record of this remains in the memory of the technicians who assisted him in performance). This has led to a very real issue of authenticity in Nono’s works — the amount of indeterminate and/or improvisatory elements in live performances during Nono’s lifetime reduces the authenticity of the notated score. Undoubtedly this was not of terrible concern to him, as he continuously strove to separate the idea of the composition from the objectification of its score (Rizzardi 1999). Roberto Fabbriciani, flautist and famed interpreter of new music, discusses Nono’s use of live electronics:

Nono was cautious in his use of live electronics, not to produce effects which were all end in themselves [sic], since these could create a superficial listening. His aim was in fact to produce a more conscious listening, a readiness to savour every little change loaded with significance and to generate strong emotions against any established, traditional form… Live electronics became a structural component of his music, totally interdependent with the interpreter since the machine acted on the resulting sound in relation to opportune and specific actions of the executant. Technically, the live electronics consisted of only a few sound treatments, however they were used with such a variety of applications and in such varied contexts, that often the original score was unrecognizable. Such treatments would include: amplification, spatial projection, delay, pitch shift (harmonizer), filtering and mixing. The novelty of being able to take advantage of these techniques in real time generated new ideas and opened the way to numerous innovations. (Fabbriciani 1999, 9)

Nono’s compositions reached a profound artistic maturity towards the end of his life. Though A floresta é jovem e cheja de vida, composed in 1966, is noted as one of the first pieces to utilize live electronics (Ibid.), Nono generated most of his live electronic vocal compositions in the 1980s, including Io, frammento dal Prometeo (1980/81), Quando stanno morendo. Diario polacco nr.2, Guai ai gelidi mostri (1983), the opera Prometeo, tragedia dell’ascolto (1984), and Risonanze erranti (1986).

Luciano Berio

Luciano Berio (1925–2003) was also a founding member of the post-war European avant-garde, as well as a pioneer in the evolution of electronic music. In 1987, Berio founded the Italian Centro Tempo Reale in Florence (prior to the creation of Tempo Reale, Berio was director of IRCAM’s electroacoustic program from its inception until 1980). His objective was to create “a structure in which to investigate the possibilities of real-time interaction between live performance and programmed digital systems” (Giomi 2003, 30). Considerably influenced by his personal relationship with famed contemporary soprano Cathy Berberian, Berio created a large catalogue of vocal compositions throughout his life. Perhaps in reaction to the purely electronic experimentation of the Cologne school and its subsequent followers, Berio strongly advocated the use of technology as a means to extend and augment human expression. His primary concern was “the creation of a homogenous path between acoustic sources on the one hand (voices and instruments) and electroacoustic sources on the other (live electronics)” [Ibid., 32]. To this end, Berio was not interested in the creation of new sounds, and generally neglected the trends of complex algorithmic composition, instead preferring relatively straightforward techniques such as harmonization, delay, sampling, and spatialization to achieve his compositional goals. (11)

Among the numerous works realized by Berio at Tempo Reale are several important vocal compositions, including Ofanìm (1988), Outis (1996), Altra voce (1999), and Cronaca del luogo (1999). Ofanìm, composed for two children’s choirs, two instrumental groups, female voice, and live electronics, utilizes sound spatialization and electronic amplification to achieve an amalgamated acoustic result (Giomi 2003). Outis is a large-scale opera composed for 19 soloists (instrumentalists and vocalists), a separate vocal group of 8 singers, chorus, orchestra, and live electronics. Composed for the architectural structure of the famed Teatro alla Scala (La Scala), Berio sought to create, in his words, “an acoustical dimension… which no longer corresponds to that of the orchestra pit” (Ibid., 41). In order to accommodate his amplification and diffusion systems to the hall, Berio placed loudspeakers strategically throughout the theater, including inside the main chandelier situated in the center of the ceiling. Berio later adapted the setup of Outis for the Théâtre du Châtelet in Paris in 1999. Altra voce, for mezzo-soprano, contralto, flute and live electronics, sought to liberate the voice and flute and develop “their respective autonomies and harmonic premises” through the use of live electronics (Ibid.). Finally, Berio’s opera Cronaca del luogo utilized multimedia interactive systems for on stage real-time movement analysis, as well as control of sound synthesis and live electronics. (12)

Morton Subotnick

New York composer Morton Subotnick (b. 1933) is another pioneer of early electronic music and interactive performance systems (Whipple 1983). Subotnick was highly influential in the establishment of the San Francisco Tape Music Center in 1962 (later becoming the Mills Center for Contemporary Music) along with composers Ramon Sender, Terry Riley and Pauline Oliveros. In the late 1960s Subotnick worked with Donald Buchla and Ramon Sender to develop the Buchla synthesizer, and realized several important compositions with this instrument, including Silver Apples of the Moon and The Wild Bull. In 1975, Subotnick invented the ghost box, an interactive analog system consisting of pitch and envelope followers, a voltage controlled amplifier, frequency shifter and ring modulator. (13) The first vocal piece composed in this medium was The Last Dream of the Beast (1979), written for Joan La Barbara. This work was expanded into an instrumental version and utilized in Subotnick’s stage tone poem The Double Life of Amphibians (1984).

In 1986, Subotnick and Mark Coniglio developed Interactor, interactive software capable of being utilized in numerous multimedia contexts. A preliminary version was used in the composition Hungers (1986), an electronic opera featuring video artist Ed Emshwiller, soprano Joan La Barbara, a dancer and three musicians. By using sensors attached to her wrists that transferred physical location data to the computer in the form of MIDI messages, La Barbara was able to control video images through musical gesture. Though her voice was not electronically processed, the amplitude and timbre of her sound was controlled in real time by the computer as she alternated between multiphonics and traditional singing. Other compositions that utilized the Interactor software include La Barbara’s opera Events in the Elsewhere (1990), and Subotnick’s The Misfortune of the Immortals (1994–95), and Intimate Immensity (1997). [14]

Jonathan Harvey

There are several important British composers of live and interactive electronic music, including Jonathan Harvey (b. 1939) and Simon Emmerson (b. 1950). Harvey began working at IRCAM in the 1980s at the request of Pierre Boulez. Amongst his live vocal compositions are Inquest of Love (1992), a full opera with live electronics, and One Evening (1994) for soprano, mezzo soprano, chamber ensemble, real-time devices, and signal processing. In his article “The Metaphysics of Live Electronics,” Harvey states:

With live electronics… two worlds are brought together in a theatre of transformations. No-one listening knows exactly what is instrumental and what is electronic anymore. Legerdemain deceives the audience as in a magic show… When they lack their connection to the familiar instrumental world electronics can be inadmissibly alien, other, inhuman, dismissible (like the notion of flying in a rational world). When electronics are seamlessly connected to the physical, solid instrumental world an expansion of the admissible takes place, and the ‘mad’ world is made to belong. (Harvey 1999, 80)

Simon Emmerson

Simon Emmerson (b. 1950) is a noted British composer and author of numerous books and articles concerning live electronic music, including The Language of Electroacoustic Music (1986), Music, Electronic Media, and Culture (2000), and the recently published Living Electronic Music. Emmerson’s electronic vocal works include: Time Past IV (1985) for soprano and tape (first prize winner at the 1985 Bourges Electroacoustic Awards); Ophelia’s Dream II (1979) for six singers and electronics; Songs from Time Regained (1988) for soprano, ensemble and electronics; and Sentences (1991) for soprano and live electronics. After serving twenty-eight years as the Director of the Electroacoustic Music Studios at City University, London, he is now Professor of Music, Technology and Innovation at De Montfort University in Leicester, UK.

Kaija Saariaho

Several continental European composers have contributed importantly to the maturing repertory of live electronic music. Finnish composer Kaija Saariaho (b. 1952) has lived and worked in Paris since 1982 (the year she attended computer music courses at IRCAM), and has interwoven electronics into many of her compositions. Saariaho is known for her elegant vocal writing; her exquisite Lonh (1996), for soprano and live electronics, premiered at the 1996 Wien Modern Festival and was subsequently recorded by soprano Dawn Upshaw. In this work, Saariaho intricately intersperses pre-recorded vocal material and concrète sounds (birds, wind, rain) synthesized and processed with AudioSculpt and CHANT (both IRCAM software). [15] Following this, Saariaho composed the electronic opera L’Amour de loin, which premiered at the 2000 Salzburg Festival, directed by Peter Sellars, conducted by Kent Nagano, and again featuring Dawn Upshaw. (16) She has also composed a version of From the Grammar of Dreams for soprano and electronics (2002). Many of Saariaho’s works have been performed as visual concerts, designed and realized by her husband, Jean-Baptiste Barrière. (17) She was recently honored as the 2008 Composer of the Year by Musical America.

Philippe Manoury

Philippe Manoury (b. 1952, France) is one of the most renowned composers of interactive electronic works. He is considered one of the world’s leading computer music researchers, and is historically significant for having composed the first piece for Puckette’s Max software, Jupiter (1987), as well as one of the first interactive vocal compositions, En Écho (1993) [composer’s faculty webpage (15 September 2008)]. In addition to En Écho, Manoury has composed several important large-scale electronic vocal works, including three operas. 60e Parallèle, composed from 1995–97 and originally titled La Nuit du Sortilège, was written for voices, large orchestra and electronic sounds, and premiered at the Théâtre du Châtelet in 1997. K…, commissioned and premiered by the Paris Opera in 2001, is a work in twelve scenes for voices, orchestra and real-time electronics; the text was written by Bernard Pautrat and André Engel, and is derived from Franz Kafka’s Der Prozess. Manoury’s most recent opera, La Frontière, a chamber work for six singers, nine musicians and real-time electronics, was written in 2003. That same year he composed Noon as composer-in-residence for the Orchestre de Paris; Noon is a large work for soprano, choir, orchestra and real-time electronics based on a text of Emily Dickinson. Manoury is currently on the faculty of the University of California, San Diego (UCSD).

Julieanne Klein performing selections from Philippe Manoury’s En Écho. 2 November 2006, Tanna Schulich Hall, McGill University, Montréal. En Écho is published by Durand.

Philippe Leroux

Another significant French composer of electronic music is Philippe Leroux (b. 1959), a student of Pierre Schaeffer, Olivier Messiaen and Iannis Xenakis. His monumental Voi(REX), composed in 2002 and premiered by French soprano Donatienne Michel-Dansac, has been performed in Paris, Montréal, San Francisco and New York. In 2006 Leroux completed Apocalypsis for four voices, fifteen instruments and electronics, one of the largest works in his œuvre to date.

Apparently indifferent to the concept of silence as the expression of vacuity, Leroux’s music makes sound itself its very essence — not in a contemplative or dramatic manner, but always in movement. This perpetual flux (give Heraclitus his due) speaks of the appearing and disappearing of sound, but mostly of the necessary nurturing process that allows it to evolve. This music may go for velocity over virtuosity — yet it is not without risk. More interested in intervals than in pitches or particular notes, it surrenders to the playful, almost mischievous, interplay of [combinations] — music morphing ad infinitum — but never at the expense of its performers, whom it intends to glorify. (Billaudot Publishers, composer page [15 September 2008])

Luca Francesconi

Luca Francesconi (b. 1956, Italy) studied composition with Luciano Berio and Karlheinz Stockhausen, and founded the musical research center Agon Acustica Informatica Musica in 1990. Among his vocal electronic compositions are Etymo (1994) for soprano, chamber orchestra and electronics on a text of Charles Baudelaire; Sirene/Gespenster (1997), a pagan oratorio for four female choirs, percussion, brass and electronics (produced by WDR, the ASKO Ensemble and IRCAM); and Lips, Eyes, Bang (1998) for actress/singer, twelve instruments, and real-time audio and video transformations. A recording of Etymo featuring Canadian soprano Barbara Hannigan was released this year on the Kairos label. Francesconi is currently a professor and Chair of the Composition Department at the Musikhögskolan in Malmö, Sweden.

Fausto Romitelli

Fausto Romitelli (1963–2004) sought to integrate elements of techno, rock, European traditionalism and the French spectral school into his compositions.

At the centre of my composing lies the idea of considering sound as a material into which one plunges in order to forge its physical and perceptive characteristics: grain, thickness, porosity, luminosity, density and elasticity. Hence it is sculpture of sound, instrumental synthesis, anamorphosis, transformation of the spectral morphology, and a constant drift towards unsustainable densities, distortions and interferences, thanks also to the assistance of electro-acoustic technologies. And increasing importance is given to the sonorities of non-academic derivation and to the sullied, violent sound of a prevalently metallic origin of certain rock and techno music. (Ricordi publishers, composer webpage [15 September 2008])

One of Romitelli’s best-known works, In EnTrance (1995–96), was written for soprano, ensemble and electronics, and utilizes a mantra from the Tibetan Book of the Dead. An Index of Metals (2003), the last work completed before his untimely death from illness, is a video opera composed for soprano soloist, ensemble, multimedia projection and electronics.

Mauro Lanza

Another important Italian composer of electronic vocal music is Mauro Lanza (b. 1975), employed at IRCAM as a research composer and teacher since 1999. Lanza’s most profound vocal work is a cycle of pieces titled Nessun suono d’acqua, a large-scale multimedia work for voice, ensemble, electronics, new instruments and multiple video projections. Nessun suono d’acqua (which translates as “no water sound”) is comprised of four vocal pieces based on Amelia Rosselli’s Prime Prose Italiane. The cycle includes Barocco (1998–2003) for soprano and toy instruments (6 players), Erba nera che cresci segno nero tu vivi (1999–2000) for soprano and live electronics, Mare (2004) for soprano, small ensemble, toy instruments and live electronics, and Cane for soprano, ensemble, toy instruments and electronics. Cane was commissioned by the McGill Digital Composition Studios with the assistance of the Daniel Langlois Foundation, and was premiered in March 2007 during the Montréal New Music International Festival.

Julieanne Klein performing Mauro Lanza’s Erba nera che cresci segno nero tu vivi (excerpt). Live at the 2006 DAFx Conference (Montréal, 18–20 September 2006), Pollack Concert Hall, McGill University. Erba nera is published by Ricordi.

Canadian / Canadian-based composers

alcides lanza

alcides lanza (b. 1929, Argentina) became a naturalized Canadian citizen in 1976. Prior to this he worked with Vladimir Ussachevsky at the famed Columbia-Princeton Electronic Music Center under a Guggenheim fellowship. lanza went on to become director of the McGill University Electronic Music Studio, and in 1983 founded The Group of the Electronic Music Studio (g.e.m.s.) at McGill with Claude Schryer and John Oliver. (18) He has composed several works for voice and live electronics, including the song cycle Trilogy, comprising Ekphonesis V (1979), Penetrations VII (1972) and Ekphonesis VI (1988), written for actress-singer, lights, electronic sounds (tape) and electronic extensions. lanza also composed vôo (1992) for acting voice, electroacoustic music and digital signal processors. Both the trilogy and vôo were written for his wife, singer-actress Meg Sheppard.

Bruce Pennycook

Bruce Pennycook (b. 1949, Toronto) received his DMA in Musicology from Stanford University, California, where he studied under John Chowning and Leland Smith. Following his return to Canada, Pennycook founded a research center for music and technology at Queen’s University in Kingston, Ontario. One of his most important works is Praescio I–VIII, a series of interactive compositions in which each piece focuses on a different instrument. Praescio II (1989), for soprano, chamber ensemble and interactive system, utilizes the poetry of Canadian author Tessa McWatt. Pennycook is Professor of Music and Professor of Radio-Television-Film at the University of Texas at Austin, where he also chairs the Digital Arts and Media Bridging Disciplines Program. He is currently writing a work for soprano saxophone, Max/MSP and adapted Wii controller for the 2009 International Saxophone Congress.

Zack Settel

Zack Settel (b. 1957, New York) studied with Morton Subotnick at the California Institute of the Arts, and was employed by IRCAM as a musical assistant until 1995, during which time he assisted Puckette in the development of Max. He composed Hok Pwah (1993) for soprano, percussion and live electronics, as well as L’enfant des glaces (2000), an electroacoustic opera with real-time vocal processing conceived by soprano Pauline Vaillancourt. Settel currently resides in Montréal. A DVD of L’enfant des glaces was released in 2006 (Atma Classique).

Laurie Radford

Laurie Radford (b. 1958, Manitoba), noted for his significant output of electronic, electroacoustic and instrumental works, has composed interactive works with live computer-controlled signal processing of both audio and video. Radford’s works for voice and electronics include in the angle (1998) for soprano, Bb clarinet, violin, piano and digital signal processing; of circles and seconds (1999) for soprano, soprano saxophone or Bb clarinet, violoncello, percussion and digital signal processing; and I was struggled…! (2002) for voice/actor, piano, electroacoustic music and digital signal processing.

David Adamcyk

David Adamcyk (b. 1977) is an emerging Canadian composer who has been affiliated with McGill University since 1999, where he is currently finishing his doctorate in composition. In 2005–06 he studied in Paris with Philippe Leroux, and participated in IRCAM’s composition cursus during the following academic year. As part of a Langlois Foundation Visiting Professor research project, he was also the technical assistant for Martin Matalon’s composition La Makina, which was premiered at the 2008 MusiMars new music festival in Montréal. His vocal live electronic works include Avant la larme (2006), commissioned and premiered by soprano Julieanne Klein, and a new composition for voice, piano and saxophone commissioned by the Société de Musique Contemporaine du Québec (SMCQ), to be premiered in October 2009 (also by Julieanne Klein). Adamcyk has won four prizes in the SOCAN Foundation composers’ competition, two of which, including Avant la larme, took first place in the 2007 edition (composer homepage [15 September 2008]).

Julieanne Klein (soprano), Zosha di Castri (piano) and Adam Kinner (sax) performing David Adamcyk’s Avant la larme. Studio recording. David Adamcyk, sound engineer.

Performers

Few vocal performers devote their careers to contemporary and electronic music; those who do are usually endowed with exceptional musical and vocal abilities. Tracing a direct lineage back to pioneering singer/composer Cathy Berberian, many of these singers actively collaborate with multimedia artists and/or compose for their own instrument. Journeying innovatively to the outer extremes of their voices, they enhance their creative vision with the infinite possibilities of machines.

Joan La Barbara

Joan La Barbara (b. 1947) is one of the first female composers to enhance vocal performance art with electronics (Weber-Lucks 2003). A pioneer in the field of extended vocal techniques and vocal sound exploration, La Barbara continues to do a great deal of improvisatory, reactive performance work. She first experimented with electronics in 1967 as a student at Syracuse University, where she began composing with the Moog synthesizer; it was during this time that she became fascinated with the process of sonic transformation. La Barbara later learned to process her voice electronically using electric guitar effects, phase shifters, pitch modulators, frequency analyzers and a Roland Space Echo. She has created numerous pieces for voice and live electronics, including: Vocal Extensions (1975); Thunder (1975) for six timpani, voice and electronics; Autumn Signal (1978) for voice and Buchla synthesizer; and 73 Poems (1993), a collaborative work with text artist Kenneth Goldsmith written for electrically modified voices.

When asked to elaborate upon the concept of electronics as an extension of acoustic performance practice (as opposed to being an entirely new medium), La Barbara responded:

[They are] both — it’s a new medium, but it certainly is an extension of our existing harmonic vocabulary because we’re human, and we’ve used machines to make machine noises. Many times composers use the machines to make very musical sounds, though sometimes they use them to make very unmusical sounds. I would think that some of the work of David Tudor is a great example of just trying to get to the machineness of the machine sounds, though often you’ll find that the electronics are very fluid. I’ve always used electronics as a further extension of what I can do with the voice. (La Barbara 2005)

La Barbara’s collaborations with her husband Morton Subotnick, as well as her own compositional output, have resulted in the creation of a wide array of works for live and interactive electronics utilizing extended vocal techniques. La Barbara continues to perform extensively, often recreating pieces written for her voice by Alvin Lucier, David Behrman, Morton Feldman, Charles Dodge and Roger Reynolds. She was a featured composer at the 2007 Santa Fe Electronic Music Festival, (19) and is also involved in The Human Voice in a New World, a series of performances for voice and interactive electronics sponsored by the Electronic Music Foundation.

Laurie Anderson

Though not a traditional singer, Laurie Anderson (b. 1947) is a renowned multimedia artist and performer of spoken word. She first experimented with stereo spatialization of her voice in Stereo Song for Steven Weed (1977). This piece utilized two microphones and two speakers placed on opposing sides of a small performance space, within which Anderson explored conversational gestures in a public, self-personified dialogue. She subsequently began to incorporate more sophisticated electronic effects into her performances, using a vocoder in her 1981 piece O Superman. Anderson’s spoken prose is often politically oriented, while her vocal processing and affectations explore gender ambiguity and duality (she often uses two microphones, one to represent the “female voice” and one to represent the “male voice”). In the 1990s she toured Stories From the Nerve Bible, a multimedia work discussing the Gulf War “syndrome”. In this piece, Anderson explored the American public’s overt patriotism during the first Gulf War, citing the glorification of guns and other weaponry. In 2003 Anderson became the first artist-in-residence at NASA (an interesting appointment, given her politically outspoken performances against the Gulf War). The End of the Moon, which doubled as the final report on her research at NASA, was comprised of music for spoken voice, violin and electronics. In Anderson’s words, this piece “looks at the relationships between war, æsthetics, the space race, spirituality and consumerism.” She is currently touring Homeland, “a series of songs and stories that creates a poetic and political portrait of contemporary American culture, and addresses the current climate of fear, obsession with information and security” (The Egg, event webpage [15 September 2008]). A recording of Homeland was released on Nonesuch Records earlier this year.

Pamela Z

Extending the realm of vocal sonic experimentation, Pamela Z and Franziska Baumann are among a special class of performers who utilize electronic body instruments to accentuate and extend their vocal capabilities. San Francisco-based composer/performer/audio artist Pamela Z uses the BodySynth®, a specially designed MIDI controller created by Chris Van Raalte and Ed Severinghaus. The BodySynth® senses the muscular energy generated by the performer’s movements and translates this information into MIDI data. Pamela Z combines elements of the bel canto vocal tradition, extended vocal techniques and spoken word while processing her voice in real time with Max/MSP software (artist’s website [15 September 2008]).
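
Although the BodySynth’s internal design is not documented here, controllers of this kind generally follow a common pattern: a continuous sensor signal is rectified and envelope-followed, streamed out as MIDI controller data, and thresholded to trigger discrete note events. The sketch below is a rough, hypothetical illustration of that pattern, not the BodySynth’s actual algorithm; the threshold, note number and decay constant are invented for the example.

```python
# A hypothetical sketch of turning a continuous muscle-tension signal into
# MIDI-style data: envelope-follow the sensor, stream it as a controller,
# and emit a note-on whenever the envelope crosses a threshold.

def envelope_follower(samples, decay=0.95):
    """Return a smoothed estimate of muscular effort from raw sensor samples."""
    env, out = 0.0, []
    for s in samples:
        env = max(abs(s), env * decay)   # fast attack, exponential release
        out.append(env)
    return out

def to_midi_events(envelope, threshold=0.6):
    """Map the envelope to 7-bit controller values plus threshold-triggered notes."""
    events, armed = [], True
    for env in envelope:
        events.append(("cc", min(127, int(env * 127))))
        if env > threshold and armed:        # fire one note per gesture peak
            events.append(("note_on", 60))
            armed = False
        elif env < threshold * 0.5:          # re-arm once the muscle relaxes
            armed = True
    return events

raw = [0.0, 0.2, 0.5, 0.9, 0.7, 0.3, 0.1, 0.05]   # simulated sensor readings
print(to_midi_events(envelope_follower(raw)))
```

In performance, the resulting MIDI stream would then be mapped inside an environment such as Max/MSP to whatever vocal processing the performer has designed.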

Franziska Baumann

Swiss composer/vocalist/flutist Franziska Baumann specializes in live vocal electronics in addition to sound installations and theater music. As an artist-in-residence at STEIM (Studio for Electro-Instrumental Music), she developed an interactive SensorLab-based cyberglove that she wears on her right arm, enabling her to sculpt her voice in real time. Baumann is currently a professor of improvisation and composition at the University of the Arts in Berne, Switzerland, and is also part of “body (without) sound”, a national research program that focuses on the relation between gesture, movement and sound (artist’s website [15 September 2008]).

Donatienne Michel-Dansac

French soprano Donatienne Michel-Dansac has collaborated with composers from IRCAM since 1993, premiering numerous works by Philippe Manoury, Luca Francesconi, Fausto Romitelli, Mauro Lanza, Georges Aperghis, and Philippe Leroux. Michel-Dansac regularly sings with ensembles throughout Europe, including the Tapiola Orchestra of Helsinki, the London Sinfonietta, the Orchestre National de France, and the Orchestre Philharmonique de Radio-France.

Juliana Snapper

Juliana Snapper is a Los Angeles-based vocal artist who specializes in the creation of experimental opera and multimedia theater works. She has collaborated with numerous new media video artists and composers, and her video collaborations with artist Paula Cronan have screened in New York, Chicago, San Francisco, Los Angeles, San Diego, London, Madrid and Zagreb. She is currently a doctoral student at the University of California San Diego, where she is a member of the Department of Critical Studies and Experimental Music Practices.

Important research and production centers

In creating and mounting technologically complex computer-based electronic works, an affiliation with a major research institution is of great benefit to performers and composers, as these institutions maintain the infrastructure, expertise and financial backing needed to produce major projects. One of the most important institutions for scientific research on music and sound is the aforementioned IRCAM, located in Paris, France. Boulez was in charge of the Center when it opened in 1977, but numerous other important figures in the electronic music scene were involved at the administrative level, including Luciano Berio, Jean-Claude Risset and Max Mathews. The establishment of IRCAM paved the road for the creation of numerous other private and public research facilities throughout the world.

The aforementioned STEIM, established in 1984, is located in Amsterdam, Netherlands, and focuses on the “research and development of instruments and tools for performers in the electronic arts” (STEIM website [15 September 2008]). Other European institutions include: NOTAM — Norwegian Network for Technology, Acoustics and Music (Oslo, Norway); DIEM — Danish Institute for Electroacoustic Music (Denmark); CRM — Centro Ricerche Musicali (Rome, Italy); IEM — Institut für Elektronische Musik und Akustik (Graz, Austria); and La Kitchen — Center for Research and Development of Interactive Tools (Paris, France). Several European universities of note include: University of York (UK); Birmingham University (UK); De Montfort University (Leicester, UK); Pompeu Fabra University (Barcelona, Spain); Queen’s University (Belfast, Northern Ireland); and the University of Helsinki (Finland).

North American research centers are primarily situated in academic settings, and include: CCRMA — Center for Computer Research in Music and Acoustics (Stanford University, California); McGill University’s Digital Composition Studio and CIRMMT — Centre for Interdisciplinary Research in Music Media and Technology (Montréal, Canada); CMC — Computer Music Center (Columbia University, New York); MIT Media Lab (Cambridge, Massachusetts); CNMAT — Center for New Music and Audio Technologies (University of California, Berkeley); CREATE — Center for Research in Electronic Art Technology (University of California, Santa Barbara); CRCA — Center for Research in Computing and the Arts (University of California, San Diego); and CMMAS — Centro Mexicano para la Música y las Artes Sonoras (Mexico).

Other research centers around the globe include: The Gerald Lapierre Electro-Acoustic Music Studio (University of Natal, Durban, South Africa); CEME — China Electronic Music Center (Central Conservatory of Music, Beijing, China); and LNME — Laboratorio Nacional de Música Electroacústica (La Habana, Cuba).

5. A New Paradigm

The historical overview presented in this article pertains principally to live electronics composed within a classical and/or academic milieu. However, it is important to note the profusion of artists who employ live vocal processing within other genres, including: Kraftwerk, Afrika Bambaataa & The Soul Sonic Force, Madlib (as Quasimoto), Daft Punk, Chromeo, Lee “Scratch” Perry, Dub Syndicate, Tikiman, Tom Waits, Legendary Pink Dots, The Knife, The Residents, AGF, Brian Eno, Herbert/Wishbone/Dr. Rocket, Leafcutter John, Vladislav Delay/Luomo (Vocalcity)/Uusitalo (Vappaa Muurari live), Matmos, Modeselektor, Mum, Noze, Ricardo Villalobos, Digitalism, Flight of the Conchords, MSTRKRFT, Last Days of Humanity and Liturgy.

In the past, the domain of contemporary and avant-garde music did not often co-mingle with “non-classical” (or non-academic) electronic production. However, an exciting new trend is emerging, one that fuses the traditions of both worlds, notably in works by Eric Whitacre, Ricardo Romaneiro, Nico Muhly, Mason Bates, Joshua Penman, Matt Marks, Caleb Burhans, Tristan Perich, Max Richter, Hauschka, Lukas Ligeti, Jacob TV and Kyle Bobby Dunn. Nevertheless, segregation between the academic and non-academic sectors unfortunately still exists, perpetuated by individuals within these communities as well as by unadventurous audience members. Ronen Givony, founder and artistic director of the innovative Wordless Music series in New York City, addresses this division by outlining the commonalities inherent to both genres:

Although we have built no shortage of firewalls to distinguish them, it takes not more than a few minutes of listening to discern that only artificial social constructions and genre distinctions separate the music of, say, Morton Feldman from Brian Eno, Stockhausen from Autechre, Philip Glass from Stars of the Lid, and Conlon Nancarrow from Squarepusher. Nevertheless, what the composers of both so-called “classical” and “electronic” music hold in common, above all, is an ambition to create what might be called gradual music, or listening music… What is listening music? It is not always music with a clearly perceptible start, middle, or end. It is, rather, music that concerns itself with the journey — with space, imagery, mood, texture, repetition, and internal meaning. It is music for sustained and concentrated absorption: something that takes form slowly, and resists easy understanding… Today’s composers of listening music find themselves weirdly and unnaturally segregated into two largely isolated musical ghettoes: “academic” or “institutional” composers, on the one hand, whose music is performed and funded by a commissioning structure enfolding orchestras, universities, government cultural agencies, and philanthropic entities; and, on the other, “outsider” or “underground” composers, whose music is nearly always self-funded — to the extent that it is funded at all — performed in darkened rooms, and nurtured by an international network of small record labels, concert promoters, and press. Despite their many differences, however, both share an overriding concern with the outer, more ineffable boundaries of human feeling and experience, and the way that music can be made to reflect these phenomena in our lives. In one, this vision might be expressed in the form of a somber, otherworldly meditation growing out of insistently repeating drones, beats, samples, cut-ups, and disembodied fragments; in the other, it might be the form of a gloriously deafening raid on the inarticulate — the grand, heroic, Beethovenian blast. (Givony 2008)

In the past century, our natural and sonic environment has been irrevocably altered, steadily replaced by manmade machine-objects and the clang of their resultant sonorities. The artistic output of human civilization naturally parallels this evolution. Though ambivalence towards technological progress dwells within our culture, the trend of exponential scientific development seems irreversible. As artists, poets, philosophers and creators representing this evolution, we should actively strive to erase what Givony labels “artificial social constructions and genre distinctions,” and unite in a cooperative, humanistic effort to produce inspired and thought-provoking digital media relevant to the new century.

Notes

  1. For an excellent history of electronic instruments, visit 120 Years of Electronic Music: Electronic Musical Instrument 1870–1990 [Last accessed 18 September 2008.]
  2. An inspired performance of Thom Yorke singing “How to Disappear Completely” with six ondes martenot can be found on YouTube [Last accessed 18 September 2008.]
  3. Windows, Icons, Mouse, Pointers, Systems. The WIMPS design was popularized with the 1984 release of the Apple Macintosh. The Macintosh computer was highly significant in the development of musical computation, editing processes and artistic applications, as the new graphical interface made the processing and editing of digital media highly attractive to users.
  4. Behrman also played a unique role during the electronic avant-garde of the 1960s, and was influential in instigating the commercial dissemination of experimental music. Employed by Columbia Records and given essentially free rein to produce whichever artists he desired, Behrman developed the “Music of Our Time” series, highlighting the works of such forerunning artists as John Cage, Alvin Lucier, Terry Riley, Pauline Oliveros, Henri Pousseur and Steve Reich.
  5. Merce Cunningham has been a longtime supporter of new music. Composers who have produced works for his dance company include John Cage (Cunningham’s long-term partner), Robert Ashley, Earle Brown, Larry Austin, Pauline Oliveros and Brian Eno.
  6. Manoury’s Jupiter was also the first piece composed using Miller Puckette’s Max software, designed in 1986.
  7. Object-oriented programs are built using modules of code, collected into class libraries. The great advantage of building code from modules is that they can be reused in other applications written in the same programming language, as well as shared with other users. Over time, a rich set of class libraries tends to accumulate around a popular programming language.
  8. It is important to note the difference between “real time” (a noun phrase, as in “processed in real time”) and “real-time” (an adjective, as in “real-time processing”).
  9. Jonathan Harvey’s Mortuos Plango, Vivos Voco (1980) is an early example of highly successful usage of the phase vocoder. In this piece, Harvey recorded and processed two acoustic sources: the sound of the great tenor bell at Winchester Cathedral, and the singing voice of his son, who was a chorister in the Winchester Cathedral choir. Harvey morphs the sound of the bell and the boy’s voice, intertwining the two sonic elements seamlessly. Other early usages of phase vocoding can be heard in Trevor Wishart’s Vox-5, in which he morphs the sounds of crowds, bees, bells and other environmental sounds, and in Joji Yuasa’s A Study in White.
  10. In 2008 IRCAM launched an updated version of their online database cataloging works in this field. Readers are strongly encouraged to explore the site: http://brahms.ircam.fr. The following is a short list of additional composers who have written for voice and live electronics: Georges Aperghis (b. 1945, Athens); Matthew Burtner (b. 1971, Alaska); Edmund Campion (b. 1957, Dallas); Andrew Cole (b. 1980, New York); Evdokija Danajloska (b. 1973, Macedonia); Frédéric Durieux (b. 1959, Paris); Steve Everett (b. 1953, Georgia); Richard Felciano (b. 1930, Santa Rosa CA); Stefano Gervasoni (b. 1962, Bergamo, Italy); Mari Kimura (b. 1962, Tokyo); Michaël Lévinas (b. 1949, Paris); Andrea Nicoli (b. 1960, Torino, Italy); Sun-Young Pahg (b. 1974, Korea); Hèctor Parra (b. 1976, Barcelona); Joseph Rovan; Salvatore Sciarrino (b. 1947, Palermo, Italy); Agostino di Scipio (b. 1962, Naples); Jos Zwaanenburg (b. 1958, Netherlands).
  11. Interestingly, though this is not uncommon among electronic composers, Berio did not program his own electronics, relying instead upon technicians to interpret and engineer his instructions. One can only conjecture whether he would have shown greater interest in interactive composition had the technology been more readily accessible during this period (Max/MSP was released in 1997, just six years prior to his death in 2003).
  12. For more information visit the InfoMus Lab’s page on the work, performed at the Salzburg Festival, July–August 1999 [Last accessed 15 September 2008.]
  13. Subotnick would compose a “ghost score” by creating a series of control voltages and recording them to tape or EPROM. A miked performer played “through” the ghost box, and the sonic output was processed in real time according to the pre-recorded control voltages. As the recorded control voltages were themselves silent or “transparent”, Subotnick coined the term “ghost score” to refer to the electronic score (a conceptual sketch of this arrangement follows these notes). Reflecting on the importance of this, Subotnick stated: “I think that technology has to be transparent. You cannot be aware of the technology, but [it should be] a world that you move into seamlessly” (Machover 1999, 20).
  14. The Misfortune of the Immortals (a collaboration between La Barbara, Subotnick and Coniglio), is an “interdisciplinary interactive media opera for voices, dancers, actors, video projections (by Steina and Woody Vasulka), MIDI instruments and interactive computer systems allowing onstage performers to interactively control the theatrical environment.” Intimate Immensity is an interactive media poem written for Joan La Barbara and Thomas Buckner, utilizing a Balinese dancer, infrared light and two video artists, whose images were choreographed and manipulated in real time.
  15. AudioSculpt, based on a phase vocoder analysis/resynthesis engine, allows composers to modify spectral information through a graphical interface. Composers may apply cross synthesis, filtering, and expansion/compression of time using only the mouse (i.e. no complicated programming techniques are required). CHANT has become a highly utilized real-time synthesis vehicle, and is one of the main products of the Max/MSP library.
  16. L’Amour de loin won the Grawemeyer Composition Award in 2003.
  17. More information can be found on the website of the independent distribution project Petals, founded by Barrière, Saariaho and others.
  18. “The ensemble has presented numerous concerts of all genres of contemporary music, acoustic, electroacoustic, interactive live-performance, music theatre and multi-media. Many of the works premiered by g.e.m.s. during its fifteen consecutive seasons have been major international prize-winners and have been performed at contemporary music concerts and festivals in Canada, the USA, Europe, South America and Japan.”
  19. Other artists involved in this festival include Golan Levin, Thomas Buckner, Earl Howard, David Wessel, Zachary Lieberman, Jaap Blonk, Paul Botelho and David Moss.
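
As a supplement to note 13, the following sketch illustrates the general principle of a “ghost score” under stated assumptions: a silent, pre-composed control track is read back block by block and decides how the live input is processed. It is a conceptual illustration only, not a reconstruction of Subotnick’s analogue hardware; the control values, block size and choice of processing (gain and ring modulation) are invented for the example.

```python
# A conceptual "ghost score" sketch (not Subotnick's hardware): a silent,
# pre-recorded control track is read back in sync with the live input, and
# its values decide how each block of the performer's sound is processed.

import numpy as np

SR, BLOCK = 44100, 512

# Pre-composed control track: one (gain, ring-mod depth) pair per block.
ghost_score = [(0.2, 0.0), (0.6, 0.1), (1.0, 0.5), (0.8, 0.9), (0.3, 0.2)]

def process_block(block, gain, ring_depth, t0):
    """Apply the ghost score's instructions to one block of live audio."""
    t = (t0 + np.arange(len(block))) / SR
    carrier = np.sin(2 * np.pi * 220.0 * t)   # fixed modulator tone
    ring = block * carrier                    # ring-modulated copy of the input
    return gain * ((1.0 - ring_depth) * block + ring_depth * ring)

live_input = np.random.randn(BLOCK * len(ghost_score)) * 0.1   # stand-in for mic input
output = np.concatenate([
    process_block(live_input[i * BLOCK:(i + 1) * BLOCK], g, d, i * BLOCK)
    for i, (g, d) in enumerate(ghost_score)
])
print(output.shape, float(np.abs(output).max()))
```

The essential point, as Subotnick observes in note 13, is that the control layer itself is inaudible; only its effect on the live sound is heard.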

Bibliography

Adamcyk, David. Email exchange. 12 December 2006.

Anhalt, István. Alternative Voices: Essays on contemporary vocal and choral composition. Toronto: University of Toronto Press, 1984.

Battier, Marc. “Electroacoustic Music Studies and the Danger of Loss.” Organised Sound 9/1 (2004), pp. 47–53.

Belet, Brian. “Live Performance Interaction for Humans and Machines in the Early Twenty-First Century: One composer’s æsthetics for composition and performance practice.” Organised Sound 8/3 (2003), pp. 305–12.

Biddle, Ian. “Nostalgia, Irony and Cyborgian Vocalities in Kraftwerk’s Radioaktivität and Autobahn.” Twentieth-Century Music 1/1 (2004), pp. 81–100.

Bosma, Hannah. “Male and Female Voices in Computer Music.” Proceedings of the International Computer Music Conference (1995), pp. 139–43. Banff, Canada: International Computer Music Association.

_____. “Bodies of Evidence, Singing Cyborgs and Other Gender Issues in Electrovocal Music.” Organised Sound 8/1 (2003), pp. 5–17.

Brown, Linda. “The Beautiful in Strangeness: The Extended Vocal Techniques of Joan La Barbara.” PhD Diss. Gainesville: University of Florida, 2002.

Cage, John. Silence: Lectures and Writings. Cambridge MA: MIT Press, 1966.

Causton, Richard. “Berio’s ‘Visage’ and the Theatre of Electroacoustic Music.” Tempo, New Ser., No. 194, Italian Issue (1995), pp. 15–21.

Chadabe, Joel. Electric Sound: The past and promise of electronic music. Upper Saddle River, NJ: Prentice Hall, 1997.

Connor, Steven. “The Decomposing Voice of Postmodern Music.” New Literary History 32 (2001), pp. 467–83.

Cope, David. Techniques of the Contemporary Composer. New York: Schirmer Books, 1997.

_____. New Directions in Music. Prospect Heights IL: Waveland Press, Inc, 2001.

Dodge, Charles. Computer Music: Synthesis, composition, and performance. New York: Schirmer Books, 1997.

Eaton, John. First performances: The Syn-Ket and Moog synthesizer in the 1960s. EMF CD 056, 2007.

Edgerton, Michael Edward. The 21st Century Voice: Contemporary and traditional extra-normal voice. Lanham, MD: Scarecrow Press, 2004.

Emmerson, Simon. “‘Live’ versus ‘real-time’.” Contemporary Music Review 10/2 (1994), 95–101.

_____. “Sentences for Soprano and Electronics: Towards a poetics of live electronic music.” Proceedings of the Académie Internationale de Musique Electroacoustique (June 1996), pp. 316–18. Bourges: Mnémosyne [Bilingual publication].

_____. “Acoustic/Electroacoustic: The relationship with instruments.” Journal of New Music Research 27/1–2 (1998), pp. 146–64.

Fabbriciani, Roberto. “Walking with Gigi.” Contemporary Music Review 18/1 (1999), pp. 7–15.

Garnett, Guy. “The Aesthetics of Interactive Computer Music.” Computer Music Journal 25/1 (2001), pp. 21–33.

Giomi, Francesco, Damiano Meacci and Kilian Schwoon. “Live Electronics in Luciano Berio’s Music.” Computer Music Journal 27/1–2 (2003), pp. 30–46.

Givony, Ronen. “Age of Content.” Programme Notes to the Britten-Pears Aldeburgh Festival. Suffolk, UK, June 2008.

Harvey, Jonathan. “The Metaphysics of Live Electronics.” Contemporary Music Review 18/3 (1999), pp. 79–82.

Hinkle-Turner, Elizabeth. “Women and Music Technology: Pioneers, precedents and issues in the United States.” Organised Sound 8/1 (2003), pp. 31–47.

Kimura, Mari. “Performance Practice in Computer Music.” Computer Music Journal 19/1 (1995), pp. 64–75.

_____. “Creative Process and Performance Practice of Interactive Computer Music: A performer’s tale.” Organised Sound 8/3 (2003), pp. 289–96.

La Barbara, Joan. “Voice is the Original Instrument.” Contemporary Music Review 21/1 (2002), pp. 35–48.

_____. Personal Interview. 17 June 2005.

Licata, Thomas, ed. Electroacoustic Music: Analytical perspectives. Westport CT: Greenwood Press, 2002.

Manning, Peter. Electronic and Computer Music. New York: Oxford University Press, 2004.

McNutt, Elizabeth. “Performing Electroacoustic Music: A wider view of interactivity.” Organised Sound 8/3 (2003), pp. 297–304.

Metzer, David. “The Paths from and to Abstraction in Stockhausen’s Gesang der Jünglinge.” Modernism / Modernity 11/4 (2004), pp. 695–721.

Montanaro, Lisa. “A Singer’s Guide to Performing Works for Voice and Electronics.” DMA Treatise. Austin: University of Texas, 2004.

Pape, Gerard. “Luigi Nono and his Fellow Travelers.” Contemporary Music Review 18/1 (1999), pp. 57–65.

Risset, Jean-Claude. “Problems of Analysis and Computer Music.” Analysis in Electroacoustic Music. Edited by Françoise Barrière and Gerard Bennett. Bourges: Institut International de Musique Electroacoustique, 1997, pp. 360–68.

_____. “Composing in Real-time?” Contemporary Music Review 18/3 (1999), pp. 31–39.

Rizzardi, Veniero. “Notation, Oral Tradition and Performance Practice in the Works with Tape and Live Electronics by Luigi Nono.” Contemporary Music Review 18/1 (1999), pp. 47-56.

Roads, Curtis. “Interview with Morton Subotnick.” Computer Music Journal 12/1 (1988), pp. 9–18.

_____. The Computer Music Tutorial. Cambridge: MIT Press, 1996.

Rowe, Robert. Interactive Music Systems. Cambridge: MIT Press, 1993.

_____. “Aesthetics of Interactive Music Systems.” In Aesthetics of Live Electronic Music. Contemporary Music Review 18/3 (1999), pp. 83–87.

Russolo, Luigi. “The Art of Noise.” Classic Essays on Twentieth-Century Music. Edited by Richard Kostelanetz and Joseph Darby. New York: Schirmer, 1996, pp. 35-41.

Schloss, W. Andrew. “Using Contemporary Technology in Live Performance: The dilemma of the performer.” Journal of New Music Research 32/3 (2003), pp. 239–42.

Settel, Zack and Cort Lippe. “Real-time Musical Applications using Frequency Domain Signal Processing.” Applications of Signal Processing to Audio and Acoustics Workshop. New Paltz, NY, 1995.

Sivuoja-Gunaratnam, Anne. “Desire and Distance in Kaija Saariaho’s Lohn.” Organised Sound 8/1 (2003), pp. 71–84.

Stroppa, Marco. “Live Electronics or… Live Music? Towards a critique of interaction.” Contemporary Music Review 18/3 (1999), pp. 41–47.

Subotnick, Morton and Tod Machover. “Interview with Mort Subotnick.” Contemporary Music Review 13/2 (1996), pp. 3–11.

Subotnick, Morton. “The Use of Computer Technology in an Interactive or ‘Real time’ Performance Environment.” Contemporary Music Review 18/3 (1999), pp. 113–17.

_____. Personal Interview. 13 August 2005.

Teruggi, Daniel. “Electroacoustic Preservation Projects: How to move forward.” Organised Sound 9/1 (2004), pp. 55–62.

Truax, Barry. “Sounds and Sources in Powers of Two: Towards a contemporary myth.” Organised Sound 1/1 (1996), pp. 13-21.

_____. “Electroacoustic Symbolism in Powers of Two: The Artist.” Analysis in Electroacoustic Music. Edited by Françoise Barrière and Gerard Bennett. Bourges: Institut International de Musique Electroacoustique, 1997, pp. 379–85

_____. “The Aesthetics of Computer Music: A questionable concept reconsidered.” Organised Sound 5/3 (2000), pp. 119–26.

Ultan, Lloyd. “Electronic Music: An American voice.” Perspectives on American Music since 1950. Edited by James Heintze. New York: Garland, 1999, pp. 3–39.

Weber-Lucks, Theda. “Electroacoustic Voices in Vocal Performance Art — A Gender Issue?” Organised Sound 8/1 (2003), pp. 61–69.

Winkler, Todd. Composing Interactive Music: Techniques and ideas using Max. London: MIT Press, 1998.

Wishart, Trevor. “The Composition of Vox-5.” Computer Music Journal 12/4 (1988), pp. 21–27.

_____. On Sonic Art. Edited by Simon Emmerson. Amsterdam: Harwood Academic Publishers, 1996. [Incl. CD.]

Zurbrugg, Nicholas. Art, Performance, Media: 31 Interviews. Minneapolis: University of Minnesota Press, 2004.
