
Voice and Live-Electronics using Remotes as Gestural Controllers

Introduction

When I was first invited, in October 2007, to undertake a residency at STEIM (STudio for Electro-Instrumental Music) in Amsterdam, I had just finished an intensive period of composing straightforward chamber music works for diverse ensembles and/or singers and a large, evening-length operatic work, Die Bestmannoper, for 14 singers, orchestra, choir, piano, toy piano, harmonium and theremin.

I have also been working on my voice as a singer/performer for almost 20 years now, experimenting with and performing all kinds of genres and techniques, always on the lookout to discover the new and unknown within the voice. I developed different extended vocal techniques within experimental Jazz hardcore bands like Vol-Vox, No Doctor and Astro-Peril, as well as while working on various other ensemble projects. Perhaps the most interesting of these is a quite resonant technique of inhaling while constricting the muscles around the vocal cords. (1) Meanwhile, I was studying Musicology, Music Education, Composition and classical singing. Over the past 14 years, I went from baritone to tenor and then from tenor to countertenor, interpreting Classical repertoire from Handel to Mozart, Schubert to Schumann, Debussy to Britten, etc. Until 2001 I studied with Michael Buettner and Gerold Hermann (University of Potsdam, Germany) and Floyd Callaghan (SUNY Potsdam, USA).

At one point I began to work with extremely low tones — which may recall Mongolian, Tuvan or Tibetan chanting, or throat singing, as it is also known. I developed many other personal techniques as well, such as a special whistling technique which can be as loud, piercing and precise in pitch as a piccolo, and almost as high, (2) and various rapid mouth techniques which do not involve the vocal cords at all, but use only the mouth components: the lips, tongue and palate. (3)

I was always interested in discovering a multitude of voices within just one single voice, particularly my own. An approach to musical creation by which one explores and develops different voices using only one vocal apparatus — accumulating the collected “material”, keeping it in memory and accessing it whenever desired, similar to a music library — is what I would call a vocalistic approach, in contrast to one which uses the voice as sound material to be manipulated by electronic apparatus. (4)

Vocalistic and Computer-based Approach to Music Creation

What is interesting from a composer’s standpoint is the fact that this approach is comparable to — and may remind one of — the creation of music based on live sampling. The techniques I developed have close analogies to typical processing techniques in digital sampling: for example, manipulations of the source through the application of filters and modulators (variations in the mouth cavity shape and size), transposition or other processes. “Storing” the individual sounds in “memory” and building up a sound library from which the performer/interpreter can call up any sample at any time is basically the same as the vocalistic approach.

The vocalistic and the computer-based approaches inspire and influence each other in my work. The interdependency and, in particular, the obvious similarities between the two approaches, as well as the desire to simply add an electronic, sound-based — and therefore abstract — layer to my solo vocal performances, were the reasons I began to explore new possibilities involving electroacoustics. The final impulse came from theatre director Thomas Ostermeier, with whom I worked during the Winter and Spring of 2007–08 on the Schaubühne Berlin productions of DER SCHNITT (THE CUT) and DIE STADT (THE CITY) by contemporary British authors Mark Ravenhill and Martin Crimp. He encouraged me to explore the use of hand and arm movements to control and generate electronic-based sounds.

STEIM, Amsterdam

STEIM, in Amsterdam, is one of the few institutions in the world which has dedicated itself to such an idea, and has done so for many years now. The foundation pushes for the development of software as well as the design of extremely physical interfaces. The philosophy of STEIM stands for a very “human approach to technology” as opposed to the concept of the exclusive use of the computer as “an extension of the formalistic capabilities of humans.”

STEIM promotes the idea that Touch is crucial in communicating with the new electronic performance art technologies. […] At STEIM the intelligence of the body, for example: the knowledge of the fingers or lips is considered musically as important as the ‘brain-knowledge’. (STEIM “info” page)

STEIM principally emphasizes the support of performing artists. However, in my opinion, the development of electronic devices and tools to be used as new music instruments — for both composition and performance — should attract the interest of contemporary composers as well.

A New Electronic Instrument using Remotes

The most interesting aspect of the development of a new electronic instrument is that it offers me freedom of movement, or physical independence, during performance. The devices used must not hinder the performance act, and therefore the performing musician should not be connected to the hardware, computer or any other gear by wires or cables. STEIM worked on solving such problems for a very long time. The biggest challenge was to find a solution without cables or wires while avoiding tremendous costs. No really satisfying solution was found until 2006, however, when a Japanese gaming enterprise came up with handy and affordable remote controllers which could be connected to a computer via Bluetooth and which included accelerometer and tilt sensors. This is crucial, since these sensors make it possible to measure movement and velocity in three-dimensional space, along the x, y and z axes.

Since October 2007, and with the kind help and support of STEIM, I have been configuring a setup using two gestural controllers — one for each hand. Two Wii Remotes are connected to a laptop (a MacBook Pro at the moment). LiSa and junXion (both created at STEIM) are the software programmes I am using. junXion detects and analyzes data received from the sensors built into the Wii Remotes, which measure the three-dimensional movements generated by the performer. The data is constantly collected and translated into MIDI information, which can be used for triggering and altering sounds from electronic music devices such as expanders, synthesizers or samplers. Apart from this digital equipment I am also using an eight-channel mixing board with send-ins and outs for various routings. Although the whole setup can be realized on a stereo amplification system, it is more interesting to present the sound image over five individual loudspeakers fed by two stereo signals — one from each Wii Remote — and the clean, unprocessed vocal sound.
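
To give a more concrete picture of this data flow, the following minimal sketch in Python illustrates the kind of mapping involved: raw accelerometer readings scaled into MIDI control-change values. It is not the actual junXion or LiSa configuration; the function names, controller numbers and value ranges are assumptions made purely for illustration.

    # Conceptual sketch only: junXion performs this kind of mapping internally.
    # read_accelerometer() and send_cc() are hypothetical stand-ins for the
    # Bluetooth sensor layer and the MIDI output, respectively.

    def accel_to_midi_cc(axis_value, lo=-1.0, hi=1.0):
        """Scale a raw accelerometer reading (assumed range lo..hi)
        to a 7-bit MIDI controller value (0..127)."""
        clamped = max(lo, min(hi, axis_value))
        return int(round((clamped - lo) / (hi - lo) * 127))

    def poll_once(read_accelerometer, send_cc):
        """One pass of the sensor-to-MIDI loop: the x, y and z readings
        of one remote become three MIDI control-change values."""
        x, y, z = read_accelerometer()            # hypothetical sensor call
        send_cc(20, accel_to_midi_cc(x))          # e.g. mapped to loudness
        send_cc(21, accel_to_midi_cc(y))          # e.g. mapped to pitch
        send_cc(22, accel_to_midi_cc(z))          # free for another parameter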

LiSa, short for Live Sampling, is a software which has been developed over many years at STEIM by the programmer Frank Baldé and the musician, artist, visionary and inventor of The Hands, Michel Waisvisz. (5) As the programme is particularly suited for — and in fact geared towards — use in live performance, I found that LiSa was ideal for my needs.

The process of learning the programmes and configuring the setup took me quite a while, about two or three months altogether. This is not so long if one takes into consideration the virtually endless possibilities of LiSa and junXion in combination with two gestural controllers. Once I had mastered the possibilities of the programmes and had created various setups, I had to learn how to work with the overall instrument, i.e. play with the controllers.

Video example 1. Studies for Self-Portrait. DNK concert series for new music, Amsterdam, 17 April 2008.

I began by learning how to use the movements of my hands and arms to control different musical parameters, such as loudness or pitch, and how to trigger, by pressing different buttons, sounds that have just been sampled as well as pre-recorded samples stored on the computer. Naturally, I was constantly adjusting the settings during the rehearsal process in order to define the functionalities of the controllers exactly the way I wanted them to be: clearly, the performance aspect of the instrument is intimately connected with its development. In May 2008, after going through different versions of the setup, I felt I had achieved what I had initially set out to do. A short description of how the setup is configured follows.

Setup and Configuration

Each Wii Remote is equipped with 11 buttons, for a total of 22 buttons available. One specific action or function is assigned to each individual button. The most basic functions are: two record modes (which record either the ambient sound or my voice); straightforward playback actions; playback at different transpositions and backwards playback; and loading various samples and/or sample zones.
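
As an illustration of how such assignments might be organized, the sketch below collects the functions just described into a simple Python mapping from button names to actions. The button labels follow the Wii Remote layout, but the action names are hypothetical placeholders rather than LiSa's actual command set or the assignments used in the piece.

    # Hypothetical button-to-action map for one of the two remotes; the action
    # names are placeholders, not LiSa commands.
    REMOTE_BUTTONS = {
        "A":     "record_voice",          # record mode: sample the voice
        "B":     "record_ambient",        # record mode: sample the ambient sound
        "1":     "play_sample",           # straightforward playback
        "2":     "play_sample_reverse",   # backwards playback
        "Up":    "play_transposed_up",    # playback at a higher transposition
        "Down":  "play_transposed_down",  # playback at a lower transposition
        "Left":  "load_previous_zone",    # switch to another sample zone
        "Right": "load_next_zone",
        "Minus": "load_sample_bank_1",
        "Plus":  "load_sample_bank_2",
        "Home":  "stop_playback",
    }

    def on_button_press(name):
        """Dispatch a pressed button to its assigned action (sketch only)."""
        action = REMOTE_BUTTONS.get(name)
        if action is not None:
            print("trigger:", action)     # in practice: send MIDI to LiSa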

Now, as a basic example, in order to alter the triggered sounds one may want to influence the loudness and pitch of the samples. The volume within my setup is controlled by simple up-and-down movements of the arms, which are measured on the x-axis: if the arm is held down and the Wii Remote is pointing to the floor, no action is applied and therefore nothing can be heard. If the arm is held up and the Wii Remote is pointing to the sky, the triggered sound is the loudest possible. Moving the arm up and down between these two extremes is the performer's way of controlling the loudness.
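
A minimal sketch of this kind of volume mapping follows, assuming the tilt on the x-axis is already available as an angle between -90 degrees (pointing at the floor) and +90 degrees (pointing up); the linear scaling is illustrative only, not the exact curve used in the setup.

    def arm_tilt_to_volume(angle_deg):
        """Map the arm's tilt on the x-axis to a volume factor between 0 and 1.
        -90 degrees (remote pointing at the floor) gives silence;
        +90 degrees (remote pointing up) gives the loudest possible level.
        Assumes the tilt angle has already been derived from the sensor data."""
        clamped = max(-90.0, min(90.0, angle_deg))
        return (clamped + 90.0) / 180.0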

To alter the pitches of the samples during playback I use the y-axis. The determining movement around this axis is the rotation of the wrist from left to right and back, over a total angle of approximately 160 degrees. With the Wii Remote held horizontally and pointing forward, rotating the wrist leftwards by an angle of 80 degrees produces a lowering of the pitch. Similarly, the mirror action to the right raises the pitch accordingly.
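
The corresponding sketch for the pitch mapping is again illustrative only: the 12-semitone range and the conversion to a playback-rate factor are assumptions for the example, not the actual values of the setup.

    def wrist_rotation_to_semitones(angle_deg, max_semitones=12):
        """Map wrist rotation around the y-axis (roughly -80 to +80 degrees)
        to a transposition in semitones: leftward rotation (negative angle)
        lowers the pitch, rightward rotation raises it. The 12-semitone
        range is an assumed value, chosen only for the example."""
        clamped = max(-80.0, min(80.0, angle_deg))
        return (clamped / 80.0) * max_semitones

    def semitones_to_playback_rate(semitones):
        """Convert a transposition in semitones to a sample playback-rate
        factor, using the standard equal-temperament relation 2**(n/12)."""
        return 2.0 ** (semitones / 12.0)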

In a live performance situation using this particular setup I can record the live vocal sounds at any moment, store them anywhere in LiSa’s buffer and play the sample back and manipulate it whenever I want to.

Working with the interface, I have developed techniques to quickly change between record and playback modes and can thus create very interesting and complex musical results with just very basic tools. Of course, within my setup I have defined many more buttons with different assignments; these can also be changed during the course of a performance. Nor have I yet spoken about the third axis. But to describe all this in detail would go beyond the scope of this article. What is more important is to understand that seemingly simple definitions or assignments can quite easily create complex musical results.

The Physical — Theatrical — Aspect of Performance

Furthermore, I put strong emphasis on aiming for synchronicity between movement and its resultant sound as I configure the setup. Essentially, each movement should trigger a specific variation of a musical parameter — for example, dynamics, pitch and so forth — and therefore each parameter change should be perceived by the listener. This assists him/her in decoding the different aspects of the performer’s musical creation or interpretation, or at least some of them. Even if the settings and configurations are all kept very simple, if the movements to trigger changes in the music and the resulting sound are not “in synch” with each other, it can seem somewhat odd to the listener. And as soon as several parameter changes are applied at once to a single sound, the result can easily become quite complex. If the correlation between movement and sound creation is too difficult to perceive — or absent — it may no longer be possible to comprehend the specificities of the creative act; if the listener cannot figure out how a sound is created, a feeling of arbitrariness could arise.

Video example 2. Music for a Singer/Performer with Live Electronics, Gestural Controllers and Playback for Ten Loudspeakers. Schaubühne Berlin, 3 May 2008.

While many artists in recent years have created many great musical tools and interfaces, consideration of the importance of the visual component in the presentation of musical ideas is often neglected, ignored or even completely denied. I feel it is crucial to the presentation of live electronic music that the performer understand not only the theatrical implications of its presentation but also the consequences of their neglect, as experienced in so many live electronic music performances today. And it is the duty of a mature performer to present his/her work in such a manner that the audience is able to understand the performance act at least to a certain degree; this assures authentic and pertinent reactions to the music by the audience — and therefore communication between the musician and the public.

Future of the Project

Much has been said and described about how the instrument functions, but the question of what to name it has so far not been conclusively answered. Perhaps it isn’t necessary; there is nothing wrong with simply calling it a Live Electronic Instrument Performed with Gestural Controllers. Admittedly, it is a bit awkward. And it doesn’t make any distinction between my specific approach using a movement-based setup — with its range of expressive possibilities — and setups which require no physical action at all in order to generate and control sound. Bill Thompson, a composer of electroacoustic music and an expert in circuit bending, suggested the name KiSS, for Kinesthetic Sonification System. And indeed it does almost perfectly describe the corporeal nature of the interface/instrument, albeit in somewhat scientific terms.

At the beginning of this project my initial and main idea was to build an instrument which can be played by just moving and dancing. In addition to hand and arm movement tracking, the setup still needs to integrate head, hip and leg movements as essential sound-generating and controlling components. The ultimate goal is to develop the project into something along the lines of what I would call Klangtanz [literally “sound-dance”], where sound would be created by dancing; the performer would dance the sound.

For the moment, however, the idea is still only Luft von anderem Planeten [“air from another planet”] (6); it has been expressed, and only awaits its realization.

To see Alex Nowitz performing using this instrument/interface, see the accompanying two videos, featuring performances of Music for a Singer/Performer (2008) and Studies for Self-Portrait (2008). Both performances demonstrate the current state of the instrument as described in this article.

Notes

  1. See Video example 2, at 7:04 to hear this technique used in a performance of Music for a Singer/Performer with Gestural Controllers, Live Electronics and Playback for Ten Loudspeakers.
  2. See Studies for Self-Portrait (Video ex. 1) from 11:14 to 12:20.
  3. At the beginning of Music for a Singer/Performer (Video ex. 2), diverse mouth-only techniques are demonstrated.
  4. Of course, this so-called vocalistic approach to music creation cannot be considered new; a significant tradition has evolved since the late 1960s. Such exceptional contemporary vocalists as Jaap Blonk, Paul Dutton, Phil Minton, David Moss, Sainkho Namtchylak and Lauren Newton — to name but a few — have been working on and have exhibited an incredibly wide range of vocal abilities. And although from another musical generation, here we should also mention the remarkable Cathy Berberian, an extraordinary classical singer capable of many different techniques as well as having been an excellent interpreter of music by composers such as Luciano Berio and John Cage.
  5. On June 18th, 2008 Michel Waisvisz passed away after an eight-month-long struggle against malignant cells; a big loss for STEIM, not to mention the live electronics community all over the world (see http://www.steim.org/michel).
  6. “Ich fühle Luft von anderem Planeten” is the beginning of the poem Entrückung (Rapture) by Stefan George, which Arnold Schönberg set to music in the fourth movement of his Second String Quartet, Op.10, for string quartet and soprano.

Programme Notes

Music for a Singer/Performer with Gestural Controllers, Live Electronics and Playback for Ten Loudspeakers

The performance of Music for a Singer/Performer with Live Electronics, Gestural Controllers and Playback for Ten Loudspeakers is comparable to a fight. The performer operates at the intersection of human being and machine. He fights against the superiority of technology, against the music machine. And he himself is transformed into a human being–music–automaton, some sort of homme machine (Julien Offray de La Mettrie, 1748). This vocal and musical performance might be understood as an allegory of the human struggle to overcome technological dominance, and as a portrayal of humanity's powerlessness to do so.

The performer holds two Wii Remotes — remote control units from a popular Japanese game console, one in each hand — and uses them as gestural controllers during the performance. With the remotes he generates electronic sounds and controls the entire setup of the live electronics, consisting of an amplification system, a mixing board, a computer and two software programmes developed at STEIM in Amsterdam: LiSa, the Live Sampling programme, and junXion, a digital interface that measures movements, analyzes the data and translates it into MIDI information. LiSa allows for instant access to sounds which have just been sampled as well as to pre-recorded samples stored on the computer, while the gestural controllers are used to trigger and modify these samples in various musical ways and thereby to create and shape the composition. The performer has direct control over a range of musical parameters — volume, panning, pitch, etc. — by means of buttons on the remotes and the movements of his hands and arms, and uses this setup to extend the array of his vocal techniques. The result is an abstract sound image produced entirely in the live situation. Ten individual loudspeakers are used for playback of pre-recorded music for ten instruments: two quartets — each with violin, viola, violoncello and double bass — plus a bass clarinet and a large drumset. Each instrument is played back through a single loudspeaker; the drumset playback, however, is dispersed over all ten loudspeakers. The playback spatialization varies during the performance, creating different sound images and reinforcing the compositional idea of setting up diverse surround-sound situations.

Despite all the technical sophistication, it is the voice that is at the centre of the whole musical performance. This piece could be understood as an expression of protest and uproar against all those tendencies found in our society that inconspicuously lead to technocracy, standardization and the loss of individuality. The fragments of the lyrics used in the composition/performance are taken from William Shakespeare’s Sonnet No. 66, “Tired with all These,” and from the repertoire of sound poetry by the singer-performer.

Amsterdam, 8 April 2008.

Music for a Singer/Performer with Gestural Controllers, Live Electronics and Playback for Ten Loudspeakers was performed a number of times at the Schaubühne Berlin as the Overture of the plays THE CUT and THE CITY by contemporary British playwrights Mark Ravenhill and Martin Crimp. Further performances will take place 18–22 October 2008 (Schaubühne Berlin).

Studies for Self-Portrait

for amplified voice and live electronics

[…] Most of the “material” (samples) used in Studies for Self-Portrait is based on music which I have composed in the past. My voice is also recorded during the live performance and thus provides additional material. In both respects it’s like looking into a mirror and drawing a self-portrait. The piece is the result of three residencies at STEIM since October 2007. […]

Amsterdam, 17 April 2008.

Studies for Self-Portrait, for amplified voice and live electronics, was performed at the DNK new music series in Amsterdam and at Fylkingen (centre for experimental art) in Stockholm in 2008.

Acknowledgements

The setup would never have been possible without the help and support of the following people, to whom I would like to extend a heartfelt expression of gratitude:

STEIM: Frank Baldé, Nico Bes, Robert van Heumen, Takuro Mizuta Lippit, Daniel Schorno and Michel Waisvisz; the Schaubühne Berlin: Thomas Ostermeier and Tobias Veit.

Many thanks to jef chippewa / shirling & neueweise for proofreading and corrections to the English version of the article.
