The “Beast” that is Live Spatialization

My decision to become involved in electroacoustics was largely due to my fascination with the movement of sound. The nature of my work called for something other than composed spatialization or two-channel diffusion systems. I wanted to perform spatialization with multiple instruments (whether live or on tape). I searched for software/hardware/“otherware” to do this, to no avail. The systems I found were either not designed for real-time use, prohibitively expensive, not performable, or all of the above. I was searching for a performable spatializer and could find none… so I made one, then two, then another, and I am still adjusting and improving the work I started four years ago.

A Personal Account of my Journey to Find/Create Performable Spatialization for Multiple Inputs/Tracks

Very early on in my study of electroacoustics I became fascinated with the movement of sound. As a student at Concordia University, I had ample opportunity to play, rehearse and experiment on the multi-channel speaker system that was installed many times a year for various electroacoustic concerts. Although my fascination with the movement of sound originated in fader diffusion of two-channel pieces to multiple speakers, this approach was not suited to the work I was doing in theatre and live performance.

There were two main criteria I needed to fulfill:

  1. The sonic movement had to have the ability to react or be altered in relation to live elements.
  2. The spatialization system needed to be able to control multiple streams (recorded or live channels) of audio independently of one another.

These criteria led to problems:

  1. Automation: I needed automation to handle multiple streams simultaneously (e.g. a movement on channel A needs to continue as I work on the movement of channel B, and there need to be common, useful options, such as circular movement or cross movement, that can be triggered with the touch of a button; a minimal sketch of such a triggered trajectory follows this list).
  2. CPU usage: handling multiple audio input and output channels as well as independently spatializing each one is costly on computer CPUs.
  3. Cost: On a student or independent artist’s budget, I could not afford certain systems.
  4. Size: I wanted to be able to carry the system to rehearsals or small performance spaces without taking long to set up or strike. That way I could experiment in many contexts.
  5. Adaptability: The system had to be able to “plug in” to many different speaker systems. It also had to be able to adapt to the various speaker arrangements used in theatrical and other performance settings.
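Although the patches themselves are built in Max/MSP and do not translate directly to text, the kind of one-button trajectory automation described in point 1 can be sketched in a few lines of Python. Everything below (the function name, the control rate, the period) is a hypothetical illustration of the idea, not code from my patch:

```python
import math

def circular_trajectory(period_s, rate_hz, radius=1.0):
    """Yield (x, y) source positions for one revolution of a circle.

    period_s: seconds per revolution; rate_hz: control rate at which
    positions are emitted. Both values are illustrative.
    """
    steps = int(period_s * rate_hz)
    for i in range(steps):
        angle = 2 * math.pi * i / steps
        yield (radius * math.cos(angle), radius * math.sin(angle))

# A single button press could start streaming these positions to a
# panner while the performer turns to another channel:
for x, y in circular_trajectory(period_s=4.0, rate_hz=10):
    print(f"source position: ({x:+.2f}, {y:+.2f})")
```

The point is that once a trajectory is reduced to a stream of positions, any number of them can run unattended while the performer’s hands are elsewhere.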

I searched for a suitable software/hardware solution, to no avail (1). The systems available were either too expensive, lacked automation, were not suited to live performance, were too bulky, were too CPU intensive, or some combination of the above. Nothing I could find fit all of my criteria. So, with resignation, I delved into the world of Max/MSP for a solution. At the time I chose the Max/MSP route, a friend of mine had been working on something similar, and I had the fortune of a one-month residency at the Société des arts technologiques (SAT) in Montreal.

Armed with my friend’s patch, a reading of the manual and some experimental patches done before my residency started, I set myself the task of creating a live spatializer. Needless to say, my first spatializer had many problems, most notably a long delay, and the computer would shudder to a halt if more than two or three panners (2) were automating at once.

I did not give up, however. My second spatializer was based on rhythm. The movement of the sound followed a prescribed tempo, and the trajectory was created by clicking on the speaker order (e.g. based on a tempo of 120 bpm in 4/4 time, go from speaker 1 to 5 to 7 to 4 on every beat). Although this was an interesting idea, and one that I will pursue in the future, the panner I made was much too limited. It took far too long to create an interesting or usable trajectory in a live context.

This led to my third panner, which was my first successful one. It was created and used for a project exploring new modes of musical performance, spearheaded by Mark Corwin and Tim Brady; we called ourselves the Wire Ensemble (2004). This panner was essentially a very stripped-down version of the first panner I created. It was limited to quadraphonic use and had a maximum of eight input channels. It had some basic trajectory automation, such as circle and zigzag, and volume and trajectory speed were controlled by the computer keyboard.

Although successfully used for this performance, it was not very adaptable and still had too many limitations.

By the time I made my fourth panner I had much more in my tool kit. I had become an experienced user of Max/MSP, I had joined forces with a small informal group of artists in Montreal who shared work and ideas on spatialization, and Jean-Baptiste Thibeault had created a very useful “trajectory” patch that would be used for automating trajectories in my patch.

I am very happy with my latest panner. It is adaptable: it can pan across one to eight speakers, and the speakers can be remapped via a matrix. There are excellent automation possibilities thanks to the patch created by Jean-Baptiste Thibeault, and one can create and save new trajectories. Delay times are very low, in large part due to the simple and effective panning software made by Yves Gigon, which is used as the basis for the panning in my patch. It has a much more intuitive visual interface. It is small; a computer with an audio interface is all that is needed. One limitation that I hope to improve in the near future is that it is still mostly controlled by the mouse, which limits the user to controlling one panner at a time, although with one-click automation this limitation is not too great. I have successfully run seven panners simultaneously (seven input streams, live, recorded or both), all with automation, in a performance entitled A Tree in the Middle, performed as part of the Sound Travels festival in Toronto in 2006.
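As Note 2 explains, the panning underneath all of this treats each speaker as a point source, with an adjustable curve smoothing the movement, and the matrix simply reroutes logical outputs to physical speakers. A minimal Python sketch of that idea follows; the square speaker layout, the curve exponent and the remap table are assumptions for illustration, not values from the patch:

```python
import math

# Hypothetical square layout for four point-source speakers.
SPEAKERS = [(-1, 1), (1, 1), (1, -1), (-1, -1)]

def pan_gains(src, speakers=SPEAKERS, curve=2.0):
    """Gain per speaker from inverse distance to the virtual source,
    shaped by an adjustable curve and normalized for constant power."""
    raw = [1.0 / max(math.dist(src, spk), 1e-6) ** curve for spk in speakers]
    norm = math.sqrt(sum(g * g for g in raw))
    return [g / norm for g in raw]

def remap(gains, matrix):
    """Route logical panner outputs to physical speakers;
    matrix[phys] = logical index, standing in for the remap matrix."""
    return [gains[logical] for logical in matrix]

gains = pan_gains((0.5, 0.5))
print(remap(gains, matrix=[3, 2, 1, 0]))  # e.g. reversed wiring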

Below are links to spatialization topics and to my website, where examples of the patches discussed in this article can be found.

Adaptability and the Difficulties of using Precision Spatialization in a Live Context

There is plenty of excellent work in precision spatialization (surround sound, ambisonics, wave field synthesis, etc.). In my experience, there are two important limitations, and possibly one more contentious one, to consider when using highly precise tools such as ambisonics in a live context: adaptability, processing power and musicality (3).

Most precise spatialization is difficult to adapt to varying or unusual performance spaces. It usually has a relatively small “hotspot” (4) and it usually requires a formal listening environment (i.e. tuned speakers, accurate speaker placement, precise positioning of the listening area, etc.). For me, precise spatialization is an example of a very specific form, like the string trio in classical music. It can perhaps encompass a broader spectrum of composition than a string trio, but it is no less precise in its instrumentation and performance. What I was searching for was a system that could adapt quickly to what was being performed and where it was performed. I also wanted to be able to try out various spatialization ideas in a rehearsal setting. Adaptability was the key word in the creation of this system.

Several issues to consider in live spatialization and non-traditional speaker formations are best illustrated by a hypothetical example.

We wish to have water sounds moving down a stairwell, following dancers. We have three speakers available, installed at the three landings of the stairwell. The stairwell is open to the main performance area, where there are four speakers high in the ceiling and a six-channel surround system at head level. The four high speakers and the six-channel surround system have to be controlled independently and have to follow improvisations by dancers and musical performers who are moving throughout the space. The audience is also free to move about the space, so the “spatialization” has to be effective throughout. We have two ordinary laptops with eight-channel soundcards, and the music is a combination of a four-piece ensemble outputting five channels that need to be spatialized independently and up to five recorded tracks which also need to be spatialized independently. Finally, the piece is going to tour to various locations, all of different sizes and with different speakers, but the speaker arrangement stays the same.

We quickly come upon difficulties with a precise system. As audience members move away from the hotspot, the impact of the spatialization begins to be lost. A precise system will also quickly use up processing power, which is at a premium with only two laptops available. And as the piece tours, it will be necessary to redo the spatialization to adapt to the varying spaces.

But let us imagine that we have unlimited resources, so that processing power is not a limitation. We achieve wonderful precision, pulling back some of the more extreme filtering and Doppler effects so that the audience can appreciate the performance from anywhere. Then the composer comes in to hear the recorded aspects of the piece through the system. Five minutes in, he stops us and asks why we changed the key of this part of the piece.

We say that we have not changed anything… but the Doppler effect has. In creating an accurate illusion of movement, we have changed the pitch information of the piece. And since the spatialization changes with the improvisation in the piece, there is no way to fix the pitch change in advance. Or is there?
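To make the composer’s complaint concrete: for a source approaching a listener at speed v, a physically accurate Doppler model raises the perceived frequency to f′ = f · c / (c − v). A short calculation follows; the trajectory speeds are hypothetical, chosen only to show the scale of the effect:

```python
import math

C = 343.0  # speed of sound in air, m/s

def doppler_cents(v_source):
    """Pitch shift in cents for a source approaching at v_source m/s."""
    ratio = C / (C - v_source)
    return 1200 * math.log2(ratio)

# A virtual source sweeping across a hall at a few metres per second
# already shifts pitch audibly; fast automated trajectories more so.
for v in (2.0, 10.0, 30.0):
    print(f"{v:5.1f} m/s -> {doppler_cents(v):+.1f} cents")
```

Even a modest 2 m/s sweep shifts the pitch by roughly 10 cents, at the edge of audibility, while the fast trajectories that make movement vivid shift it by a substantial fraction of a semitone.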

We run into the same issue as with more traditional precise systems: we are composing for an instrumentation, much as we compose for a string trio. Whereas most composers have an in-depth understanding of the capabilities and limitations of a string trio when arranging their composition, fewer have an in-depth understanding of the sonic changes in spatialization, such as the Doppler effect and filtering, as they relate to composition. This difficulty is compounded in a non-traditional system and/or one that changes drastically with every performance or new venue.

With my system I run into the opposite problem to the one described above. It is very adaptable, but it is strange to hear a sound moving very quickly without a Doppler effect. So the problem runs round and round: if I take away the Doppler effect so as not to alter the composition, I lose the illusion of movement; if I add the Doppler effect so that I have a convincing illusion of movement, I change the composition.

This brings me back to my analogy above: precise spatialization is like a string trio. It should be composed for. Just as there are string trios, horn quartets, orchestras and so on, there are different systems that one can compose for. The difficulty is that new systems are being made all the time, so it is very difficult to study the systems one composes for. If I do not know what a violin sounds like, it is hard to compose for it. There are many great compositions for precise spatialization, and with a deeper understanding of spatialization in its various forms, and with the standardization of those forms, there is the possibility of creating compositions with much more depth and complexity.

I have explored, and continue to explore, the possibilities of creating a system that can adapt to a composition much like two improvising or collaborating musicians. However, in this case, it is the music and sonic movement that are in a duet.

Notes

  1. Some examples of various spatialization systems can be found in the references below.
  2. Early on I realized that I probably would not be able to do highly accurate spatialization, such as ambisonics or something similar, because it was too CPU intensive and not adaptable to the variety of possible speaker configurations I would be using (I have not done exhaustive research in this area and am currently looking into other low-CPU alternatives). Therefore, I worked mostly with a basic panning structure with a common adjustable curve to smooth out the panning movement. I treat the speakers more as point sources, as opposed to creating a sound field using multiple speakers. I use the words spatialization and panning somewhat interchangeably, as I originally set out to do live spatialization but ended up with more of a panner.
  3. The term “musicality” is used in its broadest sense here to encompass traditional music, electroacoustics, sound art, sound design, etc.
  4. Technologies such as Wave Field Synthesis might solve the issue of the hotspot but they do not change the precision of the system.

Further Reading

See the CEC’s Audio and Technology Links page for more Ambisonic and Multichannel resources.
