The Game Changer
Playing with electronics as soloist
Being a classical pianist means focusing exclusively on giving the best possible performance on the day of the concert. Rehearsing in the concert hall to check out the piano and the acoustics, getting acquainted with the rooms backstage, perhaps turning the heat up a little, and trying to feel as much at home as possible in the venue will take a few hours in the afternoon. The rest of the day is filled with rituals that differ from artist to artist: sleeping, a light meal at a certain time, watching TV… Generally, it is about staying focused on the work while trying not to become overwhelmed by stage fright.
Preparing for the concert requires practicing mechanical skills as well as reading and rehearsing the score and making the music one’s own, or, as they say, “becoming one with the music”. This includes long walks spent thinking about a Beethoven sonata: what tempo should be employed in the last movement if one wants to play the second as slowly as planned? And will one’s technique allow for the desired interpretation, or will things have to be adapted to a weak fourth finger?
For the performer of music with live electronics, these musings must seem like the luxuries of forgotten times. What was once an endeavour in solitude has now become teamwork. The pianist, who once barely needed to know the workings of the instrument itself and relied on a technician to repair anything that might fail, must now know about electronic machinery, software coding and computers. Not that this was part of the curricula at the Music Academy, and not that there will ever be time for it: playing the piano is a difficult task and it hasn’t become any easier in modern times. Being a soloist has simply become much busier.
Let’s take a look at how the work of a pianist, like that of any soloist playing with electronics, has changed in recent years. Or, more to the point: why is playing with electronics seen as such a cumbersome task amongst musicians that only a few classically trained musicians are ready to deal with it?
An Interpreter’s View on the Problematic Aspects of Playing with Live Electronics
The Concert Hall
Concert halls have, over the centuries, evolved into very special rooms in our busy society, where we can play in total silence and focus on the sound. They have good to wonderful acoustics and feature good to fantastic grand pianos. They are optimized for musicians to come, test the hall and play acoustic music in the evening. A pianist giving a classical piano recital will need only the facility manager to open the house and the telephone number of the stand-by piano technician. A piano trio will need two more chairs and music stands. The rest is already there. This is by no means a bagatelle: a city that builds a new concert hall will be busy doing so for years and will learn the hard way how much money it costs. 1[1. Compare, for example, any one of many examples where the construction of a concert hall resulted in numerous delays and significantly higher costs — from the Sydney Opera House to the Elbphilharmonie Hamburg.]
Concert halls are a luxury but the electronic musician needs more. An acoustic that is good for the acoustic instruments as well as for the electronic sound from the speakers. An environment where electronics can easily be set up and don’t interfere with security issues (emergency exits, wireless transmitting frequencies…). A built-in sound system that is compatible with the electronics brought into the venue and very powerful, too, because composers like to push things to the edge. And, last but not least, personnel that understand the special kind of treatment contemporary classical music needs, which is very different from the needs of other amplified music, like rock and pop music. Or as Karlheinz Stockhausen puts it: “Most of the time I must work in halls which really are not suitable for the electroacoustic music of today and tomorrow” (Stockhausen 1996, 75).
Most concert halls will not be equipped with all this additional machinery and personnel. Unless the concert takes place in one of the specialized studios for contemporary electronic music, the additional features will most likely not be there. This means that everything has to be brought into the hall, set up and checked for faults. Since the venue will have a busy schedule and can’t afford to lose revenue, the hall will be available for setup only on the day of the concert, and the pianist will arrive at a very busy hall where people transport speakers, set up mixing consoles and lay out a few hundred metres of cable during what was traditionally the dress rehearsal. The quiet concert hall where the musician can focus on the sound has become a noisy workplace with a team of workers communicating via walkie-talkie.
Depending on the amount of equipment that has to be brought into the hall, this setup will take several hours, and only then will there be a line check to see if everything works. Next is the sound check, where the pianist will be asked to play very loud and very quiet passages and the speakers and the PA will be set accordingly. It almost seems as if the pianist has lost all control over the acoustic environment, which is partly true, because the performer on stage does not know how the electronics will sound in the hall. That would only be possible with a lengthy rehearsal, where the pianist can listen to the sound in the hall, go back to the piano, find some time to play the piano solo to feel how the sound radiates into the hall, and start rehearsing with the musician at the sound board. This will most likely not happen today. In my own experience, for the broad majority of concerts with live electronics, I have not had a rehearsal of my own in which to learn what the music sounds like in a particular hall. This is a bit ridiculous, considering that all the responsibility still lies with the musician.
Consider the situation of a piano trio playing a Beethoven piece: they practice in silence in the afternoon before splitting up and going their separate ways, so that each musician can prepare individually for the concert. The musician working with electronic music instead begins the rehearsal with a tech conference, where he explains in detail what is needed for each piece and has his first argument with the technician… who sometimes thinks he knows better (and sometimes indeed does!). He then helps out with the setup in very busy surroundings and only late in the day carries his bag containing his concert clothes and maybe a yoghurt backstage — finally alone for the first time, and only because the doors of the concert hall have now been opened so that the audience can take their seats. Only now is there time for him to look at the score for the first time that day and to try to get the noisy atmosphere of the setup out of his head in order to concentrate on the music.
Certainly, all musicians who play with electronics face the same difficulties in setting up the stage. But in contrast to a rock/pop setup or a concert of improvised music, a classical program will consist of several pieces, each with its own æsthetic and technical setup. The keyboard player in a pop band or a contemporary jazz band using electronics will sit down before the tour to think about the equipment that is needed and will probably make plans to keep it as simple and easy to set up as possible without making too many æsthetic compromises (see Tremblay 2006 and 2007). But here the performer is also the creator, and the decision to travel with only one organ and leave the other at home is an æsthetic decision only the creator of the music can make. An interpreter of classical contemporary music is bound to the æsthetics of the composer and cannot easily change things. Indeed, great care will be taken to preserve the æsthetics of the era in which the composition was written, and today’s equipment does not sound like the equipment of the 1970s, for example. Where a rock band will probably renew the sound of an older work, the interpreter will seek to keep it. This means that there could be as many as five or six completely different setups in one concert. And although each single setup might not be as complicated as that of a pop or jazz keyboarder (though it could easily be as complex), the need for multiple setups in a single concert will almost inevitably result in a mass of material that is difficult and time-consuming to set up and far more prone to failure.
The Personal Technician
Of course, all these responsibilities could be passed on to a technical team. What we would need, then, is a sound engineer who is savvy with computer coding in several common software environments and physically strong enough to carry eight or more loudspeakers into the hall and place them on stands. Being able to read the score is as much a requirement as setting up for each piece in the little time between pieces, and manipulating the mixing console while reading the score in order to adjust for eventual mistakes by the performer, who is on stage controlling the computer with a foot pedal or sensors. Needless to say, this is already a demand that is hard for one person alone to fulfil. In the case of a soloist, the addition of one person will double the cost of the concert, because the second person will also charge fees and will need a hotel room and food. Expanding an ensemble of five people to six raises the cost by only 20%, which is about the buffer a reasonably experienced concert organizer will have calculated into the budget anyway. A soloist travelling with a sound engineer doubles the charge, and this will most likely lead the concert organizer to decide not to program the concert — think of the budget American universities have for their concert programs. This is the case even when the sound engineer does not take on any musical responsibilities; if he did, the concert should be sold as a duo concert anyway.
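The budget arithmetic above generalizes easily: adding one paid person to a travelling group of n multiplies the fee, hotel and food costs by (n+1)/n. A minimal sketch (the helper function is invented for illustration, not part of any real budgeting tool):

```python
def cost_increase(group_size: int) -> float:
    """Relative cost increase (fees, hotel, food) when one sound
    engineer joins a travelling group of `group_size` musicians."""
    if group_size < 1:
        raise ValueError("group must have at least one member")
    return 1 / group_size

# A soloist plus engineer doubles the cost (+100%); a five-person
# ensemble plus engineer costs only 20% more, roughly the buffer an
# experienced organizer budgets for anyway.
print(f"soloist:  +{cost_increase(1):.0%}")
print(f"quintet:  +{cost_increase(5):.0%}")
```

This is why the same engineer who is an affordable addition to an ensemble can price a soloist out of a season’s program.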
Practicing and Money
One thing that seems downright obvious is that the musician has to get acquainted with both the music and the instruments. Unfortunately, when playing with electronics, both are often impossible.
For in order to practice with a live electronics setup, the machinery has to be set up somewhere. Working with a specialized studio will provide the performer with a little allotted rehearsal time, but by no means as much as the musician would normally want in order to practice a piece. In most cases, however, the performer will have to do without the help of a studio. This means that the musician has to personally invest in and provide at least a minimal technical setup. This investment of about 2000€ for a very basic but acceptable setup 2[2. This is a quick projection of the cost to purchase a portable computer, an audio interface with microphone amplification, a microphone, loudspeakers, controller interfaces and cable (this sum has been about the same for the last ten years or so).] is often very hard to make for a pianist coming straight out of university. Additionally, software must be purchased: relying on Pure Data or a Max/MSP runtime will most likely not suffice, especially when the patches for the latter must be adapted to the hardware.
Unfortunately this setup will only suffice in about 80% of cases, since pieces might have been written using a computer with a different OS or with commercially available programs of an obscure sort. And since the hardware and software changes constantly, one must theoretically keep older versions of operating systems and programs, too. It is obvious that all this work distracts the musician from maintaining the skills expected from a full-time musician. This might not be so obvious with instruments that are so physically demanding that they can be played only a few hours a day, like brass instruments or the human voice. It is certainly a problem with instruments that demand a time-consuming schedule like the violin, or especially the piano. (It seems like the piano player always takes longest to learn a score and many professional ensembles for contemporary music employ two or more pianists. 3[3. E.g. Ensemble intercontemporain, Ensemble Modern, musikFabrik, ensemble recherche.])
Although the cost of 2000€ might seem a small amount in comparison to the sums musicians spend on their “original” instrument, they might make this investment only hesitantly if they are not enthusiastic about electronic instruments in the first place. After all, it is an investment that will not better them as pianists; on the contrary, dealing with electronics can easily be seen as counterproductive to the daily practice routine of a professional musician. At this point the musician will have to decide whether he or she wants to commit to being a specialist in music with electronics. And becoming a specialist means investing more time and money than is required of a standard practitioner.
Most composers simply use the equipment available in their own working environment, which they own, buy or even make, and the musician is sometimes forced to replicate this process. Considering that only 10–20% of all compositions will survive beyond the first concert, this seems to be an excessive investment. Often composers and musicians first meet for the performance of the work at the concert venue and the musician will be given the composer’s equipment to perform with. Here the issue of technical intimacy comes into play: a performer can only get a “feel” for a musical instrument after playing it for a long time. This is not only true for analogue instruments but also for electronic devices such as MIDI controllers. Although it might seem that a button or a slider should be easy to use, since they only have to be pressed or pushed, the same could be said about piano keys, and pianists will — not without reason — protest that notion. As with a piano key, the resistance of a slider is something the player has to get used to and the more familiar the musician is with the equipment, the better the performance will be. And a good performance should not only be the wish of the performer, it should be the wish of everybody involved in the performance, because it is the performance that will to a large extent decide the fate of the composition.
Therefore, the production of music with electronics and the setup in the hall should actually be streamlined for the needs of the performer, but we all know it has thus far generally been streamlined to making the electronics sound good in the hall, not to making the performer feel comfortable with and confident in the performance environment.
A question remains concerning how proficient a performer must be with the coding of common software. Must an instrumentalist own and know Max/MSP, Ableton Live and SuperCollider? What about Pd and virtual instruments? It is easy to say that this should be part of the curricula of professional education. However, teachers of solfège, theory and music history already lament how little time is spent on topics that are not directly related to instrumental practice. And is the time studying at a music academy not the only time when a musician can practice the 8–10 hours daily that are needed to become a technically proficient player and, ultimately, a fast learner who does not have to deal with technical imperfections?
The same goes for the composers, who are often taught a few courses in computer music but basically learn by doing. And composers are not professional programmers. This is reflected in patches that are far from perfect, but considering how difficult programming can be, this should hardly be held against them.
Another problem is that musicians will obviously not own or have access to the equipment available in a concert hall (not to mention huge installations like the ZKM’s sound dome) to practice with. It is therefore necessary to create a “practicing patch” with minimal requirements, analogous to a practice score, i.e. a reduced version of the full score. However, it is very rare that the composer provides the musician with a stereo patch the performer can work with in the composer’s absence. It should also be possible to start the piece at any given point, so that the performer does not always have to start at bar 1. And certainly the patch should be documented well enough for a person other than the composer to understand it. This, in my experience, is very rare. It is actually more common that composers do not even recognize their own patches after a few years.
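Being able to start anywhere in the piece amounts to a cue list whose accumulated state can be restored up to any rehearsal mark. A minimal sketch of the idea in Python, with invented marks, bar numbers and event names (a real practicing patch would implement this inside Max/MSP or Pd):

```python
# Hypothetical cue list: rehearsal mark -> (bar number, electronic events
# triggered at that mark). Dict insertion order (Python 3.7+) encodes the
# order of the marks in the score.
CUES = {
    "A": (1,  ["load samples", "open microphone input"]),
    "B": (33, ["start delay line"]),
    "C": (71, ["enable granular texture"]),
}

def events_up_to(mark: str) -> list[str]:
    """Return every event from the start of the piece through the given
    rehearsal mark, so the patch can restore its state and the performer
    can rehearse from there instead of always restarting at bar 1."""
    if mark not in CUES:
        raise KeyError(f"unknown rehearsal mark: {mark!r}")
    replay = []
    for m, (_bar, events) in CUES.items():
        replay.extend(events)
        if m == mark:
            break
    return replay

# Jumping to mark "B" replays the accumulated state of marks "A" and "B".
print(events_up_to("B"))
```

The point of the sketch is that restoring state, not merely jumping the playhead, is what makes a patch rehearsable.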
The Aging of the Modern Age
Another problem lies in the fact that contemporary cultural artefacts seem to age faster than ever before.
Real-time / performed electro-acoustic music… is currently facing a serious sustainability problem: while its production is indeed considered very recent from the music history point of view, several technological generations and revolutions have gone by in the meantime. (Bernardini and Vidolin 2005)
Comparing the life spans of common media like paper, magnetic tape and disc drives suggests that we are saving our data less and less securely. Likewise, software programs have a lifespan of only a few years, and cable connections seem to change just as often. What was once cream-of-the-crop hardware, like SCSI, Jaz and Zip drives, is now the technology of a long-forgotten era. And it is extremely difficult to restore the data stored on such media: not only is the information likely to be corrupt, but accessing it requires a disk drive and drivers that only work with machines from the same era. Unfortunately, this is the technology we stored our art on, and music with electronics made between 1980 and 2000 likely ran on a hardware platform that is no longer produced, nor even used. We have to rely on a few old devices, and pray that they keep running the programs.
As strange as it seems, it is often the interpreter who has to deal with this data archaeology, although it is certainly not their field. But an organizer will ask the interpreter to play a specific composition at a concert, and the interpreter will then have to deal with the publisher, who most likely does not have a working patch in storage and who will pass on the contact details of the composer. Then there will be a discussion of how to resurrect the composition, who can do it and how much it will cost. This is indeed very far from the original business of a musician, and the interpreter can easily feel overwhelmed and may even consider giving up working with live electronics altogether. After all, there are centuries of fantastic music waiting to be played that do not weigh down the performer with any of the problems described above.
A Few Proposals for Solutions
Solutions for all these problems will not be found overnight. Since music with live electronics is still young, we do not yet have standards and best practices to rely on. But a few things can be done even now, and we must strive to standardize them.
Standardization of Technology
No doubt some standardization of technology would help everybody involved in producing music with live electronics. A model could be the automobile: after some familiarization, anybody who knows how to drive one automobile can drive any similar vehicle. We are far from that. At this point in time (early 2011), it seems that the common ground for computer technology used in the performance of works for instrument and live electronics is an Apple OS X computer. 4[4. A quick glance through my personal patch collection obtained from composers shows Linux: 1, Windows: 1, Apple: well over 20.] But to think that Apple computers are in any way standardized would be a mistake as well. In recent years, for example, Apple has changed the connectors for power and video, and has all but abandoned the FireWire connection, having only just jumped from FireWire 400 to 800.
The most commonly used software for live electronics today is by far Max/MSP. The free Pure Data is still a rarity, as is other free or commercial software, with the possible exception of Ableton Live, which is slowly gaining traction in the classical contemporary music scene. This is also reflected in Max for Live, a collaboration between Ableton and Cycling ’74, the makers of Max/MSP. Max/MSP provides a free runtime version, but if anything goes wrong (and it is by no means safe to assume that a patch that works on one computer will run on another), the patch cannot easily be adapted without access to the original file from which it was generated, and therefore also to the commercial version of the software.
Of course such a call for standardization should by no means discourage anyone from trying out new technologies. But there must be an awareness that the more peculiar or uncommon the technology used, the more responsibility the composer has to take in providing not only software and patches, but even hardware to the performer. 5[5. An example would be the composition *error_05/auto_face* by Michael Pinter (reMI) for auto_face (a self-programmed Linux program), video, internet and ensemble. Here the composer provided the whole computer, which was shipped from Graz to Hamburg for the performance.]
Reliable and Future-Proof Patches
Patches should launch with a double-click and should be easy to rehearse with. Apart from the fact that patches are often simply not coded well enough, and sometimes only run on the machine they were coded on (because they have been unintentionally tweaked to accommodate the hardware of the programmer’s computer), they often require additional software that might not be installed on the host machine. It should, however, be possible to distribute patches with all information included, so that they can be opened with Max Runtime or as an executable. 6[6. I have heard contradictory testimony about whether runtimes are reliable or not. My impression is that runtimes that have been tested on several machines are likely to work, but that this degree of testing doesn’t happen most of the time.] Adjusting the microphone level and the like is of course inevitable, but these are not the time-consuming errors that can stall the preparation of a concert.
How to store the data of the information age is a major topic of our time, and not one restricted to music. 7[7. See, for example, the Preserving Virtual Worlds project, a joint effort of the US Library of Congress, Linden Lab, the Internet Archive and the Universities of Illinois and Maryland, Stanford University and the Rochester Institute of Technology, or for a more general example the National Digital Information Infrastructure & Preservation Program. The European counterpart, which has a strong emphasis on the arts, would be CASPAR.] But although “the archiving and preservation of electroacoustic music is now well established for fixed format (‘tape’) works… the same cannot be said by a long way for ‘live-electronics’” (Emmerson 2006, 209). European projects for the preservation of art with a focus on music are the mediaartbase.de (Karlsruhe, Germany) and the Integra (Birmingham, UK) projects. These archives are looking into better ways to store today’s data for the future, and first steps are being taken. 8[8. See eContact! 10.x — The Concordia Archival Project (CAP) for articles and resources on the subject of archiving electroacoustic works.] But as Julia Haecker, research assistant at mediaartbase.de 9[9. See Haecker’s article “The Digital Mind” in this issue of eContact! for more information about the project.], confirms: in the end it is the composers who have to provide all the necessary documentation, and their failure to do so is a major hindrance (Haecker 2011). This incompleteness of the scores will inevitably lead to an unwanted reliance on oral history 10[10. For example, questions concerning undocumented or poorly described technical requirements for various works by John Cage come up on a regular basis on the John Cage mailing list.] and to speculative (Vidolin 1993/1999) or arbitrary performance practice. 11[11. See Kerry Yong’s performance of Giacinto Scelsi’s Aitsi (1974, for amplified piano with distortion) on a Casio keyboard on YouTube.]
Detailed Stage Riders
Performers who have been trained on an acoustic instrument at a music academy will typically have little to no knowledge of technology. Chances are good that they have never even heard the term “stage rider”, since concerts at the academies are usually taken care of by the technical team of the studio or concert hall. It is thus no wonder that most stage riders are incomplete and incompetently written; the many interviews I have conducted with technicians in recent years confirm this. 12[12. These interviews are part of my ongoing research at the CeReNeM Huddersfield. A selection of them can be found in this issue of eContact!]
But as mentioned above, and as most performers have experienced, there is no time for trial and error when setting up the stage; the tech team must be fully informed about the machinery needed for the concert well ahead of the performance date. This includes not only charts of the required materials and drawings indicating where each device fits into the setup, but also the details of the connectors. Even common connections like USB, FireWire and video change constantly, and seeing people running around trying to find the right adapters, or even soldering on the fly, is all too common.
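As an illustration, a connector-level rider entry for a single piece might look something like the following (all device names, counts and cable lengths here are invented for the example, not taken from any actual rider):

```
Piece 3: piano and live electronics
  Computer         performer's own laptop (Mac OS X)
  Audio interface  performer's own, FireWire 400; venue provides a
                   FW400-to-FW800 cable if the house system has FW800 only
  Microphones      2x condenser on the piano, XLR, 48 V phantom power
  Outputs          4x balanced XLR to house PA (front L/R, rear L/R)
  Control          USB foot pedal at the piano bench, 5 m USB extension
  Power            2x 230 V outlets at the piano, on the same circuit as
                   the front-of-house desk (to avoid ground hum)
```

The level of detail matters less than the principle: every plug, voltage and adapter question should be answerable from the rider before anyone is standing on stage.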
Finally, it is an absolute necessity to get in direct contact with the technical team in the days leading up to the concert. In large organizations, technical riders can get lost, and sometimes only direct communication about a particular setup will ensure that the organizer thinks all the details through properly.
Travelling with a Core Working Setup
Musicians cannot rely on patches and machinery working in any given concert hall. In fact, most concert halls are not adapted to this kind of music, and it is therefore inevitable that musicians bring their own working setup with them. The core setup consists of the audio-generating devices and the controllers, just as an instrumentalist travels with their instrument (pianists and organists would like to be able to do this, too). Playing one’s own controller is essential for the technical intimacy that is needed. Even MP3 players all work differently, and there is no time to get acquainted with a technical device before the concert. Playing music is very much the release of a long-trained course of action, and changing that action, for example from pushing a button to turning a wheel, is a major disruption of the performer’s trained motion sequence.
The sound creation and reproduction chain includes the computer, the software, the patches and the audio card. If the musician is missing any components of what I call the core setup required to perform the piece, the entire setup must be made available by the concert venue. Presently this will only be possible when the concert venue is one of the bigger studios for electronic music. It is questionable though how a setup can be tested without a musician actually playing it. This sort of testing can’t be more than a line check and a crude functionality test of the software. In any case, the musician will also need time to get acquainted with the new devices.
The notion that the entire setup should then be provided by the concert venue stems from the fact that even standardized connections do not always work to the full extent of their specifications (an example would be USB buses not supplying enough power to mount certain USB sticks) and may carry differing firmware. It can therefore not be assumed that one setup will work with devices of the same brand and model range, or that parts of the setup can be replaced by “the same” device. 13[13. This is much more common than one might think. Three examples off the top of my head: 1) The RME Fireface 800 and the DSI Mono Evolver (a synthesizer) used different hardware components over the years, which alters the capabilities of the devices. 2) In 2007, Apple switched from Texas Instruments to Agere FireWire chips, which caused numerous problems with audio interfaces for the next two years. 3) For an RME Fireface to provide MME support in Windows 7, new firmware must be installed. Since this firmware update is Microsoft-specific, a Macintosh user will probably not even know that there is new firmware out. This list could go on and on.] Such problems arise often when one tries to reduce weight on a trip and agrees to use the audio interface available at the venue.
Concerts must be Programmed with all Compositions Working
Oftentimes music is programmed because it fits a certain festival theme or a certain program. This is easily done with programs consisting solely of instrumental music, because all one needs to find is a performer able to execute the piece. With works using electronics it is much harder: although the music has been printed and announced in the publisher’s catalogues, it is not clear what state the electronics are in or whether they require obsolete hardware and/or software. This can lead to a lengthy phase of data archaeology. It can also lead to the discovery that a composition is lost because the data is simply not recoverable. Unless a musician offers to play a piece that is already in their repertoire with a proven working setup, a time span of at least half a year might be needed to find and resurrect old hardware and software, even for smaller projects.
The failure of electronics during a concert is very often compared to the breaking of a violin or piano string. But in an acoustic concert everything is streamlined and ready: the violinist will have a backup string and the pianist a stand-by technician. Only when all the technology has been thoroughly tested and set up in a professional manner is the situation comparable to that of the concert hall before a concert of classical music.
The use of electronics has changed the role of the performer in far more drastic ways than generally thought. The focus of concert preparation has seemingly shifted from the needs of the performer, who must guarantee the best possible performance, to the demands of the electronic setup. The performer seems to have lost control over the sonic outcome of the concert and has become a cog in the wheel. For the performer this shift is unfortunate, since the act and the difficulty of performing have not fundamentally changed. For this reason, most performers feel uncomfortable playing with electronics to this day.
The solution seems clear: performers have to take their fate into their own hands and become specialists. This certainly consumes much time and money, but it is the only way to regain some control over what happens in the concert hall. To quote Stockhausen again: “As a musician, you must assume responsibility for how you sound when recorded” (Stockhausen 1996, 84). This will require investing in a core audio setup and setting up a rehearsal space where performers can practice with, and become acquainted with, electronic gear. Learning by doing is certainly an effective approach, but perhaps not in the midst of preparations for a stage performance. At some point, music schools will have to recognize the need to educate performers in contemporary performance practice.
It is, however, quite clear that performers will need to speak up for their needs and demand from composers patches with which they can practice. They will also have to fight for enough rehearsal time in the concert hall, above and beyond the time needed for the technicians to set up and do a line check and for the composer to do a sound check. Only if they get ahead of the technicalities, instead of relegating responsibility for the electronics to the composers and technicians, will this comedy of errors — as Elizabeth McNutt (2003, 297) calls the situation described in the following excerpt from a text by Puckette and Settel (written in 1993 but still sounding strangely familiar in 2011) — finally become a thing of the past for the performers:
The composer… must first assemble the combination of local and flown-in gear which will permit the piece to be played. If time remains, the piece will be rehearsed and adapted to whatever hardware changes were made. It is at this moment that the player meets her accompanist… for the first time. … Part of the rehearsal is taken up by an extraordinary sound check in which sound engineers push the outputs all the way up to listen to hisses and hums. … The computer software and hardware extend the sound check into a debugging session. The computer is rebooted again. Will it work this time? (Puckette and Settel 1993, 136)
Bernardini, Nicola, and Alvise Vidolin. “Sustainable Live Electroacoustic Music.” Proceedings of the Sound and Music Computing (SMC) Conference 2005 (Salerno, Italy: XV CIM, 24–26 November 2005). Also published in eContact! 8.3 (June 2006).
Emmerson, Simon. “In What Form can ‘Live Electronic Music’ Live on?” Organised Sound 11/3 (December 2006), pp. 209–219.
Guercio, Mariella, Jerome Barthelemy and Alain Bonardi. “Authenticity Issue in Performing Arts using Live Electronics.” Proceedings of the Sound and Music Computing (SMC) Conference 2007 (Lefkada, Greece: University of Athens, 11–13 July 2007). http://smc07.uoa.gr/SMC07 Proceedings.htm
Haecker, Julia. Private conversation, April 2011.
McNutt, Elizabeth. “Performing Electroacoustic Music: A Wider View of Interactivity.” Organised Sound 8/3 (December 2003), pp. 297–304.
Puckette, Miller and Zack Settel. “Nonobvious Roles for Electronics in Practice.” Proceedings of the International Computer Music Conference (ICMC) 1993 (Japan: Waseda University, 1993), pp. 134–137.
Scelsi, Giacinto. Aitsi (1974), for amplified piano with distortion. Performed by Kerry Yong. YouTube, posted by user “mshlom,” 27 February 2009. http://www.youtube.com/watch?v=XxvZYyw08wc (Last accessed 21 April 2011)
Stockhausen, Karlheinz. “Electroacoustic Performance Practice.” Perspectives of New Music 34/1 (Winter 1996), pp. 74–105.
Tremblay, Pierre Alexandre. “Pragmatic Considerations in Mixed Music: A Case Study of La Rage.” Proceedings of the International Computer Music Conference (ICMC) 2006 (New Orleans, USA: 2006).
Tremblay, Pierre Alexandre, Nicolas Boucher and Sylvain Pohu. “Real-time Processing on the Road: A Guided tour of [iks]’s abstr/cncr setup.” Proceedings of the International Computer Music Conference (ICMC) 2007: Immersed Music (Copenhagen, Denmark, 27–31 August 2007).
Vidolin, Alvise. “Note tecniche sulla realizzazione della parte elettronica di Aitsi.” In Pierre-Albert Castanet and Nicola Cisternino (Eds.), Giacinto Scelsi. Viaggio al centro del suono (La Spezia: Lunaeditore, 1993), pp. 228–233. Translated into German by Harald Muenz and reprinted in MusikTexte 81/82 (Köln, December 1999) as “Zum Elektronikpart in „Aitsi“ von Scelsi”.
The Silence List, the “John Cage List”. https://lists.virginia.edu/sympa/info/silence
Yong, Kerry. “Electroacoustic Adaptation as a Mode of Survival: Arranging Giacinto Scelsi's Aitsi pour piano amplifié (1974) for piano and computer.” Organised Sound 11/3 (December 2006), pp. 243–254.