Worlds Collide
Utilizing animated notation in live electronic music
Live electronic music, especially when it combines live electronics and acoustic instruments, faces several challenges regarding composition and performance practice. These challenges derive mostly from the hybrid character of live electronic music, where there seems to be an opposition between the demands concerning music notation and live performance made by computer musicians on their electronic instruments, on the one hand, and by musicians performing on acoustic instruments, on the other. In other words, different “musical worlds collide”. This paper describes the challenges of live electronic music and proposes Motion Graphic Notation Guidelines (MGNG), both based on practical experience. These guidelines describe how to apply animated notation in order to tackle problems in live electronic music composition and performance practice.
The use of animated notation has several special features and advantages. First, an animated notation can bear an æsthetic value in itself and thereby enhance the music if screened during the performance (Fischer 2013). Furthermore, the visualization of the music and its structure can significantly support the understanding of a piece. Third, an animation is a time-based medium (Betancourt 2013) and therefore allows for the exact structuring of musical events over time. Additionally, it allows for the indication of very slow changes of musical parameters over time. Nevertheless, animated notation has very specific features, advantages and disadvantages that need to be carefully taken into consideration when utilizing it as a tool for the composition and performance of live electronic music. This kind of notation is not bound to an established framework, as traditional Western staff notation is (Daniels and Naumann 2009). For instance, the mapping between visual parameters in the score and their acoustic counterparts is not defined. This freedom is its strong point but at the same time the main reason why animated notation is often regarded as inaccurate for music notational purposes and is therefore hardly used (Ibid.). The Motion Graphic Notation Guidelines address this problem and propose a set of empirically researched principles (Fischer 2013) concerning how to apply animated notation accurately. Simple instructions guide composers and performers, with suggestions on how to use motion graphics and animations for notation in a clear and comprehensible way. Scores following MGNG can be utilized without any previous knowledge of traditional music notation; they can be used to notate music for computer musicians and live electronics as well as for acoustic instruments. A thoroughly flexible orchestration palette is thereby supported. Finally, MGNG’s major advantage is a clear differentiation of graphics regarding their meaning and usage.
Problems of Live Electronic Music
Live electronic music is understood for our present purposes as a subcategory of electronic music: it refers to the creation of music in a live context using any kind of electronic means or devices for sound generation, manipulation and processing, together with acoustic instruments (Collins 2007). Electronic music in general, and live electronic music specifically, faces very particular challenges. Here, we address the most significant questions and problems, regardless of whether they are related to the composition or the performance of live electronic music.
Composition of Live Electronic Music
The first important question that arises is whether live electronic music (and other music of its nature) is music at all, or rather sound art; it need not be discussed in detail here, but should nevertheless be mentioned, as it is deeply connected to further issues. When looking at music from a psychological perspective, as William Forde Thompson did in his 2009 book, Music, Thought and Feeling: Understanding the psychology of music, the answer is very clear. Thompson indirectly classifies all atonal or abstract music as non-musical. Whether live electronic music can be considered “music” or not will not be settled here. Still, Thompson’s approach and perspective offer an explanation of why electronic music in general (apart from electronic club music) lacks the potential for an audience to easily identify with, understand and enjoy it (Thompson 2009). Francis Dhomont claimed there is “a poor attendance at our concerts” and speaks in this context about the audience’s lack of assimilation of the electronic music repertoire (Collins 2007, 194). In other words, live electronic music often seems awkward and not easily accessible, especially for the unfamiliar listener. This problem is connected to a second issue of live electronic music. While electroacoustic or tape music is pre-composed in the studio and is thereby the subject of a profound compositional process (Collins 2007; Manning 1985), live electronic music is often barely notated or completely improvised. One major reason for this is the lack of a notational system for such practices, especially regarding the electronics. Stockhausen’s Studie II from 1954 (Stockhausen 1956) indicates that the wish to preserve electronic music in written form, apart from recording it, is as old as electronic music itself. Still, most electronic music is manifested through a support medium, not through musical notation or a score.
Live Electronic Music — Performance Practice
There are differences in the audience’s perception of the instruments in live electronic music, especially regarding playing and the creation of sounds. First, the computer and its associated hardware and software involved in the creation of sounds are not necessarily regarded as musical instruments as such. Composers and performers often speak of “software instruments” (Collins and Escrivan 2007) to indicate the musical function of certain software. Also, recent developments in interface design indicate a tendency towards more instrument-like and playable hardware, such as the Reactable (Ibid.). As the computer is available in almost every household and regarded as a tool for sending emails, surfing the Internet and other tasks, it is understandable that some people find it hard to recognize it as a musical instrument equivalent to, for example, a cello or a trumpet.
Staff notation… has remained, in stark contrast to the music it represents, almost unaltered for the last 250 years.
However, there is more involved in the issue than convincing the audience to accept the computer as a musical instrument. Peter Manning highlighted thirty years ago that electronic music performance lacks live actions, which are an integral part of live performances (Manning 1985). Acoustic instruments have, of course, the “human touch” we would expect of live actions, and not only in regard to the act of physically playing an instrument. Acoustic instruments are also the source of sounds. Live electronic music mingles the worlds of acoustic and electronic or computer music. The actions of acoustic instrumentalists support the live experience in live electronic music. However, the hidden genesis and manipulation of sounds in the computer is one of the factors that might lead to the false impression that the music is not performed live. When working with delays, sequential processes or the triggering of events that are mutually dependent, the actions of computer musicians might be completely disconnected temporally from their sonic result. Some alternative approaches in live electronic music cope with this problem. For instance, Laetitia Sonami’s The Lady’s Glove and other inventions from STEIM in Amsterdam have been developed in order to connect body movements to sound making.
Other approaches visualize processes in the computer and make the musicians’ actions more transparent, as found at live coding events. The whole idea behind much of handmade music, hardware hacking and self-built devices for sound making is to “bridge the gap between the sound world of a generation raised in an electronic culture and the gestural tradition of the hand” (Collins 2006, 1) in order to make electroacoustic music a live, experienceable performance.
Acoustic and electronic instruments have divergent needs regarding notation itself. In Western culture, regular staff notation has been the major form of music writing and has remained, in stark contrast to the music it represents, almost unaltered for the last 250 years. In electronic music, there is no commonly accepted form of notation. Staff notation is not feasible for a large body of electronic music, as sound production and even the sounds themselves are often quite different from those produced using acoustic instruments, for which this notation was originally developed. Christian Dimpker tried to overcome the gap between acoustic and computer instruments in his recent book, Extended Notation: The depiction of the unconventional. Staff notation is extended by the addition of symbols and signs that indicate, for instance, the use of sound synthesis techniques or the spatialization of sounds. At first glance, this approach seems to solve many problems. For pieces that focus on acoustic instruments with some simple electronics, it might even work perfectly. However, Dimpker’s approach entails additional problems. First, the musician is required to learn new symbols and, depending on the musician’s background, possibly the standard staff notation as well; this takes significant time and effort. Second, the more complex the sound synthesis and audio processing get, the more complex the notation gets. Movements of sounds in space or changes of musical material can occur very quickly in electronic music. A combination of several indications — e.g., spatialization, a crescendo, band-pass filtering and the parallel change of three parameters (level, feedback, time) of a delay effect — becomes highly confusing and therefore almost impossible to apply. Additionally, extending staff notation, which was originally meant for acoustic instruments, raises the more general question of whether this notation is suitable for representing electronic instruments as an independent entity within a music notation system.
Animated Notation
Animated notation has its roots in the large body of alternative approaches to notation explored by many avant-garde composers, which peaked between 1950 and 1970 (Daniels and Naumann 2009). Earle Brown’s piece December 1952 (Gresser 2007) often appears as an example of this alternative approach to notation. Many other composers of that time in the USA and Europe, such as Cage, Feldman, Stockhausen or Kagel, experimented with graphics for notational purposes in various ways (Cage 1969; Karkoschka 1966). According to Julia H. Schröder, visual artists developed these ideas further, as “their interest in the individual handwriting manifesting itself in musical graphics is greater than that of composers, who were concerned with the establishment of a new, normative graphic canon” (Daniels and Naumann 2009, 153). Only in recent years, as interdisciplinary thinking and working and the concept of hybrid media and arts have become fashionable, has a growing interest in alternative notation using contemporary techniques been observable; this is also indicated by the growing number of conferences on the topic. 1[1. For instance, the SSMN project at the Zurich University of the Arts, the December 2014 issue of Organised Sound on “Mediation: Notation and Communication in Electroacoustic Music Performance” (19/3) and, of course, TENOR 2015 — First International Conference on Technologies for Music Notation and Representation as a joint venture of IRCAM and Université Paris-Sorbonne.] The utilization of screens and animation techniques for notational purposes is, however, still in its early stages. Even a standardized term for this kind of notation can hardly be found. Lindsay Vickery generally calls them “screen scores” — with the subcategories of scrolling, permutation, transformative and generative score (Vickery 2012) — while Severin Behnen talks about “motion graphics scores” — with the subcategories of animated, interactive and plastic score (Behnen 2008). For our purposes, “animated notation” will serve as an umbrella term for various approaches in which graphics are put into motion for notational purposes.
Animated notation has become a playground for various approaches and applications. Collections of graphic notations and musical graphics, such as Cage’s Notations (1969) and its successor, Notations 21 (2009) by Theresa Sauer, illustrate the breadth of approaches explored in this field. They also show that there is no common language in the field of graphic notation. Although composers working with graphic notation were at one time trying to establish a “new, normative graphic canon,” actual practice has proven to be quite versatile and variable. In this context, the level of determination of a score can also be quite different. While December 1952 is a musical graphic and acts as a mere trigger for improvisation (Gresser 2007), there are other scores, such as those for the works of Anestis Logothetis, that are based on a system of graphics with specific meanings (Logothetis 1999). The same is true for an online collection of recent animated notations by Páll Ivan Palsson (2014) and the animated notation studies by Ryan Ross Smith (2011). Animated notations use various techniques and styles and can be open to improvisation or highly determinate regarding actions or specific musical parameters. However, graphics and graphical attributes are often not clearly mapped to sounds. Each score is different. Although animated scores share common features, for instance a play head that indicates the current position within a scrolling score (Vickery 2012), none of these features are obligatory or used in a generally standardized manner. On the one hand, this seems to be a deficiency. On the other hand, this freedom is the basis for individual artistic and musical expression and a way to at least try to keep the possibility of creating new music alive (Thomas 1965). In live electronic music, animated notation can be regarded as “neutral ground” where acoustic and computer instruments can meet, as it offers the same approach to composition and performance for all performers by utilizing abstract graphics that are mapped to musical parameters.
Mapping Process
The major difficulty in animated notation is the connection, or mapping, of visual and musical parameters. Musicians who have attended music school have been trained to read and utilize Western staff notation. For such musicians, there is no ambiguity about how the various symbols and signs making up Western staff notation should be interpreted. But how does a red square sound compared to a green triangle? To understand how mapping can work, we compare Western staff notation and animated notation in terms of their communication processes. Certainly there is a disparity between the communication processes of Western staff notation and graphic notation. Heinz Kroehl discusses sign systems and visual communication in connection with semiotics. According to his definition, regular staff notation works quite similarly to a language. It consists of a system or framework of specific rules, syntax and modes that need to be learned and understood in order to apply it in musical performance. From a visual communication perspective, staff notation is therefore scientific (Kroehl 1987). It is related to specific definitions. In other words, there is a predefined connection between sign and sonic result. Although this connection is arbitrary, the definition of how to interpret a musical sign was shaped through the music history and practice of the last centuries and is taught in Western music schools and academies. A specific sign indicates, for instance, a specific key on a piano, such as an A4 with a frequency of 440 Hz. Executing this sign should always result in a similar sound or frequency, respectively, regardless of the context (instrument, place, player, time, etc.). The sonic result of pressing the key is repeatable in the same manner anytime and anywhere. 2[2. Here we are concerned with notation and result and will ignore for the moment differences in the interpretation of a “same” note.]
How does a red square sound compared to a green triangle?
Animated notation works entirely differently. Apart from the impact of the technical possibilities of the animation and filming techniques available today, there is no definition of what a sign might mean. In contrast to staff notation, animated notation is artistic (Kroehl 1987). Graphics in animated notation therefore convey possibilities; there are no set definitions. No two people would have exactly the same understanding of an abstract graphic. Of course, staff notation needs to be interpreted as well, and this interpretation might vary significantly. However, clef, key, lines, bars and notes indicate what to play in a much more precise manner than abstract graphics do. In staff notation, the major mapping process has been done already, as it relies on a set of specific and universally accepted rules. In graphic and animated notation, meaning needs to be created individually through the interpretation of the graphics and symbols.
The mapping process describes the creation of meaning by connecting graphics and graphical attributes with sounds and sonic attributes. This process is divided into two separate steps. The first is the mapping done by the composer (C-mapping), who tries to create comprehensible connections between graphics and sounds or between graphics and actions. The second is the mapping done by the performers (P-mapping), who interpret the score and try to find connections between the visuals and their playing. The more precise and comprehensible the C-mapping, the more precise the score is and, consequently, the less interpretation (and improvisation) is required of the performers. Generally, one major distinction that contemporary notation has been struggling with for quite some time (Seeger 1958) also needs to be made with regard to mapping. Graphics are either tonal or actional, i.e. they convey sound characteristics or refer to the means of playing, respectively. This and other distinctions are described below in order to emphasize the importance of the mapping process in avoiding misunderstanding and misuse of animated notation.
For example, imagine that a composer utilizes an abstract film, including various images and graphics of different shapes and colours. The film displays these graphics as a continuous flow of morphing images and dissolving graphics with no specific structure. No further explanations are given. In this case, the composer did not establish comprehensible connections between graphics and sound, and no C-mapping has been done. The score works rather as a trigger for improvisation, much like Brown’s December 1952. All the mapping is therefore left to the performer; the piece is an improvisation that takes place in parallel to the reading of the score. As another example, one could imagine a score for a trio, with graphics in three different colours and two different structures. The colour indicates the instrument, while the structure defines whether the graphic indicates an action or a sound. Additionally, the composer gives a written explanation stating that the relative size of the graphics refers to dynamics, whereas the y-axis indicates pitch. Lines display single notes, while blocks display clusters. A play icon (or similar) indicates to the performer exactly when to play a specific graphic that passes over it. Here, a more concrete C-mapping has been done; however, the performers still have to do their own mapping. For instance, the meaning of motion or of any kind of alteration within a single graphic is not defined and therefore depends on the performers’ interpretation.
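To make the division of labour in this second example more tangible, the following sketch encodes its C-mapping as a small data structure. It is purely illustrative: the player labels, graphic structures, value ranges and thresholds are assumptions and are not taken from any actual score.

```python
# A minimal, illustrative sketch of the trio example's C-mapping.
# Labels, structures, ranges and thresholds are assumptions, not from an actual score.

from dataclasses import dataclass

# Decisions fixed by the composer (C-mapping)
INSTRUMENT_BY_COLOUR = {"red": "player 1", "green": "player 2", "blue": "player 3"}
MEANING_BY_STRUCTURE = {"outlined": "action", "filled": "sound"}  # assumed structures
KIND_BY_SHAPE = {"line": "single note", "block": "cluster"}

@dataclass
class Graphic:
    colour: str      # which player
    structure: str   # "outlined" or "filled": action or sound
    shape: str       # "line" or "block"
    size: float      # relative size in 0.0-1.0 -> dynamics
    y: float         # vertical position in 0.0-1.0 -> pitch

def c_mapping(g: Graphic) -> dict:
    """Everything the composer has fixed for one graphic."""
    return {
        "player": INSTRUMENT_BY_COLOUR[g.colour],
        "meaning": MEANING_BY_STRUCTURE[g.structure],
        "kind": KIND_BY_SHAPE[g.shape],
        "dynamic": "pp" if g.size < 0.33 else "mf" if g.size < 0.66 else "ff",
        "register": "low" if g.y < 0.33 else "mid" if g.y < 0.66 else "high",
        # Motion or internal alteration of the graphic is deliberately left
        # undefined here: interpreting it is part of the performers' P-mapping.
    }

print(c_mapping(Graphic(colour="red", structure="filled", shape="block", size=0.8, y=0.2)))
```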
Special Features of Animated Notation
However, as mentioned before, the visual communication process in animated notation is primarily artistic in nature. Only the composer’s C-mapping and explanations set the parameters. Therefore, it is possible to create scores that can be used by beginners or even untrained musicians, as was done in Umeå Voices, an artistic research project at Umeå University (Sweden) directed by Anders Lind. He utilizes The Max Maestro, a standalone application programmed in Max/MSP that makes use of animated notation (Lind 2013). This project and other approaches, such as Dabbledoo Music (McKenna 2014), demonstrate that young children and even non-musicians can easily comprehend and perform animated notation. Especially in the classroom, animated notation can support the creative development of young musicians (Schafer 1965).
Electronic music in general has changed significantly since its advent in the middle of the 20th century. It was shaped mainly by the evolution of technical equipment and hardware instruments, especially the digital revolution, increasing computational power and, at the same time, falling costs of computer hardware (Collins 2007). Animation has undergone similar processes (Sito 2013) and is today not only equipped to keep pace with technical developments but also adjustable to changing needs in live electronic music practice. Additionally, it is able to utilize any kind of interconnection between audio and video and to make scores interactive or generative (Vickery 2012). Animated notation can utilize the full variety and possibilities of graphic design and visual communication. Colour, size, shape and the arrangement of objects on the screen allow for the mapping of any visual parameter to any sonic parameter in a meaningful way. Animated notation is not a modal system: signs do not necessarily depend on each other or have to be seen in a larger context (outside the mapping process) to be understood. In animated notation, a single graphic can carry several attributes at the same time. Composers should avoid using too many attributes and rapid changes, though; otherwise, human perception might be overtaxed. Nevertheless, graphical attributes can be used in any creative way the composer desires. For instance, the size of a graphic could indicate dynamics, its position could indicate pitch and its colour could indicate timbre (Fischer 2013).
Another unique feature of animated notation is the use of motion: graphics can be animated. This allows very slow changes of musical parameters over time to be indicated within the score. Rhythmic features, for instance the indication of a recurring action, can change (linearly or exponentially) by only a fraction over time. Stopwatches or triggering techniques become obsolete. Furthermore, a musical figure or movement can be translated literally into motion. A unique connection between sound and visuals can thereby be established.
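As an illustration of this kind of gradual change, the following sketch computes the onset times of a recurring action whose period contracts exponentially over a passage. The durations and the interpolation chosen here are assumptions for the example; an animated score would simply display these onsets as graphics reaching a play head, so no stopwatch or external trigger is needed.

```python
# Illustrative sketch (values are assumptions): onsets of a recurring action
# whose period shrinks exponentially from 4 s to 1 s over a 120 s passage.
# The animation itself carries this timing.

DURATION = 120.0     # length of the passage in seconds
PERIOD_START = 4.0   # time between actions at the beginning
PERIOD_END = 1.0     # time between actions at the end

def period_at(t: float) -> float:
    """Exponentially interpolated period at time t (0 <= t <= DURATION)."""
    return PERIOD_START * (PERIOD_END / PERIOD_START) ** (t / DURATION)

onsets, t = [], 0.0
while t < DURATION:
    onsets.append(round(t, 2))
    t += period_at(t)

print(f"{len(onsets)} onsets; first four: {onsets[:4]}; last four: {onsets[-4:]}")
```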
Three Examples Revealing Problems of Animated Notation
Ryan Ross Smith — Study No. 31 (2013)
The score for American composer Ryan Ross Smith’s study for seven triangles and electronics displays seven imaginary circles with cursors that indicate which part to play. Each cursor/circle is played by one triangle player. Each circle features attack/mute event nodes connected by an arc. How to use and perform the score is stated very precisely. In other words, the score leaves no doubt regarding the actions to be executed, i.e. when to strike a triangle and when to mute it. It could even be played by beginners with a convincing result, because the score utilizes primarily actional graphics. Acoustically, the piece builds a complex structure of manipulated triangle sounds over time. However, the score does not indicate any variation of the sounds themselves apart from attack and mute. The score is basically a plan of “on” and “off” messages, executed by performers. The focus of actional graphics is on actions, not on sounds. The design of the score hardly supports the development of musical material, and the piece, when performed, has a rather static yet constantly varying structure.
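The underlying mechanism of such a circular, cursor-driven score can be reduced to simple timing arithmetic. The sketch below is not Smith’s implementation; it only illustrates, with assumed values, how node angles and a rotation period determine when each attack or mute is due.

```python
# Illustrative sketch (assumed values, not Smith's implementation): a cursor
# rotates around a circle at a fixed period; attack/mute nodes sit at fixed
# angles, and each crossing of a node by the cursor is an action to execute.

ROTATION_PERIOD = 8.0  # seconds per full revolution (assumed)
NODES = [(0, "attack"), (90, "mute"), (180, "attack"), (300, "mute")]  # (degrees, action)

def action_times(total_time: float):
    """Return sorted (time, action) pairs for all node crossings within total_time."""
    events = []
    for angle, action in NODES:
        t = ROTATION_PERIOD * angle / 360.0   # first crossing of this node
        while t < total_time:
            events.append((round(t, 2), action))
            t += ROTATION_PERIOD
    return sorted(events)

for t, action in action_times(20.0):
    print(f"{t:6.2f} s  {action}")
```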
Lindsay Vickery — Nature Forms I (2014)
This piece works completely differently than Smith’s study. It is composed for three undefined instruments and electronics. Each player is instructed to interpret the score, which consists of “manipulated images of organic shapes”, in a different way:
- Player 1 is instructed to read the notation “semantically” — the vertical space represents pitch, the horizontal space in the scrolling score represents the temporal domain (time and duration), while the shading is related to timbre;
- Player 2 is to interpret the notation as tablature — the space in the score indicates where on their instrument to strike and the shading how to strike it;
- Player 3 is to interpret the score as “non-semantic graphical notation”, with the images in the score providing “an æsthetic indication of the character of the sound to be created.”
The electronics in Australian composer Lindsay Vickery’s Nature Forms I use the software Max/MSP with a patch that sonifies the score automatically in parallel to the performance by the musicians: “The score is simultaneously sonified using frequency, amplitude, brightness, noisiness and bark scale data to control the spatialisation and processing of the [sonification] data” (Vickery 2014).
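For readers unfamiliar with score sonification, the following sketch is a deliberately simplified analogue of the idea, not Vickery’s Max/MSP patch: each vertical slice of a greyscale score is scanned in turn, each row drives a sine oscillator whose frequency depends on its height and whose amplitude depends on the pixel brightness. All sizes, rates and mappings are assumptions.

```python
# A hypothetical analogue of score sonification (not Vickery's patch):
# scan a greyscale "score" column by column; each row is an oscillator
# whose frequency depends on its height and whose amplitude on brightness.

import math

ROWS, COLS = 8, 16        # a toy 8 x 16 "score image"
SR = 8000                 # sample rate (kept low for brevity)
SLICE_SECONDS = 0.25      # how long each column sounds

# Synthetic brightness values in 0.0-1.0 (in practice: pixels of the score).
score = [[(r * c) % 7 / 6.0 for c in range(COLS)] for r in range(ROWS)]

def freq_for_row(r: int) -> float:
    """Higher rows map to higher pitches (an assumed, simple mapping)."""
    return 220.0 * 2 ** (r / 4.0)

samples = []
for c in range(COLS):                          # scan the score left to right
    for n in range(int(SR * SLICE_SECONDS)):
        t = n / SR
        s = sum(score[r][c] * math.sin(2 * math.pi * freq_for_row(r) * t)
                for r in range(ROWS))
        samples.append(s / ROWS)

print(f"Rendered {len(samples)} samples ({len(samples)/SR:.1f} s of audio).")
```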
The original score, including the sonification, is very convincing regarding the electronic part. Although generated live, it can be reproduced in exactly the same manner each time, as it sonifies the score directly; the electronic part is therefore very precise. However, the instructions for the players are rather vague. It takes experienced musicians, trained in improvisation, who are open-minded and quick enough to adjust their playing according to the score; the score is a trigger for improvisation rather than a set of performance instructions. Still, the idea of three different approaches to reading the score will certainly result in a complex sound structure.
Candaş Şişman — SYN-Phon (2013)
Turkish artist Candaş Şişman calls his work SYN-Phon a “sound performance based on graphical notation” (Şişman 2013) for cello, trumpet and electronics or objects. The score was also printed out as one long graphic and presented in an exhibition space in Budapest; we can assume that SYN-Phon was designed not only to be used as musical notation but also with an æsthetic aspiration to be exhibited as a piece of graphic art. The animated score combines several features common to animated notation. First, there is a static red play head, and the score scrolls from right to left. Performers thus have the possibility to look ahead, a feature that is helpful, as musicians are accustomed to reading in this manner with more traditional forms of notation. The graphics are white geometrical figures on a black background. The position of graphics on the y-axis serves as a relative pitch indication. The instruments tend to play the upper part of the score, while the electronics and objects use the lower part.
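The timing logic of a scrolling score with a static play head is straightforward; the sketch below spells it out with assumed numbers (the scroll speed, play head position and screen width are not taken from SYN-Phon).

```python
# A small sketch, under assumed numbers, of a scrolling score with a static
# play head: a graphic x pixels to the right of the play head sounds
# x / scroll_speed seconds later, and the screen width to the right of the
# play head determines how far performers can read ahead.

SCROLL_SPEED = 60.0      # pixels per second (assumed)
PLAYHEAD_X = 200         # play head position from the left screen edge (pixels)
SCREEN_WIDTH = 1280      # visible width of the score display (pixels)

def seconds_until_played(graphic_x: float) -> float:
    """Time until a graphic currently at graphic_x reaches the static play head."""
    return max(0.0, (graphic_x - PLAYHEAD_X) / SCROLL_SPEED)

lookahead = (SCREEN_WIDTH - PLAYHEAD_X) / SCROLL_SPEED
print(f"Look-ahead visible on screen: {lookahead:.1f} s")
print(f"A graphic at x = 800 px sounds in {seconds_until_played(800):.1f} s")
```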
The score works very well in some parts, and a connection between sound and visuals can be made easily. In other parts, there seems to be almost no connection between the graphics and the music, especially in sections where the electronics and objects seem quite disconnected from the score. Some graphics have a sonic counterpart, while others have none. There is also no distinction between instruments. From reading the score alone, it is entirely unclear which musician is meant to play which part of the notation. Furthermore, the performers use similar playing techniques on completely different kinds of graphics and are not able to convey the graphics as precisely as they are designed. At the beginning of the piece there is, for instance, a curvy line whose amplitude becomes smaller over time while its period becomes longer. The cellist in this performance interpreted this as continuous glissandi. However, the contours of what is notated and what is played are not “the same”, and the sound and visuals are therefore disconnected at this point in the notation; this could potentially cause some confusion if the audience were able to read the score as it was being performed. Questions concerning the “correct” interpretation of this notation — whether or not the performers should play precisely what is notated — or whether the notation scrolls too fast, is not distinct enough or is otherwise unplayable, should be discussed further.
Motion Graphic Notation Guidelines
The following Motion Graphic Notation Guidelines (MGNG) are presented in the order in which they might best be applied, one after another. Nevertheless, it is important to note that the MGNG are only guidelines. They are meant as suggestions for composers and performers who are looking for a helping hand to get started with this notational approach. As animated notation works primarily on the artistic level of visual communication, completely different approaches are also possible and could of course be used in combination with animated notation.
- Previous knowledge of graphic design and software tools is advisable. Composers will greatly benefit from any previous knowledge of drawing, photography, motion graphics, animation or any other technique when creating their individual and possibly very personal animated scores.
- Animated notation is a tool with certain advantages, disadvantages and specific characteristics, especially in regard to the visual communication process. Composers should be aware of those characteristics that are particular to animated notation, as well as those of other approaches to music notation; without this knowledge, it would be very easy to design a score that could have been notated much more easily using another type of music notation.
- Animated notation is a time-based medium and therefore especially suitable for the creation of distinct structures over time. One strong advantage of animated notation is its strict representation of time. The overall length of the piece and of single events, or even complex and slowly changing structures, can be accurately timed and displayed in real time.
- Animated scores work, of course, for acoustic instruments as well as for electronic instruments such as computers or synthesizers. Additionally, they may display non-musical instruments or any other (type of) object used for sound creation. Generally speaking, animated scores can depict any kind of sound or action in a musical context.
- Usability is a topic that usually only computer scientists and designers have to deal with. In animated notation, however, how to navigate or scroll through the score, how to start or stop the score from “playing” and what is required to use the score on different devices such as a PC or tablet should be considered very attentively, in order to make the score as easy to use and as understandable as possible.
- An intelligent and consistent mapping is at the core of the creation of an animated score and helps ensure an effective and “correct” interpretation of the score. The mapping needs to be logical and visually easy to comprehend, particularly if the score is also meant to be seen by the audience. For example, each instrument could be mapped to a specific colour to make it easily recognizable in various contexts; graphics could be clearly distinguishable as either symbolic (referring to sonic attributes, i.e. to the sounds themselves) or actional (representing actions that lead to a sonic result).
- The overall style and design (visual impression) is the individual decision of the composer. The score itself can be a piece of art, simply provide an essential practical interface for performers, or be anything in between or beyond. There are no limits. However, the music that is manifested in the graphics or other notational forms of an animated score should convey the spirit, essence or character of the piece. The æsthetics of the visual design of the score are part of the composition: the character of the graphics should reflect the essence of the musical work and the compositional approach.
- The presentation of the score to the performers (and of course to the audience, if desired) is the final piece of the puzzle regarding how an animated score is interpreted and transformed into music. The score will surely be perceived differently if it is presented on an 11-inch tablet than if it is projected on a 16-metre wide wall. Other means of projection (e.g., onto objects) are of course also possible and might have a significant impact on the interpretation of the score.
Conclusion
First, the Motion Graphic Notation Guidelines presented here are a set of easily applicable proposals for the adequate use of animated notation. Their use is not restricted to live electronic music; they can also assist composers and performers starting to utilize animated notations of any kind, without imposing a rigid framework. Additionally, the audience can benefit from the visualization of the musical structure. Second, when properly deployed, animated notation can help overcome problems such as the actual notation of electronic and improvised music, as well as the insufficiency of traditional notation in representing these and other forms of contemporary music. Finally, Motion Graphic Notation provides an excellent communicational platform on which acoustic and electronic instruments can co-exist.
Bibliography
Behnen, Severin Hilar. “The Construction of Motion Graphics Scores and Seven Motion Graphics Scores.” Unpublished doctoral dissertation, University of California Los Angeles, 2008.
Betancourt, Michael. The History of Motion Graphics: From avant-garde to industry in the United States. Rockville: Wildside Press, 2013.
Cage, John. Notations. New York: Something Else Press, 1969.
Collins, Nicolas. Handmade Electronic Music: The Art of Hardware Hacking. New York: Routledge, 2006. http://www.nicolascollins.com/handmade.htm [Last accessed 17 December 2014]
Collins, Nick and Julio Escrivan (Eds.). The Cambridge Companion to Electronic Music. Cambridge: Cambridge University Press, 2007.
Daniels, Dieter and Sarah Naumann (Eds.). See this Sound: An interdisciplinary compendium of audiovisual culture. Audiovisuology Compendium, Vol. 1. Cologne: Walther Koenig, 2009.
Dimpker, Christian. Extended Notation: The depiction of the unconventional. Zürich: LIT Verlag, 2013.
Fischer, Christian Martin. “Motion Graphic Notation: A Tool to improve live electronic music practice.” Emille: Journal of the Korean Electro-Acoustic Music Society 11 (2013).
Gresser, Clemens. “Earle Brown’s Creative Ambiguity and Ideas of Co-Creatorship in Selected Works.” Contemporary Music Review 26/3 (January 2007) pp. 377–394.
Karkoschka, Erhard. Das Schriftbild der neuen Musik. Celle: Hermann Moeck Verlag, 1966.
Kroehl, Heinz F. Communication design 2000: A Handbook for all who are concerned with communication, advertising and design. Basel: Opinio Verlag AG, 1987.
Lind, Anders. Voices of Umeå Project (2013). http://www.estet.umu.se/konstnarlig-forskning/anders-lind [Last accessed 17 December 2014]
Logothetis, Anestis. Klangbild und Bildklang. Vienna: Lafite, 1999.
Manning, Peter. Electronic and Computer Music. Oxford: Clarendon Press, 1985.
McKenna, Shane. Dabbledoo Music [Online Interactive Project]. 2014. http://www.dabbledoomusic.com [Last accessed 17 December 2014]
Palsson, Páll Ivan. Animated Notation [Online Resource]. 2014. http://animatednotation.blogspot.com [Last accessed 17 December 2014]
Peirce, Charles Sanders. Phänomen und Logik der Zeichen. Frankfurt am Main: Suhrkamp Verlag, 1983.
Sauer, Theresa. Notations 21. London: Mark Batty, 2009.
Schafer, R. Murray. The Composer in the Classroom. Scarborough: Berandol Music Limited, 1965.
Seeger, Charles. “Prescriptive and Descriptive Music-Writing.” The Music Quarterly 44/2 (April 1958) pp. 184–195.
Şişman, Candaş. SYN-Phon (2013). http://www.csismn.com/SYN-Phon [Last accessed 17 December 2014]
Sito, Tom. Moving Innovation: A History of computer animation. Cambridge MA: MIT Press, 2013.
Smith, Ryan Ross. Study No. 6 — Escalators (2011). http://ryanrosssmith.com/study6.html [Last accessed 17 December 2014]
Stockhausen, Karlheinz. Nr. 3 Elektronische Studien — Studie II. London: Universal Edition, 1956.
Thomas, Ernst. Darmstädter Beiträge zur neuen Musik: Notation. Mainz: B. Schott, 1965.
Thompson, William Forde. Music, Thought and Feeling: Understanding the psychology of music. Oxford: Oxford University Press, 2009.
Vickery, Lindsay. “The Evolution of Notational Innovations from Mobile Score to Screen Score.” Organised Sound 17/2 (August 2012) “Composing Motion: A Visual music retrospective,” pp. 128–136.
_____. Nature Forms I (2014). Available on the composer’s website http://www.lindsayvickery.com [Last accessed 17 December 2014]