Algorithm, Performance and Speculative Synthesis in the Context of “Spheroid”

The work Spheroid, which I performed at TIES 2017, explores an imagery of speculative nature and synthetic ecology in the context of live performance. To me, it exemplifies a type of acousmatic dimension forged by algorithmic and synthesis-driven music, where the “real world” is fundamentally considered synthetic and not a place of acoustic phenomena other than the present. This can be defined as a speculative synthesis: a process where technological methods, performance and æsthetic perception are guided by the idea of sound as a synthesis. As described here, the idea can be linked with Brian Kane’s concept of “acousmaticity” and applied to a computer music that embeds the human agent in a spatial texture environment. 1[1. The work presented here was funded by a Leverhulme Trust Early Career Fellowship held at BEAST, University of Birmingham, researching synthesis of spatial texture in composition and performance.]


Composing Spheroid, I was compelled by the idea of a texture whose causality and structure combine human performance agency and algorithmic processes, but is also spatially immersive and materially conducive to evocative imagery. Recent changes in my performance practice and composition methods as I have been moving from fixed-media studio composition toward live, generative music have added certain dimensions to my conceptualization of texture. I have always been interested in texture as an aggregate not only of sounds but also of possibilities — a discourse of properties rather than objects, wherein perceptual syntheses occur in the flux of change and morphology is a product of texture. I have collected these ideas under the term “morphogenesis”, to value sounds as processes of coming-into-existence, rather than as collected, finite objects (Nyström 2017). Treating texture as a thermal, entropic process of coinciding and dissipating energies, I would often envision music as an abstract æsthetic modelling of nature’s creative power. Though it was possible to model this æsthetically on fixed medium (Nyström 2014), the notion of texture as an indeterminate phenomenon conducive to multiple potential forms and amalgamations can only be given limited exploration in a performance practice that always presents the same incarnation of the sound. Moreover, the conceptualization of sonic materiality as physical, chemical, energetic processes could be augmented with more virtual, mediated, technological imaginations, extending the scope of synthesis in texture.

Erik Nyström performing his work Spheroid during the 11th Toronto International Electroacoustic Symposium in the Ernest Balmer Studio (Distillery District) on 10 August 2017. Image © Stefan A. Rose.

Thus, some obvious motivations for migrating into the live medium were to harness the indeterminacy of spatial texture behaviours and the potential multitude of morphological geneses they could yield, as well as the idea of music as an incomplete process of creation, allowing for spontaneity and different outcomes in each presentation of a work. Other motivations are related to the idea of viewing texture as a synthesis of agencies and causes — whether human, technological, sonic, visible, invisible, etc. — that are not “hidden”, as is typical in acousmatic music, but rather generated in the present performance context. This can be supported by a more site-specific approach to concert presentation, taking advantage of easily variable multi-channel configurations available in non-fixed music.

To these ends, I am developing technological approaches that facilitate spatial texture composition for live performance with algorithmic processes. Though I do not want to compromise the complexity of morphology, space and texture available in fixed studio composition, I do want the activity of composition to gravitate away from being one of fixing certain sounds in specific places toward one that is more concerned with devising variable and potential structures and formalizations, on one hand, and intuitively performing and improvising, on the other. This takes the musical practice and research into the well-established domains of live, interactive, algorithmic music and new interfaces for musical expression, where artistic practice to a great degree involves designing and interfacing with technology as co-performing or co-creating agents. As I will argue here, I believe that the æsthetic outcomes of such a symbiosis, which harnesses rather than avoids system-driven elements in music, can also result in interesting acousmatic consequences in the form of an evocation that gravitates toward the non-existing rather than the real. This attitude may not align with typical conceptions of sound in acousmatic music, which often assume that sounds have an acoustic origin in the real world, behind the metaphorical curtain — a conception I believe is largely influenced by the prevalence of compositional methods that use recordings rather than synthetic sounds. In Jonty Harrison’s words:

Acousmatic music… frequently refers to acoustic phenomena and situations from everyday life and, most fundamentally of all, relies on perceptual realities rather than conceptual speculation to unlock the potential for musical discourse and musical structure from the inherent properties of the sound objects themselves. (Harrison 1999)

However, as I want to suggest that the acousmatic does not have to suppose, import or refer to acoustic origins prior to technological mediation, I propose, instead of sound objects with inherent properties, “speculative syntheses”: conceptual realities of perceptual speculation, in which hypothetical environments and natures, born from the odd fruits of systems, algorithms and the visible spectacle of the world, bring forth the causal confusions and material amalgamations that constitute the acousmatic. Though such evocations may be mapped to existing acoustic phenomena, they have a synthetic affect. In this, my project dissociates itself from the notion of an essential, pure reality or nature, in which sounds are assumed to originate and in reference to which sounds are evaluated. Instead, I view nature fundamentally as a synthesis, not separated from technology or culture, and speculate freely on the creative potential of synthetic ecologies, where unnamed organisms reposition categorical conceptions of species, nature and artifice. Morphogenesis then becomes an ontology where process is no longer isolated from artefact, and the work is no longer an autonomous entity divorced from its creator.

Speculative Synthesis

In his book Sound Unseen, Brian Kane uses the term “acousmaticity” to define acousmatic sound on a continuous scale of certainty-uncertainty, as an alternative to the dualistic ontology proposed by Pierre Schaeffer, where sound source and sound object are two distinctly different classes. Sound, Kane argues, has a “tripartite ontology” of source, cause and effect, and “acousmaticity” comes about with any uncertainties emerging within the decoupling of the three stages in this complex:

An acousmatic sound is often defined as a sound that one hears without seeing its cause, a sound heard in the absence of any visual information. However, I have tried to argue that acousmatic sound is not best characterized in terms of a division between two sensory registers…. Rather, the experience of acousmatic sound is epistemological in character, articulated in terms of knowledge, certainty, and uncertainty. (Kane 2014, 224)

This perspective is helpful, since my context is not one where visible causal agency is absent. Moreover:

If we make acousmaticity the criterion for identifying a sound as acousmatic, it follows that not every sound heard from a loudspeaker is de jure acousmatic. Even in cases of musique acousmatique, when listening to sounds coming from loudspeakers, one is often quite certain about the sound’s source, cause, and effect. (Ibid.)

I here assume an interpretation of “acousmaticity” that includes not only literal uncertainty of source, but also a more poetic loosening up of the source-cause-effect chain, conducive to evocation of imagery. I propose that, broadly speaking, acousmaticity propagates from two different places. In a performance situation (whether live or fixed media), the “true”, non-fictional origin of a sound is taken to lie in one of two domains of reality: either in what I refer to as the performance-technology domain — the present listening context of humans, technology, acoustics, etc. — or in the external domain — the world behind the metaphorical acousmatic veil, where real events and places external to the present exist. Despite the alleged agnosticism to sound source in acousmatic culture, the external domain is typically the preferred sonic origin and reference: sounds are imported from the real world into the studio and reduced not only in listening, but also in composition. Neither of these reality domains alone has any significant “acousmaticity”, as long as their certainty of cause and source is unchallenged. The imported external domain is, of course, presented via the performance-technology domain, but, if pervasive, it is likely to be taken as the origin of the sound. Speculative synthesis occurs when sounds with origins attributed to the performance-technology domain begin to invite imaginative speculation of causes in addition to what is taken to be technology or performance, or guide attention away from causes altogether. It is a synthesis of the material and the immaterial: unnamed entities, potential sources, unknown agents. Such ambiguities can occur in many ways: it may be a confusion of gestural cause-and-effect due to performance-technology interaction, or it may be a product of sounds being simultaneously linked to physical performance and to a speculative realm by evocation.
But it is often also a purely sonic evocation of technologically mediated nature. What is distinctive about this, however, is that the sound is never so alienated from the performance-technology domain that it becomes allocated to the imported domain: thus, it is artificial in nature, insofar as it originates in a technological and/or performative context.

Canonical examples of speculative synthesis might be David Tudor’s Rainforest (1968/1998) and Neural Synthesis (1992/1995). Though, in the case of Rainforest, the title is suggestive of an external domain, any nature associations in the music are tied to technological mediation. More recent music that comes to mind includes Rashad Becker’s Traditional Music of Notional Species Vol. I (2013) or some of the tracks on Florian Hecker’s classic Sun Pandämonium (2003). Speculative synthesis is not about the listener’s reduction of causes from a perceived sound, as per reduced listening, but is additive in nature. Morphology is where source appears and disappears: morphology generates “acousmaticity” by drawing attention away from the real, toward speculative domains and, as a result, creates an expanded environment within which sounds can colour and situate the performance domain. It is thus not only a morphogenesis, but also a posthuman ectogenesis 2[2. Ectogenesis is the growth of an organism in an artificial environment. A classic fictional example is the process of artificial human birth in Aldous Huxley’s novel Brave New World.]: a cyborg sound, born in the laboratory, outside the acousmatic Garden of Eden.

“Spheroid” of Performance and Technology

The conception of live performance is traditionally associated with the instrument, a sound-making tool operated by a performer. Traditional instruments have relatively clear sonic boundaries, predicted by listeners through intuitive knowledge of factors such as physics, acoustics, performance, repertory and idiom. As Owen Green (2011) has pointed out, there are reasons not to overemphasize the difference between acoustic and digital instruments, since both are always encountered in the context of the performance ecosystem of performer, instrument and environment (Waters 2007), which contextualizes and defines their roles. However, from the perspective of this discussion, we should note that computers and electronic interfaces are physically much less indicative than acoustic instruments are of what sound they may produce. Thor Magnusson writes:

In digital instruments, the physical force becomes virtual force; it can be mapped from force-sensitive input devices to parameters in the sound engine, but that mapping is always arbitrary (and on a continuous scale of complexity), as opposed to what happens in physical mechanisms. (Magnusson 2009, 172)

I would add that a loop of “acousmaticity” is formed here, where the virtual force becomes a speculative physical force, and the resulting “acousmatic cyborgs” reframe interfaces and technology in the embodied multimodal perception of both performer and audience. This is not unlike what Deniz Peters has described as a second “invisible materiality” (Peters 2012), which occurs in addition to the physicality of performance, where embodied listening further informs our conception of sonic physicality.
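Magnusson’s point about arbitrary mapping can be made concrete with a minimal sketch, in Python purely for illustration — the function names, parameter ranges and curves here are my own inventions, not anything specified by Spheroid or Magnusson. The same normalized pressure value can be routed to any parameter of a sound engine through any curve; nothing in the physical mechanism privileges one choice over another.

```python
# Illustrative only: two arbitrary mappings from the same force-sensitive
# input (normalized 0..1) to hypothetical synthesis parameters.

def linear_to_cutoff(pressure, lo=200.0, hi=8000.0):
    """Map pressure linearly onto a filter cutoff in Hz."""
    return lo + pressure * (hi - lo)

def exp_to_grain_density(pressure, lo=1.0, hi=400.0):
    """Map the same pressure exponentially onto grains per second."""
    return lo * (hi / lo) ** pressure

p = 0.5  # identical physical gesture, two unrelated sonic results
print(linear_to_cutoff(p))       # 4100.0 (Hz)
print(exp_to_grain_density(p))   # 20.0 (grains/s)
```

The point of the sketch is precisely its arbitrariness: swapping one mapping for the other changes the instrument’s behaviour completely while the physical interface, and the visible gesture, remain identical.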

Though in a live performance the obvious visible causal agency normally runs from human action to sound, multimodal associations evoked by the sound can sometimes supervene upon the materiality of the technology. Thus, in some performances, it may seem as if body motion and interfaces are impregnated by the sonic physicality of textures. The performance model I have adopted seeks to link sensorimotor activities with texture and morphology in ways that inspire performance. I therefore avoid the laptop interface almost entirely, instead relying on physical controllers for interfacing with sound. From the perspective of a composer-performer, this may be regarded as a drawback in that a lot of work needs to be done in advance of performance to develop the programme and its interfaces. Moreover, it takes away the creative powers afforded by being able to design algorithms through coding during performance. However, the scope for timely, intuitive, physical agency is greatly enhanced by having a pre-designed system optimized for physical interaction. And, as with any more or less interactive system, if sufficiently intelligent, it will be able to register and interpret performance actions so as to perform tasks more complex than simply generating or controlling sound in direct response to human performance. From an audience point of view, the human presence is bodily embedded in the virtual sound environment created; the materiality of ostensibly touching sound via technology is hopefully palpable. As is typical in interactive music, the human gesture is an incomplete agent in a larger system, rather than an autonomous creative force.
Thus, I seek to develop systems that have a balance of control and surprise, allowing the performer to be fully involved in the process throughout, and texture materiality and spatiality to guide and inspire what Green describes as wayfinding (citing Tim Ingold 2000): a skilled, intuitive process of searching for sound, but only knowing one’s destination upon arriving.

Erik Nyström performing Spheroid during TIES 2017 on 10 August 2017. Image © Stefan A. Rose.

This linking of performer and sound reinforces the sense of liveness, which, as Kerry Hagan has emphasized, is also connected with factors such as perceived spontaneity, risk of failure and indeterminacy (Hagan 2016). For Newton Armstrong, an embodied and “enactive” musical performance is situated in that it involves an agent’s ability to adapt to an environment, timely in that the performer is able to immediately act and react to meet real-time constraints, multimodal in that it requires sensorimotor skills, engaging because the agent is required for the completeness of the environment, and emergent since “optimal embodied experience arises incrementally over a history of sensorimotor performances within a given environment or phenomenal domain” (Armstrong 2006, 10). Such an enactive performance mode also allows for the experience of flow to emerge — “a way of being that is so direct, immediate and engaging, that the normative senses of time, space and the self, are put temporarily on hold” (Ibid., 8). His use of the term “enaction” is derived from Varela, Thompson and Rosch’s philosophy of embodied cognition developed in The Embodied Mind: Cognitive science and human experience (1991/2016), where the body and consciousness are considered a self-organized system coupled with its environment in a circular manner. In this performance context, spatial texture is part of the ecology, its sounds being agential nodes alongside the performer; or perhaps “living presences” (Emmerson 2007).

For Spheroid, the notion of instrument is not entirely appropriate, because the system was designed as a specific but variable work rather than a tool. It has inbuilt structures, though none are absolute. It also has a defined sound palette, though no two performances will yield the same sounds. Thus, to my mind, it is not a tool for performance, but an algorithmic work with specified interfaces. Moreover, the idea of performer and instrument invites the notion of a “solo” project or performance, which I do not consider applicable here, since the texture is multilayered and there are no specific sounds that are linked to the human performer above others. If attempting to apply Robert Rowe’s terminology for interactive music systems (Rowe 1993, 6–8), I would not be satisfied with referring to it as an “instrument paradigm” for the reasons just stated; “player paradigm” would not suffice either, because the work does not really play on a human-computer duality, nor does the system have sufficient autonomy to be considered a musical entity separate from the human player. More broadly, however, it falls into Joel Chadabe’s definition of “interactive composing” as “a two-stage process that consists of (1) creating an interactive composing system and (2) simultaneously composing and performing by interacting with that system as it functions” (Chadabe 1984, 23). It also closely corresponds to PerMagnus Lindborg’s definitions of interactive performance, some of the more applicable features being: “Simultaneous interpretation and composition. … Structure partly describable by rules. … [and] reflected improvisation” (Lindborg 2008). My approach is also consistent with Arne Eigenfeldt’s idea of “real-time composition” in that it “blurs the distinctions between performer, instrument and environment” and that the building of the system is considered a compositional activity (Eigenfeldt 2011, 145).
Metaphorically speaking, I might describe it as a system of tentacles where human agency is distributed into and situated within a spatialized environment.

A problem when composing and performing a piece whose sonic range well exceeds what an individual performer can control in one moment is the question of how to ensure that all elements of the work are sensitive to performance even if they are not directly accessible. Otherwise, one easily runs into the stabilization of static backgrounds that disengage performance, and sounds that are not adaptable to every performance context. A disconnect appears where the pre-composed material erects a wall beyond which the performer cannot reach. Boundaries are better off being enforced by a combination of the performer’s instinct and the interactive properties of the system, rather than by a limit that has no knowledge of the current context. Thus, it is necessary to have structural proportions and densities driven by performance, ensuring that the pace of the music is sensitive to context.

Real-Time Montage of Spatial Texture

The structuring of spatial texture in my work typically entails grouping synthesis processes that are distributed over discrete loudspeaker channels, balancing similarities and differences. The top-down control model continuously operates on global parameters in order to balance the parameter distributions that cause spatial integration, segregation and the general behaviour of a texture. A different, lower-level method is to create textures from discrete events and allow these to form a continuum as they aggregate in space to create a heterogeneous spatial image. This is not dissimilar from Horacio Vaggione’s micromontage method (Roads 2005), which focuses both on salient relationships of detail within the texture and emergent textural qualities, without relying on macroscopic statistical distributions or formalizations.
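As a rough sketch of this kind of top-down distribution — a Python simplification with hypothetical names and ranges, not the actual implementation — a global parameter value can be spread over discrete loudspeaker channels, with the width of the spread governing whether the channels fuse into one spatial image or segregate into distinct streams:

```python
import random

def channel_values(center, spread, n_channels=8, rng=random):
    """Distribute a global synthesis parameter over discrete loudspeaker
    channels. spread=0 yields identical per-channel values (spatial
    integration); larger spreads differentiate the channels
    (segregation). Purely illustrative."""
    return [center + rng.uniform(-spread, spread) for _ in range(n_channels)]

rng = random.Random(1)
integrated = channel_values(440.0, 0.0, rng=rng)   # all channels identical
segregated = channel_values(440.0, 60.0, rng=rng)  # channels diverge
```

Continuously varying `spread` during performance would then move a texture along the integration-segregation continuum without addressing any channel individually.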

In Spheroid, I did not desire global control of the music, as I envision the human as being inside the system rather than above or outside it. A guiding idea was the composition of texture through improvised introduction of sounds in a fashion not unlike a blobby mode of pointillist painting: blotting sounds on a canvas and having them animated and modified by algorithms as the work progresses. This led to an emphasis on the lower-level approach. The direct level of human interaction with the system is, structurally speaking, on an intermediate gesture level, but performance actions propagate toward both higher and lower levels of structure by way of indirect agency. The basic principle is simple: my actions, particularly touching pads on MIDI controllers, are recorded as data events, in terms of synthesis patches, parameter settings and relative time intervals. These are used for generating textural layers through routines that play sounds using data from memory, elaborating on their parameters. Thus, a heterogeneous macrotexture is created by sounds of different colours and shapes, densities being determined by moment-to-moment time intervals, rather than global distributions. This can stimulate a rather action-oriented performance, though a more reserved mode is sometimes more appropriate, as the system has its own ways of generating texture independent of continual input.
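The recording principle described above might be sketched roughly as follows, in illustrative Python (the patch names, parameter sets and elaboration rule are hypothetical simplifications of what the text describes): each pad touch is stored as a (patch, parameters, relative time interval) tuple, and derived textural layers are generated by replaying memory with perturbed parameters.

```python
import random

class EventMemory:
    """A rough sketch of recording pad touches as data events and
    replaying them with parameter elaboration."""
    def __init__(self):
        self.events = []      # (patch, params, delta_t) tuples
        self._last_time = None

    def record(self, patch, params, now):
        """Store a performed event with its relative time interval."""
        delta = 0.0 if self._last_time is None else now - self._last_time
        self._last_time = now
        self.events.append((patch, dict(params), delta))

    def derived_layer(self, rng, depth=0.05):
        """Generate a textural layer from memory: the same events, with
        each parameter perturbed by up to +/- depth."""
        return [(patch,
                 {k: v * (1 + rng.uniform(-depth, depth))
                  for k, v in params.items()},
                 delta)
                for patch, params, delta in self.events]

mem = EventMemory()
mem.record("pulse", {"freq": 220.0, "amp": 0.4}, now=0.0)
mem.record("grain", {"freq": 880.0, "amp": 0.2}, now=0.7)
layer = mem.derived_layer(random.Random(2))
```

Because the replayed densities come from the stored moment-to-moment intervals rather than from a global distribution, the macrotexture inherits the pacing of the original performance gestures.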

In Spheroid, I envision the human as being inside the system rather than above or outside it.

The preparatory composition phase, consisting of building the algorithm, was in no way conceived in its entirety before I was able to play with it and hear the result. On the contrary, the algorithm grew organically out of an experimental process of coding, playing and listening. It therefore does not follow a neat architectural concept, and rules and formalizations are full of exceptions to account for factors discovered in practical development. At the same time, I have tried not to iron out the “algorithmic quality” of the piece, as I want to maintain some “robotic” motion in the sonic ecology. Structurally, there is a way of playing the piece that suits the way it was composed, although there is still plenty of scope for improvisation within that frame. The texture and the form have strong similarities between every performance, though this does not preclude other possibilities in terms of the order of entry of different materials. The structural core of the piece is an improvised phrase of pulse trains performed at the start which is then sequenced to loop and evolve in an irregular fashion as the piece progresses. These form an introductory passage that will carry on for an amount of time that is dependent on the length of the opening phrase. While this happens, the algorithm is automatically spawning and adding sounds to the sequence that are derived from the phrase, and once the first passage is completed, more complex interactions begin to take place, including disruptions to the core sequence. All material that enters here is either derived from or affected by the performance in some way. Any sounds performed in addition to the opening phrase will now begin to recur as the sequencing algorithm decides to throw them into the blend under certain conditions. This process involves some machine registering of particular properties that prompt the matching of sounds from data memories into morphological compounds.
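A highly simplified sketch of this structural core, with all specifics hypothetical (the real derivation and disruption logic is far richer): the opening phrase’s inter-onset intervals determine the length of the introductory passage, and each pass through the loop may spawn derived sounds into the sequence.

```python
import random

def intro_passage(phrase, rng, spawn_prob=0.3):
    """Sketch of the introductory passage: the improvised opening
    phrase, given as (sound, inter_onset_interval) pairs, loops for a
    number of passes proportional to its own duration; on each pass
    the algorithm may spawn sounds derived from phrase events."""
    duration = sum(ioi for _, ioi in phrase)
    n_passes = max(1, round(duration))  # longer phrase -> longer intro
    sequence = []
    for _ in range(n_passes):
        for sound, ioi in phrase:
            sequence.append((sound, ioi))
            if rng.random() < spawn_prob:
                # a derived variant with an irregularly stretched interval
                sequence.append((sound + "-variant",
                                 ioi * rng.uniform(0.5, 1.5)))
    return sequence

phrase = [("pulse-a", 0.5), ("pulse-b", 1.5)]  # 2.0 s total -> 2 passes
seq = intro_passage(phrase, random.Random(3))
```

The essential dependency — that the duration of the opening improvisation scales the introductory passage, while probabilistic spawning keeps each loop pass irregular — is the part of the sketch that corresponds to the text.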

There is a set of sound modules that are spread over pads and constitute the focus of the performance activity for the rest of the piece. As with all sound used or generated in the piece, the modules are synthesis-based and have velocity or aftertouch mappings for performing morphological variations with properties such as, for instance, frequency, onset and modulations, each of which is accessible at finger touch. Most of these modules have memory processes running in the background that store relevant data from performance, allowing for sequencing of the material. This is also true for some of the textures that are controlled globally with knobs: textural states are recorded when my knob-tweaking stops — on the assumption that I have found a state that I like — and then interpolated. Sequence textures, when initiated, are typically aware of one another’s existence, and their relative iterations are counted: if certain sequences are playing and have been going for a certain number of iterations, some kind of change may be initiated; if the piece has reached a certain average density of events, the texture will begin to gradually produce something new; and so on. All proportions are dependent on longer-term performance durations stored throughout the piece: for instance, if I spend a lot of time on the opening and what comes after, until one of the later important sequences is instigated, that duration will be compensated for later so that the piece will build up more quickly instead.
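The state-capture mechanism might be sketched like this — the settle time, parameter names and linear interpolation are my assumptions for illustration, not the work’s actual logic: a snapshot of the global texture parameters is stored once knob movement has been idle long enough, and stored states can then be interpolated between.

```python
class StateRecorder:
    """Capture a snapshot of global texture parameters whenever
    knob-tweaking has been idle for `settle` seconds, on the assumption
    that a liked state has been found. Illustrative sketch only."""
    def __init__(self, settle=2.0):
        self.settle = settle
        self.states = []
        self._pending = None
        self._last_move = None

    def knob_moved(self, params, now):
        self._pending = dict(params)
        self._last_move = now

    def tick(self, now):
        """Poll the clock; store the pending state once it has settled."""
        if (self._pending is not None
                and now - self._last_move >= self.settle):
            self.states.append(self._pending)
            self._pending = None

def interpolate(a, b, t):
    """Linear interpolation between two captured states, 0 <= t <= 1."""
    return {k: a[k] + (b[k] - a[k]) * t for k in a}

rec = StateRecorder(settle=2.0)
rec.knob_moved({"density": 2.0, "spread": 0.1}, now=0.0)
rec.tick(now=1.0)   # movement too recent: nothing stored yet
rec.tick(now=2.5)   # idle long enough: state captured
rec.knob_moved({"density": 8.0, "spread": 0.9}, now=3.0)
rec.tick(now=6.0)   # second state captured
mid = interpolate(rec.states[0], rec.states[1], 0.5)
# mid["density"] == 5.0
```

The same polling pattern could serve the iteration-counting conditions described above: counters checked on each tick, with thresholds triggering changes in the running sequences.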

Among the key challenges in composing this way are the decisions related to how much or which elements of structure to pre-define and what to leave open for the singularity of each performance instance. Gottfried Michael Koenig described this problem as a relationship between the potential form and the actual form:

The more closely the components of the form potential are woven together, the lesser the extent to which the various actualities differentiate with regard to one another and to the potential; the potential is then in practice the form itself. (Koenig 1971, 66)

And, in Xenakis’ words,

one establishes an entire range between two poles — determinism, which corresponds to strict periodicity, and indeterminism, which corresponds to constant renewal — that is, periodicity in the large sense. This is the true keyboard of musical composition. (Xenakis 1985, 180)

In Spheroid I would say that there are two kinds of determinism at work: one being the composed formalizations, the other the performance itself. Performance is deterministic in that I am causally affecting sound, but it is not predetermined, since it is improvised. Added to this are the indeterminate probabilistic elements in the algorithm. Thus, the algorithm is predetermined, but not deterministic. To my hearing, it is sufficiently smart that “a potentially enhanced sense of agency or otherness [is] brought about by automated processes and their unpredictability or apparent intentionality,” a quality of algorithmic music described by Andrew Brown (2016, 184).

Posthuman Acousmatics?

As the creator of the work, I should not have the last word on what Spheroid sounds like, but for what it is worth, I hear it as a laboratory process, where techno-human mechanisms are gradually aggregating disparate elements into a system whose parts are too awkward to be plausibly real, but responsive enough so that a form of organicity can be gleaned.

Video 1. Erik Nyström — Spheroid (2017), performed by the author during the 2017 Toronto International Electroacoustic Symposium at the Ernest Balmer Studio (Distillery District) on 10 August 2017. Video footage by Stefan A. Rose. Edited by Nicola Sersale. YouTube video “Erik Nyström 10.08.17 Toronto” (12:57) posted by “Erik Nystrom” on 27 January 2018.

As a final remark, invoking critical posthumanist discourse, I propose to speculate beyond the anthropocentric model of nature and the integrity of human instinct, toward a view of ourselves as syntheses, not separated from sound. There is no concrete material that can be collected, contained or controlled; it is alive and it drinks our blood. Several decades ago, in the Cyborg Manifesto, Donna Haraway wrote: “we are all chimeras, theorized and fabricated hybrids of machine and organism. In short, we are cyborgs” (Haraway 1991, 150). I suggest that the decoupling of causation in the acousmatic condition can leave us with the valuable perspective of bastard ontology: if the human has no true nature, it can hear no true sounds. Or: “The cyborg would not recognize the Garden of Eden; it is not made of mud and cannot dream of returning to dust.” (Ibid., 151). In contemporary computer music, the hybrid creative gesture of algorithms and human performance has formed a fruitful platform for a posthuman attitude to existence where, as Rosi Braidotti puts it, “matter is not dialectically opposed to culture, nor to technological mediation, but continuous with them” (Braidotti 2013, 35). Rather than evoking an essentialist world, the realm of speculative synthesis lies closer to Braidotti’s notion of zoe: “the dynamic, self-organizing structure of life [that] stands for generative vitality. It is the transversal force that cuts across and reconnects previously segregated species, categories and domains” (Ibid., 60).


Armstrong, Newton. “An Enactive Approach to Digital Musical Instrument Design.” Unpublished doctoral dissertation, Princeton University, 2006.

Becker, Rashad. Traditional Music of Notional Species Vol. I. Pan 34, 2013.

Braidotti, Rosi. The Posthuman. Cambridge: Polity Press, 2013.

Brown, Andrew R. “Performing With the Other: The relationship of musician and machine in live coding.” International Journal of Performance Arts and Digital Media 12/2 (December 2016) “Live Coding,” pp. 179–186.

Chadabe, Joel. “Interactive Composing: An Overview.” Computer Music Journal 8/1 (Spring 1984), pp. 22–27.

Chion, Michel. Guide to Sound Objects: Pierre Schaeffer and musical research. Trans. John Dack and Christine North. Paris: Buchet/Chastel, 2009. Available on the ElectroAcoustic Resource Site (EARS).

Eigenfeldt, Arne. “Real-time Composition as Performance Ecosystem.” Organised Sound 16/2 (August 2011) “Performance Ecosystems,” pp. 145–153.

Emmerson, Simon. Living Electronic Music. London: Routledge, 2007.

Green, Owen. “Agility and Playfulness: Technology and skill in the performance ecosystem.” Organised Sound 16/2 (August 2011) “Performance Ecosystems,” pp. 134–144.

Hagan, Kerry. “The Intersection of Live and Real-Time.” Organised Sound 21/1 (April 2016) “Style and Genre in Electroacoustic Music,” pp. 138–146.

Haraway, Donna. “A Cyborg Manifesto: Science, technology and Socialist-Feminism in the late twentieth century.” In Simians, Cyborgs and Women. London: Free Association, 1991, pp. 149–180.

Harrison, Jonty. “Diffusion: Theories and practices, with particular reference to the BEAST system.” eContact! 2.4 — Diffusion multicanal 1 / Multi-Channel Diffusion 1 (1999).

Hecker, Florian. Sun Pandämonium. Mego 44 CD, 2003.

Ingold, Tim. The Perception of the Environment: Essays in livelihood, dwelling and skill. London: Routledge, 2000.

Kane, Brian. Sound Unseen: Acousmatic sound in theory and practice. Oxford University Press, 2014.

Koenig, Gottfried Michael. Summary Observations on Compositional Theory. Utrecht: Institute of Sonology, 1971.

Lindborg, PerMagnus. “Reflections on Aspects of Music Interactivity in Performance Situations.” eContact! 10.4 — Temps réel, improvisation et interactivité en électroacoustique / Live Electronics, Improvisation and Interactivity in Electroacoustics (October 2008).

Magnusson, Thor. “Of Epistemic Tools: Musical instruments as cognitive extensions.” Organised Sound 14/2 (June 2009) “Interactivity in Musical Instruments,” pp. 168–176.

Nyström, Erik. Morphogenèse. empreintes DIGITALes, IMED-14129-CD, 2014.

_____. “Morphology of the Amorphous: Spatial texture, motion and words.” Organised Sound 22/3 (December 2017) “Which Words Can We Use Related to Sound and Music?”, pp. 336–344.

Peters, Deniz. “Touch: Real, apparent and absent — On Bodily Expression in Electronic Music.” In Bodily Expression in Electronic Music: Perspectives on Reclaiming Performativity. Edited by Deniz Peters, Gerhard Eckel and Andreas Dorschel. New York: Routledge, 2012, pp. 17–34.

Roads, Curtis. “The Art of Articulation: The electroacoustic music of Horacio Vaggione.” Contemporary Music Review 24/4 & 5 (October 2005) “Horacio Vaggione: Composition Theory,” pp. 295–309.

Rowe, Robert. Interactive Music Systems: Machine listening and composing. Cambridge MA: MIT Press, 1993.

Tudor, David. Rainforest (Versions I & IV). Mode 64 CD, 1998.

Varela, Francisco J., Evan Thompson and Eleanor Rosch. The Embodied Mind: Cognitive science and human experience. Cambridge MA: MIT Press, 1991/2016.

Waters, Simon. “Performance Ecosystems: Ecological approaches to musical interaction.” EMS 2007 — The “Languages” of Electroacoustic Music. Proceedings of the Electroacoustic Music Studies Network Conference (Leicester: De Montfort University, 12–15 June 2007).

Xenakis, Iannis. “Music Composition Treks.” In Composers and the Computer. Edited by Curtis Roads. William Kaufmann, 1985, pp. 172–192.
