Perceived Control and Mimesis in Digital Musical Instrument Performance

For centuries, the pipe organ was the most sophisticated synthesizer employed in Western music. The sounds it produced could be sculpted in novel and artificial ways, and were called upon from a keyboard that allowed for rapid and accurate control. Today, these same tasks are often centred on a general-purpose tool: the personal computer. The two technologies bring many obvious differences into relief, but they also share a significant similarity: both possess the ability to decouple a source of sound from its means of control. Decoupling is generally regarded as a feature exclusive to electronic or computer-based instruments, and with respect to the majority of acoustic instruments this is a fundamentally important distinction to make. In the woodwind family, for instance, the design of the mechanical means of pitch control is strictly constrained by the physical properties of an instrument’s standing wave. Digital musical instruments offer an unprecedented degree of freedom in relaxing these kinds of constraints, but it is important to recognize that (in one way or another) the free design of instrument control interfaces has been possible for quite some time. The pipe organ’s interface varied considerably over the course of its extraordinary lifetime, often in ways that were completely independent of the development of its sound generation unit. Theoretically, the freedom offered by 19th-century electric actions that output simple control voltages allowed for any imaginable type of playing interface; yet alternate controllers were not explored extensively in the years that followed. Instead, an established keyboard design and performance technique was exploited as an effective means of creating and performing an enormous amount of music for the instrument.

Standard control interfaces and playing techniques affect listeners as well as composers and performers. Equipped with a basic understanding of the physical relationship between a performer and any specific acoustic instrument, we generate expectations based on what we see and hear that play an important role in our perception of musical events as they unfold over time. When this basic understanding is missing, the result can be jarring. For instance, early criticism of laptop performance focused on the unclear nature of connections between physical action and sonic result.

The crucial missing element is the body… With the emergence of the laptop as instrument, the physical aspect of the performance has been further reduced to sitting on stage and moving a cursor by dragging one’s finger across a track pad in millimeter increments. (Ostertag 2002, 12)

Whilst we hear great leaps in sound in these performances, from screeching slabs of noise to fragile and tinkly droplets of high-end tones, the performer sits behind their screens with little or no perceivable movement, lost in thought as they manipulate files and patches. (Stuart 2003, 59)

There are many points of view on the subject of laptop performance, but it has carved out a permanent space for itself within the concert tradition, and some audiences have grown not to expect strong action-sound relationships. To give but one example, computer musician Pamela Z has said that she is perfectly content watching a performer “standing on stage just flipping knobs,” despite the fact that her own performance practice using the BodySynth controller is very consciously gestural (Alburger 1999, 3).

Whether one chooses to highlight, de-emphasize or ignore it, the perception of control is of central importance to the performance of computer-based music. Moving beyond the laptop-as-instrument to the more general topic of novel digital musical instruments, it is not only the instrument’s interface that can be freely designed — the degree of perceivable control associated with its performance can also be chosen quite freely. This is a fundamental design issue, and there are important distinctions to be made between instruments with respect to the way that action-sound relationships are addressed in each case. From this point of view, biosensor-based instruments require special consideration, as they present the additional possibility of state-sound rather than action-sound relationships. Though the physical / mental state of a performer is not necessarily visually apparent, it is a reliable and compelling means of controlling sound that has been exploited alongside (and in combination with) more physically overt methods. Taken as a whole, current instrument control interfaces form a continuum with respect to the perceivable control that is exhibited during the act of performance. In many ways, the choice of a position along this continuum relative to existing projects is a sensible starting place in the conception of new instruments, as it immediately clarifies considerations of hardware, software and synthesis mappings.

Here, we will consider the motivations and consequences connected with this fundamental choice in gradual stages. Starting with Eduardo Reck Miranda and Marcelo Wanderley’s model of the Digital Musical Instrument (DMI), described in their 2006 book New Digital Musical Instruments: Control and Interaction Beyond the Keyboard, we can identify qualities that are truly unique to DMIs, and gain an understanding of the relationship between the varied control interfaces currently in use. To discuss the way that DMI performers physically relate to their instruments in performance, we will look to frameworks for describing musical performance gesture, or “music-related movement,” as it is termed by Alexander Jensenius (2007). We can then attempt to locate the music-related movements of specific DMI performers, and consider how biosensor control schemes that sometimes involve no physical movement can be positioned relative to such frameworks.

The idea that there should be a perceivable correlation between the movements of a performing musician and the resulting sound is a long-standing assumption of the concert tradition. This is closely tied to virtuosity — an aspect of music that from the most cynical perspective is viewed as empty exhibitionism. Now that the physical nature of instrument performance can be so freely chosen, we should ask: are desires for clear action-sound relationships merely remnants of outdated instrument control schemes? Theories of mimetic response advanced by Arnie Cox (2001) and Rolf Inge Godøy (2003) and supporting research related to the mirror neuron system inform this question by presenting a physiological basis for these types of expectations. With this in mind, we can re-evaluate references to action-sound relationships that dismiss the phenomenon as extra-musical, or mere spectacle. At the same time, engaging with the idea of mimetic response to music gives us another perspective on the value of musical works that consciously deny expectations stemming from perceived control in performance, or seek methods of real-time control over sound that do not involve overt body movement. With evolving technology enabling increasingly sophisticated efforts in such areas, a state of flux has emerged that makes it difficult to predict what the factors behind expectations of performed music will be in the near future.

Digital Musical Instruments

Miranda and Wanderley present a very thorough catalogue of efforts in interface design for the control of computer-realized sound. Before covering specific instrument systems in depth, the authors put forward a simple but crucial model of DMIs in order to solidify concepts and terminology. The model is comprised of two primary units: a gestural controller and a sound production unit. A significant feature of DMIs is the artificially established mapping between these modules. It is fundamentally important to acknowledge that, individually, neither a gestural controller nor a sound production unit constitute a musical instrument. Both components and their network of connections must be considered as a whole.
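Because the mapping between these modules is designed rather than inherited from physics, it can be useful to picture the model in code. The following minimal sketch — with class, parameter and mapping names invented purely for illustration, not drawn from Miranda and Wanderley’s text — treats the controller, the mapping layer and the sound production unit as separate, freely recombinable components:

```python
# Minimal sketch of the DMI model: a controller, a sound production unit and an
# explicitly designed mapping joining them. All names are illustrative placeholders.

class GesturalController:
    """Emits named control streams (here, sensor values normalized to 0-1)."""
    def read(self):
        # In a real instrument these values would come from sensors.
        return {"pressure": 0.7, "position": 0.25}

class SoundProductionUnit:
    """Accepts synthesis parameters; here it simply reports them."""
    def set_params(self, frequency, amplitude):
        print(f"synth -> {frequency:.1f} Hz at amplitude {amplitude:.2f}")

class DigitalMusicalInstrument:
    """Neither module alone is an instrument; only the combination is."""
    def __init__(self, controller, synth, mapping):
        self.controller = controller
        self.synth = synth
        self.mapping = mapping  # function: control streams -> synthesis parameters

    def tick(self):
        self.synth.set_params(**self.mapping(self.controller.read()))

# One of many possible mappings: position selects pitch, pressure selects loudness.
def simple_mapping(controls):
    return {"frequency": 220.0 + 660.0 * controls["position"],
            "amplitude": controls["pressure"]}

dmi = DigitalMusicalInstrument(GesturalController(), SoundProductionUnit(), simple_mapping)
dmi.tick()  # prints: synth -> 385.0 Hz at amplitude 0.70
```

Swapping the mapping function — or the controller — produces a different instrument while the remaining components stay untouched, which is exactly the design freedom at issue here.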

With regard to perception of control, some important factors set DMIs apart from acoustic instruments. Aside from the modular design mentioned above, two other unique characteristics are the ability to reference an extremely diverse range of sound sources, and the ability to control musical processes on many different time scales.

Excluding systems that exploit robotic control of acoustic instruments, the sound production units of DMIs rely on loudspeakers to articulate sound. Consequently, the range of sound sources that can be convincingly evoked is enormous. It is worth considering whether or not such broad referentiality is one thread in the complex of factors contributing to criticism of DMI performance. Do the action-sound relationships of such performers present a unique challenge in this regard? Michel Chion’s discussion of the concept of synchresis suggests that there is no implicit difficulty surrounding the use of sounds that (in terms of physical source) are incongruent with their perceived causes.

Synchresis — a conflation of the words synchronism and synthesis — is “the spontaneous and irresistible weld produced between a particular auditory phenomenon and visual phenomenon when they occur at the same time. This join results independently of any rational logic” (Chion 1994, 63). Chion’s context for the term is cinema, where “causal listening is constantly manipulated,” and “most of the time we are dealing not with the real initial causes of the sounds, but causes that the film makes us believe in” (Ibid., 28). Likewise, performers can make us “believe in” the causal relationships they present in spite of any logical incongruities.

Though it is obvious, Chion specifies a condition to the potential relationship between an auditory and visual event: they must occur at the same time. Yet, in the context of music, a “weld” between action and sound does not necessarily require synchrony, and may vary in strength. In terms of the sonic time scale proposed in Curtis Roads’ Microsound (2001), the most straightforward action-sound relationships exist at the level of a sound object. Here, a single sound event with a clear onset and unambiguous decay simply needs to begin at an appropriate point in relation to a well-defined action. We can imagine a percussionist striking a surface with a mallet and hearing a piano note. The relationship might be unexpected (unless the surface is a piano), but through repeated instances we gain trust in the established cause and effect.

A different set of considerations arises when DMI performers control events at the next larger (meso) time scale, measured in seconds and associated with the traditional idea of a musical phrase. This would correspond to the mallet strike initiating a multi-onset melody that might unfold over twenty seconds. Continuing this line of thought to single actions that trigger events on the macro time scale (large-scale formal elements measured in minutes), we can begin to see that synchresis is bound in a complex relationship with the time scale of musical entities.

Locating Biosensor-based DMIs

In addition to a DMI model, Miranda and Wanderley (2006) also present an introductory gestural controller taxonomy that does not insist on discrete categorization. The four stable markers along its spectrum are augmented musical instruments, instrument-like gestural controllers, instrument-inspired gestural controllers and alternate gestural controllers.

As this discussion is centred on the task of creating novel, freely conceived control relationships, the most relevant point of reference is the alternate gestural controller, exemplified by The Hands (Waisvisz 1985), the T-Sticks (Malloch and Wanderley 2007), the Silent Drum (Oliver and Jenkins 2008) and the Peacock (Miyama 2010). However, biosensor-based controllers do not necessarily align with the notion of a gestural controller, and are notably absent from Miranda and Wanderley’s spectrum. For instance, the well-known BioMuse interface (Knapp and Lusted 1990) — which provides control streams related to both overt and covert processes of the human body — is not directly placed in this spectrum.

A more thorough approach to comparing DMI interface characteristics is proposed by Birnbaum et al. in “Towards a Dimension Space for Musical Devices,” where a multidimensional space is used to reveal complex relationships between controllers. As the variety of sensor technologies in use continues to grow, the need for a detailed means of comparing instruments grows as well. Birnbaum et al.’s dimension space for musical devices is intended to cover DMIs as well as interactive sound installations, and includes axes for required expertise, musical control, feedback modalities, degrees of freedom, inter-actors, distribution in space and role of sound.

Because these characteristics are quite varied (some are continuous, others discrete), the axes are not assumed to be orthogonal. Instead, peculiarities of individual instruments are reflected by the shape and area of the polygon formed by point positions on each axis. When considering DMIs exclusively, the single dimension used by Miranda and Wanderley to order instruments based on the nature of their gestural controllers can be incorporated as one axis of Birnbaum et al.’s multidimensional space. This will be considered again in the context of specific case studies. At this point, we can merely note the need for comparison of several aspects at once, and — in order to fully include biosensor-based instruments in future comparison schemes — adjust our reference to control interfaces as “controllers” rather than “gestural controllers.”
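As a rough illustration of how such a space supports comparison, the sketch below arranges a device’s scores radially and computes the area of the resulting polygon using the shoelace formula. The axis labels follow Birnbaum et al.’s proposal, but the numeric scores are invented for demonstration and do not represent any published analysis.

```python
import math

def polygon_area(values):
    """Area of the radar-style polygon formed by axis values (each roughly 0-1)."""
    n = len(values)
    pts = [(v * math.cos(2 * math.pi * i / n), v * math.sin(2 * math.pi * i / n))
           for i, v in enumerate(values)]
    return 0.5 * abs(sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
                         for i in range(n)))

axes = ["required expertise", "musical control", "feedback modalities",
        "degrees of freedom", "inter-actors", "distribution in space", "role of sound"]

# Hypothetical scores for two of the instruments discussed below (not published data).
silent_drum = [0.6, 0.5, 0.4, 0.7, 0.1, 0.1, 0.9]
biomuse = [0.8, 0.6, 0.2, 0.8, 0.3, 0.1, 0.9]

print("Silent Drum polygon area:", round(polygon_area(silent_drum), 3))
print("BioMuse polygon area:", round(polygon_area(biomuse), 3))
```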

Following Miranda and Wanderley, we will adopt the term alternate to refer to any type of controller that is not strongly related to the interface of a traditional acoustic instrument. By definition, the scope of this category is historically relative and cannot remain fixed. What is alternate today may become standardized in the future. At the time of an instrument’s conception, however, commitment to a novel alternate controller leaves many creative avenues open — the controller can be designed from the ground up in order to exploit arbitrary types of performance actions, and the process of composition is not influenced by long-standing idiomatic practices. But alternate controllers are also subject to difficulties that typically surround the unfamiliar. As Atau Tanaka has noted, it becomes the composer or performer’s responsibility to lead audiences toward an understanding of a DMI’s unconventional control scheme (Tanaka 2000, 398).

Action-Sound Relationships

The differences between digital and acoustic instruments raised in the previous section point to the importance of examining physical relationships between performers and instruments, and the way that performed actions correspond to sound. The discussion thus far has already made reference to Alexander Jensenius’ concept of action-sound, introduced in 2007. Here, we can consider some of the related details. The most basic distinction exists between action-sound relationships and action-sound couplings. Couplings represent correlations between action and sound that are bound by laws of nature. Jensenius gives the example of a glass falling toward the ground. As the glass falls, we anticipate a range of possible sonic outcomes: either the sound of shattering glass, or a simple “clink” (if we are lucky). If the impact instead produced the sound of an infant crying, it would defy our understanding of reality, leading us to formulate other possible explanations. These tendencies are strongly linked to survival instincts, and it is difficult to imagine functioning in everyday life without the support of reliable action-sound couplings. All couplings are also action-sound relationships, but the relationship category is open to cases that are inconsistent and entirely fabricated. For instance, Chion’s examples of synchresis must be classified as relationships, not couplings.

In the case of alternate controller DMIs (AC-DMIs), action-sound couplings are essentially impossible, which impacts the expected set of movements that accompany such instruments. What are the physical movements associated with performing AC-DMIs? Imagining a familiar acoustic instrument (a violin, cello or contrabass) it is easy to visualize the vocabulary of motions required to drive it. As a cellist leans forward to reach the end of the fingerboard, we expect to hear sounds in a higher register. In contrast, for the AC-DMI performer to produce the expectation of a specific sound, predictable consequences must be established in the course of a performance. These consistent action-sound relationships might be completely fixed or only temporarily stable from one section to the next, as the specific mappings of an AC-DMI can be changed at liberty.

Multi-percussion setups provide a reasonable analogy to this situation, where many of the instruments may be found objects that audiences have never heard in a musical context. Such diversity is encouraged by mainstays of the percussion repertoire like John Cage’s 27’10.554” (1956) and Morton Feldman’s The King of Denmark (1964), which call for fundamental instrumentation choices to be made at the discretion of the performer. Even for seasoned audiences, each performance of these pieces can require adjustment to new action-sound relationships. Members of the audience may have vague notions of the sounds that a particular set of objects will create (what Jensenius calls an action-sound palette), but accurate expectations on par with those accompanying familiar acoustic instruments are not in place until action-sound couplings are made manifest by the performer. Furthermore, through extended techniques and mallet changes, percussionists often subvert the expectations that begin to form in response to coupling perception.

In short, there is no reason to question the effectiveness of establishing and reassigning action-sound relationships in the course of a performance. Modern percussion performance practice has complicated our conception of musical instruments, and the field of AC-DMI design will continue to make its own contribution in this regard through the creation of novel action-sound relationships. More contentious is the notion of establishing state-sound relationships via the application of biosensing technology in musical performance. In relation to more conventional music-related movements (e.g. mouth, arm and leg motions), physical correlates to human emotional states — such as perspiration, heart rate and muscle tension — are practically imperceptible. As with musical dynamics, however, these factors are highly relative and have been successfully exploited in live performance.

Music-related Movement

In their 2000 article, “Gesture-Music,” Claude Cadoz and Marcelo Wanderley draw together and develop information from an impressive range of gesture-related theory and research, including work by Choi, Delalande and Ramstein. Building upon Cadoz and Wanderley’s three types of instrumental gesture, Jensenius outlines four areas of music-related movement: sound-producing actions, ancillary movements, sound-accompanying movements and communicative movements (Jensenius 2007, 46–47).

The established terminology put forward in these frameworks is critical for the analysis of DMI performance movements. The following provides a cursory review of only the most directly relevant areas of music-related movement, and the reader is encouraged to refer to the original sources for more detail and context.

Of the four broad categories proposed by Jensenius, sound-producing actions and ancillary movements incorporate the most fundamental aspects of traditional performance movement. Cadoz, Wanderley and Jensenius all describe two main types of sound-producing actions: excitation and modification. Jensenius further specifies the area with a varying scale of direct to indirect excitations, classified according to the level of remove between a player and the source of vibration. For instance, a pizzicato string articulation is purely direct, while a vibrating string that results from the mechanized activation of a piano key is more indirect. Rather than the instantaneous vs. continuous distinctions used by Cadoz and Wanderley, Jensenius describes excitation actions in terms borrowed from Godøy’s notion of the gestural-sonorous-object (Godøy 2006), i.e. impulsive, sustained or iterative. In the case of vibraphone performance, these descriptors would be appropriate for a mallet strike, a bowed bar, and a multiple-bounce roll, respectively. Both models include parametric and structural modification actions, exemplified by the changing of pitch on a string instrument’s fingerboard and the insertion of a mute into the bell of a trumpet, respectively.

The third type of instrumental gesture described by Cadoz and Wanderley — selection gesture — is not included in Jensenius’s area of sound-producing actions. Selection gestures relate only to choosing between different areas of an instrument and hence do not produce sound. The appropriate category for such motions is ancillary movement. Support movements also fall in this area, and both play an important role in the formation of short-term expectation. In the case of a pianist, the support movement of a raised arm for an excitatory sound-producing action telegraphs the sonic result of that action before it is heard. If complex æsthetic experience can be partly explained in terms of expectations, violations and explanations (Burns 2006), it is clear that supportive ancillary movements make a significant contribution to the process.

While there are a great number of further distinctions to be made within these frameworks, we can move on to introduce and apply some of them in context by considering specific examples of AC-DMI systems.

Case Studies

This section assesses three instruments in relation to the movement analysis frameworks cited above and emphasizes aspects of action-sound or state-sound relationships as fundamental characteristics of the various DMI systems currently in use. Video documentation of all three instruments within specific musical contexts is freely available and the reader is encouraged to engage with these resources as a crucial point of reference. The purpose of this section is not to provide an exhaustive categorization of performance events in each piece, but to make some general observations about the way that each example interfaces with existing terminology for describing real-time control of sound. Three basic criteria were used in order to choose the instruments considered below. First, their control interfaces can all be considered alternate. A specific focus on AC-DMIs is appropriate because the main concern of this paper is the free design of perceivable control, and alternate controllers impose the fewest pre-conceived ideas on performer-instrument interaction. Second, an effort was made to include AC-DMIs with both haptic and non-haptic interfaces. Put another way, some involve transitive action upon an external object, while others do not. Finally, this set of instruments falls across a considerable span of the continuum between overt and covert performance control, and raises both technical and conceptual differences between biosensor-based controllers and those with other means of capturing movement information.

The Silent Drum

Based on its name and appearance, Jaime Oliver and Matthew Jenkins’ Silent Drum (2008) may seem closer to what Miranda and Wanderley would term an instrument-inspired controller; however, its drum shell mounting is merely a means of suspending the actual interface, a flexible circular membrane. The performer generates control streams by pressing down on the membrane, deforming it into specific shapes that are captured via a high-speed digital camera, and analysed using custom software. Shape analysis is based on the identification of one primary peak (the deepest point) and several secondary peaks, which are then assigned different mappings based on class and position. For instance, the primary peak could be used to control the onset and fixed pitch of a tone, while secondary peaks are made to affect continuous aspects like loudness and timbre.
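To make the analysis chain concrete, the sketch below reduces a one-dimensional depth profile — standing in for one analysed video frame of the membrane — to a primary peak and secondary peaks, then maps them to synthesis parameters. It is an illustration only: the thresholds, mapping and data are invented, and Oliver’s actual software is considerably more sophisticated.

```python
# Illustrative sketch (not Oliver's analysis code): find the primary peak (deepest
# point) and secondary peaks in a depth profile of the membrane, then map them.

def find_peaks(depth_profile, secondary_threshold=0.2):
    """depth_profile: list of membrane depths (0 = at rest) across the drum width."""
    primary_index = max(range(len(depth_profile)), key=lambda i: depth_profile[i])
    primary = {"position": primary_index, "depth": depth_profile[primary_index]}
    secondary = []
    for i in range(1, len(depth_profile) - 1):
        local_max = depth_profile[i - 1] < depth_profile[i] >= depth_profile[i + 1]
        if local_max and i != primary_index and depth_profile[i] > secondary_threshold:
            secondary.append({"position": i, "depth": depth_profile[i]})
    return primary, secondary

def map_to_synth(primary, secondary):
    """Primary peak sets onset and pitch; secondary peaks shape loudness and timbre."""
    pitch = 48 + primary["position"] % 24                 # hypothetical pitch mapping
    loudness = min(1.0, sum(p["depth"] for p in secondary))
    return {"pitch": pitch, "loudness": loudness, "n_secondary": len(secondary)}

profile = [0.0, 0.1, 0.4, 0.9, 0.5, 0.3, 0.35, 0.1, 0.0]  # one analysed frame
print(map_to_synth(*find_peaks(profile)))
```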

Figure 1. Jaime Oliver performing on the Silent Drum. The “peaks” of the drum surface (pressed downwards) can be seen here. Photo © Diego Oliver.

Of the instruments reviewed here, the Silent Drum is the only example involving clear transitive actions — the performer undeniably acts upon the instrument as a separate entity. Though intransitive actions are also capable of conveying a sense of effort (which is related to the perception of control), this feature is often more readily understood in terms of a subject / object relationship. Further, this scenario provides an extremely straightforward context for sound-producing actions. Relative to a pizzicato string pluck, excitation actions performed on the Silent Drum are indirect, as a great deal of technology stands between the controller and the final point of sound production. On the scale of DMIs, however — where the final site of sound generation is invariably a speaker cone — the drum’s excitation actions are relatively direct because the onset of sound events can be tied to an obvious playing surface. Drawing on Cadoz and Wanderley’s notion of the gestural channel, we can also say that performance of the Silent Drum makes use of the ergotic function, which is associated with forces applied to an object.

Generally, the drum’s excitation actions can be classified as impulsive, though subsequent parametric modification actions that exploit the instrument’s flexible surface are continuous. Pointing to what will likely be an ongoing need for revision in music-related movement terminology over time, some excitation actions on the drum are best described as anti-impulsive. This is exhibited in a moment of the first piece written for the instrument, Silent Construction I (2008–09), where the initiation of a vibraphone chord is connected with release of the membrane. Whatever the case, once the consequences of an excitation are understood, the support movements leading up to an action significantly contribute to a dimension of musical anticipation. That is, our perception of the performer’s approach to the playing surface as fast, slow, aggressive or tender is meaningfully connected to the sounds we hear, before we hear them.

For AC-DMIs, the nature of a selection movement may be quite different from that of an acoustic instrument. Rather than motion toward or away from specific areas of the instrument, a selection movement might be tied to the simple press of a button that initiates a pre-established remapping between control streams and synthesis parameters. In such a case, the different “area” of the instrument is virtual, not physical. The performance of Silent Construction I involves such selection movements (discreetly triggered by the foot), resulting in a series of unique action-sound environments as the piece unfolds. Depending on the scope of the remapping and how strictly we intend to hold to the basic DMI model, it may be more accurate to consider such transitions changes of instrument entirely. More to the point, overt selection movements also exist in the case of the Silent Drum. Based on the distinction between primary and secondary peaks described above, the performer can access different types of sound events by reaching toward the right or left of the primary peak. Additionally, different sounds can be accessed based on the depth of the primary peak at the moment when secondary peaks are created. This type of selection movement involves both hands — preservation of a steady primary peak depth with one hand, and motion to the left or right of it with the other.
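Such a “virtual” selection movement can be pictured as nothing more than a swap of the active mapping table. In the sketch below — with preset contents and parameter names invented, not taken from Silent Construction I — a foot-switch press advances through prepared action-sound environments while the incoming control streams remain unchanged:

```python
# Hypothetical presets routing the same control streams to different synthesis
# parameters; a discreet foot-switch press selects the next "action-sound environment".
presets = [
    {"primary_depth": "grain_density", "secondary_count": "filter_cutoff"},
    {"primary_depth": "pitch_bend", "secondary_count": "reverb_mix"},
]
current = 0

def on_footswitch():
    """Advance to the next prepared mapping."""
    global current
    current = (current + 1) % len(presets)

def route(controls):
    """Send each named control stream to whatever parameter the active preset names."""
    return {presets[current][name]: value for name, value in controls.items()}

print(route({"primary_depth": 0.8, "secondary_count": 2}))
on_footswitch()
print(route({"primary_depth": 0.8, "secondary_count": 2}))
```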

The movement vocabulary surrounding the Silent Drum is compelling because it is novel and accessible at the same time. Action-sound environments employed by the performer are generally easy to grasp, but also rich enough in their mappings to give rise to spontaneous expressivity. [1. More information (texts, notation and audio/video documentation) about the Silent Drum can be consulted in the “Instruments” section of Jaime E. Oliver’s website.]

The BioMuse / Eric Lyon — Stem Cells

The BioMuse (Knapp and Lusted 1990) is perhaps the best-known musically oriented application of biosensing technology in use today. It has been used extensively as a compositional and performance medium by Atau Tanaka, chiefly for its data streams generated via electromyography (EMG) (Tanaka 2000). Delivered as MIDI control data, these streams furnish the user with information related to the activity of muscle tissue. The controller also provides MIDI data harvested from electroencephalogram (EEG) and electrooculography (EOG) signals, which reflect activity of the brain and eyes, respectively. Over time, the BioMuse system has been updated to include measurement of Galvanic Skin Response (GSR) and electrocardiogram (ECG) signals (Pérez and Knapp 2008). Because of this diversity, Knapp refers to the BioMuse as an Integral Music Controller (IMC), which allows both conventional movement-based control and “a direct interface between emotion and sound production unencumbered by the physical interface” (Knapp and Cook 2005). Although this description seems to categorize the controller’s performance movements as strictly intransitive, this is not the case with every application. For instance, the BioMuse can be used in conjunction with acoustic instruments or other external objects that provide haptic feedback.
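Because the BioMuse delivers its data as MIDI, receiving the biosignal streams in software is straightforward. The sketch below uses the mido library as one possible way of reading such streams; the controller-number assignments are hypothetical placeholders, not the device’s documented mapping.

```python
import mido  # assumes the mido MIDI library is installed

# Hypothetical controller numbers for biosignal streams (not the BioMuse's actual mapping).
EMG_LEFT_CC, EMG_RIGHT_CC, EOG_CC = 20, 21, 22

with mido.open_input() as port:                  # default MIDI input port
    for msg in port:
        if msg.type != "control_change":
            continue
        value = msg.value / 127.0                # normalize 0-127 to 0-1
        if msg.control == EMG_LEFT_CC:
            print("left forearm tension:", value)
        elif msg.control == EMG_RIGHT_CC:
            print("right forearm tension:", value)
        elif msg.control == EOG_CC:
            print("eye movement:", value)
```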

On its own, the BioMuse is only a controller and cannot properly be designated an instrument until its output is mapped to a specific sound generation system. A recent application of the BioMuse in Eric Lyon’s Stem Cells (2009) presents the opportunity to consider the controller in a specific instrumental context. Documented by Knapp and Lyon in 2011, the project took an existing composition for solo laptop and aimed to extend its performance control scheme to include movement- and emotion-based information. To explore distributed control, the measured emotional state of audience members was also incorporated into the musical system. [2. See “The Biomuse Trio in Conversation: An Interview with R. Benjamin Knapp and Eric Lyon” by Gascia Ouzounian in this issue of eContact!, where the members of the BioMuse Trio discuss the piece and its performance. Further information about the trio and video documentation of performances can also be consulted on the MuSE website.]

Regarding analysis of music-related movement, the most compelling aspect of Stem Cells is its innovative development of what can be called state-sound or thought-sound relationships. Specifically, the performed flow of the piece is dictated by transitions between sections that are “initiated by the evolution of emotional state of the performer” — such as a change from serenity to anger (Knapp and Lyon 2011, 418). Emotional state is inferred based on standard physiological information, including GSR, EEG, ECG, respiration rate and amplitude, and facial EMG activity (Ibid., 416). Under this approach, the notion of a selection movement carried out to access different parts of the instrument must be adjusted to include the possibility of selection thoughts or intents.
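One can imagine the mechanics of such a “selection thought” in roughly the following form. The sketch is purely illustrative: the feature names, weights, threshold and two-state model are invented, and the emotion classification actually used in Stem Cells is far richer.

```python
# Crude illustration of a state-sound transition: infer a coarse emotional label
# from (invented) physiological features and advance the section when it changes.

def classify_state(features):
    arousal = 0.5 * features["gsr"] + 0.3 * features["heart_rate"] + 0.2 * features["facial_emg"]
    return "anger" if arousal > 0.6 else "serenity"

def maybe_advance_section(previous_state, features, section):
    """Move to the next section of the piece when the inferred state changes."""
    state = classify_state(features)
    if state != previous_state:
        section += 1
        print(f"transition to section {section}: {previous_state} -> {state}")
    return state, section

state, section = "serenity", 1
state, section = maybe_advance_section(state, {"gsr": 0.9, "heart_rate": 0.7, "facial_emg": 0.5}, section)
```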

Within each section, more overt control over sound is effected through entirely intransitive movements of the arms and head. Head turning actions are mostly classifiable as excitatory because they are repeatedly used to trigger sound events with clear onsets (such as the vocal samples that enter at the midpoint of the piece). Taken as a whole, the performer’s arm motions are best described as continuous modification movements, as the rotation and tension of the forearms are used to affect the pitch, loudness and tempo of selected components that emerge from the ongoing spectral texture. [3. See Fig. 3 in G. Ouzounian, “The Biomuse Trio in Conversation.”] There are exceptions, however, such as excitation actions of outward grasping that initiate multiple-note figures. In total, these applications align with Knapp and Lyon’s stated goal: to create “a more visible instantiation of the piece” based on both “gestures and changes of emotion” (Ibid., 419).

Figure 2. R. Benjamin Knapp in a performance / demonstration of Eric Lyon’s Stem Cells (2009) for visiting students at the Sonic Lab, Sonic Arts Research Centre, Queen’s University Belfast, 28 July 2009. Photo © Javier Jaimovich.

In their performance evaluation of the piece, Knapp and Lyon state that audiences reported high understanding of the relationship between physical movements and the sounds created; however, relatively few understood that changes in emotion were also used to control the piece (Ibid.). Surely, this relates to difficulty in visually distinguishing the performer’s emotional state from an external point of view. Cues corresponding to emotion vary in perceptibility, and also depend on a given audience member’s distance from the performer. In some moments of Stem Cells, changes in sonic texture occur despite the fact that the performer’s posture and facial expression seem fixed; at other times, slow, deep breaths and the opening or closing of eyes coincide with these structural boundaries. At even the lowest levels of perceptibility, however, the recognition of control under this scheme remains an open possibility, and requires that performers and audience members become acutely attuned to extremely subtle changes. In cases of minimal visibility, as gradual transitions effected via selection thoughts are perceived aurally, a partial inversion of the conventional scenario is achieved that enables listeners to understand the performer’s physical state based on qualities of the music. In an indication that some form of emotional empathy occurred, a strong correlation was found between the GSR signals of the performer and an audience member (Ibid.).

The Xth Sense

The Xth Sense system (or XS) was conceived from the outset as a means of exploiting the human body as a self-contained musical entity (Donnarumma 2011). It depends on basic vibrations of muscle tissue as both a source of control data and as an audio signal to be manipulated by these same data streams. Donnarumma’s notion of “biophysical music” (Donnarumma 2012, 1) lends a strong conceptual aspect to the project, where the key feature is a feedback loop in which actions affect sounds that affect actions. As opposed to the BioMuse (which captures EMG signals), the principal technique driving the XS system is called acoustic mechanomyography, or MMG. Simple unobtrusive microphones are attached to the performer’s forearms, delivering two audio signals to a computer running custom analysis software. After analysis and pre-processing of the control signals, each arm provides variations on a continuous parameter related to signal amplitude, and six discrete threshold-based triggers that indicate flexing of each individual finger and the wrist as a whole. In the opening moments of Music for Flesh II (2011), these control streams are utilized to effect subtle and continuous changes in timbre and to trigger pitched treatment of raw muscle sounds in relation to specific fingers.
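In outline, the per-arm feature extraction can be imagined as follows. This is not Donnarumma’s code: the block of samples, the threshold and the single trigger are invented, and the actual system derives six per-finger and wrist triggers from a richer analysis of the muscle-sound signal.

```python
# Sketch of reducing one block of MMG audio to a continuous amplitude parameter
# and a simple threshold-based trigger. All values are invented for illustration.

def rms(frame):
    """Root-mean-square amplitude of one block of muscle-sound samples."""
    return (sum(x * x for x in frame) / len(frame)) ** 0.5

def analyse_arm(frame, trigger_level=0.1):
    envelope = rms(frame)                      # continuous control (e.g. timbre, loudness)
    flex_trigger = envelope > trigger_level    # discrete event (e.g. start pitched treatment)
    return envelope, flex_trigger

frame = [0.02, -0.12, 0.18, -0.09, 0.11, -0.2]
print(analyse_arm(frame))                      # -> (0.1338..., True)
```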

To preserve the conception of the human body as an independent musical system, sound-producing actions under the XS are intransitive, comprised of a vocabulary of flexing motions performed in open air by the arms, wrists and fingers of the performer. But absence of an external sound-producing object does not prevent the XS performer from developing a palpable aura of effort. The blend of frozen postures and turbulent movements required to achieve specific audio output outwardly demonstrates the biological performance feedback system that the XS project engenders. With reference to Cadoz’ gestural channel, the lack of an object for the performer to manipulate precludes conventional involvement of the ergotic function in XS movements. Extending this concept, however, Donnarumma proposes that:

a performer can produce specific (physical) phenomena by mastering the tension of her own body (the ergotic function), while experiencing the enactment of a higher muscular and articulatory sensitivity (the epistemic function). [Donnarumma 2012, 3]

In this sense, the ergotic function is activated because force is applied to the flesh, and because the performer perceives these changes through the flesh, the epistemic function (i.e., perception of the environment through gesture) is also involved.

Figure 3. Marco Donnarumma performing using his Xth Sense system. Photo © Dimitris Patrikios.

Complicating matters further, the XS demands an extension of the traditional notion of direct vs. indirect excitation actions. From one perspective, a great deal of technology stands between the performer’s actions and the corresponding sounds that emanate from loudspeakers. On the other hand, when the fundamental sound being heard is the activation of muscle tissue, the related action can be thought of as maximally direct: the action is the sound. One level higher in the framework, with respect to the categories of excitation versus modification, it is difficult to classify any given action with finality. A sharp flex of the wrist excites sound in the muscle tissue, but depending on the mappings in use it may also modify its own sound. Such contradictions point to the fact that the ultimate value of consistent terminology in discussing or documenting such projects is the facilitation of discourse rather than strict categorization of the resulting sound types.

Video 1 (2:48). An excerpt of Marco Donnarumma performing Music for Flesh II (2011), an interactive music performance for enhanced body using the Xth Sense biophysical instrument, designed by the performer. The artist explains in a voiceover how the system works. Performed during the BEAM Night (Brunel Electronic and Analogue Music) at Cafe Oto in London (UK) on 4 April 2012.

Strictly speaking, the category of selection movements (and ancillary movements in general) is not open to movements that produce sound. Nevertheless, suspending the conceptual complexities of the XS for the moment, the identification of XS selection movements can be fairly straightforward, as they involve varying states of muscle tension and orientation that are visually apparent. For example, clenching of the fist, gentle fluttering of the fingers and extreme outstretching of the arms are typical in performance. Compositionally, different adaptive audio treatments can be prepared in advance to tie specific physical states to unique musical interaction schemes. To move between these planned systems and evoke appropriate types of raw sound material from the body, the performer must develop sophisticated control over degrees of tension in various physical positions. Outwardly, these selection movements are quite clear, establishing an embodied key to the different sonic textures used in a performance. For instance, the entrance of the performer’s left arm is a dramatic moment in Music for Flesh II, as it remains completely still during the introductory section. Raising of the left arm into a new position constitutes a selection movement employed to access a different sonic region of the instrument, at the same time reflecting large-scale musical structure by creating visual contrast from the introduction in terms of the performer’s posture. [4. More information (texts and audio/video documentation) about Xth Sense can be consulted in the “Works” section of Marco Donnarumma’s website. A “Media Gallery” featuring works and performances by Marco Donnarumma is also published in this issue of eContact!]

Conclusions

Based on this review, it is clear that the perceivable control exhibited by AC-DMIs varies considerably in both magnitude and nature. Frameworks for music-related movement must be stretched accordingly, even to the point of adopting broader language: in the case of action-sound relationships, for instance, we can term these phenomena physio-sonic relationships when needed. Of course, the utility of existing terminology endures in discussions specific to movement-based control schemes. With respect to comparative instrument dimension spaces like that proposed in Birnbaum et al.’s 2005 article “Towards a Dimension Space for Musical Devices,” the variation in perceivable control seen in the cases above indicates that this feature could be incorporated as a useful axis of their basic model. They encourage additions and substitutions, but also emphasize that a standard set of axes may eventually emerge for particular types of systems. The identification of consistently useful dimensions for DMI description is therefore an important focus of research. Birnbaum et al.’s dimension space has been applied and modified in order to clarify conceptual aspects of general musical systems (Magnusson 2010), but thus far a specific analysis of DMIs has not been attempted with both Miranda and Wanderley’s controller spectrum and perceivable control included as axes. As embodied approaches to computer music performance continue to flourish, a thorough study along these lines would provide valuable insight for composers, performers and instrument designers.

The Mimetic Hypothesis

The detailed treatment that music-related movement is given in the frameworks described above speaks to its importance as a musical dimension. Further, the previous section demonstrates that more general control scheme analysis based on these models can provide useful distinctions between instances of AC-DMI performance. Moving forward, in order to support the notion that physio-sonic relationships constitute a fundamental aspect of music perception, we will require more than well-formulated terminology. To that effect, a compelling intersection of abstract concepts and concrete findings can be seen between Godøy’s notion of the gestural-sonorous-object, Cox’s mimetic hypothesis, and the human audiovisual mirror neuron system.

Rolf Inge Godøy draws on a classical concept from electronic music — Pierre Schaeffer’s objet sonore — to assert that all sounds have implicitly gestural associations (Godøy 2006). This idea seems to contradict the standard notion of a sonorous object as a purely abstract sonic entity arrived at through a process of reduced listening that actively attempts to nullify the cultural and causal significations of a sound. In response to potential skepticism, Godøy is quick to provide Schaeffer’s own words, which acknowledge the fact that it is impossible to rigidly adhere to a single mode of listening, and that the focus of listening attention will inevitably pass unconsciously from one system to another. In other words, there is room to consider the sonorous object in conjunction with other modes of perception. In fact, Godøy claims that Schaeffer’s methods for excising a sonorous object from a recording — and the very descriptors he applied to his classifications in his 1966 Traité des objets musicaux — are themselves evidence of the connections between movement and sound. In Godøy’s words, “there is a gesture component embedded in Schaeffer’s conceptual apparatus” (Godøy 2006, 154).

Building on the relatively easy-to-accept notion that the perception of sound might generate images of related movements, Godøy reasserts his previously expressed belief in a motor-mimetic component of music perception:

There is an incessant simulation and reenactment in our minds of what we perceive and a constant formation of hypotheses as to the causes of what we perceive and the appropriate actions we should take in the face of what we perceive. I believe this points in the direction of what I would like to call a motor-mimetic element in music perception and cognition, meaning that we mentally imitate sound-producing actions when we listen attentively to music. (Godøy 2003, 318)

The instinctual need to account for the sounds in our environment does seem to extend to benign as well as life-threatening situations. With regard to the implications of motor-mimesis playing an active role in music perception, it would go far in explaining the frustrated reactions to laptop performance referenced above. It is not as if the process of relating the minimal actions of a performer to a dense stream of musical events is an outrageously difficult task for contemporary audiences — it is that such a relationship between performer movement and sound may not clearly invite processes of re-enactment or empathy. If human beings are unconsciously accustomed to following a musical performance through vicarious participation or imagined action, it is plausible that efforts to understand and relate to music largely depend on clear physio-sonic relationships.

Arnie Cox begins his presentation of a similar theory — the mimetic hypothesis — with an invitation:

Recall the Beethovenian theme of the last movement of Brahms’s 4th Symphony. As you recall, ask whether your voice is involved or activated in any way, whether imagining singing, or singing along, or feeling only the impulse to sing along. (Cox 2001, 195)

In what follows, Cox introduces the idea of subvocalization, which in its most subtle form is the mere impulse to vocalize. The article suggests that subvocalized mimetic response to music is responsible for both the culturally ingrained metaphor of “greater is higher” in relation to pitch, and the unidirectional application of vocally-inspired terminology to instrumental sound (e.g., cantabile, sotto voce, etc.). A fundamental assumption of the argument is that while children take part in overt mimesis, the mimetic participation of adults gradually becomes covert, yet never disappears completely. In the case of music perception, this would mean precisely what Godøy implies above — that we are covertly but incessantly imitating the sounds that we hear.

Subvocalization is the type of motor-mimesis that Cox sees as most important, but in the case of musical instruments, he suggests that a range of motoric responses exist. According to Cox, upon hearing sounds we first react through motor mimesis, then compare that motor response to motor patterns associated with our previous experience of creating the sound (or a similar sound) ourselves. This is a direct comparison if we actually have experience making the sound in question, and indirect if not. Thus, when we hear an instrument, mimesis enables both a direct comparison to our own experience of making such sounds, as well as an indirect comparison to making similar sounds via other types of action (perhaps vocalizing).

Suspending any disbelief that we automatically engage in such a process, we can consider the implications as explained by Cox. The most radical suggestion is that “speech imagery and musical imagery are actually special cases of motor imagery in general” (Cox 2001, 201). That is, our systems for imagining the sounds of speech and music could be subsystems of imagining movement. If true, this would have further implications in the area of musical affect — a search for understanding why it is that music should elicit such strong emotional responses. In Cox’s words, “the hypothesis suggests that muscular-emotional response to music is… integral to how we normally perceive and understand music, because we normally imagine (most often unconsciously) what it is like to make the sounds we are hearing” (Ibid., 205).

The notions of musical mimetic response put forward by Godøy and Cox, compelling as they may be, cannot be taken at face value; however, their significance to discourse on AC-DMI design and performance is obvious. The most extreme consequence of these theories is that music is understood only through movement. DMI controllers such as the Silent Drum and Xth Sense easily engage with and reinforce this idea, but an interesting complication arises with purely thought-based control via the BioMuse. In conjunction with motor responses, we can entertain the possibility of mimetic thought responses and consider how the two might relate.

The Mirror Neuron System

Neuroscience research from the early 1990s identifying so-called mirror neurons in the macaque monkey premotor cortex has garnered a great deal of attention in recent years. Here, it provides support for theories of embodied music cognition. Mirror neurons fire during both the execution and mere observation of specific actions (Rizzolatti and Craighero 2004). By recording the discharge patterns of mirror neurons, it has been established that this area of the brain reacts similarly whether a monkey picks up a piece of food or observes another monkey picking up a piece of food. This phenomenon occurs at various distances, with various objects, and regardless of whether a monkey or a human carries out the observed action. A hypothesized function of these neurons is that they play an important role in processes of imitation and action understanding.

Giacomo Rizzolatti and Laila Craighero cite two experiments that test this hypothesis (Umiltà et al., 2001; Kohler et al., 2002). In the latter study, the authors exploited actions with prominent sonic by-products to show that movement can be understood without visual information. Thus, in the absence of visual cues, it is suggested that mirror neurons mediate understanding of actions that are taking place. The response of mirror neurons was recorded in relation to the sight and sound (V+S) and sound alone (S) of two actions: paper ripping and the dropping of a stick. In both the (V+S) and (S) cases, increased activity of these neurons clearly aligned with the sound event, with a slightly weaker response in the case of (S). It should be noted that the authors recorded responses to non-action-related sounds as well. Neither white noise nor synthesized clicks elicited an increase in activity whatsoever, indicating the specific significance of established action-sound relationships. The subset of mirror neurons that responded strongly in both the (V+S) and (S) cases were named audiovisual mirror neurons.

Audiovisual mirror neuron research has established that for macaque monkeys, sound perception involves the firing of cells that are associated with the actual execution of movements known to make the perceived sound in the first place. According to these results, sound literally elicits an appropriate motor response. With regard to whether or not a similar system exists in human beings, Rizzolatti and Craighero’s references and subsequent studies carried out by others (Aziz-Zadeh et al., 2004; Gazzola et al., 2006) point to the affirmative. The relevance of these findings to the mimetic hypothesis is that sounds alone may indeed trigger mimetic responses as we listen to music.

Building on studies establishing human motor response to speech sounds, Lisa Aziz-Zadeh et al. set out to investigate other action-related sounds. Paper tearing and typing sounds (action-sound relationships associated with the hand) and control sounds (footsteps and thunder) were played to participants while transcranial magnetic stimulation (TMS) was applied. During this process, motor evoked potentials (MEPs) were recorded from the hand muscle associated with the stimulated hemisphere. The study documented significantly larger MEP amplitudes when participants heard the hand-related sounds during stimulation of the left hemisphere, indicating that “in the left hemisphere, actions are coded through auditory, visual and motor components.” In short, simply hearing hand-related sounds resulted in motor response, while footsteps and thunder clearly did not. This is in spite of the fact that participants did not report mentally imagining the actions associated with sounds presented to them during trials.

In 2006, Valeria Gazzola, Lisa Aziz-Zadeh and Christian Keysers investigated audiovisual mirror neuron properties from both directions, recording activity while participants listened to sounds and when they later performed the actions associated with these sounds. Monitoring was carried out using functional magnetic resonance imaging (fMRI). The study demonstrated further selective characteristics of the mirror system, identifying regions that were activated by either the sound or performance of hand-related actions but not mouth-related actions, and vice-versa.

Music-specific studies making use of fMRI have shown that levels of musical training have a strong effect on response patterns. Thus, we must be careful not to make generalizations concerning perception with respect to these issues. In 2005, Haslinger et al. presented pianists and a non-musician control group with video of piano playing actions without sound. Pianists exhibited activation in auditory areas, while non-musicians did not. Consistent brain activity unique to professional pianists vs. non-musicians when either listening to piano tones or playing a mute piano keyboard was identified by Marc Bangert et al. in 2006.

Changes in motor response have also been tracked relative to short-term musical training. Focusing on a specific action-sound relationship — right-handed piano playing — an experiment described by Amir Lahav, Elliot Saltzman and Gottfried Schlaug in 2007 required non-musicians to learn 24-note, 15-second melodies by ear, and later monitored frontoparietal motor-related brain regions via fMRI during a variety of listening situations. Participants were instructed to remain completely still while listening, to eliminate the possibility that recorded motor activity could be due to actual movement. The authors identified motor areas that were active only when participants heard the melodies they had practiced. Unfamiliar melodies composed of different pitches failed to cause any such response; however, unfamiliar sequences made up of the same pitches used in a practiced melody triggered a degree of motor activity as well, despite the fact that “subjects were completely unaware… that the piece was composed of the same notes as the trained-music.” Musical novices thereby exhibited motor responses keyed to the specific, absolute pitches they had learned to play.

If the mere sound of a piano can evoke motor imagery of piano performance movements and vice-versa, could listening to a recording of an AC-DMI performance by artists that we have witnessed in action elicit appropriate motor mimesis? This would require that connections between action and sound are able to form within a very short time span. A statement from David Rokeby, creator of the Very Nervous System, which tracks body motion to control sound, seems to confirm this possibility anecdotally:

An hour of the continuous, direct feedback in this system strongly reinforces a sense of connection with the surrounding environment. Walking down the street afterwards, I feel connected to all things. The sound of a passing car splashing through a puddle seems to be directly related to my movements. (Rokeby 1998, 28)

In Rokeby’s case, the consistent practice of artificially triggering sound via body motion produced a state of mind in which he perceived his body to be controlling independent sound events outside of the video tracking environment. This perceived control could actually be the result of a learned mimetic response.

The full implications of Rizzolatti et al.’s theories of action understanding and reproduction via the human mirror neuron system are contentious (Turella et al., 2009; Hickok 2009). However, the notion that viewing actions or simply hearing sounds associated with certain actions triggers activity in the motor systems of our brains appears to be firmly established. At the very least, conclusions from the research reviewed in this section are relevant to both Godøy and Cox’s propositions. To a certain extent, their intuitions are supported by empirically determined evidence.

Conclusion

Indications that music perception involves covert motor mimesis help us view appreciation of instrumental virtuosity in a more favourable light, and provide meaningful insight for creative choices centred on perceivable control in DMI design. By challenging well-worn paths of motor mimesis in music, biosensor-based DMIs present a compelling special case, exploring the extent to which mimesis is a flexible phenomenon based on context and experience, and stretching existing frameworks for describing music-related movement. They also raise the opportunity to identify palpable state-sound relationships relative to emotion, which carry implications beyond the scope of musical performance.

The performance of sound via thought poses the most difficult questions with regard to the notion of mimesis. If we accept Cox’s proposal that listeners process sounds by imagining how to create them, it is likely that their responses will be based primarily on the understood origin of the synthesized sound. For example, when thoughts of anger are made to trigger recordings of a crash cymbal, it seems safe to assume that movements associated with cymbal performance will dominate mimetic processes. However, with less clearly defined sonic origins in play, the possibilities are quite open. It is plausible that our ability to associate thoughts with sounds could be refined through continual performance and perception of music generated in this manner, enhancing our awareness of the subtleties associated with emotional states in general. After years of experience, we might ask whether strong physio-sonic relationships are capable of triggering mimesis of thoughts rather than motor patterns. It is no exaggeration to say that there are both musical and social consequences tied to continually redefining physio-sonic relationships. As various combinations are played out in the arena of musical performance, we can only speculate on the repercussions.
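
To make the kind of physio-sonic mapping evoked above more concrete, the following minimal sketch (in Python) is purely illustrative: it assumes that some external process already supplies a normalized “anger” intensity value between 0 and 1, and it simply gates a pre-recorded crash cymbal sample when that value crosses a threshold. All names, thresholds and the hysteresis logic are hypothetical and do not describe any of the systems discussed in this article.

from dataclasses import dataclass

@dataclass
class CymbalTrigger:
    """Fire a crash cymbal sample when an assumed 'anger' intensity rises past a threshold."""
    threshold: float = 0.7   # hypothetical firing level (0.0 = calm, 1.0 = maximal intensity)
    armed: bool = True       # prevents re-triggering on every successive reading

    def process(self, intensity: float) -> bool:
        """Return True when the sample should be played for this reading."""
        if self.armed and intensity >= self.threshold:
            self.armed = False          # disarm until the signal falls again
            return True
        if intensity < self.threshold * 0.5:
            self.armed = True           # simple hysteresis: re-arm on calm readings
        return False

# Usage with a made-up intensity trace:
trigger = CymbalTrigger()
for value in [0.10, 0.40, 0.80, 0.90, 0.30, 0.85]:
    if trigger.process(value):
        print(f"intensity {value:.2f} -> play crash cymbal sample")

Even in this toy form, the sketch makes the design question visible: the listener hears only the cymbal, and whatever mimetic response arises depends on what they believe stands behind the trigger.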

Bibliography

Alburger, Mark. “Z’wonderful, Z’marvelous: An Interview with Pamela Z.” 20th Century Music 6/3 (1999), pp. 1–16.

Aziz-Zadeh, Lisa, Marco Iacoboni, Eran Zaidel, Stephen Wilson and John Mazziotta. “Left Hemisphere Motor Facilitation in Response to Manual Action Sounds.” European Journal of Neuroscience 19 (January 2004), pp. 2609–2612.

Bangert, Marc, Thomas Peschel, Gottfried Schlaug, Michael Rotte, Dieter Drescher, Hermann Hinrichs, Hans-Jochen Heinze and Eckart Altenmüller. “Shared Networks for Auditory and Motor Processing in Professional Pianists: Evidence from fMRI conjunction.” NeuroImage 30/3 (April 2006), pp. 917–926.

Birnbaum, David M., Rebecca Fiebrink, Joseph Malloch and Marcelo M. Wanderley. “Towards a Dimension Space for Musical Devices.” NIME 2005. Proceedings of the 5th International Conference on New Interfaces for Musical Expression (Vancouver: University of British Columbia, 26–28 May 2005), pp. 192–195.

Burns, Kevin. “Bayesian Beauty: On the ART of EVE and the act of enjoyment.” AAAI 2006. Proceedings of the AAAI-06 Workshop on Computational Aesthetics (Boston MA, USA, 16–20 July 2006).

Cadoz, Claude and Marcelo Wanderley. “Gesture-Music.” Trends in Gestural Control of Music. Edited by Marcelo M. Wanderley and Marc Battier. Paris: IRCAM — Centre Pompidou, 2000, pp. 71–94.

Chion, Michel. Audio-Vision: Sound on Screen. Trans. Claudia Gorbman. New York: Columbia University Press, 1994.

Cox, Arnie. “The Mimetic Hypothesis and Embodied Musical Meaning.” Musicæ Scientiæ 5/2 (Fall 2001), pp. 195–212.

Donnarumma, Marco. “Xth Sense: Researching muscle sounds for an experimental paradigm of musical performance.” LAC 2011. Proceedings of the Linux Audio Conference 2011 (Maynooth, Ireland: National University of Ireland Maynooth Department of Music, 6–8 May 2011).

_____. “Music for Flesh II: Informing interactive music performance with the viscerality of the body system.” NIME 2012. Proceedings of the 12th Conference on New Interfaces for Musical Expression (Ann Arbor MI, USA: University of Michigan at Ann Arbor, 21–23 May 2012).

Gazzola, Valeria, Lisa Aziz-Zadeh and Christian Keysers. “Empathy and the Somatotopic Auditory Mirror System in Humans.” Current Biology 16 (September 2006), pp. 1824–1829.

Godøy, Rolf Inge. “Motor-Mimetic Music Cognition.” Leonardo 36/4 (August 2003), pp. 317–319.

_____. “Gestural-Sonorous Objects: Embodied extensions of Schaeffer’s conceptual apparatus.” Organised Sound 11/2 (August 2006) “Identity and Analysis,” pp. 149–157.

Haslinger, Bernhard, Peter Erhard, Eckart Altenmüller, Ulrike Schroeder, Henning Boecker and Andrés Ceballos-Baumann. “Transmodal Sensorimotor Networks During Action Observation in Professional Pianists.” Journal of Cognitive Neuroscience 17/2 (February 2005), pp. 282–293.

Hickok, Gregory. “Eight Problems for the Mirror Neuron Theory of Action Understanding in Monkeys and Humans.” Journal of Cognitive Neuroscience 21/7 (July 2009), pp. 1229–1243.

Jensenius, Alexander R. “Action-Sound: Developing methods and tools to study music-related body movement.” Unpublished PhD dissertation. University of Oslo, 2007.

Knapp, R. Benjamin and Hugh Lusted. “A Bioelectric Controller for Computer Music Applications.” Computer Music Journal 14/1 (Spring 1990) “New Performance Interfaces (1),” pp. 42–47.

Knapp, R. Benjamin and Perry R. Cook. “The Integral Music Controller: Introducing a Direct Emotional Interface to Gestural Control of Sound Synthesis.” ICMC 2005: Free Sound. Proceedings of the International Computer Music Conference (Barcelona, Spain: L’Escola Superior de Música de Catalunya, 5–9 September 2005).

Knapp, R. Benjamin and Eric Lyon. “The Measurement of Performer and Audience Emotional State as a New Means of Computer Music Interaction: A Performance Cage Study.” ICMC 2011: “innovation : interaction : imagination”. Proceedings of the International Computer Music Conference (Huddersfield, UK: Centre for Research in New Music (CeReNeM) at the University of Huddersfield, 31 July – 5 August 2011), pp. 415–420.

Kohler, Evelyne, Christian Keysers, Maria Alessandra Umiltà, Leonardo Fogassi, Vittorio Gallese and Giacomo Rizzolatti. “Hearing Sounds, Understanding Actions: Action representation in mirror neurons.” Science 297 (August 2002), pp. 846–848.

Lahav, Amir, Elliot Saltzman and Gottfried Schlaug. “Action Representation of Sound: Audiomotor recognition network while listening to newly acquired actions.” Journal of Neuroscience 27/2 (January 2007), pp. 308–314.

Magnusson, Thor. “An Epistemic Dimension Space for Musical Devices.” NIME 2010. Proceedings of the 10th International Conference on New Interfaces for Musical Expression (Sydney, Australia: University of Technology Sydney, 15–18 June 2010), pp. 43–46.

Malloch, Joseph and Marcelo Wanderley. “The T-Stick: From musical interface to musical instrument.” NIME 2007. Proceedings of the 7th International Conference on New Interfaces for Musical Expression (New York: New York University, 6–10 June 2007).

Miranda, Eduardo Reck and Marcelo Wanderley. New Digital Musical Instruments: Control and Interaction Beyond the Keyboard. Middleton WI: A-R Editions Ltd., 2006.

Miyama, Chikashi. “Peacock: A non-haptic 3D performance interface.” ICMC 2010. Proceedings of the International Computer Music Conference (New York: Stony Brook University, 1–5 June 2010), pp. 443–445.

Oliver, Jaime and Matthew Jenkins. “The Silent Drum Controller: A new percussive gestural interface.” ICMC 2008. Proceedings of the International Computer Music Conference (Belfast: SARC — Sonic Arts Research Centre, Queen’s University Belfast, 24–29 August 2008).

Ostertag, Bob. “Human Bodies, Computer Music.” Leonardo Music Journal 12 (December 2002) “Pleasure,” pp. 11–14.

Pérez, Miguel A. and R. Benjamin Knapp. “BioTools: A Biosignal Toolbox for Composers and Performers.” Computer Music Modeling and Retrieval. Sense of Sounds: Lecture Notes in Computer Science. Edited by Richard Kronland-Martinet, Sølvi Ystad and Kristoffer Jensen. Berlin: Springer, 2008, pp. 441–452.

Rizzolatti, Giacomo and Laila Craighero. “The Mirror-Neuron System.” Annual Review of Neuroscience 27 (2004), pp. 169–192.

Roads, Curtis. Microsound. Cambridge, MA: MIT Press, 2002.

Rokeby, David. “The Construction of Experience: Interface as context.” Digital Illusion: Entertaining the Future with High Technology. ACM Press, 1998, pp. 27–47.

Schaeffer, Pierre. Traité des objets musicaux. Paris: Éditions du Seuil, 1966.

Stuart, Caleb. “The Object of Performance: Aural performativity in contemporary laptop music.” Contemporary Music Review 22/4 (2003), pp. 59–65.

Tanaka, Atau. “Musical Performance Practice on Sensor-based Instruments.” Trends in Gestural Control of Music. Edited by Marcelo M. Wanderley and Marc Battier. Paris: IRCAM — Centre Pompidou, 2000, pp. 389–406.

Turella, Luca, Andrea C. Pierno, Federico Tubaldi and Umberto Castiello. “Mirror Neurons in Humans: Consisting or confounding evidence?” Brain and Language 108 (January 2009), pp. 10–21.

Umiltà, Maria Alessandra, Evelyne Kohler, Vittorio Gallese, Leonardo Fogassi, Luciano Fadiga, Christian Keysers and Giacomo Rizzolatti. “‘I know what you are doing’: A neurophysiological study.” Neuron 32 (2001), pp. 91–101.

Waisvisz, Michel. “THE HANDS: A set of remote MIDI controllers.” ICMC 1985. Proceedings of the International Computer Music Conference (Vancouver BC, Canada: Simon Fraser University, 1985).
