Twiddling and Twerking
Thoughts on electroacoustic music performance
In this paper, we take a broad (and slightly wary) view of the state of performative electroacoustic music, from the perspectives of both performer and audience. We first consider performance gestures that follow from interaction with music technology (“twiddling”). We then assess the role of conspicuous arrays of technology, such as might be placed on a stage or installed in a concert venue for acousmatic music (“twerking”). Along the way, we will also consider the blurring of boundaries between popular electronic music and (so-called) “serious” electroacoustic music.
Our focus here is on situations where performers and technology are in the same physical (or virtual) space as the audience and the sound is either created or significantly manipulated in real time.
Those criteria become somewhat nuanced with acousmatic music performance, where we encounter such variables as the relative visibility of the speaker system and the degree of performer involvement in the diffusion. For example, an actively diffused work, presented with loudspeakers and mixing desk clearly visible, will certainly invoke a very different performer-audience relationship than the passive presentation of the same piece in a darkened hall.
The daunting scale and visual complexity of early electroacoustic instruments probably encouraged early performers to approach this technology rather cautiously. Meanwhile, popular musicians evolved their own ways of interacting with music technology. For example, in the late 1960s and early 1970s, Keith Emerson, of Emerson, Lake and Palmer fame, became widely known for his highly theatrical interactions with his instruments — interactions that sometimes included stabbing the keyboards of Hammond organs with knives at peak moments in his performance.
But other (less dramatic) electroacoustic performers increasingly incorporated visual elements as well — multi-media “happenings”, projections or video playback, and more recently, colourful blinking controllers optimized for beat-based electronica. In parallel with this evolution, our audiences may now even expect electroacoustic performances to be visually interesting.
After all, for an audience, a theatrical jab at a brightly lit controller button (especially a blinking one) is visually much more compelling than the almost invisible press of a laptop key — even if the audible result is the same. Similarly, a performance that incorporates video screens, with close-ups of the artist engaging with their technology, will take on an enlarged cinematic reality, much like what happens in large-scale popular music performances.
Let us consider some implications of this evolution.
In our present discussion, we use “twiddling” to refer to visible performer interaction with music technology.
The term “twiddling” traditionally evokes small, distracted movements. But in any technology-exponentiated context — electronic or acoustic — movements may have audible consequences of far greater magnitude than their physical scale would suggest. For example, a delicate key press on a pipe organ may produce a loud, complex, densely layered sound. And, in a contemporary EA performance, a single push of a controller button may initiate a lengthy, complex sequence of musical events. Of course, the converse can also be true: a performer may make a highly theatrical gesture that invokes an almost insignificant musical result.
We can find a similar range in the feedback that controller devices give to their operators. An action that invokes significant audible change may produce very little visible feedback from the device, while a gesture that has been programmed to produce minimal audible impact may still invoke a frantic flurry of pretty blinking lights.
In each situation, we might ask ourselves: Are the performer gestures and the visible device feedback primarily functional or largely theatrical? How do visible actions correspond to sonic results? And how do performance gestures in electroacoustic music correspond to those associated with more familiar acoustic instruments?
Traditional instrumental performance can incorporate a very wide range of gestures, with an equally wide range of audible results. Some of those gestures are, of course, purely functional for sound production. Others may be entrenched in the theatrical rituals of the concert hall, where even a page turn can become a self-consciously articulated part of the performance.
Norman Stanfield, an ethnomusicologist at the University of British Columbia, explains it this way:
To the audience, the performer may appear to be swept up in a sea of emotion, but in fact they are conducting themselves in a very calculating manner, just like an actor. (Stanfield 2014)
But, more pragmatically, while audiences may not be able to predict the precise sonic result of a performer’s visible interaction with an instrument, even those unfamiliar with that instrument can often make enough of a correlation between visible and audible to engage in a meaningful way with the performance.
More importantly, that correlation will likely still apply the next time they hear and see a performance on that instrument. In contrast, with programmable controllers and electroacoustic instruments, the connection between performer gesture and audible result is not only likely to be impenetrable to the audience, it can also be completely different in each performance situation.
Not so long ago, devices for making or controlling electroacoustic music sported a single red light to signify that they were receiving power, while the often obscure physical positions of knobs, faders, switches and patch cords provided the only other visible feedback to both performer and audience. Since then, we’ve transitioned from red bulbs to blue LEDs; from passive, to motorized, to virtual faders; from dedicated switches to programmable buttons; and from hard-limited analogue potentiometers to continuous rotary encoders whose physical position is no longer consistently meaningful.
We’ve moved from plug, switch, slide and turn, to tap, swipe, pinch and rotate — stopping along the way to toggle, twist, punch and press.
This evolution from dedicated single-function hardware controllers to digitally assignable multi-function ones has even further obscured the meaning of a particular on-stage action or device response, for both performers and audiences. The blinking lights so common in current commercially available controllers can provide meaningful feedback to performers, particularly those creating the same kind of music the device designers anticipated. But for broader musical applications — and, perhaps more significantly, for most audiences — they may simply add an ambiguous layer of visual theatricality to the performance.
Note that our concern here is ambiguity, not visual theatricality per se. In fact, presenting our work in a performance context will invoke enough of the traditional performer-audience relationship to make all of our visible actions on stage — theatrical or not — an important part of the audience experience. Our instrument (or the interface we use to engage with our meta-instrument) may not be made of exotic wood or polished brass, but our physical engagement with it is still the main visible link between our musical ideas, the music, and the audience.
The shape and playing position of the controller devices we use are significant. With relatively few exceptions, the current trend has moved from the near-vertical work surfaces of the early ARP and Moog instruments to the flat, horizontal shapes of modern dedicated controllers and their software equivalents on flat, horizontal smart phones or tablets.
Just as we do with our phones, we tend to engage with modern flat controllers in a stooped or hunched posture. Ignoring for the moment any long-term health effects, the result is that we are also depriving the audience of much of their potential connection with the performer-instrument interaction. The meaning of that interaction may still be largely inscrutable, but if the audience cannot even see what’s happening, we have closed the curtain before the performance even begins.
The use of video projection may help the audience see performer-controller interaction, even with flat controllers. But the knowledge that even their most minute gestures will be shown on screen “larger than life” can also encourage changes in performer behaviour.
Some artists may perform with self-conscious care, effectively turning their performance gestures into a kind of dance. But others may be much less comfortable with the scrutiny, and shrink nervously from the camera or awkwardly exaggerate their movements.
Some independent artists do choose to ignore the marketing pressure to use DJ or electronica-focused controllers in performance, and instead employ simple custom-made devices, nostalgically “retro” technology or uncomplicated mass-produced guitar stomp-boxes.
Because these devices usually provide relatively minimal visual feedback, their use may permit both audience and performer to focus more readily on the sonic result rather than on the technology interaction — in other words, to generate a relatively more acousmatic experience.
Montréal-based creators Martin Messier and Jacques Poulin-Denis took a different approach to visual discretion and deliberately camouflaged their controllers as part of the theatrical set in Le projet pupitre (The Pencil Project). This strategy allowed them to be both “liberated from the computer screen and equipped with hands-on objects” (Messier and Poulin-Denis 2014).
But a few artists have chosen to directly confront the limitations or biases of both the laptop and the current generation of popular music-oriented devices, and invent their own digital controllers. A contemporary Canadian example is the Arc, designed by composer Andrew Staniland and engineer Scott Stevenson, at Memorial University in Newfoundland.
Staniland describes how the device addresses performance concerns:
In performance… the Arc feels like a responsive instrument, where the MIDI fader box feels stiff and uncommunicative with the audience. Performing live electronics is always problematic, especially when the performer is buried behind a laptop. The Arc instruments help make electronic music performance more engaging for both performer and audience. (Staniland 2014)
And Stevenson explains the device’s implications for an audience:
The core problem we were trying to solve with the Arc was that traditional desktop interfaces often don’t allow the audience to view the performer’s cause-and-effect relationship with their sounds…. So the core difference we tried to achieve (and believe we did) was allowing the audience a clear view of this cause-and-effect relationship. (Stevenson 2014)
In shape and playing position, the Arc design seems to echo a little of both guitar and accordion. It has buttons and slider strips, with understated illuminated feedback visible to both player and audience.
As with any other digital controller, of course, the connection between action and sound ultimately depends on the programming. But, unlike many others, this device was consciously designed to facilitate both meaningful performer gestures and — perhaps more significantly — audience engagement with the purpose of those gestures.
Twerking has been defined as “dance to popular music in a sexually provocative manner involving thrusting hip movements and a low, squatting stance” (Oxford Dictionaries 2014).
For our present purposes, we have adapted the term to mean an unrestrained display of one’s technical assets — in other words, any substantial, highly visible array of technology that attracts the attention of an audience.
In those situations, we may ask ourselves: Is all this technology required for the performance? Does its visibility enhance the audience’s response to the music, or potentially distract them from it?
For those who engage deeply with acousmatic music, there is (one hopes) no comparison between the elegant disposition of expensive loudspeakers around a concert hall and the wild shaking of body parts in someone’s face. But for more casual audience members — whose aural and technological expectations might be based on earbuds or an iPod dock with tiny speakers — a ring of very large loudspeakers, out of which will likely come sounds they might not relate to anyway, may indeed be an ostentatious display of “booty” (though perhaps more in the sense of pirate treasure than human anatomy).
For perspective, the BEAST system at the University of Birmingham, which is a frequent benchmark for sophisticated diffusion systems, has routinely employed 80–90 loudspeakers, arranged on up to four different elevations (Wilson 2014).
Of course, a conspicuous display of music technology is by no means something new, nor is it restricted to spatialized electroacoustic music. Consider, for example, the ubiquitous stack of Marshall amplifiers that used to signify that a rock band was serious and hardcore — at least until they struck an out-of-tune chord or broke a string.
We noted earlier how the audience-performer relationship will differ for active visible diffusion, as compared with a less visible, passive presentation. We can make an interesting extension to this notion of visibility. If a diffusion performer moves visibly to the mixing desk at the start of a piece, that move sets up the expectation, for an experienced audience at least, of real-time intervention, and lends the endeavour an air of “performance” — regardless of whether the performer actually does much more than press “play”.
If the performance takes place with lighting that focuses the audience’s attention on the sound system, then that audience may even assign the role of “performer” to the speaker system itself. Our instinct to localize sound sources is strong, and if we have any difficulty correlating the performer’s moves with what we hear, we can simply shift our focus to what we perceive to be the source of the sound in the concert hall.
It seems, then, that our conspicuous displays of technology may play a larger — and more active — role than simply impressing or distracting our audiences.
The mixing desk — a key component of an active diffusion system — has simultaneously grown in technical sophistication and shrunk in physical presence. As with other performance technologies, we have moved from large analogue devices with single-purpose faders and knobs to smaller digital boxes with multi-function controllers. And there is now a further shift away from physical controls to screen-based ones, with discrete fingertip gestures replacing the up-and-down movement of a fader or the twist of a knob.
Operating a classic diffusion desk has always required some physical movement — manipulating multiple, long-throw faders and shifting from one fader range to another — and there was at least a possibility that the audience might understand something of the meaning of those movements. In contrast, the actions used to diffuse a piece of acousmatic music with a screen-based controller will be largely opaque to the audience.
However, there are also developments that might perhaps help restore or enhance the visible connection between action and sound in performed acousmatic music. Some presenters have explored the use of gestural controllers that map body-scale physical movement to sound movement in a large performance space. These devices may well provide a more understandable correlation from the audience’s point of view, though their use also potentially opens the door to new levels of theatricality in diffusion — which could be a somewhat “mixed” blessing.
The laptop orchestra presents its own subset of performance considerations. On the one hand, there seems to be little that is gesturally theatrical about a collection of people quietly clicking laptop keys with eyes glued to individual screens.
But, while there may be only modest physical interaction between performer and device — and even less between performer and audience — we could reasonably ask whether the display of technology itself is distracting from or supporting the musical purpose.
In other words, and similar to the situation with large-scale diffusion systems, we must consider to what extent the audience’s attention is drawn to the technology on stage (which may still sometimes be seen as “new” or novel), rather than to the musical result — and what the impact of that change of focus might be.
When it’s all about you…
Before we finish, we should consider again who actually benefits from all this (alleged) twiddling and twerking. Is it useful or valuable for an audience to see theatrical gestures, blinking controllers or huge arrays of loudspeakers? Do such visual displays help with their understanding and appreciation of our music?
Clearly, at times, the answer is yes. From the audience point of view, a visible, kinæsthetic connection between gesture and sound, or between sound and physical source, can intensify the response to a work of sonic art.
For example, being conscious of speaker placements may enhance the perception of spatial location or movement in the music and thereby highlight the work of the composer or the diffuser. And a measured dose of theatricality by a performer can increase the audience’s awareness of musical gestures and, by extension, help with their understanding of the content and shape of the music. Similarly, from the perspective of the creator, visual feedback from controllers can be very helpful for both creation and performance, offering a response that at least partially echoes the instrumentalist’s relationship with an acoustic musical instrument.
But there are certainly also times when elaborate gestures, blinking lights or massive technology displays seem to exist more for the performers’ self-satisfaction than for practical purposes.
So please allow me to introduce a third and final descriptor, a kind of meta-category for those practices that benefit neither the creative process nor the audience’s appreciation of the work. Joining twiddling and twerking in our pantheon of potentially problematic performance practices, I would now like to welcome: “twanking”.
Messier, Martin and Poulin-Denis, Jacques. “Le projet pupitre.” Mutek project profile, 29 May 2008. http://www.mutek.org/en/hub/artists/445-martin-messier-jacques-poulin-denis-le-projet-pupitre [Last accessed 24 December 2014]
Oxford Dictionaries. “twerk.” http://www.oxforddictionaries.com/definition/english/twerk [Last accessed 24 December 2014]
Stanfield, Norman. “Performativity.” [n.d.] Available online at http://blogs.ubc.ca/normanstanfield/music-matters/performativity [Last accessed 5 August 2014]
Staniland, Andrew. Personal communication (email). 21 July 2014.
Stevenson, Scott. Personal communication (email). 22 July 2014.
Wilson, Scott. Personal communication (email). 17 February 2014.