
Diegesis as a Semantic Paradigm for Electronic Music

In the field of narratology, diegesis is known as “the spatiotemporal universe of a story” (Genette 1969, quoted in Bunia 2010). This concept can be traced back to Plato’s dichotomization of narrative modes into imitation and narration, or mimesis and diegesis (Plato 1985 [c. 380 BC]). However, it has since yielded various incarnations that have been used for describing narrative structures in art and situating the components of an artwork in relation to one another. On a meta-level, the resulting narratological perspectives also provide insights into the fabric of the artistic experience through delineating relationships between the artist, the artistic material and the audience.

As an artistic form of temporal nature, music too prompts narratives. This, however, occurs in the abstract realm of the musical sound. Narratives are conveyed to the listener through a culturally embedded musical language that has been established over the course of centuries. The outcome of a musical experience, therefore, is the material’s unmediated transition to emotion. Electronic music, on the other hand, fosters an entirely new vocabulary of sounds. Extending beyond the well-ingrained structures of a traditional musical language, this new material engages with the cognitive faculties of the listener, inducing a layer of meaning attribution amidst the continuum from material to affect.

Consequently, electronic music assumes a mimetic role: the listeners are presented with sounds that represent extra-musical events while the medium of the recounting remains the same as that of the recounted. However, when this material meets the æsthesic capacity of the listener, the physical artifact is inevitably succeeded by the manifestation of a narrative. Therefore, a diegesis emerges in the intellectual domain. The cognitive processing of the music institutes a bond between the mimetic and the diegetic: the figure and ground relations between musical gestures extend beyond that of physical forms, and a narrative unfolds both in the spatial domain of the concert hall and in the semantic space superimposed onto this domain by the listener.

This article approaches the matter of “Inner and Outer Sound Places” by investigating the semantic and the spatial dimensions of electronic music and the contacts between these two dimensions. Explicit and implicit sonic worlds are discussed on the axes of focus and proximity in an effort to elicit new perspectives towards the concepts of figure and ground in electronic music. Diegesis is utilized as a paradigm to explain the tension / interaction between near and far while questioning the extent to which the listener is inside or outside the musical material. Rather than merely contrasting the mimetic and the diegetic aspects of electronic music through a dichotomy between the physical and the intellectual, this article studies the role of their coexistence in actively shaping our experience of electronic music.

The inferences that lead to the formulation of the aforementioned perspectives are extracted from both the author’s artistic practice over the years and the experimental data obtained from extensive subject group studies which were conducted to investigate the cognitive foundations of electronic music. In the context of this article, the experimental data is used to substantiate the remarks on the experience of electronic music. The theoretical framework of the diegesis approach is therefore motivated with real-world examples.

From Plato to Genette

Throughout history, the concept of diegesis has come to assume several meanings, most of which can be associated with the modern field of narratology. Plato describes diegesis as a form of representation, in contrast to mimesis. While in mimesis, events “either past, present, or to come” are presented through imitation, diegetic representation utilizes narration (Plato 1985 [c. 380 BC]). Therefore, under Plato’s taxonomy of narrative forms, theatre is mimetic, because actors imitate (i.e. re-enact) situations; poetry, on the other hand, is diegetic because the poet, speaking in her own person, recounts events as a narrator who is external to the immediate world of the story.

Art forms, and moreover, how artistic material is experienced, have indeed evolved considerably since Plato’s delineation of these concepts. For example, in Plato’s time, reading poetry was not a recreational activity one would enjoy individually. Rather, the artists themselves spoke their poems to the public during gatherings. Due to changes in such practices, concepts of diegesis and mimesis have also been redefined several times to accommodate new art forms and new æsthesic routines. These redefinitions have inevitably borne contradictory views amongst theorists and, furthermore, have led to various art-form-specific demarcations of the terms.

In film, for instance, the sounds that occur within a scene (e.g., dialogue between two characters, music coming from a radio in the scene) are considered diegetic, while the film score that emerges from outside of the universe of the story is labelled as non-diegetic sound (Taylor 2007). Film theorists also evaluate the concept of diegesis in a broader sense. A prominent perspective is that all film is diegetic because the director chooses certain parts of the story’s universe to be displayed on the screen and therefore assumes the role of a narrator; all that is going on on-screen is illusory (Hayward 2006). But then we can question which form of art fails to meet this criterion. Even a generative artwork necessitates a moment in time when the artist initiates an algorithm, thus creating a narrative context for the piece. From this perspective, all artistic material can be deemed to display diegetic features even when they are communicated through a representational form that is mimetic in the Platonic sense.

Famous narratologist Gérard Genette applied the term diegesis exclusively to literary theory. By describing such subcategories as extradiegetic and heterodiegetic (Genette 1980), he created a narratological terminology, which he utilized to situate the author, the reader and the components of a literary text (i.e. characters, venues, time) in relation to one another. Differentiating between cascading layers of a narrative by starting from the physical world of the author on the outermost level, Genette traces out the concept of diegesis as the spatiotemporal universe to which the narration refers. Therefore, in his terminology, diegetic is “what relates, or belongs, to the story” (Genette 1969, quoted in Bunia 2010). Here we can observe a thread emerging between Genette’s literary definition of the term and its usage described above in the context of film, where a diegetic sound originates from a source that belongs to the scene.

Narrativity and Meaning

Our understanding of time is a result of the “experience of successions” (Fraisse 1963, 1) and we constantly build narratives out of our sensory experiences “by anticipating the future and relating current perceptions to the past” (Roads [forthcoming]). The temporality of music, therefore, implies an inherent narrativity. The listener inevitably extracts a narrative from her musical experience due to the simple fact that a piece of music encapsulates a series of events between a starting point and an anticipated ending in the future. The extent to which the extracted narrative is concordant with the composer’s design does not impact its materialization. This narrative, however, emerges in the abstract realm of the musical sound.

Our perception of music is intrinsically a cultural phenomenon, as both the material and the language of music are fabricated: the sound of a modern instrument is the result of a man-made design specific to a culture’s musical heritage and it does not exist in nature in its pure form. The language through which an instrument speaks is also synthetic in a similar fashion: the constituent structures of a musical language, such as melodies and harmonies, are abstract concepts that have been established over the course of centuries. Throughout the history of music, these fabricated structures have been engraved into our deep-seated mechanisms of music perception. Music has managed to gradually reverse-engineer semiosis, as the sign (i.e. the abstract components of a musical language) has come to synthesize the referent (i.e. affective appraisal). We, therefore, appreciate music through a culturally idiosyncratic musical language and there remains no delegation in between as the material ascends to the affect (Deleuze and Guattari 1994, 466). The resulting musical experience is one of emotions.

But with electronic music, we witness a shift in musical material. Transgressing the limits of physical instruments, this relatively new form of musical expression reaches beyond the so-called musical sound and renders any sound within the limits of human perception a material for music. As a result, the experience of electronic music significantly diverges from that of instrumental music. Common notions of music fail to suffice in describing the genre and we therefore experience it within a much broader domain of cognitive associations: when we encounter sounds and forms that fall outside the vocabulary of our culturally embedded musical language, our “ear-witness accounts” (Truax 2001, 17) — our memories of previously observed sonic events — remain as the references for processing this experience. Therefore, amidst the material’s ascent to affect, there emerges a mediating layer of meaning attribution, and a new continuum from material to meaning to affect materializes (Çamcı 2012).

This, however, should be considered not a total departure from traditional musical practices, but rather an amalgamation of languages old and new; an expansion of the spectrum of musical experience. The mediating layer introduced by electronic music represents a cognitive continuum from abstract to concrete which is now an instrument for the composer. The listener, inhabiting the spatial domain of the concert hall, superimposes semantic representations over her physical experience of the sounds, and her affective appraisal of the artwork is immanently informed by this act.

As a result, narratives take new forms. In his seminal book Emotion and Meaning in Music, Leonard B. Meyer briefly touches upon the concepts of image processes and connotations in instrumental music, as he analyzes the interplay between memories and emotions in musical experience (Meyer 1961). In the concise final chapter of the book, Meyer outlines these concepts without a consideration of electronic music. They can, however, be intuitively adapted to the experience of the genre. Connotations are interpersonal associations between musical and extra-musical objects, standardized in cultural thinking: Meyer describes how certain instruments, like the gong, are linked by listeners to the Orient, or how the composers of the Baroque period devised connotative symbols by which a melody could represent an individual. The recognition of such connotations necessitates, however, what Meyer calls habituation and automatism, which are obtained over time and after “repeated encounters with a given association” (Ibid., 260). He further explains how dynamics in music can be likened to the experience of life and are capable of arousing intercultural connotations through perception of motion. Upon delineating conscious and unconscious image processes, he explicates a further taxonomy for the former and describes private and collective conscious image processes. While private image processes relate “only to the peculiar experiences of a particular individual,” collective image processes are “common to a whole group of individuals” (Ibid., 257).

The modern age brought about a homogenization of urban soundscapes. Moreover, contemporary channels of communication have made a global acoustic acculturation possible. Modern electronic music audiences share a common library of ear-witness accounts. Recent experiments on the cognition of electronic music conducted by the author of this article with participants from different nationalities, age groups and musical backgrounds have indeed highlighted the prominence of collective image processes in the context of electronic music. But how do such cognitive idiosyncrasies of electronic music affect the narrative disposition of the genre? To address this question, we will situate the electronic music experience in a wider context of artistic practices by inheriting, adapting and melding various taxonomical perspectives on the aforementioned narratological concepts.

A Multi-Perspective View of Diegesis

Coexistence of Modes

As previously described, in Plato’s categorization, tragedy and comedy (i.e. theatre) are mimetic modes of expression, while poetry and mythology (i.e. literature) are diegetic. The distinguishing factor between the narrative actions pertinent to each mode, namely re-enactment and recounting, is whether the medium of representation remains the same as that of the represented; or in other words, whether there is a mediation between the expression and the expressed. Electronic music, in this sense, is mimetic. While it may represent extra-musical events, it does so through connotations of sound. It is not narrated like diegetic poetry, but speaks for itself; it represents not as a mediator but as a portion, or an abstraction, of reality. The loudspeaker will detach the sound from its source, but the medium of the phenomenon remains unchanged.

However, electronic music does evoke more than memories of sounds, just as an environmental sound signifies more than its physical entity. Every sound we hear ignites a semiotic web, which allows us to imagine and comprehend more than what the sound immediately represents. The mimetic acting of tragedy engenders a similar reaction. What we witness on stage is just a portion of the world we imagine and situate the characters within. Here, a bond materializes between Plato’s mimesis and Genette’s diegesis. Although a narrative form might be purely mimetic, it will nevertheless imply a spatiotemporal universe different from the one the audience inhabits. Electronic music presents to the listeners sounds that represent events; it does not speculate about — or recount — sounds. Electronic music is therefore mimetic in the spatial domain of the concert hall, but it creates a diegesis for the listener in the semantic domain.

While there is no narrator in music similar to that in a literary text, the listener partly assumes this role by building a narrative out of her experience. Narration “can inform us about a universe and yet restrict its information to a small set of events and characters populating this universe” (Bunia 2010). The artwork does not need to provide every element of the diegesis, since the listener expands the spatial domain with the semantic by filling in the gaps. The imagined narrative invigorates the diegesis. This license of the listener is further apparent in Souriau’s interpretation of diegesis, which he describes as “all that belongs, ‘by inference’ to the narrated story…” (Gorbman 1980).

(Re)presentation

However, the listener is absent from the artwork’s universe. If we go a bit further with the adaptation of dramaturgical concepts, electronic music engages with the listeners in a similar fashion to representational acting. This type of performance ignores the presence of an audience and situates them outside the context of the unravelling universe of the story. This is unlike presentational acting, which acknowledges the audience and, moreover, addresses them. Famous Russian actor Constantin Stanislavski’s typology of these terms, although entirely different in its usage, also weaves new threads across different art forms. Stanislavski asserts that the presentational actor “must live the part every moment that [she is] playing it” (Stanislavski 1948) and expose the character through her understanding of it, becoming one with her role. The representational actor, on the other hand, does not live the part but merely plays it. The actor “remains cold toward the object of his acting but his art must be perfection” (Ibid.).

Parallel to Stanislavski’s definition of presentational acting, American visual artist Sanford Wurmfeld describes presentational art to be “structured by a human being and presented as a statement… to be experienced or received by an active viewer. By its sensory nature, such art is untranslatable and the ideas or feelings transmitted by it are tied to the particular object that expresses them” (Wurmfeld 1993). The ideas of untranslatability and affect’s attachment to the art-object intrinsically relate to the previously described experience of a musical material that is expressed through a culturally embedded musical language, whether in the context of an instrumental or an electronic work. Could we therefore align this experience with Wurmfeld’s and Stanislavski’s definitions and classify it as presentational? Regardless, the material, with which electronic music amalgamates a traditional musical vocabulary, manifests a distinct representational capacity while leaving the audience outside of the universe it implies.

Diegesis and Cognition of Electronic Music

Our auditory systems allow us to perform the acts of foreground and background listening simultaneously. This way, we can achieve a Gestalt perception of our daily soundscapes with certain sonic phenomena highlighted as figures while others remain out of focus. Recent studies have shown that the semantic content of a sound that we encounter in a daily context can subdue its physical attributes, as we categorize it amongst other environmental sounds (Guastavino 2007). We make sense of such auditory or otherwise sensory environmental stimuli through units of meaningful events and cognitively prioritize them in relation to one another. This nature of the human auditory system is in effect during music listening as well.

A Cognitive Experiment on Electronic Music

A series of cognitive experiments were conducted by the author to investigate the experience of electronic music and the communication between the composer and the listener. The experiment model was designed to harness a comprehensive and diverse set of data while remaining faithful to a music listening experience. For this purpose, the design involved two sections. During the first section, the participants were asked to listen to a piece of electronic music without being provided with any questions or tasks; this was aimed at minimizing experimental bias to a feasible extent. Following this first round of listening, participants were asked to write down their general impressions in a form of their choice. In the second part of the experiment, the participants were asked to listen to the same piece, this time accompanied by software that allowed them to type descriptors, in real time, as to what they might have felt or imagined. The responses were time-stamped so that they could later be compiled and visualized on a timeline for analysis and cross-evaluation with the general impressions. Further details regarding the experiment method and preliminary results are published in a recent article (Çamcı 2012).
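
To make the real-time input mechanism concrete, the sketch below shows one way such timestamped descriptor logging and timeline compilation could be implemented. It is a minimal illustration written for this article; the functions log_descriptors and compile_timeline, the bin size and the sample descriptors are hypothetical and do not reproduce the actual experiment software or data.

```python
# A minimal, hypothetical sketch of real-time descriptor logging; this is
# not the software used in the experiments described above.
import time
from collections import defaultdict

def log_descriptors():
    """Timestamp each typed descriptor relative to the start of playback.
    An empty line ends the session (a stand-in for the end of the piece)."""
    start = time.monotonic()
    entries = []  # list of (seconds_into_piece, descriptor) pairs
    while True:
        descriptor = input("> ").strip()
        if not descriptor:
            break
        entries.append((time.monotonic() - start, descriptor))
    return entries

def compile_timeline(all_logs, bin_size_s=10):
    """Group descriptors from all participants into fixed time bins,
    yielding a simple timeline for cross-evaluation."""
    timeline = defaultdict(list)
    for log in all_logs:
        for t, descriptor in log:
            timeline[int(t // bin_size_s) * bin_size_s].append(descriptor)
    return dict(sorted(timeline.items()))

if __name__ == "__main__":
    # Two fictional participant logs, compiled onto a shared timeline.
    logs = [
        [(12.4, "birds"), (47.1, "machine"), (95.0, "underwater")],
        [(11.8, "whistling"), (50.3, "engine"), (93.2, "bubbles")],
    ]
    for t, words in compile_timeline(logs).items():
        print(f"{t:>4} s: {', '.join(words)}")
```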

The experiments were conducted with two pieces which exhibit similar structural formations, phrase lengths and overall durations. The composition of the first piece, Birdfish, followed a concrete narrative with real-world references. In order to establish the intended diegesis, certain sounds within the piece were designed to display highly representational qualities. On the other hand, the second piece, Element Yon, consisted of abstract sounds, such as tone sweeps and pulses, which evaded concrete references. Although no recorded sounds were used in either work, the second piece maintained the raw, synthesized character of its sounds throughout. While the first piece prompted a considerable number of descriptors, which mostly identified sonic events and sound sources, the second piece elicited far fewer descriptors. Furthermore, the responses for this piece mainly delineated affective regions: unlike the descriptors provided for the first piece, both the general impressions and the real-time inputs for the second piece referred to larger motifs and areas in the piece, rather than specific moments.

During the general impressions section of the experiment with the first piece, the listeners either created lists of descriptors, or wrote down narratives that embedded descriptors which would appear again during the real-time input section. These descriptors mainly denoted the immediate sonic actors that populated the diegesis. For the second piece, however, the listeners commonly chose prose form to reflect their general impressions; the responses mainly comprised affective appraisals and evaluations of the physical qualities of the sounds. The real-time responses provided in the second section also considerably lacked source descriptors and consisted mainly of nouns and noun phrases which described emotions and spatial attributes; attempts at meaning attribution were considerably fewer than those observed for the first piece.

The experiment results for the second piece in general displayed a much more articulated sense of the self. While for the first piece the participants commonly assumed the role of an outside viewer who observes and reports the unfolding of certain events (i.e. “… has happened.”), for the second piece, a common tendency was to reflect through the first person (i.e. “I felt…”). In this sense, the experience of the first piece can be likened to that of a representational acting performance, during which the audience is situated outside the diegesis. The second piece, on the other hand, could be interpreted to possess more presentational qualities, as the responses displayed further involvement and a sense of being personally addressed.

Near and Far

Meyer explains, in the context of instrumental music, that the connotative capacity of a musical phrase is intrinsically connected to how much it diverges from a “neutral state” (Meyer 1961). We can assume that a ground element in music, such as an accompaniment texture, sets a neutral state in terms of spatial attributes for a melody to diverge from. Taking the cognitive idiosyncrasies of electronic music into consideration, we can talk about semantic dimensions of figure and ground. Meyer’s rationalization of connotative capacities, which come into being through contrast, can also be applied to the semantic domain. The representationality of sounds in electronic music affects their cognitive hierarchy. We tend to position sounds that are more concrete, or easier to identify, as figures, while the disorientation caused by abstract sounds may enhance their reception as textural elements. On the other hand, highly representational phrases can also be made into ground elements as semantic neutral states, based on how the composer sets the diegesis. While the semantic clarity of a sound object can set it apart from the ground within a piece, it also engenders a schizophonic (Schafer 1977) experience which accentuates the contrast between the narrated universe and the spatial domain of the concert hall. In other words, the concreteness of a representation instigates a tension between what is physically near and what is semantically far.

Tension is, however, a by-product of interaction. The interplay between the spatial and the semantic attributes of a sound implies contacts between the concert hall and the diegesis. A sound can travel from an alien territory into the concert hall and weave a contact between the representational and the presentational. A stark example of this phenomenon is evident in Luigi Nono’s La Fabbrica Illuminata, a 1964 piece for voice and 4‑channel tape. The piece exhibits a mixture of live and recorded voices in multi-channel accompanied by electronic sounds as it narrates a story about textile workers. For the fixed sounds, Nono made location recordings at the factory in which this story originally took place. The voices on tape transform from quiet speech into loud vocal parts and mix with the live singing. Quiet sections of the recorded voices create the illusion of a mumbling crowd, which could easily be mistaken for the audience at the concert hall where the piece is being performed. While the performance of the singer embodies a more traditional musical act, it also serves to anchor the experience in the physical domain. This amplifies the disorientation when the recordings of the mumbling voices suddenly turn into roaring vocal lines that are now clearly in a space different from the one the audience inhabits. The listener travels back and forth between the concert hall and the locked-down factory that trapped the workers in the fire that killed them, and the journey amounts to an immensely eerie experience through the interplay between the explicit and the implicit worlds.

Semantic and Spatial Attributes

Semantic attributes are indeed immediately attached to physical ones. Spatialization and loudness determine the physical proximity of a figure. In tandem, these two parameters help establish the semantic concept of motion. Sounds from stationary speakers follow choreographies designed by the composer and imply for the listener an animation of objects, albeit detached from any actual moving sound source. These objects can be cognitively abstract or concrete; regardless, the listener hears — and furthermore imagines — beyond the mere changes in parameters and extracts the Gestalt (i.e. the motion) emerging from the interplay between them. In line with intuition, experiment results have revealed that motion charges a sound with figure characteristics and generates, in the listener’s mind, representations that signify both the motion itself and, moreover, what it is that moves. The figure characteristics of a motion can, however, be dampened by introducing periodicity, which semantically pushes the moving object out of focus.
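
As a minimal illustration of how changes in loudness across stationary loudspeakers can imply motion, the sketch below applies an equal-power pan to a synthesized tone over a two-channel reduction. The tone, trajectory and output file name are illustrative assumptions, not material from the pieces or experiments discussed.

```python
# Hedged sketch: a stationary stereo pair rendering an implied left-to-right
# trajectory via equal-power amplitude panning; values are illustrative only.
import wave
import numpy as np

SR = 44100  # sample rate in Hz

def equal_power_pan(mono, pan):
    """pan runs from 0.0 (hard left) to 1.0 (hard right), per sample."""
    theta = pan * np.pi / 2
    return np.stack([mono * np.cos(theta), mono * np.sin(theta)], axis=1)

duration = 4.0
t = np.linspace(0, duration, int(SR * duration), endpoint=False)
tone = 0.2 * np.sin(2 * np.pi * 220 * t)                  # a plain 220 Hz tone
fade = np.minimum(1.0, 10 * np.minimum(t, duration - t))  # short fades, no clicks
stereo = equal_power_pan(tone * fade, t / duration)       # sweep left to right

# Write a 16-bit stereo WAV file to audition the implied motion.
with wave.open("panning_sketch.wav", "w") as f:
    f.setnchannels(2)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes((np.clip(stereo, -1, 1) * 32767).astype(np.int16).tobytes())
```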

As for ground elements, spatialization and amplitude, along with spectral dynamics, can set reverberant characteristics of a sound, which in turn establishes another semantic concept of space or location. This highly representational concept transcends the metaphor of musical ground: once it is semantically attributed to a texture, a scene is set for successive figure gestures, which will then be evaluated by the listener in reference to where they occur. This conditioning operates both ways, since the semantic content of a figure gesture will inevitably feed back into how a consecutive ground gesture will be received. Each new material primes the listener contextually for what is to come, and the flow of gestures amounts to a constant semantic realignment. Even when an explicit sound element is removed from the scene, it implicitly persists. The diegesis established thus far in the piece maintains a semantic context. In the listener’s mind, diegetic actors from before interact with diegetic actors of the now, and the listener starts filling in the gaps. Through imagining the implicit world of a piece, the listener can, for example, obtain semantic polyphonies from spatial monophonies and construct implied figure and ground relations. This act of world-making renders the listening experience much more immersive.
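
A rough sense of how reverberant characteristics situate a sound in an implied location can be sketched with a simple convolution against a synthetic impulse response. The decaying-noise room model and the decay times below are illustrative assumptions, not a model of any particular piece or hall.

```python
# Hedged sketch of "placing" a dry gesture in an implied space by convolving
# it with a synthetic impulse response; all parameters are illustrative.
import numpy as np

SR = 44100  # sample rate in Hz

def synthetic_room(decay_s=2.0, seed=0):
    """Exponentially decaying noise as a crude impulse response; longer decays
    tend to read as larger, more reverberant locations."""
    n = int(SR * decay_s)
    envelope = np.exp(-6.9 * np.arange(n) / n)  # roughly -60 dB by decay_s
    return np.random.default_rng(seed).standard_normal(n) * envelope

def place_in_space(dry, ir, wet=0.4):
    """Mix the dry signal with its FFT-based convolution against the IR."""
    n = dry.size + ir.size - 1
    wet_sig = np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(ir, n), n)[: dry.size]
    wet_sig /= np.max(np.abs(wet_sig)) + 1e-12
    return (1 - wet) * dry + wet * wet_sig

t = np.arange(int(SR * 0.2)) / SR
gesture = np.sin(2 * np.pi * 880 * t) * np.exp(-30 * t)  # a short, dry figure
gesture = np.pad(gesture, (0, 2 * SR))                   # leave room for the tail
in_small_room = place_in_space(gesture, synthetic_room(decay_s=0.4))
in_large_hall = place_in_space(gesture, synthetic_room(decay_s=3.0))
```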

The phrase length of a musical gesture can be used to emphasize a figure, in a temporal contrast to its ground. Furthermore, microsounds, which are the building blocks of granular synthesis, are commonly described as displaying particle characteristics. This likeness of granular synthesis to a physical model evokes real-world references. Such connotations are indeed accentuated in the listener responses, as both textural and figure elements. While the specificity of the imagined object can be obfuscated (e.g., water versus glass), the process the object is going through (e.g., flow, rain) is less ambiguous. Therefore, granular synthesis is a fairly prominent technique for establishing the fabrics and the dynamics of the diegesis, while less amorphous granular gestures are capable of generating concrete figures that inhabit this diegesis.
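
The particle metaphor can be made concrete with a minimal granular sketch: short Hann-windowed sine grains are scattered in time, and their density and frequency spread push the result toward either an amorphous texture or a more gestural figure. The grain duration, densities and spreads below are illustrative assumptions rather than values drawn from the compositions discussed.

```python
# Hedged sketch of granular synthesis with purely synthetic grains; parameter
# values are illustrative and not taken from the pieces described above.
import numpy as np

SR = 44100  # sample rate in Hz

def grain(freq_hz, dur_s=0.03):
    """A single microsound: a Hann-windowed sine burst a few tens of ms long."""
    t = np.arange(int(SR * dur_s)) / SR
    return np.hanning(t.size) * np.sin(2 * np.pi * freq_hz * t)

def granular_cloud(total_s=4.0, density=80, centre_hz=800, spread_hz=400, seed=1):
    """Scatter `density` grains per second at random onsets and frequencies."""
    rng = np.random.default_rng(seed)
    out = np.zeros(int(SR * total_s))
    for _ in range(int(density * total_s)):
        g = grain(rng.uniform(centre_hz - spread_hz, centre_hz + spread_hz))
        start = rng.integers(0, out.size - g.size)
        out[start:start + g.size] += g
    return out / np.max(np.abs(out))

texture = granular_cloud(density=40, spread_hz=600)  # sparse, wide: reads as ground
figure = granular_cloud(density=300, spread_hz=30)   # dense, narrow: more gestural
```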

Boundary cases in loudness and frequency prompt a notable phenomenon: besides the obvious potential of these two parameters to delineate the spatial proportions of a sound object, spectral extremes at sizeable amplitudes, such as very high and very low frequencies that are clearly audible, tend to pierce through the diegesis and become corporeal. The listener becomes aware of her act of listening and returns to the spatial domain of the concert hall. This self-awareness is akin to that experienced by an audience member who is personally addressed by the actor during a play. The music becomes a presentational object.

Musical Forms as Diegetic Actors

The previously described layer of meaning attribution has a peculiar impact on how the listener engages with more traditional forms of musical material in the context of electronic music. Examples of such forms include an intelligible melody-and-accompaniment section, a grid-based rhythm or even a gesture that displays a timbral similarity to a physical instrument. While these would cause an immediate affective appraisal during an instrumental music performance, experiment results revealed a meta-evaluation of such forms when they are encountered in an electronic music piece. Prior to an affective appraisal, the listener first identifies the phenomenon as the musical form it is, situated in the universe of the piece. That is to say, abstract musical elements effectively turn rather concrete and become diegetic actors — or objects — in the context of the piece, almost like a television in a movie scene. To describe one such form found in the piece Birdfish, Nicolas Deletaille, a Belgian cellist, used the phrase “musical souvenir”. This expression appropriately illustrates the diegetic quality assumed by a traditionally musical form in the context of an electronic music piece.

Conclusion

Just as we can evaluate the extent to which the listener is inside or outside the musical material, we can also question how much of the material is internal or external to the listener. How much of the content of an electronic music piece is objectively out there in the concert hall? From a poietic standpoint (i.e. looking from within the artwork) the listener is outside of the diegesis. But the listener, when exposed to the artwork, constructs a semantic universe around herself in the physical domain of the concert hall and observes. When Meyer describes collective image processes, he refers to a common ground shared by every listener, which will inevitably be supplemented by private, or individual, image processes and ultimately amount to an affective appraisal. The emotional assessment of sounds will naturally be attached to our individual experiences. However, the layer of meaning attribution, where the diegesis emerges, does indeed instigate overlapping universes amongst different individuals — listeners and composers alike — owing to a shared library of ear-witness accounts. The semantic aspect of electronic music is therefore highly deserving of further investigation, since a composer’s orchestration of cognitive cues can play a significant role in shaping the experience of a piece.

The diegetic approach offers new perspectives for understanding and communicating this experience. Through a historical overview of diegesis, this article highlighted the bonds that can be formed amongst the various interpretations of this concept and discussed how these interpretations relate to electronic music. An amalgamated view of representational modes was delineated to situate the electronic music listener in a broader context of artistic forms. The notions of proximity and figure-ground organization were outlined in terms of the semantic and spatial characteristics of sounds. By drawing on experimental data that articulate the cognitive idiosyncrasies of electronic music, a semantic paradigm was formulated to describe the narrative disposition of the genre.

Bibliography

Bunia, Remigius. “Diegesis and Representation: Beyond the Fictional World, on the Margins of Story and Narrative.” Poetics Today 31/4 (Winter 2010), pp. 679–720.

Çamcı, Anıl. “A Cognitive Approach to Electronic Music: Theoretical and Experiment-based Perspectives.” ICMC 2012: “Non-Cochlear Sound”. Proceedings of the International Computer Music Conference 2012 (Ljubljana, Slovenia: IRZU — Institute for Sonic Arts Research, 9–15 September 2012).

Deleuze, Gilles and Félix Guattari. “Percept, Affect and Concept.” In The Continental Aesthetics Reader. Edited by Clive Cazeaux. New York: Routledge, 2000.

Fraisse, Paul. The Psychology of Time. New York: Harper & Row, 1963.

Genette, Gérard. “D’un récit baroque.” In Figures II. Paris: Seuil, 1969, pp. 195–222.

_____. Narrative Discourse. Translated by Jane E. Lewin. New York: Cornell University Press, 1980, pp. 212–62.

Gorbman, Claudia. “Narrative Film Music.” Yale French Studies 60 (1980), pp. 183–203.

Guastavino, Catherine. “Categorization of Environmental Sounds.” Canadian Journal of Experimental Psychology 61/1 (2007), pp. 54–63.

Hayward, Susan. Cinema Studies: The Key Concepts. 3rd edition. New York: Routledge, 2006.

Meyer, Leonard B. Emotion and Meaning in Music. Chicago: University of Chicago Press, 1961.

Plato. The Republic. Trans. by Richard W. Sterling and William C. Scott. New York: Norton, 1985.

Roads, Curtis. Composing Electronic Music: A New Aesthetic. New York: Oxford University Press, Forthcoming.

Schafer, R. Murray. The Tuning of the World. New York: Knopf, 1977 (Arcana Editions).

Stanislavski, Constantin. An Actor Prepares. Trans. by Elizabeth R. Hapgood. New York: Routledge, 1948, pp. 18–22.

Taylor, Henry M. “Forum 2: Discourses on Diegesis — The Success Story of a Misnomer.” Offscreen 11/8–9 — Sound in the Cinema and Beyond (August–September 2007).

Truax, Barry. Acoustic Communication. Westport: Ablex Publishing, 2001.

Wurmfeld, Sanford. “Presentational Painting.” Catalogue of the “Presentational Painting” exhibition in the MFA Building, 20 October – 20 November 1993. Hunter College Art Gallery.
