Contemporary Problems, Interventions and Results
See preceding section:
3. Tradition and Attitudes
4. The Contemporary Audio Reality
Historically, these strategies (reductionist, to say the least) were only used systematically during a relatively short period, and were called into question starting in the early 1970s, with the advent of FM radio and home hi-fi systems, both of which rendered them obsolete. Ten years later, the sudden and widespread success of Solid State Logic consoles gave new impetus to such approaches. Their trademark metallic, harsh sound is particularly noticeable in the repertoire of the New Wave milieu, which flourished in the 1980s. With hindsight it becomes clear that these recipes could only function when applied to a musical style which itself utilizes spectrally-limited sources: in New Wave, examples include beat boxes and bleating voices. Sound engineers were in effect only reiterating the already limited range of the sources. The ongoing “Level Wars” have been a more lasting and negative repercussion of such techniques. More broadly, the extremely low success rate of these strategies can be explained by the somewhat paradoxical fact that they make significant use of the phenomenon of acoustic masking.
This brings up an interesting point: we could, in retrospect, represent the general evolution of audio quality since the 1950s as a trajectory from masking to transparency. Largely a fantasy until very recently, transparency relies on a network of inseparable high-level conditions. In recent years we have witnessed a series of advances which impact production techniques, described elsewhere in this document, and today capped by the arrival of high-definition digital recording and the corresponding distribution media, DVD-Audio and SACD. We can now confirm in practice what we knew for years: that the advantages of transparency over masking are tremendous. From the moment we no longer “hear” the production in action and can experience the artistic message more directly, we can finally turn the page on a folkloric era in the history of audio production.
As it stands — it bears repeating — the production of a transparent sound is dependent on a subtle, complex, but increasingly reliable ensemble of factors that institutions teaching electroacoustics have not (yet) taken upon themselves to transmit. Lack of means is an indisputable obstacle but it should not inhibit the learning of new approaches to exploring euphony and perceptual openness, as these constitute much of the base of contemporary perspectives on audio production. Several improvements could be accomplished right away with a little training and a monitoring situation which, if perhaps not at a reference level, should at least be decent.
4.1 Choice of Materials
The composers’ selection process must cease to be arbitrary and begin to respect a minimum of qualitative criteria. Acceptable electroacoustic materials are those which:
- are robust, in order to better survive typical electroacoustic treatments, which can sometimes be abusive;
- have a rich and consistent spectrum, with each frequency range occupied in a minimally continuous manner; splintering into several narrow zones is a symptom of thinness, the final stage before irritation (a rough way of checking this is sketched after this list);
- are “powerful”, both in their use and articulation of space and in the dynamic range covered, in a comfortable manner and with neither ambiguity nor excess;
- have a neutral spectrum, without which a number of problems are likely to appear when attempting to EQ: conflicts with neighbouring resonant bands, and unpredictable behaviours (noise, irregularities in the response, etc.) within the larger frequency bands that need to be cut or boosted.
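The continuity criterion can be given a rough, purely indicative numerical check by comparing a sound’s energy across octave bands. The sketch below is only an illustration under assumptions of our own: the file name is a placeholder, and the choice of bands and the 20 dB “weak band” flag are arbitrary.

```python
# Rough spectral-continuity check: measure RMS energy per octave band and
# flag bands that fall far below the strongest one. Thresholds are illustrative.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

def band_energy_db(x, sr, lo, hi):
    """RMS level (dB) of x band-passed between lo and hi Hz."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    y = sosfilt(sos, x)
    return 20 * np.log10(np.sqrt(np.mean(y ** 2)) + 1e-12)

sr, x = wavfile.read("source_material.wav")   # placeholder file name
x = x.astype(np.float64)
if x.ndim > 1:                                # fold to mono for the measurement
    x = x.mean(axis=1)
x /= np.max(np.abs(x)) + 1e-12                # normalize to full scale

# Octave bands from 63 Hz to 8 kHz (assumes a standard 44.1/48 kHz sample rate)
centres = [63, 125, 250, 500, 1000, 2000, 4000, 8000]
levels = {c: band_energy_db(x, sr, c / np.sqrt(2), c * np.sqrt(2)) for c in centres}

strongest = max(levels.values())
for c, lvl in levels.items():
    gap = strongest - lvl
    flag = "  <-- weak band" if gap > 20 else ""   # 20 dB gap: arbitrary flag
    print(f"{c:>5} Hz: {lvl:6.1f} dB ({gap:5.1f} dB below strongest){flag}")
```

A continuous spectrum shows comparable levels across most bands; a splintered one shows a few strong bands separated by flagged gaps.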
These criteria, particularly robustness and power, seem subjective only because of the difficulty of expressing them effectively in words. Although their perception becomes automatic with experience, it is nonetheless essential to have a minimum of “guided sessions” with someone who can transmit this experience. As the reader may not have immediate access to such a resource, here is a series of examples of synthesized materials which, on the whole, meet these criteria. They are presented in their raw state, and have not been subjected to any processing outside of the software that produced them:
Compare with this series of materials generated by a pop synthesizer, which clearly do not meet the criteria:
Firstly, the coloured character of the sounds is obvious. Further, they are emaciated and precarious, and they occupy the space in a bloated manner, always on the brink of falling to pieces.
4.2 Layering
It is important for composers to realise that, despite what they might have been taught, the extent of the usable dynamic and spectral range is no longer limited. Richness and spectral complexity are not enemies, but rather advantages to exploit with finesse, focusing one’s attention as objectively as possible on euphony.
This said, there are no systematic and comprehensive rules concerning the interaction and interpenetrability of distinct layers of sound, and it is doubtful that it will ever be possible to formulate such rules as a whole. But wherever they do have an effect, they must be respected. Through attentive and ongoing perceptual “ear training”, composers can develop their own personal “lexicons” of sound combinations, which can be expanded and improved upon over time. We know, for example, that materials which are resonant, coloured or even over-produced do not interact well with other elements, nor do they stack well, unlike materials which meet the above criteria. Let’s return to the example of inadequate materials:
Attempting to retain even a minimum of clarity in a combination of such sources is a lost cause, as evidenced by the following mix of the sources we just heard:
Using materials from another source, here is another example of this difficulty. Individual materials:
Mixed materials:
From these two examples it is easy to understand the dilemma of composers who only have such source materials at their disposal: each new layer is a step backwards in transparency and a step closer to irritation. We can now judge how effectively sources which meet the criteria, those heard above, lend themselves to accumulation, with virtually no loss of global transparency. Individual materials:
Mixed materials:
Despite the fact that no processing has been applied to this mix, the difference is remarkable. The procedure for selecting source materials may seem self-evident, but it should be noted here that if an arduous operation of equalization and dynamic control is necessary to mediate the co-existence of two sonic layers, even “adequate” ones, today we are inclined to avoid using them together at all rather than struggle in vain and compromise the quality.
4.3 Processing
Even on a modest workstation there are usually several processing tools that perform the same tasks, and the user should try all the tools with equivalent functions, retaining only those that are the most transparent. This constitutes the first step in a process leading to the acquisition of more and more serious tools and to the elaboration of parameter specifications which intrude less and less on the work.
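One simple, if partial, way of comparing tools with equivalent functions is a null test: run the same file through each tool at nominally neutral settings, subtract the result from the original and compare the residuals; the quieter the residual, the more transparently the tool behaves at those settings. A minimal sketch, assuming the outputs are time-aligned, share the original’s channel layout, and that the file names are placeholders:

```python
# Schematic null test: how far does a nominally "neutral" pass through a tool
# deviate from the original file? File names are placeholders.
import numpy as np
from scipy.io import wavfile

def to_float(x):
    x = x.astype(np.float64)
    return x / (np.max(np.abs(x)) + 1e-12)        # crude gain matching

def residual_db(original, processed):
    """RMS of (processed - original) in dB relative to the original's RMS."""
    n = min(len(original), len(processed))        # assumes sample-aligned files
    diff = processed[:n] - original[:n]
    ref = np.sqrt(np.mean(original[:n] ** 2)) + 1e-12
    res = np.sqrt(np.mean(diff ** 2)) + 1e-12
    return 20 * np.log10(res / ref)

_, orig = wavfile.read("original.wav")
_, tool_a = wavfile.read("tool_a_neutral_pass.wav")
_, tool_b = wavfile.read("tool_b_neutral_pass.wav")

orig, tool_a, tool_b = map(to_float, (orig, tool_a, tool_b))
print(f"tool A residual: {residual_db(orig, tool_a):6.1f} dB")
print(f"tool B residual: {residual_db(orig, tool_b):6.1f} dB")
```

A null test says nothing about how a tool sounds when pushed, but it quickly exposes gratuitous colouration at rest.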
Because of the depth of their transformations, certain processing tool collections and some software seem to target electroacoustic composers. We should be careful here, because these tools often have catastrophic effects on quality, causing intense and systematic colouration, thinness of sound, noise, clicks, distortion and a ludicrously large dynamic range. Experience shows that these faults are very difficult to correct in the mastering stage, even when the materials are delivered in stems. Radical transformation of source materials is at the base of electroacoustic composition, but here it is accompanied by a costly compromise: the auditory comfort of the audience is sacrificed. This situation is completely unacceptable, not only for the future of the genre, but also for the safety of the listeners, particularly in concert or when listening on headphones. Other avenues must be explored to transform the sounds in a manner that is less destructive. Without a doubt, more meticulous working methods are necessary to develop these avenues, as we need to concern ourselves not only with the æsthetic results, but also with the degradation of audio quality throughout the process.
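Without pretending to measure colouration or æsthetic damage, a few elementary figures (peak level, RMS level, crest factor, number of samples at or near full scale) already make it possible to notice when a stage of the chain has quietly degraded the signal. A minimal sketch, with purely hypothetical stage file names:

```python
# Elementary per-stage diagnostics: peak, RMS, crest factor and near-clipped
# sample count. Large jumps between stages point to destructive processing.
import numpy as np
from scipy.io import wavfile

def diagnostics(path):
    _, x = wavfile.read(path)
    if x.dtype.kind == "i":                       # integer PCM -> [-1.0, 1.0]
        x = x.astype(np.float64) / np.iinfo(x.dtype).max
    else:
        x = x.astype(np.float64)
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2)) + 1e-12
    crest_db = 20 * np.log10((peak + 1e-12) / rms)
    near_clipped = int(np.sum(np.abs(x) >= 0.999))
    return peak, 20 * np.log10(rms), crest_db, near_clipped

# Hypothetical snapshots taken after each stage of a processing chain
for stage in ["01_source.wav", "02_granulated.wav", "03_filtered.wav"]:
    peak, rms_db, crest_db, clipped = diagnostics(stage)
    print(f"{stage}: peak {peak:.3f}, RMS {rms_db:6.1f} dB, "
          f"crest {crest_db:5.1f} dB, near-clipped samples {clipped}")
```

Keeping such snapshots after every major operation costs little and makes it obvious which step introduced the clicks, the clipping or the absurd dynamic range.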
Finally, it is possible to determine precisely the line between compositional processing and optimization manipulations, which should encourage composers not only to carry out the compositional processing with care and restraint, but also to refrain from doing any kind of optimization themselves, leaving it to a professional mastering or mixing engineer.
Throughout the preceding sections, we have sought to define the evolution, standpoints and perspectives for improvement in electroacoustics and commercial production with regard to audio quality. These reflections were necessarily general and can now be enhanced and elaborated by examining, in closer detail, the efforts being made at present to improve the quality of audio in electroacoustic production. In the current phase, these efforts are concentrated on mastering.
5. Current Audio Problems
For all the reasons listed above, electroacoustic works received by the mastering engineer are characterized by a certain number of problems that are as typical as they are recurrent, and which can easily be classified into a limited number of categories. Rather than describe the categories themselves, it seemed more effective to illustrate them using examples from two recently mastered discs. (2) It is important to note that these discs were chosen as examples only because both composers were willing to allow their work to be used for this exercise, and not because they contain a greater presence of audio problems than the general run of electroacoustic discs mastered to date.
Each example is first presented in its original form, as mixed by the composer, followed by the mastered version.
5.1 Aggressive Highs and Hi-mids
This is the most commonly-found and most characteristic problem in electroacoustic production today, and examples of this category are abundant:
The appreciation of this excerpt, which contrasts a serene atmosphere with an underlying tension, is marred by bell sounds that are too piercing and by a constant whistling at an almost painful level. The corrections applied aim to offer some relief to the ear by giving a more delicate quality to the bells and making the whistling less persistent.
The following example mainly concerns the hi-mid range:
The harsh quality of some of the elements is still present, but the texture of the whole is now more compact and dynamic, while the nasal-sounding timbral component is no longer dominant:
5.2 Imprecision of the Basses / Integration of the Ultra-basses
Here the problem is the lack of continuity between the ultra-bass frequencies and the rest of the content. The consequence of this discontinuity is that the strong presence of the basses fails to ward off the sensation of thinness which otherwise characterizes the passage.
The ultra-basses have been “replaced” by a higher frequency, which is more successful in solidifying the piano and vocal sounds. The action is more lively and less disembodied.
Another example follows, concerning a different problematic aspect of bass frequencies. Although they momentarily border on excess, the bass frequencies here are lacking in fullness:
The solidity of the bass frequencies is now evident, while the sensation of excess has been eliminated. The passage is now more coherent and dramatically effective:
5.3 Resonances throughout the Spectrum
This example has several “bubbles” that surface at varying frequencies and with a kind of energy that causes the ear to retract into a protective state. The weak level of the background layer prompts even more focus on the individual explosions.
The problem of resonances used to an expressive end is not easy to solve, particularly with the type of short resonances which characterize this example. Equalization would be incredibly time-consuming, because it would be necessary to find the central frequency of each event, one at a time. Compression would be a quick solution, but it would need to be radical, with a very low threshold and a very high ratio, to prove effective against the great variety of resonant events present in this example. This would narrow the excerpt, reducing its dynamic range to a thin band. In both cases, the effects of tension and contrast which animate the passage would disappear, along with its interest. The solution here was to preserve the essence of the resonances while giving them support in the bass frequencies. The reinforcement of the underlying texture has removed a little of the subjective energy of each explosion, so that the ear no longer reacts in a defensive manner, and the passage functions without impediment.
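To make the “thin band” remark concrete: for an idealized static compressor curve, the output level above the threshold is threshold + (input - threshold) / ratio. The figures below (a -50 dB threshold and a 10:1 ratio) are invented for illustration only and are not taken from the mastering work described here.

```python
# Static compressor curve: with a very low threshold and a very high ratio,
# a wide input range collapses into a narrow output range.
def compressed_level(input_db, threshold_db=-50.0, ratio=10.0):
    """Output level (dB) of an idealized compressor, ignoring attack/release."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

for level in (-50, -40, -30, -20, -10):
    print(f"in {level:4d} dB -> out {compressed_level(level):6.1f} dB")
# A 40 dB input span above the threshold comes out spanning only 4 dB
# (from -50.0 to -46.0 dB), before any make-up gain.
```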
The following excerpt shows an even greater diversity in the presence of resonances, a good example of the current mindset, which associates resonance with the middle frequency range or the upper part of the low register:
Once again, the corrections aimed to give more body to the underlying texture, in a manner which takes some of the edge off the most aggressive resonances. There are more differences between the two versions than might first appear:
5.4 Lack of or Ambiguous Spatial Definition
Here the lack of clarity and definition of the layers — for example between the walking and the percussive elements — is the direct cause of the ambiguity of the spatialization. Mid-frequency concentration is quite clearly the cause of the problem.
Although the general direction of the steps remains ambiguous, the space in general has been clarified and the elements are now deployed with precision.
Aside from cumbersome colouration of the high frequencies, this lengthy excerpt does not present many obvious faults:
… except that it isn’t really convincing. It is lacking in the sort of immersive power that it seems to suggest and therefore does not develop the desired tension. Once again the problem is spatial distribution. Here is the result of the mastering:
The image is larger, more encompassing and the events succeed each other with the command needed to convincingly build up suspense.
5.5 Thinness, Dryness and Granular Character
When describing an entire family of problematic audio conditions, these three terms are often used synonymously. They all appear in the following example, but can be identified separately. Thinness is found in the timbre of the principal instrument. Dryness manifests itself in what seems to be a systematic cleanup of each element’s final resonance, and is exposed rather than compensated for by the addition of short reverberations. A very audible granular character is evident in the final crescendo in the high frequencies:
The processing aimed to recover some roundness and naturalness for the passage. The reduction of the high frequencies corrected as much of the granularity as was possible, and even if nothing could be done directly to make the resonances seem more natural, the dryness no longer draws attention:
5.6 Flimsiness and Imprecision
If the snare drum shot is an example of a flimsy attack, imprecision is the dominating characteristic of this excerpt. Excessive resonance in the mid-highs and heavily-coloured reverberation are probably the cause:
The snare drum now has all the impact it normally commands, but the choir which emerges from it is also more realistic. The subsequent elements are better integrated into the overall gesture, and the corporal fluidity of the passage is immediately palpable, whereas previously it could only be perceived following an abstract analysis.
In the following example, the imprecision is a direct result of an excessive dynamic range. A rather violent start in the highs gives way to a passage that sits on average around -40 dB. To maintain interest at such levels, not only should the resolution of the sounds be exceptional, but the sounds themselves should also have significant timbral richness. This is not the case here:
Here is what mastering does for this excerpt:
The highs of the two initial events were controlled, and the remainder of the passage has been raised to sit around a more reasonable average of -25 dB. This previously inconclusive passage now appears as a series of transformations: its æsthetic intention is unchanged, but it is delivered with much more efficiency.
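For reference, moving an average level from about -40 dB to about -25 dB amounts to roughly 15 dB of gain, i.e. a linear amplitude factor of about 5.6; the short sketch below simply spells out that arithmetic.

```python
# Gain needed to move a passage from one average level to another.
def gain_to_target(current_db, target_db):
    """Gain in dB and as a linear amplitude factor."""
    gain_db = target_db - current_db
    return gain_db, 10 ** (gain_db / 20)

gain_db, factor = gain_to_target(-40.0, -25.0)    # illustrative figures only
print(f"gain: {gain_db:+.1f} dB (x{factor:.2f} in amplitude)")
# -> gain: +15.0 dB (x5.62 in amplitude)
```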
Another example, from the same piece:
Softness characterizes the series of percussive sounds rising in pitch, and the imprecision is mostly evident in the low frequency sounds appearing three seconds after the start of the excerpt. The strings are also an important factor in the lack of definition because of their conflictual interaction with the main percussive crescendo.
The percussive sounds have a much sharper attack, allowing them to remain in the forefront throughout the entire crescendo, despite the intervention of the strings. The basses are more dense and precise.
See following section:
6. Interactions with Composers