Contemporary Problems, Interventions and Results
1. Production Techniques: An update
Mastering in electroacoustics is today a reality, thanks to, among other things, the concrete endorsement of one of the most important labels specialised in the genre, empreintes DIGITALes. It goes without saying that this would not have been possible without the active collaboration of the composers represented by the label, themselves influenced by the larger community of electroacousticians. This is another way of saying that the progressive professionalisation of electroacoustic production methods — of which mastering represents the first historic step — indeed responds to a necessity. Throughout this series of articles we will not only identify the practical details of this new approach and the concrete obstacles it encounters, but also reflect on the possibilities of further development available today.
As the process which permitted commercial productions to surpass electroacoustic production in terms of sound quality is still fresh in our minds, it may suffice to refer back to it for a moment, in order to consider the work remaining to be done.
- Exportability: “corrective” mastering had already established itself as the sole short-term solution to this problem by the end of the 1960s.
- Aural Fatigue: this had not become a problem — in small commercial productions — before the end of the 1980s, with the appearance of home studios as the exclusive workspace of the composer, who assumed the roles of arranger, sound engineer and producer. This problematic situation — one operator in a single production space — was nevertheless restricted to demo studios and local production units, and never caused the damage it did in electroacoustics, where the home studio began to be regarded as a self-sufficient production venue. By contrast, it was during this same period that larger commercial productions witnessed the appearance of studios and engineers specialised in mixing, which effectively minimised the problem.
Even if it lacks the means to invest in similar resources, in 2007 it should be possible for electroacoustic production, when mastered in “stems” (1), to at least reach an intermediary stage in quality, notably through the use of contemporary mastering tools. But there is no reason for progress to stop there:
- Starting in the mid-1990s, in order to counterbalance the destructive effects of corrective mastering, mixing studios and then recording studios decided to install systems of reference monitoring; mastering can now evolve into a finer type of intervention, sometimes referred to as “sweetening”, a term borrowed from the “audio for image” milieu;
- the next refining step is mastering in stems, mentioned above. Pop productions began to use this approach around 2000; in electroacoustics, albums to be mastered are now delivered in this format in 40% of cases;
- more recently, entire mixing sessions are exported to commercial mastering studios.
The propositions made in the accompanying article on Mixtering will make it possible for electroacoustics to regain some of the ground it has lost in terms of sound quality. It may seem cynical to mention, but the present decline in pop CD sales may forewarn of a stagnation in investment in the development and acquisition of cutting-edge production equipment within the commercial sector: a moment’s respite in the acquisition race which the electroacoustic milieu could turn to its advantage in order to regain lost ground. This final surge will only be possible with improvements to the quality of source materials and a professionalisation of the listening conditions in production studios.
2. Monitoring and Sources
Aside from the costs in acquiring it, which may be considerable, the most disconcerting characteristic of reference monitoring is its honesty, to which it is not necessarily easy to accustom oneself. If the best-produced discs are found to have an exceptional sound on such a system, both more detailed and more pleasing than on even supposedly high-end domestic systems, average or deficient productions are immediately displeasing. Using such a system for leisure listening seriously calls into question the make-up of one’s music collection. In particular, most electroacoustic productions suffer greatly on such systems, for all the reasons given throughout this series of articles. This explains, to an extent, part of the skepticism found amongst composers with regard to this kind of system — a tool largely accepted in other milieux; however, the explanation is insufficient and requires development.
Even if we succeeded in restricting the use of this type of system solely for production work (hardly conceivable from a practical standpoint), we would still have difficulty managing the situation due to a series of problems related to the source. These problems can be summarized via the following dilemma:
- the sources to which electroacoustic production has access are often structurally deficient from the point of view of audio quality;
- heard in a flat monitoring situation, these sources reveal themselves to be unsatisfactory, or even unusable, and they inevitably stifle compositional inspiration;
- better source materials cannot be produced without access to superior technical means;
- reference monitoring is one of these means, just as essential as the others.
The simple solution to this dilemma is obviously to remain content with biased monitoring systems which give an unrealistically favourable impression, with the result that the electroacoustic milieu remains, definitively this time, “behind the times” in the realm of audio developments. We would much rather face the problems, describing them here according to the principal categories of source materials.
2.1 Microphone Sources
Microphone recording is an ever-evolving technique and today requires increasingly specialised knowledge. Its success is however dependent on maintaining tight control over a number of physical parameters, and this control demands increasingly substantial material means. The parameters are as follows:
- Acoustical properties of the locale: locales which are sufficiently silent and optimised for recording are expensive to construct. The expedient of close-miking seriously limits the comparative advantages of standard miking, which offers a sensation of space, organic complexity, euphony. Commercial productions no longer suffer from this limitation as they are typically able to offer an acceptable balance of silence, ambiance and impact which corresponds to contemporary norms in recording techniques.
- Qualities of the recorded object: while acoustic instruments benefit from centuries of refinement, and the pop industry’s gear has made considerable progress in recent years, these advances have not only raised the cost of acquiring these instruments; the stylistic specialisation they engender has also made the instruments more delicate to use in the typical context of electroacoustics, where “function” often falls victim to individual appropriation. The alternative, making use of non-“musical” objects (i.e. not instruments), demands of the user an extraordinary mastery of microphone recording if he or she wishes to compete with the spectacular results presently obtained with combinations of microphones / contact mics / pre-amps, configurations increasingly associated with specific instruments.
- Quality of the microphone / pre-amp coupling: the fact that the supply of equipment dedicated to home studios has mushroomed in recent years should not overshadow the fact that in most cases these are low-end versions of a new generation of high-performance equipment situated in an entirely different price range. The difference in results between these two classes is obviously significant.
- Monitoring fidelity: this remains the principal tool for deciding and controlling stages as critical as microphone placement, choice of directivity, pre-amplification levels, direction of the instrumentalists, the choice of which takes to keep, etc.
2.2 Synthesized Sources
Right away it should be noted that no synthesis software can escape the present conflict between the precision of the waveforms it is to calculate and the efficiency of the available computational resources.
For example, a programme as ubiquitous as Max/MSP, already in its neutral state a huge consumer of resources, cannot succeed in synthesizing sounds of an even acceptable quality without severely limiting the number of simultaneous “voices” produced. This contradicts its more or less universal use as a principal tool for synthesis, intended to be highly interactive and used in real-time.
Synthesis instruments destined for use in the pop milieu suffer from the same fundamental limitations, with the small difference that they come with a number of hard-wired tools which serve to mask their qualitative misery. Further, even if we wished to do so, their rigid configuration prevents their operation in a high-quality mode using few voices. (Native Instruments’ “Massive”, with an “Ultra Quality” mode, recently appeared on the market.)
That said, the problems related to the compromises imposed on waveform calculation should not be downplayed: not only do the resulting distortions irritate the ears, but the potential for irritation is cumulative, which increases the difficulty for electroacoustic composers of building ambiances with a “symphonic complexity”.
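To make the cost of these compromises concrete, here is a minimal Python sketch (illustrative, not drawn from the article): it compares a naive sawtooth oscillator with a band-limited additive version. Every harmonic the naive version implicitly generates above the Nyquist frequency folds back into the audible range as inharmonic aliasing, precisely the kind of cumulative distortion described above. The sample rate, fundamental and signal length are arbitrary choices for the demonstration.

```python
import math

SR = 44100          # sample rate, Hz
F0 = 3001.0         # fundamental, Hz (chosen so no harmonic lands exactly on Nyquist)
N = 8192            # number of samples analysed

def naive_saw(n):
    """Trivial sawtooth: samples the ideal waveform with no band-limiting,
    so every harmonic above SR/2 folds back as inharmonic aliasing."""
    return 2.0 * ((n * F0 / SR) % 1.0) - 1.0

def bandlimited_saw(n):
    """Additive sawtooth: only the harmonics below Nyquist are summed."""
    k_max = int((SR / 2.0) / F0)        # 7 harmonics fit below 22.05 kHz here
    return -(2.0 / math.pi) * sum(
        math.sin(2.0 * math.pi * k * F0 * n / SR) / k
        for k in range(1, k_max + 1))

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

naive = [naive_saw(n) for n in range(N)]
clean = [bandlimited_saw(n) for n in range(N)]
residual = [a - b for a, b in zip(naive, clean)]    # the aliased components

ratio = rms(residual) / rms(naive)
print(f"aliased content sits only {-20 * math.log10(ratio):.1f} dB below the signal")
```

The aliased residue here is only about 10 dB below the wanted signal: cheaply computed waveforms carry audible, inharmonic energy that more expensive band-limited synthesis avoids, which is exactly the precision-versus-resources trade-off at issue.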
Because of the high degree of control of raw sound materials it offers, synthesis is even more sensitive to monitoring deficiencies than microphone recording. A biased listening situation can induce fundamental errors at all stages of synthesis: harmonic content, envelope construction, the value of any parameter, etc.
2.3 Processed Sources
Here we refer to recorded or synthesized sources which have been so greatly transformed that their intrinsic audible characteristics are predominantly a derivative of the treatments applied in the processing:
- once again we are confronted with the same problem of computational power as with sound synthesis: no real quality of processing is possible without rapidly depleting the resources of the computer processor;
- the financial burden — equipment, software, etc. — of preserving the quality of the processing needs to be considered, as this is generally done using external DSP processors which offer processing power independent of that of the host computer;
- problems, cited above, which a deficient monitoring condition causes to recorded or synthesized sources, are exacerbated here in the second generation of intervention, where the potential for error is multiplied.
3. Tradition and Attitudes
The difficulties discussed here are only partially insurmountable: experience has shown that a severely selective choice from amongst available materials, made according to discerning qualitative criteria, can lead to high-quality electroacoustic productions. The keyword here is “discerning”, and in order to fully comprehend its importance we shall now examine a number of production prejudices which today are still rather persistent before formulating (in section 4) some practical recommendations.
3.1 Denial and Aesthetic Dogmas
By teaching the priority of abstract significance over æsthetic effectiveness, by assigning rigorous æsthetic equivalence to elements that have a greatly varied level of euphony, the bastions of post-modernism have in reality defined a restricted sound signature, which is dogmatic almost to the point of parody, based once and for all on the technical possibilities available to the early pioneers of the electroacoustic genre.
This is not the place to discuss the fundamental æsthetic validity of these arguments; we will concentrate on the facts and on the results. Audio quality is neither a given nor a concept that can be transferred from context to context at will. The conditions of its existence have evolved over the past 50 years thanks to a growing corpus of knowledge and complex techniques which follow a number of increasingly precise rules and conditions. The fact that these rules and conditions are not always clearly stated — if stated at all — does not invalidate their authority. This will offer us the occasion to formulate them, to refine them and expand their scope, at the same time adapting them to electroacoustics without the fear of — or more precisely, with the intention of avoiding — loss of identity.
For an art which considers itself “avant-garde” it may seem curious to cling to the fringes, enduring elementary conditions of production out of fidelity to abstract concepts, knowing that this in effect abandons any hope of presenting works with a sound quality that reflects contemporary expectations. It would seem obvious that the value of a musical proposition depends more on the choice of musical materials than on technical limits affecting their transparency. Even musical genres as “established” as Baroque music were able to integrate — without provoking lamentations of æsthetic treason — technological advances at many levels: consequently, even if a Corelli LP produced in 1973 has a radically different sound signature than the same piece on a multi-channel SACD in 2006, no one in the “Baroque community” would accuse the SACD version of being less “authentic” than the 1973 product.
3.2 Techniques Inherited from Pop
The fact that the pop industry depended for so many years on distribution media as limited as the AM band, the 45 rpm record or the audio cassette had profound repercussions on recording and mixing techniques in the era when these media were predominant. The strategies were brutal, and the results more displeasing than anything else, but their elementary nature, built on a strict minimum of qualitative requirements, made them easy to understand and teach.
Although the general conditions of music distribution have evolved significantly from this catastrophic state, many electroacoustic works sound as if their authors followed the recipes of a bygone era, if not to the letter, at least in spirit. How can this phenomenon be explained? The two arguments usually invoked to explain their curious survival are hardly tenable: distribution in mp3 format gains nothing from these techniques, and the “level wars” are rejected by the majority of composers, with good reason.
In order to understand, we must again unfortunately remind ourselves that the electroacoustic community has remained for too long on the fringes of the evolution of production techniques and could therefore do no better than to concern itself with knowledge related directly to the level of equipment it had at its disposal. These recipes, which initiates are still encouraged to mix-and-match at will, are all related more or less to the following categories:
- Generalized compression: applied track by track, to subgroups or only to the master, massive compression always forces the entire gamut of audio material to co-exist within the few uppermost decibels (normally reserved for headroom) in which the operator has decided to confine it, whether for reasons of subjective volume (which seems contradictory), impact or clarity. A sensation of permanent irritation and of assault on the senses is always the consequence of this strategy;
- Operations to clean and clear tracks, sound layers and other elements judged to be unusable or redundant. The idea is to retain intelligibility and clarity by “keeping to the essentials.” Forget “symphonic textures”: here the famous “less is more” credo serves as the singular maxim and universal explanation. Consequently in electroacoustics, insofar as the sources suffer from one or another of the deficiencies described above, we are in fact guaranteed to hear these deficiencies “more” rather than “less”…
- Spectral confinement is an approach which restricts the presence of each sound element to a limited and separate region of the entire spectrum: for example, layer A will only contain the highs, layer B will contain nothing which sits outside of the region of 900–3000 Hz, layer C is low-pass filtered at 850 Hz, etc. As the aural and musical reality is in fact somewhat more complex than this technique would suggest, there is always a limit to the number of elements one can realistically force into neighbouring and complementary spectral regions, to such an extent that following this procedure to the letter brings about the same drastic changes and the same caricatural simplification of the work as in the preceding category, now with an added sensation of artificiality and lack of cohesion;
- Concentration in the hi-mid region, which is considered to contain much of the essential “musically useful” information. Because of the superior sensitivity of the ear in this region, favouring it at the expense of others (the lows in particular) is believed to allow for much greater clarity in a context of heightened subjective volume. To obtain this result, clearly it is necessary to apply equalizations that are as severe as they are systematic on most of the tracks. This provokes significant and widespread phase problems, along with losses of impact and transparency, costs which have long been considered unacceptable in the commercial music milieu. It is important to note that in some cases the concentration in the hi-mids is not even the result of a deliberate strategy, but rather of an attempt to obtain a bit of clarity in a monitoring situation that lacks precision and power, is coloured, etc.
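The confinement to the “few uppermost decibels” described in the first category can be made concrete with a toy compressor. This is a hypothetical sketch, not a production design: a peak follower with instantaneous attack, a hard knee and an aggressive ratio, applied to two sine bursts roughly 34 dB apart; the threshold, ratio and release values are illustrative assumptions.

```python
import math

def compress(signal, threshold_db=-30.0, ratio=10.0, release=0.999):
    """Toy downward compressor: instantaneous attack, slow exponential
    release, hard knee at `threshold_db`, gain reduction of (1 - 1/ratio)
    dB per dB of overshoot. For illustration only."""
    env, out = 0.0, []
    for x in signal:
        env = max(abs(x), env * release)              # peak envelope follower
        level_db = 20.0 * math.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gain_db = -over * (1.0 - 1.0 / ratio) if over > 0.0 else 0.0
        out.append(x * 10.0 ** (gain_db / 20.0))
    return out

SR = 44100
quiet = [0.02 * math.sin(2 * math.pi * 440 * n / SR) for n in range(SR // 10)]
loud = [1.00 * math.sin(2 * math.pi * 440 * n / SR) for n in range(SR // 10)]

peak_q = max(abs(x) for x in compress(quiet))
peak_l = max(abs(x) for x in compress(loud))
in_range = 20 * math.log10(1.00 / 0.02)      # ~34 dB between the two sources
out_range = 20 * math.log10(peak_l / peak_q) # squeezed to a handful of dB
print(f"dynamic range: {in_range:.0f} dB in, {out_range:.1f} dB out")
```

A 34 dB spread between sources comes out compressed to around 7 dB: everything is forced to co-exist in the top few decibels, which is precisely the mechanism behind the “permanent irritation” described above.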
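The spectral confinement scheme in the third category (layer A highs only, layer B confined to roughly 900–3000 Hz, layer C low-passed at 850 Hz) can be sketched with standard second-order filters. The coefficient formulas below follow the widely used RBJ “Audio EQ Cookbook”; the sample rate and Q values are illustrative assumptions, not values from the article.

```python
import math
import cmath

def biquad(kind, f0, fs=44100.0, q=0.707):
    """RBJ cookbook biquad coefficients, normalized so a0 = 1."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    c = math.cos(w0)
    if kind == "lowpass":
        b = [(1 - c) / 2, 1 - c, (1 - c) / 2]
    elif kind == "highpass":
        b = [(1 + c) / 2, -(1 + c), (1 + c) / 2]
    elif kind == "bandpass":                  # constant 0 dB peak gain
        b = [alpha, 0.0, -alpha]
    else:
        raise ValueError(kind)
    a0 = 1 + alpha
    return [bi / a0 for bi in b], [1.0, -2 * c / a0, (1 - alpha) / a0]

def process(x, coeffs):
    """Direct-form I filtering of a list of samples."""
    (b0, b1, b2), (_, a1, a2) = coeffs
    x1 = x2 = y1 = y2 = 0.0
    y = []
    for xn in x:
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1, y2, y1 = x1, xn, y1, yn
        y.append(yn)
    return y

def gain_at(coeffs, f, fs=44100.0):
    """Magnitude of H(z) evaluated on the unit circle at frequency f."""
    (b0, b1, b2), (a0, a1, a2) = coeffs
    w = cmath.exp(-2j * math.pi * f / fs)     # w stands for z^-1
    return abs((b0 + b1 * w + b2 * w * w) / (a0 + a1 * w + a2 * w * w))

# The three layers of the confinement scheme described above:
layer_a = biquad("highpass", 3000.0)                      # highs only
layer_b = biquad("bandpass", math.sqrt(900.0 * 3000.0))   # ~900-3000 Hz band
layer_c = biquad("lowpass", 850.0)                        # low-passed at 850 Hz

lows = process([1.0] + [0.0] * 511, layer_c)              # impulse response of C
```

Second-order slopes like these roll off at only 12 dB per octave, so even a literal implementation leaks energy between the layers; achieving real separation requires much steeper filtering, which is part of why the procedure ends up sounding artificial and incoherent.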
See following section:
4. The Contemporary Audio Reality