

The CyberWhistle and O-Bow

Minimalist controllers inspired by traditional instruments

Trends in new music controller design have often emphasized the weird and wonderful, with fantastic arrays of sensors arranged in unusual patterns. Two controllers are presented here: the CyberWhistle (Menzies 1998) and a new instrument, the O-Bow. Both follow a more conservative approach, taking traditional instrumental forms and reinterpreting key elements in a practical, modern controller design. Such a strategy has the benefit of being supported by established and proven instrumental techniques, which can be applied to a wide range of synthesis engines, from authentic-sounding to experimental. Traditional controller morphologies have a strong “fingerprint” that carries many musical associations from traditional music, even when the controller is applied in unusual ways. This can potentially help to bridge the chasm between old and new music. The MIDI keyboard has been very successful in this way; other controllers much less so. It is argued here that this is partly due to a lack of attention to key points of detail, as well as to impractical design elements. The designs described aim to address some of these key points. The overall approach is minimalist, unfussy and practical.

The “cyber” prefix is used to emphasize the idea of a controller as a naked technical transformation of something that was once entirely organic, rather than an extension of an existing instrument, as with hyperinstruments. Cyber is an attractive concept, and it frees the musician from the constraints of existing structure. However, it also presents the challenge of creating a unified design that can stand comparison with traditional instruments.

The CyberWhistle

Figure 1. The CyberWhistle.

Many wind controllers have been designed, both commercial and non-commercial. A characteristic nearly all of them share is that the position of fingers over tone holes is reported as a simple binary on/off state. In several cases force-sensing resistors are used to measure the finger contact pressure once the finger is depressed. However, this arrangement cannot effectively capture the gesture of a finger being released or applied, which continues after the contact pressure has fallen almost to zero. In many wind instruments the acoustics and sound of the instrument change continuously over this gesture, which gives a characteristic detail to the phrases performed. This detail cannot be emulated with binary sensors, and pressure sensors do not help because they cannot capture the relevant part of the gesture. The transition sounds generated by a finger gesture were considered very important for the overall character of many instruments, especially those with open tone holes, and this prompted the development of an interface that could capture the gesture.

Finger Sensing

A variety of techniques were tried, using light and electric field sensing. The method adopted for the first two prototypes was to sense the ambient light occluded by each finger using a light-dependent resistor (LDR), then compensate for overall lighting changes with an unobscured ambient sensor and a specially designed balance circuit. This is surprisingly robust, and the circuit is very compact. The shading of the sensor is well matched to the acoustic variation in an acoustic instrument, as the greatest change occurs just as the finger breaks full contact. LDRs had often been used in music controllers, for example by Favilla (1994), but not in this particular way. The main drawback is that the method cannot operate in very low light conditions. Electric-field sensing (Paradiso 1997) can be used with the finger acting as a shunt to “steal” electric field lines from the active electrode. Although this works in all conditions, it is trickier to set up so that the signal changes quickly as the finger breaks contact, and the circuitry is more complex and difficult to make very compact.
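The balance idea can be sketched in software. The following fragment is only an illustrative digital analogue of the circuit described above; the ratio form, the clamping and the names are assumptions, not the actual analogue design:

```python
def occlusion(finger_ldr, ambient_ldr, eps=1e-6):
    """Estimate how far a finger covers a tone hole: 0.0 (open) to 1.0 (closed).

    finger_ldr  -- light level at the tone-hole sensor
    ambient_ldr -- light level at the unobscured reference sensor

    Dividing by the ambient reading cancels overall lighting changes,
    mimicking the role of the balance circuit.
    """
    ratio = finger_ldr / max(ambient_ldr, eps)
    return min(1.0, max(0.0, 1.0 - ratio))
```

Because the output depends on the ratio of the two readings, dimming the room light affects both sensors equally and leaves the occlusion estimate unchanged, which is the point of the balance arrangement.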

Figure 2. Close-up of two finger light sensors.

Breath Sensing

1. For example, Ocarina™ is an application that uses the built-in mic of the iPhone to pick up breath turbulence noise and converts it to an amplitude level for an electronic ocarina instrument.

In the first prototype a microphone picks up the turbulence noise in the mouthpiece. The same approach has been seen recently in musical iPhone applications. (1) A specially designed active filter converts the noise into a breath control signal. It works by responding very quickly to the initial onset of turbulence, then quickly increasing the degree of smoothing so that the inherent randomness of the turbulence is masked without adding excessive latency. The main advantage of using audio is that the response to onset, which musically is the most time-critical part, is very rapid. It is also possible to measure audio signals from humming, which can be used for growl synthesis. The disadvantages are that it may be inconvenient or impossible to send an audio signal, and that standard mic capsules cannot withstand high moisture levels for long periods. The second prototype used a pressure sensor. Many of the pressure sensors available can detect sound up to a few kHz, and so can measure humming. It is not very practical to send this over MIDI, however: MIDI has a maximum theoretical data rate of about 3 kB/s, in practice about half of this, which does not leave much bandwidth for audio alongside control data. The sampling times are also non-uniform, introducing uncertainty that leads to additional noise when the signal is resampled uniformly.
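The adaptive smoothing described here, fast on onset and then increasingly heavy, can be sketched per sample. All coefficients below are illustrative assumptions, not the values of the actual active filter:

```python
class BreathFollower:
    """Sketch of an envelope follower that reacts quickly to a breath
    onset, then ramps up its smoothing to mask turbulence randomness."""

    def __init__(self, fast=0.5, slow=0.02, ramp=0.001):
        self.fast, self.slow, self.ramp = fast, slow, ramp
        self.coef = fast   # start responsive so onsets are not delayed
        self.env = 0.0

    def step(self, sample):
        level = abs(sample)            # rectified turbulence noise
        if level > 2.0 * self.env:     # sudden rise: snap back to fast tracking
            self.coef = self.fast
        else:                          # otherwise settle into heavier smoothing
            self.coef = max(self.slow, self.coef - self.ramp)
        self.env += self.coef * (level - self.env)
        return self.env
```

The first sample of a burst moves the envelope halfway to the target immediately, while a sustained note is tracked with progressively more smoothing, which is the trade-off the text describes.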

Electronics and Form

Since the focus is on finger gesture sensing, it was reasonable to base the form on an instrument that emphasized the importance of finger gesture. Of the instruments meeting this criterion, the tin whistle was selected because of its durability and symbolic simplicity. The tone holes of the B-flat whistle are a suitable size for the sensors, and the bore is just large enough to hold all the electronics required to generate a MIDI output. The first prototype used an external electronics box with a multi-core screened cable connecting to the bore. Apart from being less elegant, this suffered from electrical interference; it is best to keep sensitive analogue circuits as localized as possible. The second prototype has all the electronics mounted on a single rectangular strip that sits on a wooden former running the length of the bore. To encode all the analogue signals into digital serial data, a PIC microcontroller was specially programmed; at the time (and this may still be the case) there were no devices available that were compact enough to fit into the bore. The output is MIDI, which turns out to have more than adequate bit rate, provided that the hardware and later signal processing are designed optimally. Complaints about the MIDI bit rate are often unwarranted, made when not enough steps have been taken to use the bandwidth efficiently, or to post-process the signal.
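One simple way to use the MIDI bandwidth efficiently, as argued above, is to transmit a control-change message only when the quantized 7-bit value actually changes. A minimal sketch, where the controller number and the 0–1 input scaling are illustrative choices rather than the CyberWhistle's actual mapping:

```python
def cc_messages(samples, controller=2, channel=0):
    """Yield 3-byte MIDI control-change messages (status, controller, value)
    only when the 7-bit value changes, so a continuously sampled sensor does
    not flood the ~3 kB/s MIDI link with redundant messages.

    samples    -- sensor readings normalized to 0.0..1.0
    controller -- CC number (2 = breath controller, an illustrative choice)
    """
    last = None
    for x in samples:
        value = min(127, max(0, int(x * 127)))
        if value != last:
            yield (0xB0 | channel, controller, value)
            last = value
```

A slowly moving sensor that dwells near one value then costs almost no bandwidth, leaving headroom for the other controller streams.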

Figure 3. The CyberWhistle disassembled.
Figure 4. Circuit showing PIC controller.
Figure 5. Circuit showing pressure sensor.
Figure 6. Side view of the CyberWhistle.


Part of the original intention was to use the CyberWhistle with physically modelled sound synthesis of wind instruments (Smith 1986; Karjalainen et al. 1991). The two are made for each other, in the sense that physically modelled sound relies on physical input parameters that are sometimes subtle yet important; the state of closure of a tone hole is one such parameter. Instruments were created by combining a variety of wind mouthpiece models with waveguides divided by two-port scattering junctions that model the tone holes. The aim was to achieve the basic characteristics of a wind instrument without worrying excessively about details, and then to creatively explore the many variations that become possible in the signal processing domain. Although productive, this empirical approach may alarm the traditional analytical acoustician, and the mathematics may scare the traditional sound designer. The graduated finger control does indeed bring these models to life; replacing it with switched envelopes makes the difference clear. It is possibly the lack of effective controllers that has been the main reason for the limited impact physical modelling synthesis has had in music performance, despite its early promise. It is also interesting to apply the finger control to abstract synthesis structures. A simple example, originally used for testing purposes, is to use each finger to control the gain of an independent oscillator. The control morphology makes this simple process surprisingly compelling.
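The one-oscillator-per-finger test instrument lends itself to a compact sketch. This fragment is a minimal illustration; the frequencies, block size and normalization are arbitrary choices, not the original patch:

```python
import math

def finger_oscillator_bank(finger_gains, freqs, sr=44100, n=64):
    """Render n samples of a test instrument in which each finger's
    continuous coverage value (0.0..1.0) sets the gain of its own sine
    oscillator. A half-lifted finger produces a half-amplitude partial,
    which is the graduated behaviour binary sensors cannot provide.
    """
    out = []
    for i in range(n):
        t = i / sr
        s = sum(g * math.sin(2 * math.pi * f * t)
                for g, f in zip(finger_gains, freqs))
        out.append(s / max(1, len(freqs)))   # crude normalization
    return out
```

Feeding the bank with the continuous occlusion values from the light sensors, rather than on/off states, is what makes this trivial synthesis structure respond expressively.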


Audio 1 (00:25). Reed type physically modelled synthesis, with rough lower register. Wet.
Audio 2 (00:14). Reed type physically modelled synthesis, bright, unstable, with easily excited upper harmonics. Wet.
Audio 3 (00:14). Reed type physically modelled synthesis, bright, stable, uniform tone. Dry.
Audio 4 (00:25). Flute type physically modelled synthesis, very low, with rich harmonics. Wet.
Audio 5 (00:20). Test instrument. One oscillator per finger.


2. Available on the author’s website.

There have been numerous requests from musicians wishing to own a CyberWhistle, including Pedro Eustache, the LA-based player who provided the solo work on the soundtrack to Munich, directed by Steven Spielberg. Some new, updated models are being prepared. A concept I would like to develop is the embouchure detection described in “The CyberWhistle — An Instrument For Live Performance” (Menzies 1998 [2]). While embouchure is not an important factor on many whistle instruments, it is on reed instruments, so the cross-pollination would be interesting. Wireless MIDI transmitters are now readily available if more freedom of movement is required, though phantom powering from signal leads may be more practical in most situations, avoiding reliance on batteries. A higher data rate format such as USB would allow audio to be transmitted alongside the controller information, and could be used for more precise breath control and a humming signal.

The O-Bow

Figure 7. The O-Bow.

Bowing is arguably the most expressive class of gesture among traditional instruments. There are many dimensions of control, and the physical dynamics are complex, yet these are all integrated naturally with the ergonomics of hand and arm movement. Bowing is also visually expressive, much more so than is generally the case with other acoustic instruments; it is hard to imagine an orchestra without the bow movement. Bow controllers divide between those designed as add-ons for acoustic instruments, for example the Hyper-Cello (Paradiso 1997) and the K-Bow (McMillen 2008), and those designed to be standalone, for example the R-Bow (Trueman 2000) and the vBow (Nichols 2002). The standalone, “cyber” form has been used in performance, but to a very limited extent, and it does not appear that a sensitive, robust and inexpensive cyber-bow has been developed. Bowing is at once the most expressive and the most accessible part of string playing technique, so a dedicated bow controller would seem very attractive; in contrast, the fingering of pitches on a violin requires a great deal of skill and training. The combination of a bow controller with a MIDI keyboard or ribbon controller would make string-like performance accessible in an electronic setting.

Bow Sensor

Several parameters are associated with a bow, including position, orientation and contact force. The most useful is the bow speed at the contact point, as this has a very direct effect on the evolution of the string vibration. Commercially available bow sensors mainly use electric field sensing, with a transmitter under the fingerboard and a receiver in the frog of the bow. This is ingenious for retrofitting a violin, but it is also expensive and complex. Experimental systems have used servos, and there are several patents describing optical sensors with striped bows or bows of graduated opacity. I was looking for the simplest possible robust method, requiring no special bow. The answer was found staring at the desk mouse. The optical flow sensor is a fantastic piece of silicon that measures movement over a surface by comparing successive low-resolution snapshots. It is also ideal for measuring the movement of a bow, whether a real bow or simply a piece of wooden dowel, across a fixed point. The latest generation of sensors can measure from very slow motion up to fast hand motion, and with laser emitters can work with nearly all surfaces. There is a mystery bonus from using an optical flow sensor: it measures velocity in 2D. With a fixed sensor this means the direction of the bow tangent at the contact point can be measured. This provides an extra degree of freedom that is integrated into the bowing gesture, something not available on traditional string instruments.
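The 2D flow report maps directly to bow speed and bow angle. A minimal sketch, assuming the sensor delivers (dx, dy) displacement counts per report interval; scaling to physical units would need the sensor's counts-per-inch figure:

```python
import math

def bow_state(dx, dy, dt):
    """Convert one optical-flow report into (speed, angle).

    dx, dy -- displacement counts since the last report
    dt     -- report interval in seconds

    speed is in counts/second; angle is the direction of the bow
    tangent at the contact point, in radians, the extra degree of
    freedom the 2D sensor provides for free.
    """
    speed = math.hypot(dx, dy) / dt
    angle = math.atan2(dy, dx)
    return speed, angle
```

In the first prototype described below, the speed would drive gain and the angle would drive vibrato intensity.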

Processing and Synthesis

The raw optical-flow signal is fairly noisy in audio signal terms, but can be filtered to provide a smooth and responsive control signal. It is possible to detect whether the bow has come to a stop on the string or has been lifted, from the profile of the velocity as it comes to a halt. The first prototype uses a single violin sample and a keyboard. When a new note is detected, the sample plays from the start. The gain is controlled by the bow speed, and the vibrato intensity by the bow angle. Direct finger vibrato, as in violin fingering, is a difficult skill to master physically, so it is attractive to provide an alternative that is still in some ways natural and integrated. If a note is bowed continuously, the sample is looped around its loop points. When the bow reverses, the slowdown causes a natural dip in volume. If the bowing stops, the next note starts from the beginning of the sample, including the onset transient. A legato note jump can occur in mid bowing; it is quite a delicate process to handle this well, as the gain must be dipped briefly at exactly the time the sample player changes playback rate and pitch. Despite the simplicity of this arrangement, expressive phrases can be played, and so the initial concept of a simple, expressive bow controller has been demonstrated.
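The stop-versus-lift decision from the velocity profile could be sketched as follows. This is a hypothetical heuristic consistent with the description above; the threshold and the abrupt-drop test are illustrative assumptions, not the prototype's actual logic:

```python
def classify_halt(speeds, abrupt_ratio=0.5):
    """Guess whether a halt was a stop-on-string or a lift-off.

    speeds -- recent smoothed bow speeds, ending at (near) zero.

    The assumed heuristic: a lift-off ends with an abrupt drop in
    speed (the sensor suddenly sees no motion), while a stop on the
    string decays gradually. Returns "stop", "lift", or None if the
    bow is still moving.
    """
    if len(speeds) < 2 or speeds[-1] > 1e-3:
        return None                      # bow still moving
    drop = speeds[-2] - speeds[-1]       # size of the final step to zero
    peak = max(speeds)
    if peak == 0:
        return "stop"
    return "lift" if drop / peak > abrupt_ratio else "stop"
```

Distinguishing the two cases matters because, as described above, a stop should cause the next note to retrigger from the onset transient, while mid-gesture behaviour should remain legato.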

Video 1 (2:09). The video shows some test gestures, followed by some short phrases, using a sample synthesiser.


New prototypes will explore multiple samples, polyphony, unbowed string sounds, and physical modelling of strings and body resonance. There are many possibilities for combining different pitch controllers with the bowing mechanism. An industry exists producing orchestral music from extensive sample libraries, in which the MIDI keyboard is used to play string samples, with some keys reserved for switching the sample variation. The O-Bow used together with the keyboard could be useful for producing more realistic string phrases. A patent application has been made relating to this, and there is interest from manufacturers.


Some details have been presented on the design of two music controllers, based on wind and string instruments. It is hoped that an overall æsthetic has been conveyed, which can be summarized as minimalist but with attention to detail. The same principles have helped to guide the evolution of acoustic instruments.


Favilla, Stuart. “The LDR Controller.” Proceedings of the International Computer Music Conference (ICMC) 1994 (Denmark: DIEM — Danish Institute of Electroacoustic Music, 1994), pp. 177–180.

Karjalainen, Matti, U.K. Laine, T. Laakso and V. Välimäki. “Transmission-line modeling and real-time synthesis of string and wind instruments.” Proceedings of the International Computer Music Conference (ICMC) 1991 (Montréal: McGill University, 1991), pp. 293–296.

McMillen, Keith. “Stage-Worthy Sensor Bows for Stringed Instruments.” Proceedings of the International Conference on New Interfaces for Musical Expression (NIME) 2008 (Genova, Italy: Università degli Studi di Genova, 5–7 June 2008).

Menzies, Dylan. “The CyberWhistle — An Instrument For Live Performance.” Colloquium on Musical Informatics XII, September 1998.

Nichols, Charles. “The vBow: Development of a Virtual Violin Bow Haptic Human-Computer Interface.” Proceedings of the International Conference on New Interfaces for Musical Expression (NIME) 2002 (Dublin: Media Lab Europe, 24–26 May 2002). Limerick: University of Limerick, Department of Computer Science and Information Systems.

Paradiso, Joseph A. and Neil Gershenfeld. “Musical Applications of Electric Field Sensing.” Computer Music Journal 21/2 (Summer 1997), pp. 69–89.

Smith, Julius O. “Efficient Simulation of the Reed-Bore and Bow-String Mechanisms.” Proceedings of the International Computer Music Conference (ICMC) 1986 (Den Haag, Netherlands: Royal Conservatory of Music, 1986), pp. 275–280.

Trueman, Dan and Perry Cook. “BoSSA: The Deconstructed Violin Reconstructed.” Journal of New Music Research 29/2 (June 2000), pp. 121–130.
