Chapter 3 – Alternative Controlling Systems

Above I presented a brief historical overview to provide a better contextual understanding of the choices made with respect to the control devices, design, and technology of the ConDiS system. It offered insight into the relationship between what I want to achieve with this project and what I originally set out to do. The following chapter offers reflections on alternative systems and controllers from both a musical and a technological standpoint. Emphasis is placed on sensing devices that can be traced genealogically to conducting gestures.

The birth of MIDI and Digital Technology

The introduction of the silicon microchip and the MIDI protocol in the early 1980s has been called the “technological revolution” by NYU professor Jaime Oliver La Rosa (Oliver La Rosa, 2011). The silicon microchip enabled the production of faster, smaller, and cheaper computers, while the MIDI protocol promoted the interconnection of various kinds of equipment, including computers, interfaces, and synthesizers. These revolutionary milestones mark the beginning of a new era in the history of computer music: the era of interactive music systems.

Interactive Music Systems

Faster, smaller, and cheaper computers and the MIDI protocol opened up various possibilities for using computers in music: as a means of composition, a source of new electronic instruments, and a performance medium. Robert Rowe explains in The Aesthetics of Interactive Music Systems how computers caused upheaval in the practice of music making and how they opened up new kinds of composition and performance practices, leading to the development of interactive computer music systems: systems that “explore some highly specific techniques of composition at the same time that they create a novel and engaging form of interaction between humans and computers” (Rowe, 1999), systems “whose behavior changes in response to musical input” (Rowe, 1993). He then classifies these interactive systems along a combination of three distinguishing dimensions:

Classification of Interactive Systems

1. Distinguishes between score-driven systems and performance-driven systems.

Score-driven programs use predetermined event collections, or stored music fragments, to match against music arriving at the input. They are likely to organize events using the traditional categories of beat, meter, and tempo. Such categories allow the composer to preserve and employ familiar ways of thinking about temporal flow, such as specifying some events to occur on the downbeat of the next measure or at the end of every fourth bar.

Performance-driven programs do not anticipate the realization of any particular score. In other words, they do not have a stored representation of the music they expect to find at the input. Further, performance-driven programs tend not to employ traditional metric categories but often use more general parameters, involving perceptual measures such as density and regularity, to describe the temporal behavior of music coming in.

2. Distinguishes among response methods: transformative, generative, or sequenced.

Transformative methods take some existing musical material and apply transformations to it to produce variants. According to the technique, these variants may or may not be recognizably related to the original. For transformative algorithms, the source material is complete musical input. This material need not be stored, however – often such transformations are applied to live input as it arrives.

For generative algorithms, on the other hand, what source material there is will be elementary or fragmentary – for example, stored scales or duration sets. Generative methods use sets of rules to produce complete musical output from the stored fundamental material, taking pitch structures from a basic scalar pattern according to random distributions, for instance, or applying serial procedures to sets of allowed duration values.

Sequenced techniques use prerecorded music fragments in response to some real time input. Some aspects of these fragments may be varied in performance, such as the tempo of playback, dynamic shape, slight rhythmic variations, etc.

 

3. Distinguishes between the instrument and player paradigms.

Instrument paradigm systems are concerned with constructing an extended musical instrument: performance gestures from a human player are analyzed by the computer and guide an elaborated output exceeding normal instrumental response. Imagining such a system being played by a single performer, the musical result would be thought of as a solo.

Systems following a player paradigm try to construct an artificial player, a musical presence with a personality and behavior of its own, though it may vary in the degree to which it follows the lead of a human partner. A player paradigm system played by a single human would produce an output more like a duet. (Rowe, 1993)

The relevance to ConDiS

Based on Robert Rowe’s classification of interactive systems, we can situate the ConDiS system in the following manner:

  1. It is a score-driven system since it uses predetermined event collections and a stored representation of the music at its input. This comes in the form of the timeline-based programmed automation of the electronic part, which then needs to synchronize with the music arriving at the input. That is to say, the two parts need to match the performance of the written score. Events are organized using the traditional categories of beat, meter, and tempo, allowing the composer to preserve and employ familiar ways of thinking in her compositional process.
  2. It is transformative since it takes musical material – in this case the live input, an audio signal from each instrument – and applies a transformation to its sound in real time to produce a sonic variant. These variants can be adjusted from recognizable to unrecognizable depending on the preprogrammed automation of the electronic part. For the transformative algorithms, the source material is the complete live musical input of each of the performing instruments.

Certain functions of the ConDiS system can also be recognized in the description of the sequenced technique since they allow a variation in performance, such as the tempo of playback, dynamic shape, and rhythmic variations.

  3. ConDiS is not an instrument but a tool that does nothing more than extend the conductor’s job to synchronize and control the electronics. Therefore, the ConDiS system is a conducting paradigm system, one concerned with constructing an extended musical tool.

Performance gestures from a human conductor are analyzed in real time by the computer and guide an elaborated output exceeding the standard instrumental response.

Score following programs

“Score following was first presented at the 1984 ICMC, independently, by Barry Vercoe and Roger Dannenberg” (Puckette & Lippe, 1992).

Score following is the process of tracking the live signal from a performer to determine where in the score the performer is and to synchronize the computer accompaniment accordingly. By matching the performance against the score stored in the computer, electronic events are triggered at precise points written in the performance score. The parameter most commonly followed is pitch, though other parameters can be tracked as well.
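To make the matching process concrete, the sketch below (in Python, my own illustration rather than code from any of the systems discussed) steps through a stored score as matching pitches arrive and fires the electronic events attached to specific score positions; the score representation, cue names, and pitch-tracker hookup are all illustrative assumptions.

    # Minimal sketch of pitch-based score following (illustrative assumptions only).
    # The stored score pairs each expected MIDI pitch with an optional cue.
    score = [(60, None), (64, "cue_1"), (67, None), (72, "cue_2")]
    position = 0  # how far into the stored score the performer has advanced

    def trigger(cue):
        # Stand-in for firing the electronic event written at this score point.
        print("triggering", cue)

    def on_detected_pitch(pitch):
        # Would be called by a pitch tracker for each note detected at the input.
        global position
        if position < len(score) and pitch == score[position][0]:
            _, cue = score[position]
            if cue is not None:
                trigger(cue)
            position += 1  # advance the score pointer

A practical follower must of course also tolerate wrong or missed notes, which is why real systems treat this as an alignment problem rather than exact matching.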

The collaboration of composer Philippe Manoury and computer researcher Miller Puckette started in the 1980s, when both were working at the IRCAM research center in Paris on a computer program for interactive performances named Score Following. They used the 4X system, developed by Italian physicist Giuseppe Di Giugno;[1] its Synthetic Performer interface was developed by Barry Vercoe,[2] and the control implementation by Miller Puckette in the form of the Music 500 software (Puckette, 1991).

The 4X machine was equipped with the 4X digital signal processor and two boards, one for controls and another for the interface, allowing real-time analysis, synthesis, and score following in performance. The first use of the 4X in a real-life setting was during the performance of Philippe Manoury’s composition Jupiter (1987) for flute and score-following electronics based on pitch tracking. This is now considered to be the first interactive piece in which an acoustic instrument controls electronic music in real time. Further development of the Synthetic Performer and Music 500 led to the birth of Csound[3] and Max/MSP (Nunzio, 2013).

Score following is relevant for my project since I need synchronization between the performance of the written score and the programmed automation of the electronic parts. As mentioned above, there has historically been more concern with pitch recognition than with any other parameter, such as amplitude or tempo, the reason being that pitch recognition is a rather predictable and safe method. But I learned while composing Circles I (page 7) that pitch recognition is not ideal when dealing with more than one instrument, at least not when they are staged close to each other. The same goes for amplitude recognition, owing to the rapidly growing complexity of a larger instrumental group. Tempo or meter pattern recognition follows other principles, where it is possible to draw a direct connection to the conductor’s gestures. As shown on pages 43-44, attempts were made to connect these gestures through the MuBu.hhmm[4] gesture-following program, with unsatisfactory results. This was mainly related to the fact that most conductors prefer to conduct these patterns with great gestural flexibility, which proved too volatile to deliver an acceptable result. Therefore a decision was made, following, among other things, a conversation with my co-supervisor Wayne Siegel, to add buttons to the “ConGlove” design. That way the conductor could send a synchronization message to the DAW with the click of a button, with minimum interruption to her “natural” conducting method. This proved to be the simplest and also the most reliable solution.
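In code, this button-based synchronization amounts to very little. Here is a minimal sketch using the python-osc library, where the OSC address /condis/cue, the port number, and the cue-counting scheme are hypothetical placeholders rather than the actual ConDiS message format:

    # Minimal sketch: send a synchronization cue to the DAW over OSC via Wi-Fi.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("192.168.1.10", 8000)  # hypothetical DAW IP and port
    cue_index = 0

    def on_button_press():
        # Each click of a glove button advances and transmits the cue number.
        global cue_index
        cue_index += 1
        client.send_message("/condis/cue", cue_index)  # hypothetical OSC address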

Gesture-based Interfaces

A number of systems and new digital musical instruments that use performance gestures as musical input/output have been developed for interactive real-time performance. At the same time, less emphasis has been placed on developing a tool for classical music conductors that analyzes their conducting gestures for use in mixed music performance. Of those that have been produced, most are related to conducting a virtual orchestra or the musical score of a MIDI-based performance. Although it is quite possible to transform many existing systems and instruments for use in interactive conducting, I decided to build and design such a tool from scratch. Why? Because I wanted to:

  • Create a conducting tool that is solely built for and responds to the art of conducting;
  • Design the structure and functions of the system from the ground up;
  • Ensure familiarity and understanding of the system to allow for future developments;
  • Understand the technical function of the tool.

The following is a list of interfaces based on and related to gestural recognition. This list is in no way comprehensive but should give the reader a better overview of the technology, analysis methods, and designs that have dominated since the beginnings of interactive performance. For a more detailed overview, the conference paper “A Review of Interactive Conducting Systems 1970-2015” by Kyungho Lee, Michael J. Junokas, and Guy E. Garnett (Kyungho Lee, 2016) is highly recommended.

The first fully functional electronic instrument using hand gestures for music performance was the “Etherphone,” later renamed the “Theremin” after its inventor, Leon Theremin. Originally built in 1920, it first received international attention when Theremin traveled to Europe and the United States in 1927, generating considerable excitement. Its revolutionary design, a box with two antennas played with the hands but without touching the instrument itself, did not resemble any other conventional instrument. The relative proximity of the left and right hands altered the capacitance in an electromagnetic field around the antennas, allowing the performer to control pitch through the vertical antenna and amplitude through the horizontal antenna. With a combination of a variable oscillator and a fixed oscillator, it created sound waves that evoked a sensation of angelic voices, hence it was often called the “Thermenvox.”
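The sound-generating principle is heterodyning: the two radio-frequency oscillators are mixed, and only their difference frequency falls into the audible range, so a small capacitance-induced shift in the variable oscillator yields a large audible pitch change. With purely illustrative frequencies:

    f_audio = | f_variable − f_fixed |,  e.g.  | 171 kHz − 170 kHz | = 1 kHz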

In the mid-1960s William H. Ninke (Ninke, 1965) developed a light-pen interface called “Graphic 1”[5] for the computer music inventor Max Mathews.[6] It allowed users to draw figures on a screen and see the graphical result before it was converted into sound. This early attempt at sonic computer interaction led to the birth of a digital-analog hybrid system called Groove, “A Program to Compose, Store, and Edit Functions of Time” (M. Mathews & Moore, 1970). In collaboration with composer and musician Richard Moore, Max Mathews developed the Groove machine as one of the first performance systems to have a computer controlling an analog synthesizer. The computer receives gestures and consequently produces sound, making the computer an instrument and the gesture-maker a performer. The same performer “commonly had the ability to design the behavior or processing of the control signals, the routing of these signals to analog equipment and the way this configuration would evolve over time. In this sense the user was a composer” (Oliver La Rosa, 2011).

In the 1980s Max Mathews made “The Conductor Program” and its associated device, the “Radio Baton.” The Conductor Program is a type of sequencer that receives triggers and control signals from the radio batons (mallet-shaped tools), sending MIDI commands to a synthesizer. The musical material to be played (pitch and duration) is stored in the computer. The Radio Baton is a controller that uses a coordinate system of radio receivers to detect the three-dimensional position of one or more batons above a table. “The technology and the analysis are straightforward. The capacitance between a transmitting antenna and receiving antenna is inversely related to the distance between them. The radio signal is simply used to measure this capacitance. The position of the transmitter can be deduced from the distances between the transmitter and a suitably chosen set of receivers” (M. V. Mathews, 1991).
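As an illustration of the quoted principle, the following Python sketch (my own reconstruction, not Mathews’s code) converts capacitance readings to distances through an assumed inverse law C = K/d and recovers the transmitter position with a standard linearized trilateration solve:

    # Illustrative reconstruction: position from capacitance-derived distances.
    import numpy as np

    K = 1.0  # hypothetical calibration constant in C = K / d

    def estimate_position(receivers, capacitances):
        """Linearized trilateration from four or more receivers."""
        d = K / np.asarray(capacitances, dtype=float)  # distance to each receiver
        p = np.asarray(receivers, dtype=float)         # receiver coordinates, (n, 3)
        # Subtracting the first sphere equation from the rest gives a linear system.
        A = 2.0 * (p[1:] - p[0])
        b = d[0]**2 - d[1:]**2 + np.sum(p[1:]**2, axis=1) - np.sum(p[0]**2)
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Receivers must not all be coplanar, or height cannot be recovered.
    receivers = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0.5, 0.5, 0.4)]
    print(estimate_position(receivers, [2.0, 1.5, 1.5, 1.8]))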

Hand-held instruments

The Hands

The Dutch composer, performer, and inventor Michel Waisvisz,[7] in conjunction with STEIM engineer Johan den Biggelaar and, later, Wim Rijnsburger, developed a multi-sensor control device named “The Hands.” It consists of three parts: the two “hands” and an analog-to-MIDI converter. The “hands” are a set of two aluminum, ergonomically shaped plates with sensors, potentiometers, and switches that are strapped under the hands of the performer (Waisvisz, 1985). Finger movements and changes in the positions of the hands send information to a microprocessor with an onboard MIDI-encoding program. Codes can thus be sent to any MIDI device through a standard MIDI cable.

The Strophonion

“The Hands” provided a role model for Alex Novitz’s instrument Strophonion (Novitz, 2019), though the two differed in fundamental ways in spite of similarities in design. The Strophonion is a wireless digital musical instrument that uses sensors that are built into differently shaped cases of two hand controllers. It has 13 push keys, a three-axis accelerometer, and a pressure sensor on the right hand and 8 push keys on the left, with a joystick that is operated by the thumb. An ultrasonic distance sensor, which functions as a receiver, is mounted on a hip belt to measure the distance of the hand from the body.

Through the physical movement of his body, in combination with the fingers pressing and releasing the round-shaped keys on both hand controllers, the musician operates the instrument’s functionality and thereby steers the sonic and musical process. Just as with any traditional acoustic instrument, the Strophonion allows the user to control pitch, rhythm, and timbre.

Digital Gloves

Lady’s Glove

In 1991, Laetitia Sonami[8] (Sonami, n.d.) began her work on the Lady’s Glove, which has since gone through a series of radical design changes, most recently with the help of engineer/designer Bert Bongers, with sponsorship from STEIM.

In its present implementation, the Lady’s Glove comes with an accelerometer that measures hand speed, numerous motion and pressure sensors, and ultrasound transmitters and receivers that detect the distance between the glove and the floor. Sonami uses the Lady’s Glove in live performances to control sound, mechanical devices, and lights via MIDI—mostly in solo situations, but also in improvisations with other instrumentalists. Her current setup includes a STEIM SensorLab processor and a Mac laptop running Max/MSP.

The “mi.mu” glove[9] (2015), created by MI.MU Gloves Limited, is a commercially available glove for both hands used for live composing and performing. It is probably best known for its use by the innovative British recording artist Imogen Heap and the American pop singer Ariana Grande, who uses the mi.mu glove for live processing of the voice. The mi.mu gloves use the x-OSC high-performance I/O board as well as the x-IMU inertial measurement unit from x-io Technologies to calculate, control, and trigger sounds through various hand gestures. The x-OSC is the same I/O board that is used in the ConDiS system.
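For a sense of what receiving data from such a Wi-Fi OSC board involves, here is a minimal Python sketch using the python-osc library; the address pattern /imu and the port are illustrative assumptions, not the documented x-OSC address space:

    # Minimal sketch: listen for motion data arriving as OSC messages over Wi-Fi.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def on_imu(address, *values):
        # values would hold, e.g., accelerometer/gyro readings, one float per axis.
        print(address, values)

    dispatcher = Dispatcher()
    dispatcher.map("/imu", on_imu)  # assumed address pattern, for illustration

    server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
    server.serve_forever()  # blocks; each incoming packet invokes on_imu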

 

Wireless MIDI Controllers – Rings

Wayne Siegel’s[10] attempt to control and perform live music using only hand gestures is a good example of the more recent development of a direct control system. In the introduction to his paper “Conducting Sound in Space,” the composer writes: “The goal of this project was to explore the potential of electronic music that combines production, performance, and diffusion into a single integrated creative process” (Siegel, 2017). Using two rings, a HotHand USB on the middle finger of each hand, and the built-in web camera of his laptop, Siegel performs live, using his hand gestures to control sounds, sound diffusion, and rotational panning. This method of performing has an obvious parallel to other historical electronic music interfaces such as the Theremin (1920) and The Hands (1984) (Siegel, 2017).

Wave, by Genki Instruments, is a commercially available ring that, similarly to the HotHand, can control sound with motion. It comes with configurable presets and buttons for full control. It senses three different motions, allowing parameter control through the movement of the user’s hand. Three buttons can be clicked to play, stop, or record, and by tapping any surface the user can trigger a parameter such as tempo, a note, or a sample.

Jackets

A direct continuation of “The Digital Baton” (see below), the “Conductor’s Jacket” system by the MIT Media Laboratory (Marrin & Picard, 1998) uses different and considerably more complex sensing technology than its predecessor. The device is a custom-made “jacket” able to gather and analyze positional and acceleration data and to sense muscle tension in the wearer (the conductor). It was designed to cause no discomfort to the conductor and measured physiological and gestural signals (Teresa Marrin Nakra, Machover, & Picard, 1999).

The “Controller Suit” was developed in 1994 as a collaboration between the Norwegian composer Rolf Wallin and Øyvind Hammer at the Norwegian Center for Technology, Acoustics, and Music (NOTAM). In 2008 the Controller Suit was upgraded by composer and programmer Anders Vinjar and developer (signal processing) Henrik Sundt at NOTAM, the transmission mechanism being made wireless by adding Open Sound Control (OSC) over Wi-Fi via a normal wireless router (NOTAM, n.d.). Thanks to the NeXT computer, which could handle real-time sound data, the composer was able to create and control live sounds by touching various parts of his body: “I had small hose clamps on my fingers with a small voltage on them (5V). Touching one end of the black strips on my arms, legs, and chest, the full 5 Volt goes back into the central voltage-to-digital translating chip placed on my hip. Then the further away I touch, the smaller the voltage. In this way I had a lot of faders (+ a lot of on/off buttons of course) on my body to control the software” (Wallin, n.d.).
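The quoted voltage-divider scheme reduces to a simple mapping. The sketch below is my own illustration in Python, with the voltage range an assumption rather than NOTAM’s actual calibration:

    # Illustrative body-fader mapping: the farther from the sensing chip the
    # body is touched, the lower the voltage that arrives (per the quote above).
    V_SOURCE = 5.0  # volts applied at the finger clamp
    V_MIN = 0.5     # assumed reading when touching the far end of a strip

    def fader_value(v_measured):
        """Map a measured voltage to a 0.0-1.0 fader position."""
        v = min(max(v_measured, V_MIN), V_SOURCE)
        return (v - V_MIN) / (V_SOURCE - V_MIN)  # 1.0 = touch nearest the chip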

A commercially available MIDI jacket was recently released by MACHINA, a company that specializes in the design and production of wearable technology. The MIDI jacket DK1 allows music creation through body movements and body sensors. It comes with an accelerometer, a gyroscope, two push buttons, and one piezoelectric transducer. The MIDI jacket DK1 can be configured to an individual’s needs to allow for control through body movements: arm movements can be used for volume or tempo control, and a tap on the chest can trigger events, all depending on one’s personal configuration. Unfortunately, the jacket is no longer available due to manufacturing complications. A new version with redesigned hardware and firmware is to be released soon.

 

Infrared technology – Buchla Lightning

One of the first tools that used infrared light as a means of data transmission was made by the electronic instrument pioneer and inventor Don Buchla.[11] Introduced in 1991, the Buchla Lightning consists of two handheld wands that transmit data about location, acceleration, velocity, and direction to the infrared photosensors of the box unit.

Several systems directly related to conducting musical performance have been developed using the Buchla Lightning. One of the earlier attempts to build an interactive conducting system was “The Digital Baton,” built at the MIT Media Laboratory in 1996. The Digital Baton was designed as a multimodal, handheld input device measuring several types of activity through the use of three different sensor systems (Paradiso, 1999). It was built as part of Tod Machover’s Brain Opera, described as an “interactive musical journey into your brain,” which premiered in August 1996. Depending on where the conductor points in space, an infrared detector following the tip of the Digital Baton sends information to the Buchla Lightning unit, resulting in the production of different instrumental sounds.

Another system for orchestral conducting using the Buchla Lightning is Jan Borchers’ “Personal Orchestra.” It was created for interactive exhibits featuring footage of the Vienna Philharmonic and is somewhat sensitive to conducting gestures. With one Buchla Lightning wand to measure the exhibit visitor’s gestures, the orchestra would reflect the tempo of the “conductor’s” beat. If the visitor conducted badly, the orchestra would stand up and insult the “conductor,” whereas if he or she did a nice job conducting, they would applaud.

Later, Borchers, in collaboration with Teresa Marrin, created “You’re the Conductor,” based on a concept similar to the “Personal Orchestra,” except that the orchestra, now the Boston Pops, would raise their instruments as soon as the conductor raised a wired conducting baton. They would then start to play as soon as the conductor moved the baton. As soon as the conductor stops, the orchestra stops and waits for 30 seconds before rewinding to the beginning of the loop to await the next conductor.

The iSymphony, “an interactive orchestral conducting system for digital audio and video that adaptively adjusts to the user’s conducting style” (Lee et al., 2006), also created by Jan Borchers and his research group, featured an adaptive gesture recognition system that could recognize three types of gestures: a four-beat gesture, up-down, and random. The system uses a digital baton to identify which gesture is being used and then follows the indicated tempo.

Camera-based systems

There are several systems that use video images as a means of gesture recognition and analysis. Of those that are commercially available, the Nintendo Wii motion controller and the Microsoft Kinect have been the two most widely used in research on gestural recognition and analysis.

Wii control-based systems

The UBS Virtual Maestro is a commercially available interactive conducting system. It simulates orchestral conducting for the general public attending a classical music concert. The system utilizes the Wii Remote, which users move like a conducting baton to affect the tempo and dynamics of an orchestral video/audio recording. The Wii Remote uses its accelerometer data to control playback speed and volume in real time (Teresa M. Nakra, Ivanov, Smaragdis, & Ault, 2009).
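This mapping idea can be sketched in a few lines of Python; the threshold, scaling, and playback hook below are illustrative assumptions, not the Virtual Maestro’s actual algorithm:

    # Illustrative sketch: beat strength scales volume, beat rate scales tempo.
    import math
    import time

    last_beat_time = None

    def on_accelerometer(ax, ay, az, threshold=1.8):
        # Detect a beat when acceleration magnitude spikes, then update playback.
        global last_beat_time
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > threshold:  # a conducting "ictus"
            now = time.monotonic()
            volume = min(magnitude / 3.0, 1.0)  # stronger gesture, louder playback
            if last_beat_time is not None:
                tempo_bpm = 60.0 / (now - last_beat_time)  # beats per minute
                set_playback(tempo_bpm, volume)
            last_beat_time = now

    def set_playback(tempo_bpm, volume):
        # Stand-in for the video/audio playback engine.
        print("tempo=%.0f BPM, volume=%.2f" % (tempo_bpm, volume))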

EyesWeb

In 2004 Paul Kolesnik and Marcelo Wanderley of McGill University designed a gesture analysis and performance system for conducting audio and video scores of orchestral performances in real time. It utilized two cameras together with the EyesWeb and Max/MSP/Jitter environments to capture conducting gestures: “Gesture recognition procedure is designed on the basis of Hidden Markov Model (HMM) process. A set of HMM tools are developed for Max/MSP software. Training and recognition procedures are applied toward both right-hand beat- and amplitude- indicative gestures, and left-hand expressive gestures” (Kolesnik & Wanderley, 2004).
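The HMM approach quoted above can be summarized as follows: each gesture class is given its own model, and an incoming observation sequence is assigned to the class whose model scores it highest. The Python sketch below, with toy numbers of my own invention rather than Kolesnik and Wanderley’s trained models, shows the core forward-likelihood computation:

    # Toy sketch of HMM-based gesture classification (illustrative numbers only).
    import numpy as np

    def forward_likelihood(pi, A, B, obs):
        """P(obs | model) for a discrete-observation HMM via the forward algorithm."""
        alpha = pi * B[:, obs[0]]          # initial state beliefs
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
        return alpha.sum()

    # Two-state models for two gesture classes over quantized motion symbols 0-2.
    models = {
        "beat":  (np.array([1.0, 0.0]),
                  np.array([[0.7, 0.3], [0.3, 0.7]]),
                  np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]])),
        "sweep": (np.array([0.5, 0.5]),
                  np.array([[0.9, 0.1], [0.1, 0.9]]),
                  np.array([[0.1, 0.8, 0.1], [0.1, 0.8, 0.1]])),
    }

    observed = [0, 2, 0, 2]  # a quantized gesture trajectory from the tracker
    best = max(models, key=lambda name: forward_likelihood(*models[name], observed))
    print(best)  # -> "beat"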

Microsoft Kinect

Gesture recognition systems – Machine learning systems

The Music Technology Group in Barcelona has conducted recent research on capturing the conductor’s gestures using the Kinect[12] device. This device enables the design of a system that learns expressive and idiosyncratic gesture variations for gesture-based interaction (Sarasua, Caramiaux, & Tanaka, 2016). Another research project based on the conductor metaphor analyzes the performance of a conductor in concert, identifying descriptors that can be automatically computed from spontaneous conducting movements and adapting to personal nuances. The idea is to turn the computer into an orchestra that the user conducts with movements resembling those a real orchestra conductor would make (Sarasúa Berodia, 2017).

The Norwegian researcher and university teacher Andreas Bergsland at NTNU, in collaboration with choreographer Robert Wechsler and others, has led the development of the MotionComposer. This is described as “a therapy device for persons with different abilities that turns the movement into music using video-based motion tracking” (Bergsland & Wechsler, 2017). Using stereo-vision technology (two video cameras), which mimics two eyes, the MotionComposer can recognize the human body and then, with the use of motion-tracking software, interpret expressive gestures, shapes, and movements, converting these into musical sounds in real time. The MotionComposer has been successfully used in interactive dance performances and as a musical tool for “otherly-abled persons including artists” (Wechsler, n.d.).

Of the many other research projects and papers I consulted while working on the ConDiS research project, which focuses on the conductor’s musical gestures, I would like to highlight several in particular: “Gestural control of sound synthesis” (Halmrast, Guettler, Bader, & R.I.Godøy, 2010), “Gesture and timbre” (Halmrast et al., 2010), “Tracking Conductors Hand Movements Using Multiple Wii motes” (Bradshaw & Ng, 2008), “Body Movements for Affective Expression: A Survey of Automatic Recognition and Generation” (Karg et al., 2013), “Gyroscope-Based Conducting Gesture Recognition” (Höfer, Hadjakos, & Mühlhäuser, 2009), “A computer music system that follows a human conductor” (Morita, Hashimoto, & Ohteru, 1991), “A Wii-based gestural interface for computer-based conducting systems” (Peng & Gerhard, 2009), and “A Customizable Recognizer for Orchestral Conducting Gestures Based on Neural Networks” (De Prisco, Sabatino, Zaccagnino, & Zaccagnino, 2011).

Assessments of gesture-based technologies in the context of ConDiS

Handheld instruments

In spite of their wonderful design, neither of the handheld instruments detailed above is feasible as a tool for conducting, since both rely on a physical mechanism that is not comfortable for conductors. They are designed as electronic instruments attached to the hands of a performer and, as such, are rather bulky. They therefore fail to provide the “natural” conducting extension tool I was aiming for.

Gloves

The design of the “Glove” interfaces mentioned above looks very similar to that of the ConDiS system. The fundamental difference lies in these “other” gloves having been designed as digital musical instrument-based systems. They were developed for improvisational performance, where their main purpose is to manipulate electronic sounds in real time. Such gloves are therefore not easily adapted to the needs of ConDiS.

Technologically, the “mi.mu” glove is the instrument most closely related to the “ConGlove,” since both use the same type of I/O board—the x-OSC from x-io Technologies—and a bending sensor for each finger. Soon after starting the ConDiS research project, I took a close look at the “mi.mu” glove, which was not commercially available at that time, and gave serious consideration to using it in my project.

After the release of the mi.mu in 2015, I decided that it was too expensive and missed fundamental functions needed for my conducting system, including the button-pressing function. It therefore seemed inadvisable to spend a significant amount of money redesigning an already made glove, to say nothing of the fact that I wanted to make my own glove from the bottom up.

Figure 11. HotHand USB Max/MSP test patch

Ring-designed interfaces

One of the first interfaces I looked into at the early stages of my research was the HotHand USB wireless controller ring. I introduced it to my co-supervisor Wayne Siegel, and we agreed that the ring provided many options that could be utilized in the ConDiS system. After some tests (Fig. 11) it became clear, however, that the HotHand USB receiving ring did not meet my requirements. By that point, I had tried it out several times and was convinced that the technology I needed should be based on the OSC protocol and Wi-Fi for their flexibility and accuracy.

The Wave ring from Genki Instruments was not available until late in the ConDiS research project and was therefore not an option when deciding what interface to use for the ConDiS system. It does come with three button functions, something lacking in most other interfaces, but it uses Bluetooth technology for wireless communication and therefore, for the same reason as the HotHand, did not meet my requirements.

Jackets

At the beginning of the ConDiS research project, I gave consideration to designing a jacket for the conductor not very dissimilar from The Conductor’s Jacket created at MIT. The idea was dropped because having the conductor wear a special jacket was onerous both in terms of size and comfort. The fact that neither the Conductor’s Jacket nor the Controller Suit is commercially available was also a negative factor. The MIDI jacket DK1 was not available at the beginning of the research project and was therefore not an option.

Infrared devices

None of these systems is expressly made to conduct a live orchestra. They are all designed as tools to conduct prerecorded (sampled) sounds or a performance reacting to the conductor’s gestures. While it is possible to modify these systems to control and conduct live music, it is my personal experience, having used the Buchla Lightning, that it is not a suitable tool for this purpose. The infrared photosensors of the box are divided into eight zones, and the conductor needs to be very careful about what zone she occupies at any moment. This zoning issue significantly reduces the conductor’s concentration and restricts movement; the device is therefore not well suited to this use.

Camera-based interfaces

Knowing the importance of eye contact and other facial expressions on the part of the conductor, I originally intended to incorporate these exformations into my toolbox of recognizable conducting gestures using camera-based technology. Numerous attempts were made to connect the Kinect 2.0 system to the Macintosh computer used for the research, without success. Too late for the ConDiS research project, NI Mate released software that supports the Kinect 2.0 for macOS. This opens up all kinds of possibilities for having a physical object (a black box) watching the conductor’s exformative gestures.

Composer and conductor Oliver Knussen (Knussen, 1994), in an interview about a film showing conducting by Arthur Nikisch, considered by many to be the father of modern conducting, notes: “What you actually see on that film is a very disciplined and elegant classical, if you like, conducting technique where everything is here (Knussen shows hand gestures). You can see how for example, people said his eyes were mesmerizing because as a result of him conducting up there the player is forced to look at his eyes and every conductor virtually that one talk[s] to, would tell you that half of what you do is done with your eyes” (Bambarger, 2003). Thinking ahead to the future development of ConDiS, implementing facial recognition could be a significant advance since, as stated above, “half of what you do is done with your eyes” (Bambarger, 2003), and that half is still missing in the ConDiS conducting digital system.

Related Compositions

The work that comes closest to my idea of ConDiS is the system the Swedish composer Jesper Nordin uses in his research and artistic project developed at IRCAM. The Exformation Trilogy, a concerto for conductor with live electronics, is a mixed music triptych in which, in the first piece of the cycle, “Sculpting the air” (2015), he “employs two motion sensors, allowing the conductor to trigger sound files or to process (sculpt) instrumental sounds gesturally” (Bacot & Féron, 2016). In his work Nordin uses the conductor’s gestures in a rather unconventional way, developing strategies that deal with various aspects of music creation and performance. In this way, he circumvents certain difficulties that are involved in conducting in a “traditional” way. It gives the conductor more time and more space to focus on shaping and interpreting the electronic sounds, not having to conduct and sculpt at the same time. This solution leads to a very satisfactory result and is extremely well suited for this kind of composition, namely a composition that is a concerto for conductor and therefore features a greater emphasis on “sculpting the air” with theatrical conducting gestures.

Although not based on live interactive technology, there are other works in a similar vein, such as Black Box Music (2012)[13] by Simon Steen-Andersen, in which an on-stage, real-time video display of hands inside a black box is used to conduct performers placed around the auditorium, thus emphasizing the dialogue between gestures and sound.

Alexander Schubert, in his composition Point Ones[14] for sensor-augmented conductor and small ensemble, approaches the problem in a manner somewhat similar to my own, in that sound reacts to the conductor’s gestures. His use of this technique and the purpose behind it are nevertheless very different.

These compositions differ fundamentally from the ConDiS project, the aim of which is to make a system that is as unobtrusive as possible. The ConDiS system strives to achieve the most accurate performance of through-composed music with respect to both instrumental and electronic sound. That is to say, nothing is left to uncertainty, nothing is left in the air, rather it is left between the air. Of course, the conductor is responsible for the interpretation and shaping of the music, working out the sonic spectra, extracting everything that makes the composed interplay into music. But it all takes place within a specific framework based on the conductor’s traditional conducting: this is the key challenge in ConDiS. It was therefore very gratifying when, after a performance of Kuuki no Sukima, I got questions like “What did she do with the Glove?”

National relevance – Norwegian inventors

In Norway, most of the research and development in the field of music technology and gestural interfaces takes place in conjunction with the Norwegian Center for Technology, Acoustics, and Music (NOTAM), a center that focuses on the development and innovative use of technology in music and the arts; RITMO, the Centre for Interdisciplinary Studies in Rhythm, Time and Motion, a division of the University of Oslo (UiO); and the music technology group at the Department of Music, Norwegian University of Science and Technology (NTNU).

As described above, two of these institutions were involved in developing the MotionComposer of the NTNU-based Andreas Bergsland and Rolf Wallin’s Controller Suit, developed at NOTAM. The Norwegian musician and researcher Alexander Refsum Jensenius, currently a deputy director of RITMO, is internationally recognized for his contributions in the field of music and gestures. His systematic analysis of body motion has been widely cited in music research as well as in psychological research (Jensenius, 2014).

Similar to the Controller Suit is the Multisensorial Environment designed by Ståle Stenslie, a bodysuit to control sound and graphics used in his multisensorial installation “Erotogod” (Stenslie, n.d.).

The composer and instrument builder Koka Nikoladze displays great innovation in his instrument construction, in which the main material is always what he describes as an “everyday object that [he] find[s] at random places” (Nikoladze, 2018). Other individuals who merit mention are the NOTAM-based composer, musician, and sound artist Asbjørn Blokkum Flø, known for his installations, including “Doppelgänger,” a site-specific installation made for the art exhibition hall in Bergen (Flø, 2014), and musician and researcher Øyvind Brandtsegg at NTNU, who used a special sensor jacket with the group Krøyt. His jacket was embedded with EMG sensors, which convert muscle activity to MIDI signals via a BodySynth system made by Ed Severinghaus. Brandtsegg’s jacket is exhibited at Rockheim, the national museum for pop music.

Figure 12. ConDiS – Lecture at the Iceland Academy of the Arts.

[1] Giuseppe Di Giugno is an Italian physicist known for his work at IRCAM and his close collaboration with contemporary composers such as Luciano Berio and Pierre Boulez.

[2] Barry Vercoe is a New Zealand-born computer scientist, composer, and the inventor of Csound.

[3] A music synthesis language widely used by composers and performers of computer music.

[4] MuBu is a toolbox for multimodal analysis of sound and motion, interactive sound synthesis, and machine learning. hhmm stands for the hierarchical hidden Markov model, often used for pattern recognition.

[5] A silent YouTube video showing use of the Graphic 1 can be viewed at https://www.youtube.com/watch?v=llnzK2H_HZo

[6] Max Vernon Mathews (1926–2011) was a pioneer of computer music.

[7] Michel Waisvisz (1949–2008), Dutch inventor and performer, was Artistic Director of the STudio of Electro-Instrumental Music (STEIM) in Amsterdam from 1981 to 2008.

[8] Laetitia Sonami (b. 1957) is a French sound artist, performer, and composer of interactive electronic music.

[9] See https://mimugloves.com

[10] Wayne Siegel is an American/Danish composer, performer, inventor, and teacher. He is the founder of DIEM, the Danish Institute of Electronic Music.

[11] Donald “Don” Buchla (April 17, 1937 – September 14, 2016) was an American inventor of electronic music instruments.

[12] https://dev.windows.com/en-us/kinect

[13] Video recording available on YouTube: https://www.youtube.com/watch?v=UEInFQWgcNw

[14] Video recording available on YouTube: https://www.youtube.com/watch?v=CN-rIIuyNbY