Sound Spatialisation Controllers in the Context of Digital Musical Instruments

As outlined above, spatialisation can be considered an established artistic practice in the broader field of electroacoustic and live electronic music. We have discussed that prevalent spatial sound techniques can generally be applied both to the production process (mainly in the studio) and to the real-time presentation of music in the respective performance space. Zvonar (2000) formally differentiates between the live performance approach to sound spatialisation and techniques for pre-composed spatial arrangements of sound, such as environmental multichannel soundscapes, classic studio-based multi-track composition and automated spatial control. Accordingly, only some available implementations of sound spatialisation systems are suited for use during performance. While several spatialisation controllers have been designed explicitly as studio production tools, other system designs have simply met the limits of contemporary technologies, be it in terms of computational power for spatial rendering or the lack of suitable control interfaces.[1]

Furthermore, the complexity of the control task can be considered another substantial obstacle to real-time spatialisation. Such control- and mapping-related issues are well known and dealt with in the field of human-computer interaction (HCI), and especially in interaction design for interfaces of musical expression, an applied subfield of HCI. It therefore seems reasonable to regard the means for spatial sound control used in live musical performance from the perspective of the design practice of digital musical instruments (DMIs). This potential link has already been roughly explored in previous research (Wanderley and Orio 2002; Marshall et al. 2007; Schacher 2007; Perez-Lopez 2015), with a particular focus on the gestural control paradigm. At the core of the DMI metaphor, as introduced by Miranda and Wanderley (2006), stands the decoupling of the physical interface (input or control device) from the sound generating system (in contrast to the integral concept of acoustic musical instruments). Both instances are connected via a mapping layer assigning the outputs of the controller to the inputs of the sound rendering engine. This modularization offers new degrees of freedom for instrument design; however, the alleged decorrelation between the physical action of the performer and the produced sound raises new issues related to the appreciation of the artistic performance (cf. Emerson and Egermann, this volume).
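To make the decoupling concrete, the following minimal Python sketch separates a controller, a mapping layer and a rendering engine into independent modules. All class, sensor and parameter names here are hypothetical stand-ins, not taken from any of the cited systems.

```python
from typing import Callable, Dict, Tuple

class Renderer:
    """Stand-in for a spatial sound rendering engine."""

    def set_parameter(self, name: str, value: float) -> None:
        print(f"renderer: {name} = {value:.2f}")

class MappingLayer:
    """Assigns controller outputs to inputs of the rendering engine."""

    def __init__(self, engine: Renderer) -> None:
        # controller output -> (engine parameter, transfer function)
        self.routes: Dict[str, Tuple[str, Callable[[float], float]]] = {}
        self.engine = engine

    def connect(self, sensor: str, parameter: str,
                transfer: Callable[[float], float] = lambda x: x) -> None:
        self.routes[sensor] = (parameter, transfer)

    def forward(self, sensor: str, value: float) -> None:
        # The controller knows nothing about the engine; the mapping
        # layer alone decides which rendering input a sensor drives.
        if sensor in self.routes:
            parameter, transfer = self.routes[sensor]
            self.engine.set_parameter(parameter, transfer(value))

# One-to-one mapping: a normalized fader value (0..1) steers the
# azimuth of a virtual sound source (0..360 degrees).
mapping = MappingLayer(Renderer())
mapping.connect("fader_1", "source.azimuth", lambda v: v * 360.0)
mapping.forward("fader_1", 0.25)   # prints: renderer: source.azimuth = 90.00
```

Exchanging either side, a different physical controller or a different rendering engine, only requires redefining the routes, which is precisely the freedom (and the decorrelation risk) that the DMI metaphor introduces.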

Considerations of both the control interface and the mapping structure are crucial to instrument design in order to minimize control complexity without limiting functionality. For a systematic outline of mapping strategies, refer to Miranda and Wanderley (2006). Marshall et al. (2007) discuss common control issues and introduce three levels of spatial sound control parameters, which are related to (1) the position, orientation and movement of the sound source and sink, respectively, (2) characteristics of the sound source (and sink), and (3) environmental and room model parameters (Marshall et al. 2007, 229). For a list of typical parameters related to all three levels see Table 1.
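In software, these three levels can be kept as separate parameter groups that a mapping layer addresses independently. The sketch below, continuing the hypothetical Python example above, groups a few of the parameters listed in Table 1; the field names and default values are illustrative assumptions, not specifications from Marshall et al. (2007).

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SourcePose:
    """Level 1: position, orientation and movement of source/sink."""
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # X, Y, Z
    elevation: float = 0.0
    trajectory: List[Tuple[float, float, float]] = field(default_factory=list)

@dataclass
class SourceCharacteristics:
    """Level 2: characteristics of the sound source (and sink)."""
    size: float = 1.0
    directivity: float = 0.0   # 0 = omnidirectional
    presence: float = 1.0      # perceived proximity/distance

@dataclass
class RoomModel:
    """Level 3: environmental and room model parameters."""
    size: float = 1.0
    early_reflections: float = 0.5
    reverberation: float = 0.5
```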

Beyond the aforementioned control aspects, the interaction interface includes the feedback side—be it visual, auditory, or tactile-kinaesthetic feedback—primarily experienced through the physical device itself and secondarily as an intended (auditory) result of the sound generation process (Miranda and Wanderley 2006, 11).

In order to compare and analyse musical interfaces appropriately, different classification systems have been developed, the most common one going back to Miranda and Wanderley (2006).

Table 1 Spatialisation system control parameters (based on Marshall et al. 2007; Perez-Lopez 2015)

Sound source[a] position and orientation: position (X, Y, Z); elevation; (trajectories)

Sound source[a] characteristics: size; directivity; presence/distance; brilliance/warmth; Doppler effect; equalization

Environmental/room parameters: size; presence; early reflections; reverberation; reverberation cut-off frequency; air absorption; geometry

[a] Parameters refer to sound source and sink, respectively.
Based on the resemblance to existing musical instruments, the authors distinguish between augmented musical instruments, instrument-like or instrument-inspired controllers, and alternate controllers. Especially the category of alternate controllers—subsuming various interface concepts beyond the physical-mechanical interaction paradigm of acoustic instruments—can be broken down into sub-categories related to their sensing functionality relative to the human (Paradiso 1997; Mulder 2000): touch controllers react to direct physical manipulation (like a button or knob); non-contact or expanded-range controllers provide a limited sensing range for control gestures without physical contact (e.g. by using an infrared sensor system); wearable or immersive controllers capture control gestures with few or no restrictions on movement, since the performer is always within the sensing field (by using, e.g., a sensor glove, a sensor suit or a wide-range camera tracking system).

A special form of wearable controller can be found in biofeedback interfaces, which allow for the acquisition of electrical signals generated by the performer's muscles, eyes, heart or brain. Although present in the field of music and interactive media art for over 50 years now, these interfaces have played no significant role as spatial performance instruments, most likely due to the limited controllability and bandwidth of some of the captured parameters (such as brain waves).[2]

For a larger subgroup of alternate controllers, such as video game controllers and camera tracking systems, Overholt (2011) uses the term borrowed controller to emphasize that these have not originally been designed as musical interfaces. Interestingly, most spatialisation controllers can be assigned to this category.

Related to the control paradigm, further criteria can be used to distinguish between different realizations of spatialisation controllers. With respect to DMIs, Pressing (1990, 14) and Birnbaum et al. (2005, 193-94) propose multidimensional description spaces dealing with different aspects of the controller and its relation to both the performance and the performer. Perez-Lopez (2015) derives a set of dimensions relevant for the analysis of spatialisation systems (a minimal encoding of these dimensions is sketched after the list), including:

  • Role of the performer—the performer exclusively controls spatial parameters in contrast to a performer who controls both spatialisation and sound synthesis.
  • Required user competency—casual untrained users in contrast to trained expert users aiming at expressivity and virtuosity.
  • Number of performers—most spatialisation instruments have been designed for a single performer; however, the control task could also be (functionally) shared by a group of performers.
  • Multiplicity of control—denotes the relationship between the quantity of simultaneous control streams available and the requirement to control these parameters continuously (as opposed to a default state when no control signal is present).
  • Control monitoring—related to the real-time feedback modalities provided by the system on the executed control (e.g. by using a graphical user interface).
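As a sketch of how these dimensions could be operationalized, the hypothetical Python record below encodes them as fields of a uniform controller profile; the enum values paraphrase the list above, and the example entry is purely illustrative rather than a classification from Perez-Lopez (2015).

```python
from dataclasses import dataclass
from enum import Enum

class PerformerRole(Enum):
    SPATIALISATION_ONLY = "controls spatial parameters only"
    SPATIALISATION_AND_SYNTHESIS = "controls spatialisation and synthesis"

class Competency(Enum):
    CASUAL = "casual untrained user"
    EXPERT = "trained expert user aiming at expressivity and virtuosity"

@dataclass
class ControllerProfile:
    name: str
    role: PerformerRole
    competency: Competency
    performers: int          # number of (possibly cooperating) performers
    control_streams: int     # simultaneous continuous control streams
    monitoring: str          # feedback modality, e.g. "GUI", "haptic", "none"

# Illustrative profile for a hypothetical joystick-based spatialiser:
example = ControllerProfile(
    name="joystick spatialiser",
    role=PerformerRole.SPATIALISATION_ONLY,
    competency=Competency.CASUAL,
    performers=1,
    control_streams=2,       # X and Y axes mapped to source position
    monitoring="GUI",
)
```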

Having discussed the premises of spatialisation as a performance practice in the field of electroacoustic music, and having contextualized real-time spatialisation controllers within the discourse of DMIs and HCI, we will in the following provide a systematic inventory of spatialisation controllers presented to the public from the 1950s until today.

  • [1] One can consider Stockhausen's Rotationstisch (a loudspeaker mounted to a rotating turntable system) as a typical tool for spatial studio composition (Brech 2015). The spatialisation system used by Chowning to realize his simulation of moving sound sources (Chowning 1971) represents a typical studio approach. Simultaneously, it was clearly limited by the processing performance of the 1970s (Zvonar 2000).
  • [2] There is consensus that Music for Solo Performer (1965) by Alvin Lucier, scored for “enormously amplified brainwaves and percussion”, was the first composition to make use of a biofeedback interface to control percussion instruments by the resonance of the performer's brain activity (Miranda and Wanderley 2006). Several further artistic experiments have followed using biofeedback interfaces. Refer to Miranda and Castet (2014) for a comprehensive review on brain-related interfaces.
 