Digital Music Technologies—Designing with Metaphors

Music is many things to many people. If we were to attempt a general definition, one approach might divide music into two key categories: in the first, music is performed, where an instrumentalist, or a group of them, engages in an act of generating sound, either from a score, from memory, or by improvisation. The context of co-players, the audience, and the location plays an important role here, where the liveness yields a sense of risk, excitement, and a general experience of the moment’s uniqueness and unrepeatability. The second category is music as stored: in people’s memory, as written notation, on disks, tapes, or digital formats. The music could even be stored as an undefined structure in the form of algorithmic code for computer language interpreters. Now, in the 21st century, things are a little more complicated. New developments in digital music technologies transcend the above categories, deriving their symbolic design equally from the world of acoustic instruments, performance, notation, and electronic technologies. These new technologies further complicate the relationships between instrument makers, composers, performers, and the audience. Who is what? And the work itself ... is it an instrument? A compositional system? A piece?

There is a real sense that the technologies of music making are undergoing a drastic change through the transduction into the digital domain. This can be explored by studying the divergent natures of acoustic vs. digital instruments. The sound of a traditional musical instrument is necessarily dependent on acoustics, or the physical properties of the materials it is built from. Instrument makers are masters of materiality, with sophisticated knowledge of material properties and how sonic waves traverse in and through diverse types of matter, such as wood, metal, strings, or skin membranes. The instrumental functions of an acoustic instrument are necessarily determined by millennia-old practices of material design. This is clearly not the case with digital instruments, where any interface element can be mapped to any sound. The mappings are arbitrary, and can be intelligent, non-linear, non-determined, inverse, open, and more. The design of digital interfaces ranges from being directly skeuomorphic[1] and functional to more abstract and representational. In either case, every interface element signifies a function resulting in a sound rather than directly causing a sound. With the mapping function inserted between a gesture and the sound, the interface becomes semiotic: with this arbitrary relation, the digital instrument begins to surpass the acoustic as an epistemic entity, and at times manifests as a vehicle of a music theory or even a score.
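
To make the arbitrariness of such mappings concrete, consider a minimal sketch in SuperCollider (one of the environments mentioned later in this chapter), assuming only a running audio server; the mouse’s x position stands in for any normalised gestural controller, and the same gesture can be routed to pitch through interchangeable mapping functions: linear, exponential, or inverted.

    // A sketch of the mapping layer between gesture and sound: the same
    // normalised gesture (here the mouse's x position, standing in for any
    // controller) can be mapped to pitch in arbitrary, interchangeable ways.
    (
    {
        var gesture = MouseX.kr(0, 1);           // normalised gesture, 0..1
        var linear  = 220 + (gesture * 660);     // linear mapping: 220-880 Hz
        var expo    = 220 * (4 ** gesture);      // non-linear (exponential) mapping
        var inverse = 880 - (gesture * 660);     // inverted mapping
        SinOsc.ar(expo, 0, 0.1)                  // swap expo for linear or inverse
    }.play;
    )

Nothing in the digital signal chain privileges any one of these functions; the choice is a design decision rather than a material necessity.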

The idea of making music with computers has existed since they were invented, and we can boldly claim that computers are the ideal media for composing, performing, storing, and disseminating musical work. A quick tracing of this symbiotic relationship takes us back to early computers, with Ada Lovelace speculating about the musical potential of Babbage’s Analytical Engine in 1842 (Roads 1996, p. 822). In the early days of electronic computers, we find Lejaren Hiller and Leonard Isaacson applying Markov chains in 1957 for one of the first algorithmically composed pieces, the Illiac Suite, and Max Mathews inventing notation languages for computer-generated synthetic sound. However, if we look at the history of mass-produced digital musical instruments and software, we see that computers have been used primarily as bespoke microchips integrated into instruments, for example in a synthesizer or an electronic drum kit, where the hardware design has been primarily mimetic, aiming at imitating acoustic instruments.[2] In the case of music software, we are faced with multiple imitations of scores, piano rolls, and magnetic tape, where the key focus has been on developing tools for the composition and production of linear music at the cost of live performance. From both business and engineering perspectives it is evident that hardware manufacturers benefited from a model where new synthesis algorithms were embedded in redesigned computer chips and sold as new hardware.[3] Software developers in turn addressed another market, applying the “studio in your bedroom” sales mantra, which sparked the imagination of a generation in the late 80s, who used Cubase on Atari computers, starting a genealogical lineage that can be traced to the current Logic or Live digital audio workstations.

Specialists in innovation studies, marketing, science and technology studies, and musicology could explain in much more detail how technologies gain reception in culture, the social and economic conditions that shape their evolution, and the musical trends that support the development of particular technologies. From the perspective of an inventor, it is less obvious why the history of musical technologies has developed this way, although inventions ultimately have to depend on market forces in order to enter public consciousness. Here, the history of failures is as interesting as, if not more interesting than, the history of successes (“failure” being defined here in terms of the market, economy, and sales). One such “failed” project could be Andy Hunt’s MidiGrid, a wonderful piece of live improvisation software for MIDI instruments written in the late 80s (Hunt 2003). An innovative system ahead of its time, it focused on performance, liveness, and the real-time manipulation of musical data. The software was written for the Atari, and Hunt received some interest from Steinberg (a major software house), which at the time was working on the Cubase sequencing software. Only an alternative history of parallel worlds could speculate on how music technologies might have evolved if one of the main music software producers had shipped two key software products: one for performance and the other for composition.[4] At the time of writing, certain digital interfaces are being produced that do not necessarily imitate acoustic instruments, although they are inspired by them. It is yet to be seen whether instruments such as the Eigenharp and the Karlax[5] will gain the longevity required to establish a musical culture around the technology of composing and performing with them.

Since the early 2000s, developments in open source software and hardware have altered this picture. The user has become a developer, and through software such as Pure Data, SuperCollider, CSound, Max, ChucK, and JavaScript, and hardware such as Arduino and Raspberry Pi, a world has opened up for the creation of new music technologies. The ease of access and low cost of these technologies, together with strong online communities that are helpful and encouraging, make such DIY approaches fun, creative, and rewarding. Now that music software has become sophisticated to the degree that it can almost compose the music without the input of the user (who becomes a “curator” of samples or a “wiggler” of knobs and buttons), many find that truly creative approaches happen when music technology itself is questioned and redefined. Gordon Mumma’s ideas of “composing instruments” (see also Schnell and Battier 2002) are relevant here.

This chapter describes such a questioning of music technology. Here the investigation concerns interface and interaction design, i.e., how the visual elements in music software can affect musical ideas, composition, and performance. Considering the practically infinite possibilities for the representation of sound in digital systems—both in terms of visual display and the mapping of gestural controllers to sound—the process of designing constraints will be discussed in relation to four systems developed by the author that engage with the visual representation of sound in music software.

  • [1] Skeuomorphic design is where features that were necessary in an original object are retained as ornamentation in the derivative object. Examples in graphical user interface design include screws in screen-based instruments, leather in calendar software, the use of shadows, and so on.
  • [2] The contrasting design ideologies of Moog and Buchla are a good example of the problems at play here. It is evident that Moog’s relative commercial success over Buchla’s was largely due to the referencing of well-known historical instruments (see Pinch and Trocco 2002).
  • [3] There are exceptions to that model, of course, such as the discontinued Nord Modular synthesizer.
  • [4] Hunt’s software is of course no failure. It is a highly successful research project that has served its author and many others as a musical tool, for example in education, and it has inspired various other research projects, mine included. But the context of this discussion is innovation and how a specific instance of music technology might fare in the world of mass markets and sales.
  • [5] The manufacturers of both interfaces call them “instruments”. Some might argue that they only become instruments when coupled with a sound engine, as familiar instrumental models indicate (e.g., Wanderley 2000 or Leman 2008), but I do believe it makes sense, in terms of innovation, longevity and spread of use, to call these instruments. Will there be a day when something like the Karlax will be taught in music conservatories? How would that even work? What would the training consist in?
 