A humanised augmented practice

Placing the concerns and priorities of humans above those of technology would seem to be an appropriate, ethically sound starting point for our artistic endeavours. This is part of what I describe in my research as ‘humanising’ in an electro-instrumental performance environment that centres on the use of computers alongside traditional instruments.


I count myself among a growing number of musical performers of traditional acoustic instruments who are embracing technologies that have, for several decades, been the domain of electronic musicians, composers, DJs, producers and electric guitarists.

Computers, pre-recorded media, effects and multichannel sound are becoming increasingly established features in the performance of contemporary classical music, jazz and free improvisation. Furthermore, the increasing accessibility of computer-based music practice, and its application throughout a widening diversity of popular music styles, are contributing to a blurring at the edges of these and other genre distinctions.

The extension of instrumental possibilities by electronic means is far from new—in fact, its origins extend to experiments in the late 19th century, decades before the first commercially available electric guitars of the 1930s (pictured), with John Cage (1939) and Karlheinz Stockhausen (1958) among early pioneers within the field of concert music.


The Rickenbacker Electro A-22 aluminium lap steel guitar (the so-called ‘Frying Pan’)—one of the first commercially available electric guitars, designed by George Beauchamp (1931-2).

The practice of real-time digital manipulation of instrumental sounds was hugely advanced by the arrival of powerful personal computers in the 1970s and by the Musical Instrument Digital Interface (MIDI) technical standard in 1983. The subsequent growth in memory capacity, processing power and relative affordability of home computers, laptops and mobile devices, together with peer-to-peer knowledge sharing via the worldwide web, has now democratised computer music to an unprecedented degree.
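For those unfamiliar with the standard, a MIDI message is only a few bytes: a status byte identifying the event type and channel, followed by data bytes such as note number and velocity. As a minimal sketch (in Python, using the third-party mido library purely for illustration; nothing here reflects my own setup), this is what a ‘note on’ event looks like at byte level:

```python
# Illustrative only: inspecting a MIDI note-on message with the mido library.
import mido

# Middle C (note 60) at moderate velocity on channel 1 (mido channels are 0-based).
msg = mido.Message('note_on', note=60, velocity=64, channel=0)

# The raw bytes: status byte 0x90 (note-on, channel 1), then note number and velocity.
print(msg.bytes())  # -> [144, 60, 64]
```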

New avenues of musical expression

For many musicians, electronic experimentation takes the form of a parallel practice, requiring the purchase of specialist instruments modelled on their acoustic counterparts and equipped to manage the transition to sonic amplification and signal processing (effects)—for example, keyboard-based synthesisers, the electric violin, wind controllers and electronic drum interfaces, whose physical sounding bodies have been minimised or dispensed with altogether.


One among many designs of electric violin. Like the electric guitar, the body can be shaped on a combination of ergonomic, acoustic and aesthetic grounds. http://www.barberatransducers.com/gallery.html

Electroacoustic musicians may choose to work entirely within their computer or specific hardware, while others create and evolve new classes of instrument, with a widening array of control parameters, from pedals and touchpads to light sensors, accelerometers and brain-computer interfaces (BCIs).


Myriam Bleau’s ‘Soft Revolvers’ (2014) http://www.myriambleau.com/works/ (Photo: Severin Smith)

Some performers, including myself, prefer to prioritise an electroacoustic augmentation that is based around their existing instruments, with which a highly personal relationship has been forged over many years’ practice. Over the past two decades, there have been numerous developments that attempt to take into consideration the vast amount of embodied skill and knowledge brought to the practice by experienced musicians. Examples include:

See: 3 ways to make music with a computer [coming soon]

Each of these approaches opens new avenues of expression to be explored and, while they are far from mutually exclusive, on the whole musicians will tend to favour one form of practice over others.

Instrumental ‘sound’ and ‘voice’ in the 21st century

Whether we choose a parallel or augmented instrumental approach is very much a personal choice, and one that centres around performers’ individual preferences with regard to what we refer to as our sound, which connects in a wider sense to the idea of having an artistic voice. We speak of ‘voice’ in instrumental terms because musicians seek not only to optimise their expressive skills, but in some way to transcend their instrument in a shared musical experience.

This voice, I would argue, is as central to the art of musical performance as to any artistic output. Even that most homogenised of ensembles, the symphony orchestra, requires both a deft balance of disciplined dynamic and timbral blending and a strong sense of individual musical expression, for example in solo lines and passages, as well as in the global interpretation of a conductor or musical director.

I will be writing more in the coming months about my own particular sense of instrumental sound and how I prioritise and adapt within it according to circumstances. It is important that performing musicians begin to articulate their processes and priorities, as these are under-represented and frequently misunderstood in the academic literature and general press.

Bringing the embodied knowledge and priorities of the experienced instrumentalist into a wider technological environment, without undue detriment to the capabilities of their existing apparatus, or disproportionate subjugation of the human performer, is my central concern in establishing a humanised augmented instrumental practice.

Being heard in the fray

For musicians with a Western classical training, there can be an understandable tendency to want to protect the hard-won and symbiotic relationship they have developed with their instrument(s). Compromising this relationship can lead to obstacles when interfacing with new technologies.

At each end of the additional sound chain we encounter microphones and loudspeakers, the science of which represents a formidable body of knowledge to begin to digest, including aspects of acoustics, psychoacoustics, electronics, equipment classification and positioning, etc. In between, there may now be a vast array of digital sound technology to understand and form a relationship with. The theory and practice of professional audio, some basic principles of mathematics, physics and programming, software systems, interface design and a variety of external control devices are now part of our instrument too.
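To give one concrete flavour of those ‘basic principles’: digital audio levels are usually expressed in decibels relative to full scale (dBFS), a logarithmic measure of how close a signal sits to the loudest value the converter can represent. The following minimal sketch (Python with NumPy; the sine-wave signal is invented for the example) performs the kind of calculation that sits behind every level meter in a DAW:

```python
# Minimal sketch: RMS level of an audio buffer in dBFS (decibels relative to full scale).
# The signal here is synthetic; in practice the buffer would come from an audio
# interface or a sound file.
import numpy as np

sample_rate = 48000                          # samples per second
t = np.arange(sample_rate) / sample_rate     # one second of time values
signal = 0.5 * np.sin(2 * np.pi * 440 * t)   # 440 Hz sine at half of full scale

rms = np.sqrt(np.mean(signal ** 2))          # root-mean-square amplitude
dbfs = 20 * np.log10(rms)                    # decibels relative to full scale (1.0)

print(f"RMS level: {dbfs:.1f} dBFS")         # roughly -9.0 dBFS for this signal
```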

Laptop, software, audio interface, microphones, cables, foot pedals, headphones, amplifiers—these may all form part of the wider instrument.

Exploring the sound potential of acoustic instruments—including their increasing possibilities with the arrival of 20th and 21st century technologies and multiple genres—forms a basis for the development of the instrumentalist’s personal voice. And where a musician wishes to assert certain priorities within a sonic environment, these should be taken into consideration by those designing and managing the technologies of sound reinforcement and manipulation.

When this is the case, in my experience, things tend to run optimally. There is a strong sense of contribution, of ownership, on the performer’s part to the overall sound output, which is appropriate—their voice is heard. Manifold decisions made about instrument maker and model, set up, the prioritisation of certain sound elements and the managing of contingent factors in the space, such as size, reverberance, equipment specification and placement, are all carried strongly into a process of collaboration with the sound engineer to create a particular sonic experience in the venue.

When a performer’s priorities with regard to their sound are not clearly articulated or well considered, there can be substantial problems—in particular, a sense of helplessness in the musician at the failure to match their intended sound to its actual output. Whether due to inadequate rehearsal or soundcheck time, inappropriate equipment, or poor understanding and communication between parties in the performance space, this can be highly frustrating and potentially demotivating for the performer—their voice is denied, overruled or (sometimes literally) overpowered.

This is important, because this relatively new environment for acoustic musicians is a creatively fertile place, and one which is here to stay. Many more experienced musicians could be enjoying and contributing to its wider practice. If some of those who could bring enormously valuable experience to digitally augmented performance practice—those perhaps less accustomed to electroacoustic environments and aesthetics—decide after some unsatisfactory experiences that it’s not worth the effort, then we all (practitioners, composers, software designers, audiences) lose out.

Humanising

How do we avoid creating entrenched and oppositional camps of either dedicated or disenchanted musicians in a new media performance practice?

I suggest that a good starting point could be the prioritisation of human expressive concerns at the centre of the new ‘performance ecosystem’ (Waters, 2007)—performers, composers, technical experts, listeners, as well as technology, space and wider culture—where technology is employed as an agent in the service of this shared experience. As Simon Emmerson puts it: “to reclaim the riches of the acousmatic universe … placed under the expressive control of the truly ‘live’ performer” (Emmerson, 1994).

As performers, our responsibility is to learn where and how we are able to exercise influence over the mediated environment, to what extent we wish to do so, and how to articulate this clearly with our technical collaborators.

Let’s listen, learn and communicate our way to a wider musical practice that asserts its own values while embracing ongoing change and innovation.


An augmented setup: here a Buffet Crampon B flat clarinet (its detailed set-up left out here for now), with Rumberger K1X advanced piezo technology microphone, Keith McMillen SoftStep MIDI controller, plus (out of sight) further microphones, high impedance Radial PZ-DI box, RME Fireface UCX audio interface, Apple MacBook Pro, Max/MSP and/or Ableton Live and Reaper DAW software (for test recording and analysis), AKG headphones, Meyer Sound loudspeakers—all of which need to be understood to some extent in order to assert the musician’s voice in the newly augmented environment.
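As a rough, illustrative counterpart to one small link in that chain (and not a representation of my own Max/MSP or Live patching), the sketch below uses Python and the mido library to listen for control-change messages from a foot controller and toggle a notional effect on and off; the port name and controller number are assumptions for the example only:

```python
# Hedged sketch: reacting to a MIDI foot controller that sends control-change messages.
# The port name and controller number below are placeholders, not a real configuration.
import mido

PORT_NAME = 'SoftStep'   # assumed port name; list available ports with mido.get_input_names()
CC_NUMBER = 80           # assumed controller number for one pedal button

effect_on = False

with mido.open_input(PORT_NAME) as port:
    for msg in port:
        # Toggle the (notional) effect when the pedal is pressed (value > 0).
        if msg.type == 'control_change' and msg.control == CC_NUMBER and msg.value > 0:
            effect_on = not effect_on
            print('effect', 'on' if effect_on else 'off')
```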

 

Links

An augmented clarinet: in this instance, I’m using Rodrigo Constanzo’s C-C-Combine in Max/MSP to create the effect of a virtual improvising partner (excerpt):

References

Emmerson, S. (1994). “Live” versus “real-time.” Contemporary Music Review, 10(2), 95–101.

Kimura, M., Rasamimanana, N., Bevilacqua, F., Schnell, N., Zamborlin, B. & Fléty, E. (2012). Extracting human expression for interactive composition with the augmented violin. In Proceedings of the International Conference on New Interfaces for Musical Expression, 99–102.

Machover, T. & Chung, J. (1989). Hyperinstruments: Musically intelligent and interactive performance and creativity systems.

McPherson, A. (2010). The magnetic resonator piano: Electronic augmentation of an acoustic grand piano. Journal of New Music Research, 39(3), 189-202.

Waters, S. (2007). Performance Ecosystems: Ecological approaches to musical interaction. EMS: Electroacoustic Music Studies Network.

 
https://en.wikipedia.org/wiki/Frying_pan_(guitar), accessed 11.8.15
“After discovering that his system produced copious amounts of unwanted feedback from sympathetic vibration of the guitar’s body, Beauchamp reasoned that acoustic properties were actually undesirable in an electric instrument.”
While Beauchamp’s priorities were specific to the 1930s Hawaiian guitar, it’s certainly true that the acoustic properties of an instrument provide additional challenges to electrification.

 

 
