Interview: Natasha Barrett

Natasha Barrett is a freelance composer, performer, and installation artist. The composition and manipulation of space is a central element in much of her work, and it is the focus of this interview. Barrett completed her master’s degree at the University of Birmingham, where she studied with Jonty Harrison and became practiced in the art of live sound diffusion using Birmingham’s renowned BEAST (Birmingham Electroacoustic Sound Theatre) system. She completed a doctoral degree in composition at City University in London in 1998, studying with Denis Smalley. Her body of work includes large architectural installations, electroacoustic concert pieces, works for instruments and performers, and live improvisation. Barrett’s works have won international acclaim and numerous awards, including the Nordic Council Music Prize in 2006, first prizes at the Bourges International Electroacoustic Music Competition (1998 and 2001), and most recently, a commission from the 2008 Giga-Hertz Award. Barrett was born in the UK but currently lives in Oslo, Norway. She has released numerous CDs, available through her website.

Peter Traub: Space as a compositional parameter features prominently in your work. Knowing that you studied with composers Jonty Harrison and Denis Smalley – who do significant work with diffusion and multi-channel systems – gives us some clue as to your interest in this area, but I’m wondering what really attracts you to working with spatializing systems, ambisonics, and diffusion?

Natasha Barrett: During my master’s degree, electroacoustic composition, and particularly acousmatic composition, drew my interest more than purely acoustic composition. In an acousmatic context, spatial elements yield to greater variation and malleability, so simply composing within this framework made them more interesting and important. My interest in spatialisation systems therefore stems from investigating space in sound, its meaning and purpose in a compositional context, and ultimately the need to find ways to communicate this information to a listener outside the composition studio.

Excerpt from “Fetters”, 2002

Peter: In your interview with Felipe Otondo, “Creating Sonic Spaces”, you say of your work with live diffusion, “I enjoy it and I know that I can project the right thing in the music.” The last part of that sentence is what interests me. The “right thing” clearly varies by piece, but it implies a type of musical play with space and spatial gesture, and I’m wondering if you can speak to that. When you diffuse your own or someone else’s work, what do you listen for in the existing piece to help you determine how it should be spatialized? You mention on your website that you make diffusion scores when preparing to diffuse a piece – what goes into the construction of a diffusion score, and how does its content relate to the content of the piece you’re diffusing?

Natasha: By ‘the right thing’ I am referring to an understanding of the work’s larger concept of musical structure, setting or landscape, as well as information such as gesture, articulation and identity over various temporal frames. First the work needs to be understood in detail – or analyzed to make evident its internal workings. Then the performance act requires knowledge and training to project this understanding over the loudspeaker system.

Acousmatic works are diverse and there are countless ways to execute the performance. I can, however, explain my own basic procedure. First, I listen to the work at least twice in a good listening situation. Then I transcribe my own graphic score to visually capture the work as I hear it – layers, gestures, textures, motion directions, articulations, and text hints to allusions or extrinsic references. It is important for this analysis to be hand drawn: a sonogram holds too much time-frequency information and not enough information connected to our cognition of the music. By drawing, I also teach myself the work. The next phase is to imagine how I would spatialise the work over a ‘continuous space’ – i.e. a spatial projection free from specific loudspeaker positions or room size. Some concerns include the speed and direction of a gesture; the depth of a sound-field; the layering of many spatial fields; space within the sound itself as a voluminous form with shape and dimension rather than as motion; how the sound activates the listening space (room acoustic); how I could use space to draw attention to articulations that are structurally important; and how clearly recognizable sounds may demand a specific treatment. The style of musical language and genre also come into play. In an ideal world this mental or conceptual performance would be followed by hours of work in the real space, yet rehearsal time is never so luxurious. Besides, ‘out of real-time’ I can also rehearse the physical motion of performance – rather like playing “air guitar”, but instead “air diffusion”. In the real rehearsal the preparation is modified to fit the room and loudspeaker set-up. Playing my own work is slightly different in that I can skip the initial learning phase.

Speakers distributed through a vertical space in “Adsonore”

Peter: This is very interesting, and I’m wondering if you tie certain types of diffusion gestures to particular pitch, timbral, or temporal elements? Are there aspects of a piece that contribute more than any other to how you diffuse it (e.g., its density, pitch content, or already existing spatial information)?

Natasha: The diffusion gesture is normally context-specific for each work. Having said that, just as there are trends across compositions, there are also trends in diffusion technique. For example, pitch is often a latent aspect of the composition, creating continuums or planes. You hardly want to activate this material in a dynamic way, drawing inappropriate attention. For large sound masses, in a decent-sized space, I normally find it most successful to place the material on distant loudspeakers, where the volume can be pushed that little bit more than on closer loudspeakers. Room acoustics thus emphasize a powerful sense of energy and submersion. However, these two examples are not the most important aspects of the diffusion. For me the primary issue is to express and enhance what already exists in the work. As performers we are there to project and interpret, not to recompose.

Excerpt from “Kongsberg Silver Mines” from the “Sub Terra” cycle, 2008

Peter: In his article “Sound, space, sculpture: some thoughts on the ‘what’, ‘how’, and ‘why’ of sound diffusion”, Jonty Harrison gives a passionate defense of the art of live diffusion over the BEAST system, a system you have spent time on as well. Unlike ambisonics, the spatializing techniques used in a diffusion system like the BEAST do not attempt to recreate a 3D sound field. Since you have worked extensively in both areas, could you talk about how you differentiate between the space of an ambisonics-capable listening space and that of a space used by the BEAST? Does your use or conception of space as a compositional parameter change significantly between these different systems?

Natasha: Ambisonics systems are interesting in that they allow the direct transfer of composed spatial information to the listener. Stereo diffusion over a loudspeaker orchestra is interesting in that the work is interpreted in performance, and although the composer-listener link is interrupted by the performer, diffusion may be advantageous in situations connected to the concert space, loudspeaker equipment, listener location, and not least the type of composition involved. Also, some motions are tied to the loudspeaker as a sounding device and work best in diffusion, such as the frontal ‘punch’ spatial attack. I sometimes work with hybrid techniques combining an ambisonics layer and a diffusion layer specifically to take advantage of both. We should note that since the article you mention was written, Jonty has also composed multi-channel works and BEAST has also taken an interest in ambisonics.
My idea of space as a compositional parameter does indeed change depending on whether I am composing an ambisonics work or a work for diffusion. Some ambisonics compositional techniques simply don’t function under stereo diffusion (for example, multiple simultaneous motion trajectories through free space). Other spatial techniques lose their sense of presence in stereo and need enhancing in other ways. Understanding these differences has created extra work for me. Many concert organisers, understandably, prefer normal stereo for use in diffusion performance, so for my pure ambisonics works I end up with two different mixes. Information lost in stereo phantom images needs to be re-articulated in other ways and ultimately results in a variation of the work. Over the past few years I have restricted my use of non-hybrid ambisonics to contexts where I can accurately dictate the loudspeaker set-up – in large-scale whole-concert works or in sound installations.

Installation for performance of “Agora” (2003), a collaboration with Birger Sevaldson.

Peter: As an American composer, I rarely hear diffusion concerts over here – although multichannel is the norm – and it seems generally accepted within the American electroacoustic community that the practice and art of live diffusion is something that happens in Europe or Canada (it does stem from a European tradition, after all). Why do you think live diffusion hasn’t been accepted or practiced in the US in the way that it has in many other countries?

Natasha: I have no idea!

Peter: You have done a number of installations that work with different types of architectural space, such as “Sub Terra”, “Barely”, and “AGORA: Boundary Conditions”. How does your conception and use of space change when moving from concert settings to installation settings?

Excerpt from “Sub Terra”, 2008

Natasha: ‘Barely’ and ‘Boundary Conditions’ are installations involving physical visual materials, which in conjunction with sound create the architectural space. These two works were made in collaboration with an architect and his research-design group OCEAN North. In these works the conception of space changes in that sound and physical materials interact and need to make sense together. In particular, ‘Barely’ and ‘Boundary Conditions’ address the audience as a mobile body, where their own choice of movement influences how they perceive the work. In contrast to an audience sitting stationary throughout the work, the moving listener gathers spatial information to improve their perceptual accuracy and experience the dimensionality of the work installed in the space. The material parts of installations serve a three-fold purpose: to influence the visitors’ motion and point to sounding information that they otherwise may not have found; to influence the original room acoustic, sound-field and propagation from the loudspeakers; and to project the work, which is temporally static in terms of the physical materials but set in motion by the visitors’ motion – in contrast to the sound, which develops in time as well as in relation to the visitors’ motion.

Installation of “Barely_Part-1”, a collaboration with Birger Sevaldson and the experimental design and architecture group Ocean North.

Peter: When creating installations in which the architectural setting is a prominent component, how do the different qualities of the site, such as resonance, visual appearance, history, and so forth, influence how you create the sonic component?

Natasha: The qualities of the site are important – the acoustic, smell, light, the feeling of size gained through both aural and visual senses, as well as the overall impression that the space imposes on the visitor. I don’t think that installations necessarily articulate the space more than concert works. Concert works are portable and adaptable to different spaces mainly by virtue of performance, and normally address a concept of time and structure whereby the nature of the listener-space is but one of many important facets. Installations are less portable. Without the performance aspect the site needs to be carefully addressed, and the approach to temporal form and material demands a listening (experiencing) strategy where the setting and the work are a more unified entity. This was particularly the case for Barely_part-1 and part-2. Yet the amount of work involved meant that some adaptability needed to be considered. Barely contains layers that function in all spaces, layers that are ‘very site specific’ for each space, and layers that are adaptable simply by re-orientating loudspeaker locations, relative volumes and relative frequencies. Barely_part-1 was installed in an enormous concrete, steel and glass industrial space dating from the Second World War. Barely_part-2 was redesigned and installed in a small and dry architectural gallery space.

Excerpt from “Barely_part-1”, 2007

Peter: When you say that “Barely” contains some layers that function in all spaces, some that are custom-made, and so forth, what differentiates the custom-made materials from the adaptable ones? Do the custom-made ones contain elements specific to the resonance of the space, or perhaps its history?

Natasha: The ‘very site specific’ sounds were of two types: (a) the natural sound-layer within and outside the installation site and (b) sounds made with the specific room acoustic in mind in terms of frequency, volume and texture. In Barely_part-1 the natural sound-layer involved a little traffic noise, a building site, a kindergarten, weather sounds and people sounds mainly from outdoors filtered through the structure and acoustic of the building. Although referential sounds could connect to the function or history of the building, in Barely such sounds were related to the concept of the work and were instead amongst those sounds most portable between the two spaces.

Peter: Spatial listening is something that, on a primitive level, we are all very good at, but on a more refined musical level, it takes some training. Your ability to listen and compose spatially is highly refined, but how do you communicate spatial gesture, allusion, and so forth to an audience whose spatial listening abilities may not be as practiced?

Natasha: Accurate spatial perception is partly dependent on training to reawaken our ears in a world dominated by visual information – it’s an ability we have but don’t often use. However, when the spatial information is tied to an allusion or gesture we are considering more than space in isolation. Amongst other techniques, there are two methods I use to communicate spatial information to a ‘less practiced’ listener. One is to find what I’m after and then enhance it or make it ‘hyper-real’; for example, by increasing the speed of a motion you draw the ear’s attention. The other is to tie spatial information to a source bond or allusion that reinforces, or at least addresses, the space in some way. In other words, connect to our biological spatial abilities by amplifying culturally conditioned concepts.

Peter: Most composers eventually develop a musical language in terms of how they use pitch, timbre, temporality, and so forth. Do you have a spatial language, in that you use similar spatial gestures or constructions, or combinations of spatial gestures, between pieces to communicate particular meanings or experiences?

Natasha: Yes, I think so, but I hope I achieve a development of the language rather than simply passing ideas between works.

Peter: I guess what I’m trying to get at is what, if any, are the fundamental structures, forms, or gestures of that spatial language? Are the gestures tied to each other in some continuous form of musical language, or are they more tied individually to the sound material they’re moving around? Trevor Wishart wrote a section on spatial motion in his book “On Sonic Art”, and I’m wondering if you think about your spatial language in similar terms to how he describes and diagrams his taxonomy of various spatial gestures and movements?

Natasha: For me the spatial language mainly derives from the materials and ideas in each specific work. In the chapter you mention, Trevor describes an array of spatial motions. Such a description is useful to awaken composers to the details we are concerned with, but for my own work I find detachment from the sound in question problematic. Spatial information finds musical meaning when specific to, and developed from, the material as a totality. Sound contains an internal or inherent space in intrinsic terms and a connection to space in extrinsic or referential terms. These aspects need to yield to enhancement or be intentionally contradicted. I find it compositionally problematic to take a spatial formula at the outset and squeeze a sound into it, unless I am composing with particularly abstract material or instrumental music (and I don’t think this is Trevor’s meaning either).

Speakers arranged for “Sub Terra” (2008).

Peter: In your interview by Felipe Otondo in the Computer Music Journal, you described your composition process as follows: “To some people, it may sound strange that I compose first in stereo or normal quad and then realize the ambisonics version once the materials, the timing, the counterpoint, and the flow are correct. Things do change when you compose the ambisonics field, obviously – when calculating Doppler shifts and filtering, pitch and volume changes! Then you have to go back and change the material – but to do most of the composition first in the more traditional format makes the complete process manageable.” I’m wondering, since manipulation of space is such a central characteristic of your work, is your work process dictated to some degree by the technology – i.e., that it would technically be very difficult to realize the spatialization earlier on, before a more final form is in place? I suppose the question I’m asking, in a general philosophical sense, is this: if one uses space as a central compositional parameter, how does one go about composing space at the outset, letting it dictate the form and materials that go into it? Is such an approach even practical?

Natasha: The reason for not working in higher-order ambisonics during the development of the composition is the technical workload. For example, to work with 3rd-order ambisonics you either mix 16-channel files instead of mono or stereo files, or add an encoder for every track in the mixing session. I am, however, currently developing materials recorded in A-format (Soundfield four-capsule microphone), attempting to keep all source and composed material in A-format and monitoring the output decoded (first to B-format and then to quad loudspeaker monitoring). Interestingly, it has been necessary to destroy some spatial sources and re-articulate space in other ways – i.e. reduce the original space to stereo or even mono and develop the sound in a different direction. It is a misunderstanding to think that space in music is simply a matter of motion trajectories, scenes and landscapes.

In any case, I don’t think there is any major philosophical air about the approach. It’s all down to a combination of what’s practical and what you know through experience. After all, when sitting in the absolutely perfect listening location in a very good studio, a stereo sound reveals spatial information extremely close to 3-D, only that it is (predominantly) locked to the frontal image. Hearing and acknowledging this can be sufficient during the compositional process to allow a complete 3-D rendering later on (where the listener does not need to be located so accurately). Ambisonics room models and distance cues add another layer of complication, but one still needs to get the compositional ideas worked out. If stereo simply does not satisfy (for example, when inadequately projecting multiple uncorrelated motions), then I create real 3-D experiments and write notes, sketches, reminders, or even time–spatial co-ordinate tables to represent what I discovered and allow further development work in the stereo field.
After the basic compositional ideas are in place, the 3-D explosion is realized – and this is no quick or small part: it is a time-consuming process where errors are found, ideas may not function, and new issues are raised. I have thought at length about this process, and I think the only way for me to compose in higher-order ambisonics from the beginning and throughout is when the nth-order file format is transparent to the user and all processing and transformation software handles n channels in the source.
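The channel arithmetic Barrett describes can be made concrete. Below is a minimal Python sketch – not her actual toolchain, just an illustration of the standard maths – encoding a mono signal into first-order B-format (W, X, Y, Z) with the classic traditional/FuMa-style equations, and showing why a full 3rd-order 3-D mix carries 16 channels: an nth-order stream needs (n + 1)² channels. Real tools additionally choose a normalization convention (SN3D, N3D, etc.), which this sketch ignores.

```python
import numpy as np

def encode_first_order(mono, azimuth, elevation):
    """Encode a mono signal into first-order B-format (W, X, Y, Z).

    Classic 'traditional' encoding equations; azimuth/elevation in radians.
    Production encoders layer a normalization convention (SN3D, N3D, ...)
    on top of this, which is omitted here for clarity.
    """
    w = mono * (1.0 / np.sqrt(2.0))                  # omnidirectional component
    x = mono * np.cos(azimuth) * np.cos(elevation)   # front-back
    y = mono * np.sin(azimuth) * np.cos(elevation)   # left-right
    z = mono * np.sin(elevation)                     # up-down
    return np.stack([w, x, y, z])

def channel_count(order):
    """A full 3-D ambisonics mix of a given order needs (order + 1)^2 channels."""
    return (order + 1) ** 2

# First order: 4 channels (B-format); third order: the 16 channels
# mentioned in the interview.
print(channel_count(1), channel_count(3))  # 4 16
```

Encoding a source dead ahead (azimuth 0, elevation 0) puts all its energy into W and X, with Y and Z silent – which is why a single encoder per track, multiplied across a whole mixing session, becomes the workload Barrett describes.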

Excerpt from “Trade Winds”, 2004-2006

Peter: In addition to your installation, concert, and fixed-media work, you do live electroacoustic improvisation. How does your work in the other media influence and play into your performance during a live improv session? Does the role of spatial play in your music change significantly when performing live?

Natasha: In improvisation I work with a live acoustic performer, sampling sound in real-time and manipulating it in various ways. The fact that the acoustic performer, and I too for that matter, are visually rooted on stage denies some types of spatial activity explored in my acousmatic composition. Also, on stage behind the loudspeakers I am unable to hear the spatial picture the audience hears, and it’s pretty difficult to second-guess anything but coarse spatial information. Besides, my brain, controllers and MaxMSP patches are far too overworked with the issues of ‘real-time’ and meaningful improvisation, at least in my current state, to address space with any sense of refinement.

Apr 19, 2009
Networked_Music_Review (NMR) is a research blog that focuses on emerging networked musical explorations.

