The Xth Sense Wins Guthman Prize
The Xth Sense, created by Marco Donnarumma, is a biophysical, wearable interactive technology for musical performance and responsive milieux. It has been awarded first prize at the prestigious Guthman New Musical Instrument Competition as “the world’s most innovative new musical instrument.” The competition is an annual event to find the world’s best new ideas in musicality, design, and engineering, and takes place at Georgia Tech in Atlanta, US.
This critically acclaimed event seeks to provide a fertile platform for advancing research on New Interfaces for Musical Expression (NIME).
The Xth Sense is a new and original biophysical interactive system based on free, open-source tools. The project’s goal is to investigate exploratory applications of biological sounds, namely muscle sounds, for musical performance and responsive milieux.
Complete information and a blog documenting the research can be viewed on-line.
The distribution of the framework is crucial to the investigation; the launch of the XS for public use is scheduled for around April 2012. While the software and the complete hardware documentation will be freely downloadable online, the biophysical sensor DIY kit will be available for purchase. The kit is a low-cost pack that enables anyone to build, hack, and extend this novel wearable device.
Introduction
The central principle underpinning the Xth Sense (XS) is not to “interface” the human body to an interactive system, but rather to approach the human body as an actual and complete instrument in itself. Augmented musical instruments and physical computing techniques are generally based on the relation user > controller > system: the performer interacts with a control interface (a physical controller or sensor system) to modify the results and/or rules of a computing system. This approach can confine the kinetic expression of a performer, leaving less room for his or her physical energy and non-verbal communication. Moreover, because the sonic outcome of such performances is often digitally synthesised, the overall performance can lack “liveness”.
The XS transcends the paradigm of the user interface by creating sonic matter and control data directly from the performer’s body. There is no mediation between body movements and music: the raw sonic material originates within the fibres of the body, and the sound manipulations are driven by the varying amounts of energy produced by the performer.
Technical description
The XS fosters a new and authentic interaction between humans and machines. By enabling a computer to sense and interact with the muscular potential of human tissue, the XS approaches the biological body as a means for computational artistry. During a performance, muscle movements and blood flow produce subcutaneous mechanical oscillations, which are in fact low-frequency sound waves (mechanomyographic signals, or MMG). Two microphone sensors capture the sonic matter created by the performer’s limbs and send it to a computer, which develops an understanding of the performer’s kinetic behaviour by *listening* to the friction of the flesh. Specific gestures, force levels, and patterns are identified in real time; according to this information, the computer then manipulates the sound of the flesh algorithmically and diffuses it through a variety of multi-channel sound systems.
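The actual XS framework is built in Pure Data, but the core analysis idea, tracking the energy of the low-frequency MMG signal to detect muscular activity, can be sketched in Python. Everything here is illustrative: the sample rate, frame sizes, threshold, and the synthetic 20 Hz “muscle burst” are assumptions, not values from the XS software.

```python
import numpy as np

def rms_envelope(signal, frame_size=256, hop=128):
    """Sliding-window RMS of the signal: a rough proxy for muscle effort."""
    frames = [signal[i:i + frame_size]
              for i in range(0, len(signal) - frame_size + 1, hop)]
    return np.array([np.sqrt(np.mean(f ** 2)) for f in frames])

def detect_contractions(envelope, threshold=0.1):
    """Flag analysis frames where the energy envelope crosses a threshold."""
    return envelope > threshold

# Synthetic stand-in for a captured MMG signal: a 20 Hz burst (MMG energy
# sits mostly in the low tens of Hz) over faint background noise.
sr = 2000                                   # sensor sample rate (assumed)
t = np.linspace(0, 1, sr, endpoint=False)
mmg = np.sin(2 * np.pi * 20 * t) * ((t > 0.5) & (t < 0.8))
mmg = mmg + 0.01 * np.random.randn(sr)

env = rms_envelope(mmg)
active = detect_contractions(env)           # True while the "gesture" lasts
```

In a real-time setting the same envelope would be computed frame by frame on the incoming audio stream, and the boolean output would serve as one of the cues for gesture identification.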
The neural and biological signals that drive the performer’s actions become analogous expressive matter, for they emerge as a tangible sound.
The XS can be played as a traditional musical instrument, i.e. analog sounds can be produced and modified by adequately exciting the instrument, but it can also be used as a gestural controller to drive audio synthesis or sample processing; the two modes can be used simultaneously. The most interesting performance feature of the system is the ability to expressively control a multi-layered processing of the MMG audio signal simply by exerting different amounts of kinetic energy. For instance, stronger and wider gestures could be analysed and mapped so as to generate sharp, higher resonating frequencies coupled with a very short reverb time, whereas weaker and more confined gestures could produce gentle, lower resonances with a longer reverb time.
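That energy-to-parameter mapping can be sketched as a simple function. The parameter names and ranges below are hypothetical, chosen only to illustrate the inverse relation described above; they are not taken from the XS patch.

```python
def map_energy_to_params(energy):
    """Map a normalised muscle-energy value in [0, 1] to two illustrative
    DSP parameters: stronger gestures yield higher resonant frequencies
    and shorter reverb times; weaker gestures yield the opposite.
    (Parameter ranges are assumptions, not the XS framework's values.)
    """
    x = min(max(energy, 0.0), 1.0)                # clamp to [0, 1]
    resonance_hz = 200.0 + x * (4000.0 - 200.0)   # 200 Hz .. 4 kHz
    reverb_time_s = 4.0 - x * (4.0 - 0.2)         # 4 s .. 0.2 s
    return resonance_hz, reverb_time_s
```

A strong gesture (energy near 1.0) thus maps to a bright, dry result, while a faint gesture maps to a soft, washed-out one, mirroring the behaviour the paragraph describes.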
The form and colour of the sonic outcome is continuously shaped in real time with very low latency (measured at 2.5 ms), so the coupling between the perceived force and spatiality of a gesture and the resulting sound is tight, transparent, and fully expressive. From exclusive real-time processing of muscle sounds, through resampling of pre-recorded sounds, to audio manipulation of traditional musical instruments, the XS is the first musical instrument of its kind to offer such flexibility at a very low cost and with free and open technology.
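To put that latency figure in concrete terms, the small calculation below shows how many samples a 2.5 ms processing window corresponds to at two common audio sample rates (generic defaults, not confirmed XS settings):

```python
def samples_for_latency(latency_s, sample_rate_hz):
    """Number of samples that fit within a given processing latency."""
    return round(latency_s * sample_rate_hz)

# A 2.5 ms window is roughly one small audio buffer at standard rates.
for sr in (44100, 48000):
    print(sr, "Hz ->", samples_for_latency(0.0025, sr), "samples")
```

In other words, the system reacts within the span of a single small audio buffer, which is why the gesture-to-sound coupling feels immediate.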
The work was developed at the SLE, Sound Lab Edinburgh – the audio research group at The University of Edinburgh, and was kindly supported by the Edinburgh Hacklab and Dorkbot ALBA. The project was finalized during an Artistic Development Residency at Inspace, Edinburgh. Inspace kindly sponsored the work by providing technical and logistical support, and organizing a public vernissage for the official launch of the project within the artistic research program “Non-Bio Boom”.
The XS technology was awarded the first prize at the Margaret Guthman Musical Instrument Competition (Georgia Tech, US 2012) as “the world’s most innovative new musical instrument”.
The research was awarded a PRE travel grant, which facilitated a related presentation at the International Computer Music Conference (ICMC) 2011, and an International grant by Creative Scotland, which supported a presentation at the academic conference KEAMS/SICMF 2011 in Seoul, South Korea.
Additional Information
The use of open-source technologies is an integral aspect of the research. The biosensing wearable device was designed and implemented by Marco Donnarumma, with the support of Andrea Donnarumma and Marianna Cozzolino. The Pure Data-based framework for real-time analysis and processing of biological sounds was designed and coded by the author on a Linux machine, with inspiring advice by Martin Parker, Sean Williams, Owen Green, Jaime Oliver, and Andy Farnell.
Related works
Since its inception in March 2011, the first piece for the XS, titled “Music for Flesh II” (MFII), has toured South Korea, Mexico, Norway, the UK, Italy, and Germany, and has been presented at several major academic conferences, among them the International Computer Music Conference (ICMC, UK) and the Linux Audio Conference (Ireland).
Also in March 2011, the author was commissioned for a new-work development residency at Inspace, UK. During the residency the XS was deployed in the implementation of Non-Bio Boom: a Musicircus, a biosensing, participatory sound environment for eight audio channels and multiple users.
In May 2011 the system was employed as the central technology in Raw/Roar, a two-week artistic residency that involved a team of five dancers and three composers directed by the author. The residency focused on the creation of an intermedia dance piece for enhanced bodies, which was premiered at Dansehallerne, Denmark. The project was commissioned by the Danish National School of Theatre and Contemporary Dance and supported by the Danish Arts Council and Augustinus Fonden.
Pictures courtesy of Chris Scott.