The soundtrack of bound is integral to the work: it acts as a trigger for the animation and elaborates sonically on the concept behind the piece. The entire soundtrack is generated from particle-decay data collected by the ATLAS detector. The original sonification was undertaken by a group called LHCsound, who received funding from the Science & Technology Facilities Council to help open the science up to a broader public audience. This research and data have been released under a Creative Commons licence that permits the “remix and transformation” of the work on “any basis, even commercial” (Creative Commons, 2014).
Given the nature of my installation, and having experimented with other audio concepts before starting this version of the soundtrack, it felt as if this sound worked best with the visuals. When setting out to create the soundtrack I also experimented with various sequencing software. Delicode z-vector (the plugin used to manipulate the Kinect data and create the visuals) can be controlled with both OSC and MIDI data, and I wanted to sequence the animation triggers in the software alongside the sound. I initially experimented with OSC sequencers, but this proved both complicated and unreliable. OSC did work well when used with a manual trigger such as TouchOSC on the iPad, which lets the user fire animations from a simple custom iPad app, but I decided against this route as it added further complication to the installation setup and, again, reliability issues.
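To illustrate the kind of data an OSC trigger sends, the following sketch packs a single-float OSC message of the sort a TouchOSC button might emit. This is an illustration of the protocol only, not part of the installation's actual patch, and the address `/trigger/1` is a hypothetical mapping:

```python
import struct

def osc_message(address: str, value: float) -> bytes:
    """Pack a one-argument OSC message: address, ",f" type tag, big-endian float."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded out to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# A hypothetical trigger message like one a TouchOSC button might send:
msg = osc_message("/trigger/1", 1.0)
# It could then go over UDP to the receiving software, e.g.:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ("127.0.0.1", 8000))
```

Each message is a small self-describing UDP packet, which is part of what makes OSC flexible but also dependent on reliable sequencing at the sending end.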
After further experimentation I decided to link the software internally on the Mac via MIDI, using the IAC driver built into the Audio MIDI Setup utility. This passes MIDI data from a sequencer through to another piece of MIDI software running on the same machine. Given some major constraints in the way z-vector receives mapped MIDI data, I have found this to be the most reliable way to control the software. For this kind of production I have found Ableton Live 9 to be the most intuitive and reliable sequencer. By enabling MIDI output on a track, mapping it to an IAC driver bus, and using the same bus to receive MIDI data in z-vector, you can program a sequence that triggers any part of the software in a fairly simple way.
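As a rough sketch of what actually travels over the IAC bus, a MIDI note-on event is just three bytes: a status byte carrying the channel, then the note and velocity. The channel and note numbers below are illustrative, not the installation's actual mapping:

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a raw MIDI note-on message.

    Status byte is 0x90 OR'd with the channel (0-15); note and
    velocity are each 7-bit values (0-127).
    """
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

# e.g. a trigger a Live track might emit: channel 1, middle C, full velocity
msg = note_on(0, 60, 127)
```

In practice the sequencer and the IAC driver handle this encoding; on the receiving side, a parameter in z-vector can be MIDI-mapped so that a note like this fires the corresponding animation.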
The soundtrack itself is fairly uncomplicated and has been manipulated for artistic and sonic reasons. One of the other key features of Ableton Live 9 is its audio-to-MIDI conversion technology. This enabled audio recordings from LHCsound to be manipulated, converted to MIDI data and then attached to a sound generator. The soundtrack was designed primarily as a sonic atmosphere for the visual installation. Rather than acting as a single “audio soundtrack” with a start, middle and end, it loops continuously over a 5:30 period. This led to interesting decisions when sequencing and generating sounds, as I wanted the sound and image to flow through various periods of intensity without becoming too repetitive. I envisage the audience experiencing and interacting with the work for up to a minute, so it is highly unlikely that anyone would hear the whole loop. This also creates the effect of the sound almost being generated at random.
Each track utilises a range of effects to achieve an atmospheric balance within the sound. The final audio is run through a preset stereo-enhance mastering tool that applies EQ, stereo widening, compression and so on to create a polished, balanced sound. When initially creating this soundtrack I worried that it was a bit “weird”, and maybe off-putting, but in the context in which it has been applied it works well and fits the basic concept of the piece. It also adds another layer to the work which helps to convey its final message.