The following illustrations are taken from my sketchbook and should give you some idea of the direction the project is taking. I still have further research to complete, but these images illustrate some initial concepts that integrate interactive technology with some of the particle physics theories I’ve been exploring.
This illustration shows a projected image the audience can interact with. A microphone input generates symmetrical particles, which can be manipulated through body movement tracked by an Xbox Kinect depth sensor. When the audience interacts with these particles they change state and emit a photon, which generates a sound in sync with the image.
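The audio-to-particle mapping could be sketched roughly like this. Everything here is my own assumption for illustration: the threshold, the gain, the mirrored-pair spawning, and the idea that an interaction flips a particle's state and fires a "photon" event that would trigger the sound.

```python
import math
import random

def spawn_particles(amplitude, threshold=0.1, gain=50):
    """Spawn symmetrical pairs of particles when the microphone
    amplitude rises above a threshold (hypothetical mapping)."""
    if amplitude < threshold:
        return []
    count = round((amplitude - threshold) * gain)
    particles = []
    for _ in range(count):
        angle = random.uniform(0, math.pi)
        # Each particle gets a mirror twin about the vertical axis,
        # giving the symmetry described in the concept.
        particles.append({"angle": angle, "state": "ground"})
        particles.append({"angle": math.pi - angle, "state": "ground"})
    return particles

def interact(particle):
    """Interacting with a particle changes its state and 'emits a
    photon' — the event that would cue a synced sound."""
    particle["state"] = "excited"
    return "photon"
```

In a real sketch the amplitude would come from a live audio buffer and the "photon" event would be routed to a sound engine; this just shows the shape of the logic.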
In this idea a depth sensor feeds into a laptop to scan audience movement. Particles are generated via a microphone input in the room and move at random. The audience is also projected onto the backdrop. Using their hands, viewers can “catch” and manipulate the particles before they disperse.
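The “catching” interaction might reduce to a proximity test between a tracked hand position and each particle. The function name, coordinate scheme, and catch radius below are placeholders of my own, not part of the design:

```python
import math

def catch_particles(hand_pos, particles, radius=0.5):
    """Return the particles within 'radius' of a tracked hand
    position — a minimal sketch of the catching interaction."""
    hx, hy = hand_pos
    caught = []
    for (px, py) in particles:
        # Euclidean distance between hand and particle.
        if math.hypot(px - hx, py - hy) <= radius:
            caught.append((px, py))
    return caught
```

With real Kinect input the hand position would update every frame, and caught particles would follow the hand until released.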
This concept is based on movement, or rather the lack of it. The visuals will be projected onto a surface and will consist of randomly generated moving particles. When the viewer stands still, the particles clump together to form the viewer’s shape; when movement is detected, they disperse at a rate based on the speed of movement. Sound will be generated and synced to both image and movement.
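The stillness-versus-movement behaviour could be sketched as a simple mapping from tracked body speed to particle behaviour. The stillness threshold and dispersal gain are illustrative assumptions:

```python
def particle_behaviour(movement_speed, still_threshold=0.05, disperse_gain=2.0):
    """Map the viewer's tracked speed to a particle behaviour:
    below the threshold, particles attract toward the viewer's
    silhouette; above it, they disperse with speed-proportional
    velocity (thresholds and gains are placeholder values)."""
    if movement_speed < still_threshold:
        return ("clump", 0.0)
    return ("disperse", movement_speed * disperse_gain)
```

The same speed value could drive the sound engine, keeping audio, image, and movement in sync.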
This concept focuses on form rather than content. I began exploring 360˚ dome projection, of which there are two types: an expensive digital dome projection system that uses multiple projectors to fill the shape, and a substantially cheaper option using a spherical mirror, a single projector and software to warp the image. After following this avenue, the costs proved prohibitive for both techniques. As far as form is concerned this would be perfect for the project and may be worth exploring further in the future.
This idea is inspired by Thomas Young’s double-slit experiment. Again the idea is based on movement: by standing still in front of the depth sensor, an image is built up piece by piece from particles. This image has a symmetrical “anti” image on the opposite side of the screen, representing matter and anti-matter, and is integrated with sound synced to the image. The particles disperse when movement is detected.
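The matter/anti-matter pairing amounts to mirroring each particle about the screen's vertical centre line. A minimal sketch, assuming pixel coordinates and a hypothetical screen width:

```python
def anti_image(particles, screen_width=1920):
    """Mirror each (x, y) particle about the screen's vertical
    centre line to produce the symmetrical 'anti' image."""
    return [(screen_width - x, y) for (x, y) in particles]
```

Rendering both the image particles and their mirrored twins each frame would give the matter/anti-matter effect described above.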
These are some of my initial ideas and they still require substantial development. From the initial technical tests I’ve been conducting, I feel all of these ideas are viable. I will upload the rest of my sketchbook at a later date, which should help contextualise these ideas within the research I have been conducting.