Sculptures

6pm is fine for me. I’ve made a test sculpture; it’s about the size we discussed (2 ft high) and is on wheels. I haven’t managed to get a decent large-scale version of the tag, though. Chandan, have you had any success with that? And have we decided on a colour scheme: black floor/white sculptures, or vice versa?

Camera goodness…

I just tested the School’s Canon XL-2 with the reactivision software, and it went without a hitch. I’m coming in to Alison House tonight to look into setting up a lighting rig in G11 that we can use as a test bed. We can install an overhead camera on it and use the space beneath to refine the system. We’re still going ahead with the prototype test tomorrow, 6pm in the Atrium.

First test!

And it’s a success.

http://becarefulwith.verysharpknives.com/public/EB_reactivision_test.mp3

It’s just me playing with a waveshaping synthesizer and a delay effect in Max/MSP, but it’s live and camera controlled with reactivision. Apologies for the messy Max code there, but it’s late and I couldn’t be arsed fixing it now 🙂

The controls are very simple: just the x/y positions of two markers, giving four parameters. One axis controls the frequency of the synth notes, one the waveshaping transfer function, another the delay feedback, and the last the delay mix. Notes are triggered by moving the first marker, using its acceleration data. It’s uncalibrated, so it isn’t tracking as well as it might, but I think it’s very encouraging so far.
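
For the curious, here’s a minimal sketch of that mapping in Python rather than Max/MSP. The parameter names, ranges, and trigger threshold are illustrative stand-ins, not what’s in the patch; it just assumes reactivision’s normalised 0–1 marker coordinates.

```python
# Minimal sketch of the marker-to-parameter mapping described above.
# Names, ranges, and the trigger threshold are illustrative only.

def scale(value, lo, hi):
    """Map a normalised marker coordinate (0..1) into a parameter range."""
    return lo + value * (hi - lo)

def map_markers(m1, m2):
    """m1, m2 are (x, y) tuples for the two markers, normalised to 0..1."""
    return {
        "note_freq_hz":   scale(m1[0], 55.0, 880.0),   # marker 1, x axis
        "waveshape_amt":  scale(m1[1], 0.0, 1.0),      # marker 1, y axis
        "delay_feedback": scale(m2[0], 0.0, 0.95),     # marker 2, x axis
        "delay_mix":      scale(m2[1], 0.0, 1.0),      # marker 2, y axis
    }

ACCEL_THRESHOLD = 0.2  # assumed trigger level for the motion acceleration

def should_trigger_note(motion_accel):
    """Trigger a note when marker 1 is moved sharply enough."""
    return motion_accel > ACCEL_THRESHOLD
```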

Camera Booking

I’ve booked the camera for Tuesday from 2pm, to give us a deadline to have a working prototype ready. I actually want to start “filming” at six, after a lecture, but having the kit early doesn’t hurt. I want to set up the camera on the upper floor of the Atrium and attempt to track objects on the lower floor. For this, we will need the objects, obviously, and I’ll try to have a sound creation system of some kind in place. The objects don’t need to be anything special, but should at least be large and durable. The tracking markers will have to be blown up, perhaps to about a foot across.

More development…

I now have TUIO/reactivision tracking multiple markers simultaneously, and have Max/MSP splitting the tracking data into one stream per marker. From here on, scaling and formatting the OSC data streams into MIDI, and controlling external devices like synthesizers and samplers, is all stuff I’ve done before.
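
For reference, here’s roughly what that per-marker splitting and rescaling looks like, sketched in Python with python-osc and mido standing in for the Max patch. The CC numbers and fiducial-id mapping are made up for illustration; 3333 is reactivision’s default TUIO port.

```python
# Sketch: split reactivision's TUIO stream per marker and rescale to MIDI CCs.
import mido
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

midi_out = mido.open_output()  # default MIDI output port

# Illustrative mapping: fiducial id -> (CC for x axis, CC for y axis)
MARKER_TO_CCS = {0: (20, 21), 1: (22, 23)}

def on_2dobj(address, *args):
    # TUIO 1.1 /tuio/2Dobj "set" messages carry:
    # "set", session_id, fiducial_id, x, y, angle, velocities, accelerations
    if args and args[0] == "set":
        fid, x, y = args[2], args[3], args[4]
        if fid in MARKER_TO_CCS:
            cc_x, cc_y = MARKER_TO_CCS[fid]
            midi_out.send(mido.Message("control_change",
                                       control=cc_x, value=int(x * 127)))
            midi_out.send(mido.Message("control_change",
                                       control=cc_y, value=int(y * 127)))

dispatcher = Dispatcher()
dispatcher.map("/tuio/2Dobj", on_2dobj)
BlockingOSCUDPServer(("127.0.0.1", 3333), dispatcher).serve_forever()
```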

One thing that occurred to me is that we will need to get our hands on a QuickTime-compatible video capture card, because running FireWire over the distances needed to place a camera in the ceiling of Inspace is not workable. We should be able to run the video as analogue over almost arbitrarily long distances, but we’d need the card to digitise it at the other end.

We should think about lighting the area in which the sculptures will stand, so that the space as a whole can have a lower, more comfortable light level. The tracking camera will need so much light to work reliably that the ambient levels would be too harsh. We can also look at infrared tracking to get around this.

This is where I am with the ideas so far.

We can track the sculptures from an overhead camera, installed in the ceiling of Inspace, assuming we use that venue. Tracking in this way can be problematic, but there are ways to make it easier on ourselves. One common way to simplify the video processing is to convert the captured image to a binary format, stripping out all but a single colour, and then set the contrast threshold to eliminate all but the most intense examples of that hue. If we then install simple LED lights in those colours in the sculptures, the camera will always track the objects rather than other instances of the colour in the environment. This should give us a solid track on as many objects as we can find LED colours for.
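
To make that concrete, here’s a rough OpenCV sketch of the single-colour thresholding idea. The HSV bounds are placeholders that would need tuning to the actual LEDs.

```python
# Sketch: binarise the camera image around one LED colour, then take the
# centroid of the resulting blob as the sculpture's position.
import cv2
import numpy as np

LED_LO = np.array([50, 200, 200])   # assumed lower HSV bound (a bright green)
LED_HI = np.array([70, 255, 255])   # assumed upper HSV bound

cap = cv2.VideoCapture(0)           # overhead camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LED_LO, LED_HI)   # binary: white where LED colour
    moments = cv2.moments(mask)
    if moments["m00"] > 0:                    # blob found
        x = moments["m10"] / moments["m00"]
        y = moments["m01"] / moments["m00"]
        print(f"sculpture at ({x:.0f}, {y:.0f})")
    cv2.imshow("mask", mask)
    if cv2.waitKey(1) == 27:                  # Esc to quit
        break
cap.release()
```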

At the same time, we can do averaged colour tracking for the whole image, along with an average change between successive frames, so the system can react to the amount of activity in the frame as well as the specific positions of the sculptures. I think this will give us great latitude in aesthetic control, and I think we can submit this as a crit later today.
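
Here’s a sketch of those whole-frame measures too; the “activity” figure is just the mean absolute difference between successive greyscale frames.

```python
# Sketch: average colour of each frame, plus frame-to-frame change as an
# overall "activity" level for the space.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
prev_gray = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mean_bgr = frame.reshape(-1, 3).mean(axis=0)        # average colour
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        activity = cv2.absdiff(gray, prev_gray).mean()  # mean abs difference
        print(f"colour={mean_bgr.round(1)}, activity={activity:.2f}")
    prev_gray = gray
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) == 27:                            # Esc to quit
        break
cap.release()
```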

Thoughts?

I’d like to make a collection of sculptures, like two-foot-high chess pieces, that can be positioned by a participating audience in a defined area. They would be tracked in this area by colour, shape, or glyph using a camera. The positions of the different pieces would control the parameters of a synthesizer or sampler patch, allowing the audience to shape sounds collaboratively.

As well as the sound-shaping garden, there would be a sequencer garden that defines when different events take place on a timeline. It could also control effects processing and similar things. The participants would be given assistive and artistically interpretive visual feedback via screens, to encourage experimentation.
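
As a purely hypothetical sketch of how the sequencer garden could work: each tracked piece’s x position picks a step on a timeline, and its y position a pitch. The step count and scale here are made up for illustration.

```python
# Hypothetical sketch: map sculpture positions onto a 16-step sequence.
NUM_STEPS = 16
PITCHES = [48, 50, 52, 55, 57, 60, 62, 64]   # assumed scale (MIDI notes)

def piece_to_event(x, y):
    """x, y: a sculpture's position in the garden, normalised to 0..1."""
    step = min(int(x * NUM_STEPS), NUM_STEPS - 1)
    pitch = PITCHES[min(int(y * len(PITCHES)), len(PITCHES) - 1)]
    return step, pitch

def build_sequence(pieces):
    """pieces: list of (x, y); returns {step: [pitches]} for one bar."""
    seq = {}
    for x, y in pieces:
        step, pitch = piece_to_event(x, y)
        seq.setdefault(step, []).append(pitch)
    return seq

# e.g. three sculptures placed across the garden:
print(build_sequence([(0.1, 0.5), (0.4, 0.9), (0.8, 0.2)]))
```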

This could fairly easily be achieved with just a couple of cameras and a Max/MSP patch, controlling an external synthesizer or sampler. Imagine this: http://www.youtube.com/watch?v=2tZ7eWqHsdg&feature=related but with real-world, tactile interactivity, collaborative composition, and far more timbral and visual control.

I think it’s a good plan. Any thoughts?