This is where I am with the ideas so far.
We can track the sculptures with an overhead camera, which could be installed in the ceiling of Inspace, assuming we use that venue. Tracking this way can be problematic, but there are ways to make it easier on ourselves. One common way to simplify the video processing is to convert the captured image to a binary format, stripping out all but a single colour. From there, we would set contrast levels to eliminate all but the most intense examples of that hue. Then we would install simple LEDs in those colours in the sculptures, ensuring that the camera always tracks the object rather than other instances of the colour in the environment. This should give us a solid track on as many objects as we can find distinct LED colours for.
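Roughly, in OpenCV terms, the per-colour step might look like this. This is just a sketch: the HSV range below is a made-up placeholder for one LED colour, not a measured value.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # overhead camera

while True:
    ok, frame = cap.read()
    if not ok:
        break

    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Keep only pixels in a narrow, highly saturated, very bright band
    # around the LED hue; everything else goes black (a binary image).
    mask = cv2.inRange(hsv, np.array([50, 200, 200]), np.array([70, 255, 255]))

    # The centroid of the surviving pixels is the sculpture's position.
    m = cv2.moments(mask)
    if m["m00"] > 0:
        x, y = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print(f"LED at ({x:.0f}, {y:.0f})")

cap.release()
```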
At the same time, we can do averaged colour tracking for the whole image, along with an average change between successive frames, which would allow the system to react to the amount of activity in the frame as well as to the specific positions of the sculptures. I think this will give us great latitude in aesthetic control. I think we can submit this as a crit later today.
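The whole-image measures could be as simple as this (again OpenCV, again just a sketch):

```python
import cv2

cap = cv2.VideoCapture(0)
prev_gray = None

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Average colour of the whole frame (OpenCV orders channels B, G, R).
    avg_b, avg_g, avg_r, _ = cv2.mean(frame)

    # Mean absolute difference from the previous frame: a crude but
    # usable measure of how much activity is in the scene.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        activity = cv2.absdiff(gray, prev_gray).mean()
        print(f"avg colour ({avg_r:.0f}, {avg_g:.0f}, {avg_b:.0f}), activity {activity:.1f}")
    prev_gray = gray
```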
Thoughts?
I am quite satisfied with this idea, and I like the idea of installing LEDs on the sculptures.
For the environment, we can make the background monochrome, or at least a different colour from our LEDs; I would suggest white and black as the environment colours. I think this way we can solve our other problem, and it will also be easy for us to install the environment.
thanks
I like the idea of LEDs too, but my one worry about them is the distance from the camera. (Keeping with the idea of monochrome, would it be worth considering incorporating 3D barcodes into the design of the sculptures? It seems these could be more easily scaled to a suitable size for the distance from the camera.)
If you’re confident, Ev, that the LEDs will be effectively tracked and that there will be enough colours (and enough variation between the colours), then that’s me sold! A high-contrast environment could look very striking (as long as it doesn’t end up looking like a high-art ’70s flashback from AB-FAB!).
If we can develop this into a workable system, I think the amount of control offered by this method of tracking could really work to our advantage: with a relatively simple ‘real-world’ environment, we can afford to project quite a wide-ranging sonic and visual design.
Great to see a blog going! I’m sorry I’ve been away for a bit, but I’m back in town for now. I just have a few thoughts on the tracking aspects.
What kind of sculptures are you planning on tracking? Are these going to be maneuvered by the audience? How big are they and will they be blocked by people’s hands? What kind of shapes are they? What type of control are you expecting from these objects? Is the audience expected to be part of the piece? Will the audience want to interact with these objects when they see them or will they need to be told to? Will they realize what their interaction is doing?
Tracking implies you have a location and you want to follow that location. What you are suggesting is attaching LEDs, as they are easily detected by their strong contrast against the background. There are a few issues you’ll need to consider, though. First, you have multiple objects: how will you distinguish between them? Colours? A shape formed from multiple LEDs? Second, you can lose the track when a person’s interaction occludes the object, or when the object moves off camera. This problem may be resolved if you assign a specific colour to each LED and detect every colour in every frame.
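To make that concrete, here is a rough per-colour detection sketch in Python/OpenCV rather than Max; the hue ranges are placeholders you would tune to the actual LEDs:

```python
import cv2
import numpy as np

# One HSV band per sculpture. Because every colour is re-detected in
# every frame, an occluded object simply disappears for a few frames
# and is picked up again as soon as its LED is visible.
LED_RANGES = {
    "sculpture_red":   ((0, 200, 200), (10, 255, 255)),
    "sculpture_green": ((50, 200, 200), (70, 255, 255)),
    "sculpture_blue":  ((110, 200, 200), (130, 255, 255)),
}

def detect_leds(frame):
    """Return {name: (x, y)} for every LED visible in this frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    positions = {}
    for name, (lo, hi) in LED_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        m = cv2.moments(mask)
        if m["m00"] > 0:  # this LED was found this frame
            positions[name] = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    return positions
```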
Since Ev mentioned he is comfortable with Max, you may want to use Jitter’s library for this purpose. jit.findbounds, jit.brcosa, and the cv.jit library should do everything you want. However, if you are planning on using multiple objects, you will need to refine the environment and come up with specific colours that are easily tracked.
Another common approach for tracking tangible objects is a marker-based algorithm. You’ll have to decide whether this is the type of control you need, as it will require a bit of time to get going. The idea is to attach a highly featured fiducial marker (with many edges and corners that a computer vision algorithm will easily find) to each of your tangible objects. Detecting individual objects, and even their rotation, scaling, and translation, then becomes much more approachable. There is also a framework called reacTIVision that incorporates multi-touch finger tracking; however, it is suited to a multitouch table rather than an overhead camera. The fiducial-specific aspects of the framework could still be used for your tangible-object tracking.
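reacTIVision uses its own amoeba fiducials, but just to illustrate the marker idea in code, OpenCV’s ArUco module does the same job: each printed marker carries an ID, and detection gives you its corners (hence position and rotation) per frame. This sketch uses the pre-4.7 OpenCV interface:

```python
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # corners: one 4-point quad per detected marker; ids: the marker IDs.
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
    if ids is not None:
        for marker_id, quad in zip(ids.flatten(), corners):
            cx, cy = quad[0].mean(axis=0)  # centre of the four corners
            print(f"marker {marker_id} at ({cx:.0f}, {cy:.0f})")
```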
I will be around this weekend if you’d like to arrange a time to meet. We could also have a session where we try to get tracking working.
http://www.iamas.ac.jp/~jovan02/cv/
http://reactivision.sourceforge.net/
http://www.tuio.org/
Hi Parag, welcome back.
I’ve just had a good start with reacTIVision: it seems to work solidly out of the box, tracking its amoeba markers well with a built-in webcam and outputting useful data to Max/MSP. I’m having trouble splitting the data from multiple objects into separate streams, so if you could point me in a useful direction with that, it would be very helpful.
E
Hi Ev,
Great to hear! I imagine you’ve got the reacTIVision demo working with the TUIO Client Max demo. There is another file in that package called TuioDemo.pat which shows an example of splitting the data into different streams. The route object splits the three messages (add, remove, and update object) into separate streams, and you can use these to infer the tracking information for any object. Does this work?
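If you ever want the same split outside Max: reacTIVision also just sends TUIO, which is OSC over UDP (port 3333 by default), so you can route on the first argument of each /tuio/2Dobj message yourself. A minimal sketch, assuming the python-osc package:

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def handle_2dobj(address, *args):
    kind = args[0]  # "alive", "set", or "fseq", like the route in TuioDemo.pat
    if kind == "set":
        # session id, fiducial id, x, y, angle (velocities etc. follow)
        session_id, fiducial_id, x, y, angle = args[1:6]
        print(f"fiducial {fiducial_id}: x={x:.2f} y={y:.2f} angle={angle:.2f}")
    elif kind == "alive":
        print("sessions currently on camera:", args[1:])

dispatcher = Dispatcher()
dispatcher.map("/tuio/2Dobj", handle_2dobj)
BlockingOSCUDPServer(("127.0.0.1", 3333), dispatcher).serve_forever()
```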