TT experiment: essences of movement and form from animated halogens and from sound in TML (later BB)

Temporal Textures TML folks:

One experiment that Liza and I talked about -- which may interest others too -- is the question of what kinds of room-memory (cf. Ed Casey, Merleau-Ponty) can be evoked as a person walks through the TML's halogens when they are animated according to the person's movement plus some simple animation logics.

First tests: 
Visual
Sense of relative movement -- the train-alongside effect
Dead-elevator effect
Striping inducing movement (Montanaro effect)
Several of us -- Harry, Morgan, and I -- have written code to animate networks of LEDs.  For example, the TML/pro/esea/tml.pro.esea.control code animates a sequence of LEDs scattered in an arbitrary way through space.
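For concreteness, here is a minimal sketch of one such animation logic -- this is my own hypothetical illustration, not the actual tml.pro.esea.control code. It assumes each LED has an arbitrary (x, y) position in the room, and blends two simple logics: brightness that falls off with distance to the tracked person, plus a slow "chase" that travels along the sequence order.

```python
import math

def led_brightness(led_positions, person_xy, t, chase_period=8, falloff=2.0):
    """Return a 0..1 brightness for each LED.

    led_positions: list of (x, y) coordinates, in sequence order.
    person_xy:     (x, y) of the tracked person (e.g. from an overhead camera).
    t:             integer time step; drives the chase animation.
    """
    out = []
    for i, (lx, ly) in enumerate(led_positions):
        d = math.hypot(lx - person_xy[0], ly - person_xy[1])
        proximity = 1.0 / (1.0 + falloff * d)            # bright near the person
        chase = 1.0 if (t % chase_period) == (i % chase_period) else 0.0
        out.append(min(1.0, proximity + 0.3 * chase))    # blend the two logics
    return out
```

The point of the blend is that the lights respond to the walker while still carrying an autonomous rhythm of their own, which is where the room-memory question gets interesting.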

Audio
state: dense - loose
tendency: compressing -- expanding (any sound that person makes)
Navid pointed out that, with the much richer palette we have, this quickly brings in compositional questions.  It'd be nice to see this also with whatever Tyr extracts from Il y a (soon), and with what Freida creates in the course of Fear of Flight (after February tests).

Note re. tracking in the TML: we cannot sense the person's orientation unless/until Freida or someone else mounts the CHR-UM6 Orientation Sensor with a radio mic module.  Whoever does this should consult Elio Bidinost <ebidinost@swpspce.net>.  (I've talked with Elio -- he said just send him the link and the question.)  (Some folks have pointed to iPhone apps, but the iPhones are too big and overdetermined.)  However, we can use our overhead cameras to get a sense of where the person is / is headed.  We have several on the grid thanks to MF and Tyr.  Julien has hooked the Elmo up to his optical flow sampler.
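As a rough sketch of the overhead-camera idea (my own toy illustration, not Julien's actual optical-flow patch): track the centroid of foreground pixels across two successive frames, and take the difference vector as the person's heading.

```python
import math

def centroid(frame, threshold=0.5):
    """Centroid (row, col) of pixels above threshold; frame is a 2-D list."""
    pts = [(r, c) for r, row in enumerate(frame)
                  for c, v in enumerate(row) if v > threshold]
    if not pts:
        return None
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def position_and_heading(prev_frame, next_frame):
    """Return (position, heading_radians); heading is None if no motion seen."""
    p0, p1 = centroid(prev_frame), centroid(next_frame)
    if p0 is None or p1 is None:
        return p1, None
    dr, dc = p1[0] - p0[0], p1[1] - p0[1]
    if dr == 0 and dc == 0:
        return p1, None
    return p1, math.atan2(dr, dc)  # angle of travel in image coordinates
```

This gives position and heading but not body orientation, which is why the orientation sensor is still worth mounting.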

For reference, this is a well-known series of works by artist Jim Campbell:
Portrait of Claude Shannon shows the effect most clearly.
White Circle is a large-scale installation.  It'd be more interesting to blow fog through such an array of lights and see images and movement appear and disappear as the fog thickens and thins.

(Let me cc Spike as theatrical lighting design expert, if that's alright.)

On Jan 15, 2012, at 3:27 PM, Adrian Freed wrote:

(Xin Wei wrote) FYI, on the low end, MM and I are buying a cheap body-worn wireless analog videocam (on the order of a few grams in weight, a 1" cube) to try out mapping optical flow to Nav's instruments in the coming weeks.  I'd like to write some mappings from optical flow to feed Julian and Navid's gesture followers, as well as, more directly, Nav's sound instruments.  I wrote some MSP code in 2004 that worked in the fashion show "jewelry" -- it surely can be made much more expressive!
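One way such a mapping could work -- a hypothetical sketch, with parameter names of my own invention rather than anything in Nav's instruments -- is to reduce a field of optical-flow vectors to two control values: overall energy, and the compressing-vs-expanding tendency mentioned above (positive radial flow away from the field's centroid reads as expansion).

```python
import math

def flow_to_controls(points, vectors, max_speed=10.0):
    """points: [(x, y)] sample positions; vectors: [(dx, dy)] flow at each point.

    Returns (energy, tendency): energy in 0..1 from mean flow speed;
    tendency > 0.5 means the field is expanding away from its centroid,
    < 0.5 means compressing toward it.
    """
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    speed = sum(math.hypot(dx, dy) for dx, dy in vectors) / n
    energy = min(1.0, speed / max_speed)
    # mean radial component: positive if flow points away from the centroid
    radial = 0.0
    for (x, y), (dx, dy) in zip(points, vectors):
        rx, ry = x - cx, y - cy
        r = math.hypot(rx, ry) or 1.0
        radial += (dx * rx + dy * ry) / r
    radial /= n
    tendency = 0.5 + 0.5 * max(-1.0, min(1.0, radial / max_speed))
    return energy, tendency
```

The two scalars could then be routed to whatever synthesis parameters make sense -- density of grains for energy, say, and spectral spread for tendency.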

You may want to know where the camera is in space --
a tricky problem as you know, but this is the best affordable module to get the answer without losing people down the rabbit hole of Kalman filtering, etc.