Adrian Freed: barycenter, bunch

Andy Schmeder wrote a bunch of xyz.geometry objects. The new "o.expr" is powerful enough to do these efficiently, and John and I have been
pondering what functions to add to the extensive list of basic math favorites. This would be nice to curate while John and I are on your coast. We are trying to schedule that before he takes
the April break from Northeastern.

Speaking of Andy: he just told me about a tool that does 3D geometry parametrically but allows you to take convex hulls of objects, not just points.
It is hard to find this sort of thing in a form plastic enough to build into real-time engines for what we want to do, but I think it is a valuable direction.
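
For the point-based case, a minimal Python sketch (numpy and scipy assumed; the object-based hulls of Andy's tool are beyond a few lines):

# Convex hull of a set of 3D points -- point-based only; hulls of solid
# objects, as in the tool Andy describes, need heavier machinery.
import numpy as np
from scipy.spatial import ConvexHull

points = np.random.rand(30, 3)      # e.g. tracked markers on a body
hull = ConvexHull(points)
print(hull.volume)                  # enclosed volume
print(hull.area)                    # surface area
print(points[hull.vertices])        # the extreme points forming the hull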

I want to project various envelopes around real-time 3D models of the space and hulls of people's bodies -- envelopes that are predictive, where the potential is layered over the actual.
The visual, ghostly analog to a sonic pre-echo.
I think this is what basketball players and soccer players (and of course dancers) do. They interact with the ghosts of the past and future.
The present is too late, and moving...

[emphasis added]

On Jan 31, 2012, at 8:54 AM, Sha Xin Wei wrote:

It'd be a very useful learning exercise to clean up this code to calculate the degree of dispersion / clustering of a set of points.

<README_barycenter+bunch.rtf>

on to spin...
Xin Wei

barycenter.maxpat
computes the center of a set of points
scatter.maxpat
computes degree of clustering or dispersion
tml.math.bunch
does the basic arithmetic, uses zl

MaxLibraries/TML/pro/lab/workshop/090115/
barycenter.maxpat
barycenter.xml
lights.maxpat
scatter.maxpat

MaxLibraries/TML/math/
tml.math.bunch.maxpat
tml.math.distance.maxpat
tml.math.range.maxpat
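
For reference, the arithmetic these patches perform, as a minimal Python sketch (numpy assumed; "degree of dispersion" is read here as mean distance to the barycenter, which may differ from what scatter.maxpat actually computes):

import numpy as np

def barycenter(points):
    """Center of a set of points (what barycenter.maxpat computes)."""
    return np.mean(points, axis=0)

def scatter(points):
    """Degree of dispersion: mean distance from each point to the barycenter."""
    c = barycenter(points)
    return np.mean(np.linalg.norm(points - c, axis=1))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(barycenter(pts))   # [0.5 0.5]
print(scatter(pts))      # ~0.707 for the corners of the unit square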

TML next temporal textures + phenomenology seminar: Friday Jan 26, 16h00 - 18h00

Our next temporal textures + phenomenology seminar will be Fri 26 Jan 16h00 - 18h00 in the TML.
We decided to meet weekly since there's so much to work through.   See the temporal textures page for a snapshot of the research, and the temporal textures blog for a trace from 2010.

I believe the section of Merleau-Ponty's PP we agreed to read and discuss is

Part One: The Body / III. The Spatiality of One’s own Body [corps propre] and Motility

(Is that right?)

Here again is a helpful analytic index that Noah circulated. (Thanks for the lucid orientations last week!)

If you couldn't make it Friday but would like to stay on a "temporal textures" list for this term, please email Liza and me privately. We may try to arrange a Wednesday evening discussion for another strand. Some of us will be working quite intensely through February on some movement + video, lighting, and acoustics experiments in the TML.

Xin Wei


TT experiment: essences of movement and form from animated halogens and from sound in TML (later BB)

Temporal Textures TML folks:

One experiment that Liza and I talked about -- which may interest others too -- is the question of what kinds of room-memory (cf. Ed Casey, Merleau-Ponty) can be evoked as a person walks through the TML's halogen lights when they are animated according to the person's movement plus some simple animation logics.

First tests: 
Visual
Sense of relative movement -- the train alongside effect
Dead elevator effect
Striping inducing movement (Montanaro effect)
Several of us -- Harry, me, Morgan -- have written code to animate networks of LEDs. For example, the TML/pro/esea/tml.pro.esea.control code animates a sequence of LEDs scattered in an arbitrary way through space.
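
As a sketch of the kind of animation logic involved -- not the actual tml.pro.esea.control patch -- here is one in Python: brightness falls off with distance from a virtual point that follows the tracked person.

import math

# Hypothetical LED positions scattered arbitrarily in space (x, y, z in meters).
leds = [(0.0, 0.0, 2.5), (1.2, 0.4, 2.5), (2.5, 1.1, 2.4), (3.0, 2.2, 2.6)]

def brightness(led_pos, person_pos, radius=1.5):
    """Brightness in [0, 1]: full at the person's position, zero beyond `radius`."""
    return max(0.0, 1.0 - math.dist(led_pos, person_pos) / radius)

person = (1.0, 0.5, 0.0)   # e.g. from the overhead camera tracking
levels = [brightness(p, person) for p in leds]
print(levels)              # drive the LED network's PWM/DMX channels with these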

Audio
state: dense - loose
tendency: compressing -- expanding (any sound that person makes)
Navid pointed out that, with the much richer palette we have, this quickly brings in compositional questions. It'd be nice to see this also with whatever Tyr extracts from Il y a (soon), and with what Freida creates in the course Fear of Flight (after the February tests).

Note re. tracking in the TML: we cannot sense the person's orientation unless/until Freida or someone else mounts the CHR-UM6 Orientation Sensor with a radio module. Whoever does this should consult Elio Bidinost <ebidinost@swpspce.net>. (I've talked with Elio -- he said just send him the link and the question.) (Some folks have pointed to iPhone apps, but the iPhones are too big and overdetermined.) However, we can use our overhead cameras to get a sense of where the person is / is headed. We have several on the grid thanks to MF and Tyr. Julien has hooked the Elmo up to his optical flow sampler.

For reference, this is a well-known series of works by artist Jim Campbell
Portrait of Claude Shannon shows the effect most clearly
White Circle is a large-scale installation. It'd be more interesting to blow fog through such an array of lights and see images and movement appear and disappear as the fog thickens and thins.

(Let me cc Spike as theatrical lighting design expert, if that's alright.)

On Jan 15, 2012, at 3:27 PM, Adrian Freed wrote:

(Xin Wei wrote) FYI, on the low end, MM and I are buying a cheap body-worn wireless analog videocam (on the order of a few grams in weight, a 1" cube) to try out mapping optical flow to Nav's instruments in the coming weeks. I'd like to write some mappings from optical flow to feed Julien and Navid's gesture followers, as well as, more directly, Nav's sound instruments. I wrote some MSP code in 2004 that worked in the fashion show "jewelry" and that surely can be made much more expressive!
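
For the extraction step, a minimal OpenCV sketch (the camera index and Farneback parameters are placeholders; the mappings to the gesture followers and instruments would ride on top of this):

import cv2
import numpy as np

cap = cv2.VideoCapture(0)                       # the body-worn cam, once wired in
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow (Farneback); returns per-pixel (dx, dy)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Two crude features to map onto sound: overall motion energy, mean direction
    energy = float(np.mean(mag))
    direction = float(np.mean(ang))
    print(energy, direction)                     # e.g. send via OSC to the instruments
    prev_gray = gray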

You may want to know where the camera is in space --
a tricky problem, as you know, but this is the best affordable module to get the answer without losing people down the rabbit hole of Kalman filtering etc.

Why doesn't Buxton bother with computer music any more?

Carissimi TML People interested in new matter, temporal textures, movement+media,...

We have a conversation on "material computing" that's beginning to fill out with interesting references.  Thanks to Adrian for this one.

- Xin Wei

Begin forwarded message:

This system already had a lot of what people are still trying to build in, including physical models, Arduino-like processors (6809), DSP processors (TMS32010),
visual programming, etc.
http://www.billbuxton.com/katosizer.pdf

For TML agenda Wed 5-6: lighting experiments

(Thanks, Spike, for the board info, which will be useful for the future.) Spike and I are on the same page about first steps: we can do well by starting as simply as possible and just wiring to what we have in-house -- dumb fixtures, iCue motorized mounts, and some LED components. Instead of talking endlessly about sophisticated gear in the abstract, I'd like to see actual light modulation installed in our lab, running all the time, so we can hack it live, and so people other than technical experts can participate in the design and evaluation of our lighting modulation apparatus from the get-go. (I am thinking in particular of Tristana Rubio, Liza Solomonova, David's students, Patrick & his students, Komal, and me :)

It's a challenge, but I want to intercalate tech development finely with live-action studies, and minimize programming in the abstract. Here are the motivating "games" that I want to build as soon as possible, as demos and reality-checks, a.k.a. cruelty-checks (in the spirit of Artaud).

To be concrete, let me pose some feasible first steps.  Who'd like to join us in realizing some of these experiments this term?  Please invite / recommend someone who can work with us on the practical and elementary makings this month.  (Navid  or Spike, can you invite Ted to contact me cc Morgan, please?)

(1) Wire the camera-based tracking to
regular static fixtures via our dimmer (done), 
iCue motor - to make a tracking spot,
some RGB fixture.

(2) Chase-spot game with kids (XW, M?) -- or we could map Navid's moving virtual sound sources to moving spots. If we use video, we could vary the color and texture according to sonic cues.

(3) Color spots mapped to blobs by rank. Rank by size or speed. Devise rules (see the sketch after this list) such as
3.1  Intersect => a third color, or
3.2  Same speed (even if different location) => blend colors,
3.3  Same curvature => blend colors.

(4) Map vegetal, solar, building (Tristana, Komal, or Patrick's students) and other non-human temporal patterns to params (i.e. color, intensity) of fixtures mounted behind pillars or plant boxes, or other architectural accents in the room EV 7.725. I think we should map such slow data to state rather than to actual lighting parameters. This will take a weekend of collective re-wiring, to be scheduled perhaps in collaboration with Zoe & Katie (Annex / PLSS2 plant project).
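
Here is the sketch promised under (3) -- a hedged Python mock-up of the rank-and-blend rules; the blob fields and thresholds are illustrative, not our tracker's actual output:

import colorsys

# Hypothetical blob records from the camera tracker (fields are illustrative).
blobs = [
    {"id": 0, "size": 900,  "speed": 12.0},
    {"id": 1, "size": 400,  "speed": 11.5},
    {"id": 2, "size": 1500, "speed": 40.0},
]

def rank_colors(blobs, key="size"):
    """Rule (3): assign each blob a hue by its rank under `key` (largest first)."""
    ranked = sorted(blobs, key=lambda b: b[key], reverse=True)
    n = len(ranked)
    return {b["id"]: colorsys.hsv_to_rgb(i / n, 1.0, 1.0)
            for i, b in enumerate(ranked)}

def blend(c1, c2):
    """Rules 3.2 / 3.3: average two RGB colors."""
    return tuple((a + b) / 2.0 for a, b in zip(c1, c2))

colors = rank_colors(blobs, key="speed")
# Rule 3.2: blobs 0 and 1 move at (nearly) the same speed => blend their colors.
if abs(blobs[0]["speed"] - blobs[1]["speed"]) < 1.0:
    colors[0] = colors[1] = blend(colors[0], colors[1])
print(colors)   # RGB triples in [0, 1] -- scale to the fixtures' range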

Quality of light is not important at this stage - we need to create an entire signal path first and interpolate computational modulation. "Le mieux est l'ennemi du bien" ("The best is the enemy of the good.")

Xin Wei

FQRSC proposal, AIS essay Minor Architecture

Hi Harry, Patrick,

I've had the pleasure of chatting with each of you individually. Shall we move things along by putting together some "lab" notes of experiences over the past year?
We can generate these in three different forms -- each of us has different strands of writing to do anyway. Let me contribute by re-posting two pieces of writing.

Instead of artificially making yet another bit of work, I propose to work with what we each have done or need to do anyway.   So for example:

Patrick's got a set of projects with his studio over the past year, whose documentation can serve as material to inspire the next phase. Links to the project blogs would suffice. We also talked about a Simondon essay that I'll be happy to look at soon.

Harry's writing up some thoughts about the construction of apparatus, and the relation between apparatus and experiment, for the prospectus, which could neatly draw from and inform the various installation experiments.

I think at some point we talked about creating a project blog. We already have two passworded spaces: the TML private wiki and the Posterous blog, which you can re-format however you like. They can be passworded to restrict access as you like. Restricting to just the three of us is fine for a start.

Here's the narrative of our FQRSC temporal textures proposal


and here's the Minor Architecture essay for AIS 26.2


Onward toward our joint article(s), I hope! I'm hoping that we can work toward scientific publication as well as EU support. (First milestone date is Aug 15 for a Letter of Interest.)
Xin Wei

__________________________________________________________________________________
Canada Research Chair • Associate Professor • Design and Computation Arts • Concordia University
Director, Topological Media Lab • topologicalmedialab.net/  •  skype: shaxinwei • +1-650-815-9962
__________________________________________________________________________________

[Temporal Textures] Terry Tao: flows on Riemannian manifolds

Adrian, Andy, Michael, Tyr, Sean,

Another wrinkle for our scientific research agenda discussion (at 2 today) paralleling the discussions of "temporal textures"

Terry Tao is one of the most lucid, communicative mathematicians of his generation. A key point for our purposes, I think, is the more general set-up in which, instead of varying a metric g(t) with respect to the parameter t (putative time), one varies the base manifold as well: M becomes M(t). So a flow on a Riemannian manifold becomes a flow on a differentiable family of Riemannian manifolds.

Of course all the technical difficulty is in exactly how to vary through a family of manifolds, with potentially even changing topology. Tao treats the Ricci flow, which has become a pillar of mathematics in the past 20 years, including Perelman's settling of the Poincaré Conjecture.


But in the spirit of a "small mammals in the age of large reptiles" strategy*, let me suggest a reversal of point of view, and read time from the evolutionary process.   I draw attention to two points that Tao makes in the passage quoted below.

We enrich the notion of time by the notion of the flow of time itself, modelled by the "time vector field" ∂_t (defined in the passage quoted below).

(1) The manifold developing topology goes hand in hand with the time vector field developing singularities.  Think of chocolate flowing down a donut held vertically.

(2) The "time vector field which obeys the transversality condition "   gives a more precise generalization of the "directionality" of time, but this is only the beginning of the journey...

I would like to see if this can be illuminated by Adrian's discussion of lensing.

Xin Wei
(* Mammals and reptiles do not refer to mathematicians but to the unnamed ;)


"The one drawback of the above simple approach is that it forces the topology of the underlying manifold M to stay constant. A more general approach is to view each d-dimensional manifold M(t) as a slice of a d+1-dimensional “spacetime” manifold (possibly with boundary or singularities). This spacetime is (usually) equipped with a time coordinate , as well as a time vector field which obeys the transversality condition . The level sets of the time coordinate t then determine the sets M(t), which (assuming non-degeneracy of t) are smooth d-dimensional manifolds which collectively have a tangent bundle which is a d-dimensional subbundle of the d+1-dimensional tangent bundle of . The metrics g(t) can then be viewed collectively as a section of . The analogue of the time derivative is then the Lie derivative . One can then define other Riemannian structures (e.g. Levi-Civita connections, curvatures, etc.) and differentiate those in a similar manner.

The former approach is of course a special case of the latter, in which for some time interval with the obvious time coordinate and time vector field. The advantage of the latter approach is that it can be extended (with some technicalities) into situations in which the topology changes (though this may cause the time coordinate to become degenerate at some point, thus forcing the time vector field to develop a singularity). This leads to concepts such as generalised Ricci flow, which we will not discuss here, though it is an important part of the definition of Ricci flow with surgery (see Chapters 3.8 and 14 of Morgan-Tian’s book for details)."
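
For reference, the formulas Tao is gesturing at, in LaTeX (the classical Ricci flow, then the spacetime form implied by his Lie-derivative remark):

% Classical Ricci flow: the metric evolves, the manifold M stays fixed.
\frac{\partial}{\partial t}\, g(t) = -2\, \operatorname{Ric}\bigl(g(t)\bigr)

% Spacetime formulation: a time coordinate t and a time vector field
% \partial_t on the (d+1)-dimensional spacetime, with the transversality
% condition
dt(\partial_t) = 1,
% and the time derivative replaced by the Lie derivative along \partial_t:
\mathcal{L}_{\partial_t}\, g = -2\, \operatorname{Ric}(g)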



Ozone Jan 12 (context for OSC Discovery)

Hi OSC service discovery guys,

On 2010-12-24, at 4:22 AM, Sha Xin Wei wrote:

Dear Ozoners and media choreographers:

I propose we dedicate most of the Jan 12 Wed TML meeting to a discussion of the 2010-2011 Ozone system. End-users -- artist / experimentalist composers -- are welcome and vital, but this discussion will run at the level of experts and system developers. We should allocate 5:15 - 7:00 for this.

I'd like to set the creative and research context so we can all prioritize the development effort appropriately to the lab's needs.
...

On 2010-12-23, at 4:48 PM, <adrian@adrianfreed.com> wrote:


(By calibrating I'll mean *making small adjustments of an instrument's
parameters for contingent conditions of performance site and event*.)


I'm not sure what you mean to suggest here. What are you imagining we would "incorporate" these techniques into -- libmapper itself?

Calibration is an interesting problem deserving of more attention. Note that the hipper devices store calibration information in the device (e.g. wiimote, nunchuck). This makes the calibrated device portable. Of course, some calibration is associated with the location (e.g. lighting, AGC, white balance, etc. for video), other with a particular person. My experience is that the data management/configuration issues are harder than the calibration signal processing.
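
To make the scoping point concrete, a hedged sketch of how calibration records might be keyed (names and fields are illustrative; this is not libmapper's API):

from dataclasses import dataclass, field

@dataclass
class CalibrationStore:
    """Calibration split by scope, per the observation above: some travels
    with the device, some belongs to the site, some to the person."""
    device: dict = field(default_factory=dict)   # e.g. sensor offsets stored on-device (wiimote-style)
    site: dict = field(default_factory=dict)     # e.g. lighting, AGC, white balance for video
    person: dict = field(default_factory=dict)   # e.g. per-performer thresholds

store = CalibrationStore()
store.device["accel_offset"] = (0.02, -0.01, 0.00)
store.site["white_balance_K"] = 5600
store.person["reach_m"] = 1.7
# The hard part, as noted above, is the data management: deciding which
# record follows the device, which stays with the site, which the person.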


______________________________________________________________________________
Sha Xin Wei, Ph.D.
Canada Research Chair • Associate Professor • Design and Computation Arts • Concordia University
Director, Topological Media Lab • topologicalmedialab.net/  •  http://flavors.me/shaxinwei
______________________________________________________________________________