[Temporal Textures] Terry Tao: flows on Riemannian manifolds

Adrian, Andy, Michael, Tyr, Sean,

Another wrinkle for our scientific research agenda discussion (at 2 today), paralleling the discussions of "temporal textures".

Terry Tao is one of the most lucidly communicative mathematicians of his generation. A key point for our purposes, I think, is the more general setup in which, instead of varying a metric g(t) with respect to the parameter t (putative time), one varies the base manifold as well: M becomes M(t). So a flow on a Riemannian manifold becomes a flow on a differentiable family of Riemannian manifolds:

Of course all the technical difficulty lies in exactly how to vary through a family of manifolds, potentially even with changing topology. Tao treats the Ricci flow, which has become a pillar of mathematics in the past 20 years, including Perelman's settling of the Poincaré Conjecture.

But in the spirit of a "small mammals in the age of large reptiles" strategy*, let me suggest a reversal of point of view, and read time from the evolutionary process.   I draw attention to two points that Tao makes in the passage quoted below.

We enrich the notion of time with the flow of time itself, modelled by the "time vector field":

(1) The manifold changing topology goes hand in hand with the time vector field developing singularities. Think of chocolate flowing down a donut held vertically.

(2) The time vector field obeying the transversality condition gives a more precise generalization of the "directionality" of time, but this is only the beginning of the journey...

I would like to see if this can be illuminated by Adrian's discussion of lensing.

Xin Wei
(* Mammals and reptiles do not refer to mathematicians but to the unnamed ;)

"The one drawback of the above simple approach is that it forces the topology of the underlying manifold M to stay constant. A more general approach is to view each d-dimensional manifold M(t) as a slice of a (d+1)-dimensional “spacetime” manifold (possibly with boundary or singularities). This spacetime is (usually) equipped with a time coordinate t, as well as a time vector field ∂_t which obeys the transversality condition dt(∂_t) = 1. The level sets of the time coordinate t then determine the sets M(t), which (assuming non-degeneracy of t) are smooth d-dimensional manifolds which collectively have a tangent bundle which is a d-dimensional subbundle of the (d+1)-dimensional tangent bundle of the spacetime. The metrics g(t) can then be viewed collectively as a section g of Sym²(T*M(t)). The analogue of the time derivative is then the Lie derivative L_{∂_t} g. One can then define other Riemannian structures (e.g. Levi-Civita connections, curvatures, etc.) and differentiate those in a similar manner.

The former approach is of course a special case of the latter, in which the spacetime is simply M × I for some time interval I, with the obvious time coordinate and time vector field. The advantage of the latter approach is that it can be extended (with some technicalities) into situations in which the topology changes (though this may cause the time coordinate to become degenerate at some point, thus forcing the time vector field to develop a singularity). This leads to concepts such as generalised Ricci flow, which we will not discuss here, though it is an important part of the definition of Ricci flow with surgery (see Chapters 3.8 and 14 of Morgan-Tian’s book for details)."
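To make the special case concrete, here is a brief sketch in LaTeX (my own reconstruction of the standard setup, not part of Tao's quoted text): on a product spacetime the Lie derivative along the time vector field reduces to the ordinary time derivative, so e.g. Ricci flow takes its familiar form.

```latex
% Product spacetime: \mathcal{M} = M \times I, time coordinate t(x,s) := s,
% time vector field \partial_t := \partial/\partial s, so the transversality
% condition holds automatically:
\[
  t \colon \mathcal{M} \to \mathbb{R}, \qquad dt(\partial_t) = 1 .
\]
% Viewing the metrics g(t) collectively as a section g, the Lie derivative
% along \partial_t recovers the ordinary time derivative,
\[
  \mathcal{L}_{\partial_t}\, g \;=\; \frac{\partial}{\partial t}\, g(t),
\]
% so that Ricci flow, written in the spacetime picture, reads
\[
  \mathcal{L}_{\partial_t}\, g \;=\; -2\,\operatorname{Ric}(g) .
\]
```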

Ozone Jan 12 (context for OSC Discovery)

Hi OSC service discovery guys,

On 2010-12-24, at 4:22 AM, Sha Xin Wei wrote:

Dear Ozoners and media choreographers:

I propose we dedicate most of the Jan 12 Wed TML meeting to a discussion of the 2010-2011 Ozone system. End-users -- artist / experimentalist composers -- are welcome and vital, but this discussion will run at the level of experts and system developers. We should allocate 5:15 - 7:00 for this.

I'd like to set the creative and research context so we can all prioritize the development effort appropriately to the lab's needs.

On 2010-12-23, at 4:48 PM, <adrian@adrianfreed.com> <adrian@adrianfreed.com> wrote:

(By calibrating I'll mean *making small adjustments* of an instrument's
parameters for contingent conditions of performance site and event.)

I'm not sure what you mean to suggest here. What are you imagining we would
"incorporate" these techniques into, libmapper itself?
Calibration is an interesting problem deserving of more attention. Note
that the hipper devices store calibration information in the device
(e.g. wiimote, nunchuck).
This makes the calibrated device portable. Of course some calibration is
associated with the location (e.g. lighting, AGC, white balance, etc.
for video), other
with a particular person. My experience is that the data
management/configuration issues are harder than the calibration signal
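Adrian's device/site/person distinction can be sketched as data, for concreteness (a hypothetical Python sketch; these class and field names are mine, not libmapper's):

```python
from dataclasses import dataclass

# Illustrative split of calibration state by what it travels with:
# the device itself, the performance site, or a particular person.

@dataclass
class DeviceCalibration:
    # stored on the device (e.g. a wiimote/nunchuck), so it is portable
    accel_zero: tuple = (0.0, 0.0, 0.0)

@dataclass
class SiteCalibration:
    # tied to the location (lighting, AGC, white balance for video, ...)
    white_balance_k: int = 5600

@dataclass
class PersonCalibration:
    # tied to a particular performer
    reach_m: float = 1.7

def effective_params(dev, site, person):
    """Merge the three layers into one parameter dict for a performance."""
    return {**vars(dev), **vars(site), **vars(person)}
```

The point of the split is exactly the data-management issue Adrian raises: each layer has a different lifetime and a different natural storage location.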

Sha Xin Wei, Ph.D.
Canada Research Chair • Associate Professor • Design and Computation Arts • Concordia University
Director, Topological Media Lab • topologicalmedialab.net/  •  http://flavors.me/shaxinwei

Ozone & Einstein's Dreams workshops in 2011

Here is my current list of potential and proposed opportunities to do workshops related to Ozone & Einstein's Dreams in 2011.
Some are fundable.
I will be working on funding these...depending on who can accomplish what.

March (during intersession break) UC Berkeley
1 or 2 members of Ozone team demo Lisa Wymore's lab
Meyer Sound work

March 5 - April 30 Bain St Michel, Montreal

April 15-23 Berkeley Dance Productions 8, Berkeley
  Workshop in Lisa Wymore's lab ?
Maybe only media techniques

May 1-8 Hexagram, Montreal ??
Michael Montanaro & visitors Lisa Wymore (UC Berkeley) and Sheldon Smith (Mills College) ?

August 1-10 Hawaii
SECT Seminar in Experimental Critical Theory
technoscientific knowledge production and urban experience in Asia

Xin Wei

poor theory

Dear TMLabbers,

By the Critical Theory Institute (2008)

This can inform the TML's experimental work.
Thanks to Kavita Philip.
See also "phenomenological method".
Xin Wei

Sha Xin Wei, Ph.D.
Canada Research Chair • Associate Professor • Design and Computation Arts • Concordia University
Director, Topological Media Lab • topologicalmedialab.net/  •  http://flavors.me/shaxinwei

Edward S. Casey "On Not Putting Too Fine an Edge on Things" philosopher, talk Oct 15

Dear Colleagues and Students,

Prof. Ed Casey is one of the most respected living philosophers in N. America, and reputed to be a most energizing speaker.   This promises to be one of the livelier talks of the year, and motivation for a topological approach to things, perhaps.   Do tell your friends, and come to the top floor of the Hexagram space in the EV building at Concordia.

Philosophy Colloquium Talk
Co-sponsored by the Canada Research Chair in New Media and the Topological Media Lab

Edward S. Casey
Distinguished Professor of Philosophy, Stony Brook University

Friday, Oct 15, 4-6 pm, EV 11.705, 1515 St. Catherine W., corner of Guy

"On Not Putting Too Fine an Edge on Things"


Philosophers, taking their lead from natural and social scientists, pride themselves on achieving clarity and exactitude. This aim is indisputably valid and has been indispensable to the accomplishment of many of the enduring achievements in philosophy – for instance, Descartes’s Principles of Philosophy, Kant’s Critique of Pure Reason, Peirce’s semiotics, Russell and Whitehead’s Principia Mathematica. At the same time, the virtues of vagueness have been increasingly pursued ever since William James (inspired by certain strains in Peirce himself) proclaimed “the value of the vague” in his Principles of Psychology (1890). Since then, others have followed suit, however diversely: notably Edmund Husserl, Maurice Merleau-Ponty, and Timothy Williamson. In this talk, I consider the merits of the vague in philosophy by a concerted exploration of the edges of things and topics: those extremities where the exact gives place to the less than precisely designatable and discussable. I maintain that, far from being a defect or lack, the very imprecision has positive values of its own to which we should attend more closely.

Adrian: temporal textures, CNMAT

"Texture" should remind us that time does not have to be modeled on a unidimensional path -- though all our software tools for time-based media assume this! It's not at all clear to me where that leads, yet. Any thoughts to start?

I believe CNMAT's tools have for many years avoided this trap. In CAST and OSW we use various "time machines" to index signals. Some of these use an interesting stateless scheme I developed to bridge samplings of time (events) into continuous time (and back again). I explore our interpolation spaces frequently using stored or penned traces. Andy's OSC database work allows for efficient (timely) access to OSC recordings in a "non-linear" framework. John's recent NIME work is another interesting take on modeling that allows various polytempo textures to be created without completely letting go of control of convergence and divergence to and from events. I would be happy to share details of this towards our collective understanding of path(s) and textures.

temporal textures 100804

> Now that Fall is coming, I'm starting to fire up the temporal textures discussions and to plan for milestone events in Fall and Spring. Don't know about Winter... any ideas on what you'd like for midway lilypads?

I hope to map out some workshops to bring some TML folks back to UC Berkeley this coming year. Maybe you'd be interested in the larger May events, i.e. part of Wymore's Dance Tech symposium, but presenting more provocative work -- like phenomenology of time, or presence and consciousness, or lighting-induced temporal textures -- working with dancers. What would you think of linking the October workshop on psychology and architecture with Drs. Helga Wild and Linnaea Tillett to the May events, en route to Einstein's Dreams as a local shaping theme, but open to a cone of possibilities?

"Texture" should remind us that time does not have to be modeled on a unidimensional path -- though all our software tools for time-based media assume this! It's not at all clear to me where that leads, yet. Any thoughts to start?

Mine are informed by (1) the Maturana & Varela appendix to essay 2; (2) the notion of a spacelike hypersurface in general relativity; (3) harmony and texture in music; (4) the poetic figure of exfoliation from Christopher Alexander, and Le Pli... and some more stuff. Lefebvre's Rhythmanalysis, which we read in the Alexander & architecture reading group 3-4 years ago, is suggestive but maybe not productive enough? I don't know -- Erik Conrad knew it well. E.g. I've been discussing generated time with Marek here. His model is a very clever approach to quantizing spacetime, but I am looking for a more textural, measure-theoretic approach that discovers temporal rhythm out of live movement, live processes.
Speaking of live process, please tell me what you think would be useful as guiding questions / themes / curiosities for initial materials explorations. For my part, I propose to contribute some performance workshop scenarios from Einstein's Dreams. I'd like to ask Michael as well as David Morris, when we're all back in Montreal, if we might recruit some people to workshop this together with whatever lighting or activated materials we gather. Not only those people of course, and we can rapidly go through a bunch of other scenarios... to the limits of what can be done, of course. What do you think?

Xin Wei

PS Morgan and Michael are familiar with some of this already.
I'd like to share this with Linnaea and Helga if that's ok, and Navid for sophisticated, richer texturing.

Adrian Freed: ipads in motion sampling temporal texture space

May I suggest a small but important shift in how we think about
imaging on the displays. Instead of referencing the image to coordinates
established from the edges of the screen, think of the edges of the
screens as addressing locations in a larger virtual world
(i.e. "a temporal texture space").
Use the accelerometer to "move the frame" in the larger space. This is
an oldish idea that I have seen revived several times in recent
decades. (E.g., Jaron Lanier described it to me a few years ago, and Sun
research did it in a PDA prototype before that.) Now the displays that
are hanging like bats in a cave can be swung from pendulums or blown
around in a breeze (there are some nice energy-efficient
rigs for this with muscle wire). This motion is important to me for
two reasons: it can be used to create experiences like
the shimmer of leaves blowing in the wind, where the matte and shiny sides
modulate temporally (also fields of wheat), but it also defeats
a problem I have with screens in theatrical contexts, which is most
apparent in the "magic mirror" trope, when activity in real space in
front of the screen is transformed and reflected back behind the real
actors/dancers. The problem is that the screen is anchored
and the action isn't, so when you move your head the screen reveals
itself throughout the image instead of framing the image,
thus defeating the necessary suspension of disbelief for cohabitation of
the screen information and the action. This is analogous to the sweet
spot and uniform directivity problems in audio that we have been
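A minimal sketch of the accelerometer-driven frame idea (assumed mapping and constants, not any real iPad API): tilt moves a fixed-size window around a larger virtual texture space, so the image stays anchored to the world rather than to the screen.

```python
import math

# Virtual "temporal texture space" and one display's window into it.
# All sizes and the GAIN constant are illustrative, to be tuned in the lab.
WORLD_W, WORLD_H = 4096, 4096   # virtual world, in pixels
VIEW_W, VIEW_H = 1024, 768      # one display's window
GAIN = 600.0                    # pixels of travel per radian of tilt

def frame_origin(ax, ay, az, cx=WORLD_W // 2, cy=WORLD_H // 2):
    """Map gravity-direction accelerometer readings (in g) to the top-left
    corner of the window, anchored at a center point (cx, cy)."""
    pitch = math.atan2(ax, math.sqrt(ay * ay + az * az))  # tilt left/right
    roll = math.atan2(ay, math.sqrt(ax * ax + az * az))   # tilt fore/aft
    x = cx + GAIN * pitch - VIEW_W / 2
    y = cy + GAIN * roll - VIEW_H / 2
    # clamp so the window never leaves the virtual world
    x = max(0, min(WORLD_W - VIEW_W, x))
    y = max(0, min(WORLD_H - VIEW_H, y))
    return int(x), int(y)
```

A device lying flat (reading roughly (0, 0, -1) g) sees the center of the world; tilting it pans the window, which is what makes the swinging or breeze-blown displays reveal the space instead of the screen.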

As for the iPad choice: you might want to wait for the slew of
competitors coming out. HP has one
coming out real soon and a lot of programmers I know prefer other OS
development tools for such things.

Finally, remember that if you are willing to live with a lower overall
lifetime for the LEDs (a year or two instead of 4 or 5) you can increase
the backlighting brightness considerably:

[TT] Temporal Textures 1: iPads

Hi Harry, Patrick,

This may be just a digression.

To complement the fine and saturated power of lighting instruments ...
if I buy iPads to play with in the lab for Michael, Tim and JS, I would like to invite the three of us to think orthogonally to the usual pathology of screen-based play.

What if we (when we get the budget) get many iPads and find a way to suspend them in space? What if we think of them as:

(1) addressable light panels -- expensive light bulbs -- FORGET IMAGE!
(2) windows into other parts of the same room (so we need to get a lot of inexpensive cameras and a video-mixer)

What if we
(A) Float them on aircraft cable from the grid at different heights in the air. Let's figure out what heights and densities have what effects.
Finally we can try something I've always wanted to play with more.

(B) Introduce rhythms into them by sequencing

(C) Introduce responsive behavior. A brute-force but perhaps effective sketch method: map Jitter into a large video matrix, then beam submatrices to iPads?
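The brute-force idea in (C) can be sketched in a few lines (a pure-Python stand-in for a Jitter matrix; the display names and sizes are illustrative, not any jit.* API): cut each iPad's frame out of one large video matrix.

```python
def make_matrix(w, h):
    """One large 'video matrix': h rows of w grayscale pixels (toy pattern)."""
    return [[(x + y) % 256 for x in range(w)] for y in range(h)]

def submatrix(m, x, y, w, h):
    """Cut out the w-by-h region at (x, y) destined for one display."""
    return [row[x:x + w] for row in m[y:y + h]]

# Each suspended display claims a region of the big matrix:
ipads = {"ipad_a": (0, 0), "ipad_b": (32, 16)}
big = make_matrix(64, 48)
frames = {name: submatrix(big, x, y, 16, 12) for name, (x, y) in ipads.items()}
```

In the lab version, `big` would be the Jitter matrix and the per-display frames would be beamed out over the network each render cycle; the slicing logic stays the same.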

Xin Wei