Adrian: temporal textures, CNMAT

"Texture" should remind us that time does not have to be modeled on a unidimensional path -- though all our software tools for time-based media assume this! It's not at all clear to me where that leads, yet. Any thoughts to start?

I believe CNMAT's tools have for many years avoided this trap. In CAST and OSW we use various "time machines" to index signals. Some of these use an interesting stateless scheme I developed to bridge samplings of time (events) into continuous time (and back again). I explore our interpolation spaces frequently using stored or penned traces. Andy's OSC database work allows for efficient (timely) access to OSC recordings in a "non-linear" framework. John's recent NIME work is another interesting take on modeling that allows various polytempo textures to be created without completely letting go of control of convergence and divergence to and from events. I would be happy to share details of this toward our collective understanding of both path(s) and textures.

temporal textures 100804

> Now that Fall is coming, starting to fire up the temporal textures discussions, and to plan for milestone events in Fall and Spring. Don't know about Winter... any ideas on what you'd like for midway lilypads?

I hope to map out some workshops to bring some TML folks back to UC Berkeley this coming year. Maybe you'd be interested in the larger May events, i.e., part of Wymore's Dance Tech symposium, but presenting more provocative work like the phenomenology of time, presence and consciousness, or lighting-induced temporal textures, working with dancers. What would you think of linking the October workshop on psychology and architecture (with Drs. Helga Wild and Linnaea Tillett) to the May events, en route to Einstein's Dreams as a local shaping theme, but open to a cone of possibilities?

"Texture" should remind us that time does not have to be modeled on a unidimensional path -- though all our software tools for time-based media assume this! It's not at all clear to me where that leads, yet. Any thoughts to start?

Mine are informed by (1) the Maturana & Varela appendix to essay 2; (2) the notion of a spacelike hypersurface in general relativity; (3) harmony and texture in music; (4) the poetic figure of exfoliation from Christopher Alexander, and Le Pli... and some more stuff. Lefebvre's Rhythmanalysis, which we read in the Alexander & architecture reading group 3 or 4 years ago, is suggestive but maybe not productive enough? I don't know -- Erik Conrad knew it well. E.g., I've been discussing generated time with Marek here. His model is a very clever approach to quantizing spacetime, but I am looking for a more textural, measure-theoretic approach that discovers temporal rhythm out of live movement, live processes.
Speaking of live process, please tell me what you think would be useful as guiding questions / themes / curiosities for initial materials explorations. For my part, I propose to contribute some performance workshop scenarios from Einstein's Dreams. I'd like to ask Michael as well as David Morris, when we're all back in Montreal, if we might recruit some people to workshop this together with whatever lighting or activated materials we gather. Not only those people of course, and we can rapidly go through a bunch of other scenarios... to the limits of what can be done, of course. What do you think?

Xin Wei

PS Morgan and Michael are familiar with some of this already.
I'd like to share this with Linnaea and Helga if that's OK, and Navid for sophisticated, richer texturing.

Adrian Freed: ipads in motion sampling temporal texture space

May I suggest a small but important shift in the way of thinking about imaging on the displays. Instead of referencing the image to coordinates established from the edges of the screen, think of the edges of the screens as addressing locations in a larger virtual world (i.e., "a temporal texture space"). Use the accelerometer to "move the frame" in the larger space. This is an oldish idea that I have seen revived several times in the last decades (e.g., Jaron Lanier described it to me a few years ago, and Sun research did it in a PDA prototype before that).

Now the displays that are hanging like bats in a cave can be swung from pendulums or blown around in a breeze (there are some nice energy-efficient rigs for this with muscle wire). This motion is important to me for two reasons. First, it can be used to create experiences like the shimmer of leaves blowing in the wind, where the matte and shiny sides modulate temporally (also fields of wheat). Second, it defeats a problem I have with screens in theatrical contexts, one most apparent in the "magic mirror" trope, when activity in real space in front of the screen is transformed and reflected back behind the real actors/dancers. The problem is that the screen is anchored and the action isn't, so when you move your head the screen reveals itself throughout the image instead of framing the image, defeating the necessary suspension of disbelief for cohabitation of the screen information and the action. This is analogous to the sweet-spot and uniform-directivity problems in audio that we have been tackling.
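The "move the frame" idea above can be sketched in a few lines. This is only a hypothetical illustration in Python/NumPy (it names no actual CNMAT or iPad code, and the texture size, screen size, and gain are invented): a normalized tilt reading pans a screen-sized window over a larger shared texture, so the display addresses a region of the virtual world rather than the world being drawn relative to the display's edges.

```python
import numpy as np

# Hypothetical sketch: each screen is a movable window into one large
# shared virtual texture ("temporal texture space"); tilt pans the window.

TEXTURE_W, TEXTURE_H = 4096, 4096   # size of the shared virtual world (made up)
SCREEN_W, SCREEN_H = 1024, 768      # one display's resolution (made up)

texture = np.zeros((TEXTURE_H, TEXTURE_W, 3), dtype=np.uint8)

def frame_for_tilt(tilt_x, tilt_y, gain=500.0):
    """Map a normalized accelerometer tilt in [-1, 1] to a viewport
    offset in the virtual texture, and return that submatrix."""
    cx = TEXTURE_W // 2 + int(tilt_x * gain)
    cy = TEXTURE_H // 2 + int(tilt_y * gain)
    # Clamp so the window always stays inside the texture.
    x0 = max(0, min(TEXTURE_W - SCREEN_W, cx - SCREEN_W // 2))
    y0 = max(0, min(TEXTURE_H - SCREEN_H, cy - SCREEN_H // 2))
    return texture[y0:y0 + SCREEN_H, x0:x0 + SCREEN_W]

frame = frame_for_tilt(0.2, -0.1)
print(frame.shape)  # (768, 1024, 3)
```

Because every display samples the same texture, a viewer moving their head (or a screen swinging on its pendulum) sees the frame travel over a stable world, rather than the world sticking to the frame.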

As for the iPad choice: you might want to wait for the slew of
competitors coming out. HP has one
coming out real soon and a lot of programmers I know prefer other OS
development tools for such things.

Finally, remember that if you are willing to live with a lower overall lifetime for the LEDs (a year or two instead of 4 or 5) you can increase the backlighting brightness considerably:
http://www.ifixit.com/Teardown/iPad-Wi-Fi-Teardown/2183/4#s11207.

[TT] Temporal Textures 1: iPads

Hi Harry, Patrick,

This may be just a digression.

To complement the fine and saturated power of lighting instruments ...
if I buy iPads to play with in the lab for Michael, Tim and JS, I would like to invite us three to think orthogonally to the usual pathology of screen-based play.

What if we (when we get the budget) get many iPads and find a way to suspend them in space? What if we think of them as

(1) addressable light panels -- expensive light bulbs -- FORGET IMAGE!
(2) windows into other parts of the same room (so we need to get a lot of inexpensive cameras and a video mixer)

What if we
(A) Float them on aircraft cable from the grid at different heights in the air. Let's figure out what heights and densities have what effects.
Finally, we can try something I've always wanted to play with more:

(B) Introduce rhythms into them by sequencing

(C) Introduce responsive behavior. A brute-force but perhaps effective sketch method: map Jitter into a large video matrix, then beam submatrices to the iPads?
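A minimal sketch of the submatrix idea in (C), with NumPy standing in for the Jitter video matrix (the frame size and 2x4 grid are invented for illustration, not a real patch):

```python
import numpy as np

# Hypothetical sketch: one large "video matrix" frame is carved into
# equal tiles, one per suspended display. In the real setup this would
# be a Jitter matrix streamed over the network; numpy stands in here.

def split_into_tiles(frame, rows, cols):
    """Split a (H, W, 3) frame into rows*cols submatrices, row-major.
    H and W are assumed divisible by rows and cols respectively."""
    h, w = frame.shape[0] // rows, frame.shape[1] // cols
    return [frame[r * h:(r + 1) * h, c * w:(c + 1) * w]
            for r in range(rows) for c in range(cols)]

big = np.zeros((768, 1024, 3), dtype=np.uint8)  # the shared video matrix
tiles = split_into_tiles(big, rows=2, cols=4)   # say, eight iPads
print(len(tiles), tiles[0].shape)  # 8 (384, 256, 3)
```

Rhythms as in (B) then become per-tile sequencing: the same frame source feeds every display, and each display shows only its window of it.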

Xin Wei