Rhythm Studies during TML residency at Synthesis workshop Feb 15 - March 9, 2013

Rhythm Studies
TML and Synthesis
Sha Xin Wei, Dec 2013

Temporal Textures Research Notes: http://textures.posthaven.com


Correlation and Entrainment Questions

Now that Julian Stein has crafted a very tidy package for dealing with rhythm (msec resolution, up to scales of minutes and hours), I would like to explore these questions:

• How can two people moving in concert get a sense of each other's presence and dynamics using rhythm alone? 

• How can two people moving in concert anticipate each other's dynamics without superficial representations via sound- or light-image? 

The model for this is Tinikling (a Philippine game):  a pair of kids hold two long sticks parallel between them.  As they rhythmically bring the sticks together or apart, a third kid jumps in between.  Often they chant as they bang the sticks together.  The jumper syncs by singing prior to jumping in.  This is a core model for dealing with latency, which we've been poised to do for 10 years but have always swerved aside, distracted by distractions.

PRACTICAL GOAL

• Look for correlative indicators of "intentional," "coordinated" movement, ideally using mappings of intersubjective rather than ego-based data.

• Prepare software / hardware for rhythm experiments at ASU Phoenix in February 2014.

KEY TACTICS

• For data of type (t, v) with values v(t) as a function of time, deliberately ignore the dependent value (colour, accelerometer, or whatever sensor data you would ordinarily be attending to).  Instead work only with onsets t_i and intervals ∆t_i.

• AVOID being seduced into dealing with the "dependent" data v -- whether incident sound, light, pressure, or what have you.  The point is to focus on time-data: onsets (zero-crossings), intervals, etc. -- instantaneous, as well as interval / integrated (summed), as well as their derivatives (cf. Sobolev norms).

• Create sound or visual feedback based only on this time data.  I would say drive lighting, or some simple sonification.  The key is to do it yourself so you can rapidly change it yourself, and because it must be simple, not musical.  This should NOT be slaved to a single body's movement alone; the auditory feedback should be in concert with collective movement.

• Compute cross-correlation on multiple streams of time-series, and map the running values to time-based media as feedback (a minimal sketch follows this list).
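To make these tactics concrete, here is a minimal Python sketch; it is not Julian's actual Max/MSP / MnM / FTM patches, and all function names, window sizes, and the toy data are illustrative assumptions.  It keeps only onset times, samples the inter-onset interval on a shared clock, and computes a windowed, normalized cross-correlation whose running peak and lag could be mapped to light or a simple sonification.

import numpy as np

def intervals_on_clock(onsets, grid):
    """Ignore the dependent value v(t) entirely: keep onset times t_i only,
    and sample the inter-onset interval 'in force' at each tick of a shared clock."""
    t = np.asarray(sorted(onsets), dtype=float)
    idx = np.clip(np.searchsorted(t, grid, side='right'), 1, len(t) - 1)
    return t[idx] - t[idx - 1]

def running_xcorr(a, b, win=100, max_lag=20):
    """Naive windowed, normalized cross-correlation of two interval streams.
    Returns (best_lag, peak) per hop as a crude running co-movement indicator.
    (The circular shift via np.roll is a shortcut acceptable for short lags.)"""
    out = []
    for i in range(win, min(len(a), len(b)) + 1):
        wa = a[i - win:i] - a[i - win:i].mean()
        wb = b[i - win:i] - b[i - win:i].mean()
        denom = wa.std() * wb.std() * win + 1e-9
        corr = [float(np.dot(np.roll(wa, lag), wb)) / denom
                for lag in range(-max_lag, max_lag + 1)]
        k = int(np.argmax(corr))
        out.append((k - max_lag, corr[k]))
    return out

# Toy data: mover 2 follows mover 1's ~0.5 s tapping with ~80 ms lag and jitter.
rng = np.random.default_rng(0)
t1 = np.cumsum(0.5 + 0.03 * rng.standard_normal(200))
t2 = t1 + 0.08 + 0.01 * rng.standard_normal(200)
clock = np.arange(1.0, min(t1[-1], t2[-1]), 0.05)          # 50 ms hop
indicator = running_xcorr(intervals_on_clock(t1, clock),
                          intervals_on_clock(t2, clock))
# Map the running peak to light intensity or a simple sonification,
# and the lag to pan or pitch -- feedback driven by time-data alone.

Derivatives of the interval stream (another np.diff) slot into the same machinery for the Sobolev-style variants mentioned above.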

References

Helgi-Jon Schweizer, Innsbruck 
Karl Pribram, Stanford

and recent work:

[ PUT LITERATURE SURVEY HERE ]

EMAILS




On Thu, Dec 19, 2013 at 10:49 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
Hi Nikos & Julian,

Just to confirm that I'll come to the TML Friday to define with you a little research plan for the rhythm experiments:

How can two people moving in concert get a sense of each other's presence and dynamics using rhythm alone?

How can two people moving in concert anticipate each other's dynamics without superficial representations via sound- or light-image? 
However, it is important to understand in what context we are posing those questions.

Context will be two dancers, but as Adrian and I both observed -- we are interested in the challenge presented by much less over-determined situations in everyday activity.   But working with dancers in a variety of improvised movement would scaffold the initial approach.   (We need to be sensitive to how that biases the apparatus and our use of it.)

I would like to start with standard cross-correlation measures -- Doug or Nikos can locate them!

Cheers,
Xin Wei


__________________________________________________________________________________
Incoming Professor and Director • Arts, Media + Engineering • Herberger Institute for Design and the Arts / Director • Synthesis Center / ASU
Founding Director, Topological Media Lab (EV7.725) • topologicalmedialab.net/  •  skype: shaxinwei • +1-650-815-9962
__________________________________________________________________________________



On 2013-12-17, at 2:48 PM, Nikolaos Chandolias <nikos.chandolias@gmail.com> wrote:

Of particular interest are the questions raised: 
How can two people moving in concert get a sense of each other's presence and dynamics using rhythm alone? and 
How can two people moving in concert anticipate each other's dynamics without superficial representations via sound- or light-image? 
However, it is important to understand in what context we are posing those questions.

If we are talking about performance, creating rhythm and dynamics between two dancers could require an innate timing structure already in place, instinctual reactions between the two and what they are creating, and, yes, 'representations' through movement, sound, and breath that can create or inform this rhythmic 'pulse'... Thus, I believe there is space for this in the research that we are conducting in collaboration with Margaret and Doug. I also believe that this could be useful for Omar's experiment with the 'glove-space-sensor' [David Morris et al., Patricia Duquette and Zohar Kfir built the glove; Nina Bouchard could improve it?]

In any case, all of this will need to be tested to see how it works. We will be in the BlackBox from the 8th to the 23rd of January, where we could implement and play with the xOSC platform and conduct different 'lightweight' experiments that could afterwards continue in EV 7.725. xOSC could also prove useful for other people and future movement-based workshops with the TML.

I would like to propose that we talk in person tomorrow at the TML, Xin Wei, with whoever else wants to participate.

Best regards,
Nikos


On Mon, Dec 16, 2013 at 10:06 PM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
Hi Nikos, (Doug, Julian, et al…)

If you and/or colleagues are available to rapidly carry out lightweight & brief experiments along the lines proposed in the attached RTF as part of your research, then I'd be happy to sponsor a small test by authorizing Ozzie Kidane, AME's electronics engineer, to build a set of these xOSC wearables for you.  Lightweight means doable in EV 7.725 24x7, brief means one or two days per experiment.  Once you get this to work, I'd like to use this to establish presence across a distance, with no visual or auditory "image" of the other.


Ozzie made this wearable prototype out of the xOSC recently for Synthesis work.  Garth's used this in his class. 
I will ask Adrian if his + John's normal vector (Grassmannian) package works with this device.

Cheers,
Xin Wei

rhythm / correlation research using xOsc devices from ASU


Dear Julian, Nikos, Ozzie,

I'm addressing the people who I hope can actively help advance research on correlation-based indicators of "intentional," "coordinated" movement for studies of rhythm, ideally using mappings of intersubjective rather than ego-based data.  Julian has already put together MnM / FTM code for this, so things are well underway.

In prep for rhythm experiments at ASU in February… and following up with Ozzie's initiative at ASU.

Finally, after 6 years, a chance to inch forward on this low-hanging fruit and simple exercise!  (May it not Tantalize.)



Practical wearability question:

Some of the researchers at the TML have also designed wearables with an eye to comfort and visual design.  How flat can these enclosures be (and still do their job)?  Can we halve the thickness somehow by re-arrangement, so that the dancer can wear it, for example, on a flatter, concave part of the body?  Can a dancer roll over it comfortably and safely?

The Sensestage wireless sensor platform that Nikos has used is much cheaper -- about 1/10 the price, including the radio + software to map data into Max/MSP/Jitter.  One bottleneck is that each wireless device connects via Bluetooth to only one base computer at a time.  Another is probably a low sample frequency.  (We could not find numbers.)

The xOSC is pricey but technically better on key specs -- so the remaining question is how physically wearable it can be.

Experiment:

Can we entrust this discussion to the TML RAs, if available: Julian Stein + Nikos Chandolias?  Nikos is an MA student who is both an electrical engineer and a dancer by training.  Talk / confer with Adrian Freed @ CNMAT Berkeley, Dr. Doug van Nort @ TML, Ozzie, and Prof. Chris Ziegler @ AME.  Let's not second-guess or over-engineer -- please get sound deterministically coupled to co-movement as soon as possible -- CRUDE BUT PALPABLE is good.  Then you'll refine rapidly.

This can be linked to the theme of transition that Garth and I are both interested in as a motivation for this workshop.

This is subject to people's availability and the strategic needs of the TML and Synthesis Center.  But we needed to define the February workshop a month ago, so let's commit early in the new year ASAP.  Then you can organize housing...

Cheers,
Xin Wei

__________________________________________________________________________________
Canada Research Chair • Associate Professor • Design and Computation Arts • Concordia University
Director, Topological Media Lab (EV7.725) • topologicalmedialab.net/  •  skype: shaxinwei • +1-650-815-9962
__________________________________________________________________________________

Adrian Freed: barycenter, bunch

Andy Schmeder wrote a bunch of xyz.geometry objects. The new "o.expr" is powerful enough to do these efficiently, and John and I have been pondering what functions to add to the extensive list of basic math favorites. This would be nice to curate while John and I are on your coast. We are trying to schedule that before he takes the April break from Northeastern....

Speaking of Andy: he just told me about a tool that does 3D geometry parametrically but allows you to do convex hulls of objects, not just points. 
It is hard to find this sort of thing in a form plastic enough to build into real-time engines for what we want to do, but I think it is a valuable direction.

I want to project various envelopes around real-time 3D models of the space and holes of people's bodies, envelopes that are predictive, where the potential is layered over the actual: 
the visual, ghostly analog to a sonic pre-echo. 
I think this is what basketball players and soccer players (and of course dancers) do.  They interact with the ghosts of the past and future.
The present is too late and moving.....

[emphasis added]

On Jan 31, 2012, at 8:54 AM, Sha Xin Wei wrote:

It'd be a very useful learning exercise to clean up this code to calculate the degree of dispersion / clustering of a set of points.

<README_barycenter+bunch.rtf>

on to spin...
Xin Wei

barycenter.maxpat: computes the center of a set of points
scatter.maxpat: computes the degree of clustering or dispersion
tml.math.bunch: does the basic arithmetic, using zl
(A rough sketch of these computations follows the file lists below.)

MaxLibraries/TML/pro/lab/workshop/090115/
barycenter.maxpat
barycenter.xml
lights.maxpat
scatter.maxpat

MaxLibraries/TML/math/
tml.math.bunch.maxpat
tml.math.distance.maxpat
tml.math.range.maxpat
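For reference, here is a rough Python reading of what those two patches compute.  The originals are Max patches built on zl, so treat this only as a plausible sketch, with RMS distance from the barycenter as one reasonable choice of dispersion measure:

import numpy as np

def barycenter(points):
    """Center of a set of points (what barycenter.maxpat computes)."""
    return np.asarray(points, dtype=float).mean(axis=0)

def scatter(points):
    """Degree of clustering / dispersion: RMS distance of the points from
    their barycenter (one plausible reading of scatter.maxpat)."""
    p = np.asarray(points, dtype=float)
    return float(np.sqrt(((p - p.mean(axis=0)) ** 2).sum(axis=1).mean()))

# e.g. blob centroids from the overhead cameras, in normalized coordinates
blobs = [(0.2, 0.3), (0.25, 0.35), (0.8, 0.7)]
print(barycenter(blobs), scatter(blobs))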

TML next temporal textures + phenomenology seminar: Friday Jan 26, 16h00 - 18h00

Our next temporal textures + phenomenology seminar will be Fri 26 Jan 16h00 - 18h00 in the TML.
We decided to meet weekly since there's so much to work through.   See the temporal textures page for a snapshot of the research, and the temporal textures blog for a trace from 2010.

I believe the section of Merleau-Ponty's PP we agreed to read and discuss is

Part One: The Body / III. The Spatiality of One’s own Body [corps propre] and Motility

(Is that right?)

Here again is a helpful analytic index that Noah circulated.   (Thanks for the lucid orientations last week!)

If you couldn't make it Friday, please email Liza and me privately if you would like to stay on a "temporal textures" list for this term.   We may try to arrange a Wednesday evening discussion for another strand.  Some of us will be working quite intensely through February on some movement + video, lighting, and acoustics experiments in the TML.

Xin Wei


TT experiment: essences of movement and form from animated halogens and from sound in TML (later BB)

Temporal Textures TML folks:

One experiment that Liza and I talked about -- which may interest others too -- is the question of what kinds of room-memory (cf. Ed Casey, Merleau-Ponty) can be evoked as a person walks through the TML's halogens when they are animated according to the person's movement + some simple animation logics.  

First tests: 
Visual
Sense of relative movement -- the train-alongside effect
Dead elevator effect
Striping inducing movement (Montanaro effect)
Several of us -- Harry, Morgan, and I -- have written code to animate networks of LEDs.  For example, the TML/pro/esea/tml.pro.esea.control code animates a sequence of LEDs scattered in an arbitrary way through space (see the sketch after this list).  

Audio
state: dense - loose
tendency: compressing -- expanding (any sound the person makes)
Navid pointed out that with the much richer palette we now have, this quickly brings in compositional questions.  It'd be nice to see this also with whatever Tyr extracts from Il y a (soon), and with what Freida creates in the course Fear of Flight (after February tests).
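As one concrete instance of the "simple animation logics" above, here is a minimal Python sketch; the actual tml.pro.esea.control patch is in Max, and the LED positions, falloff radius, and Gaussian rule here are all illustrative assumptions.  Each LED's intensity falls off with its distance from the tracked person, so a pool of light follows the walker through an arbitrarily scattered array:

import math

# Illustrative LED positions, scattered arbitrarily through the room (metres).
LEDS = [(0.5, 1.0), (2.3, 0.8), (3.1, 2.6), (1.2, 3.4), (4.0, 1.5)]

def brightness(led_xy, person_xy, radius=1.5):
    """Brightness in [0, 1]: Gaussian falloff with distance from the walker."""
    dx = led_xy[0] - person_xy[0]
    dy = led_xy[1] - person_xy[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * radius * radius))

def frame(person_xy):
    """One animation frame: per-LED intensities to send to the dimmers."""
    return [brightness(p, person_xy) for p in LEDS]

# Update at each camera-tracking report of the person's position, e.g.:
print(frame((2.0, 1.0)))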

Note re. tracking in the TML: we cannot sense the person's orientation unless / until Freida or someone else mounts the CHR-UM6 Orientation Sensor with a radio mic module -- whoever does this should consult Elio Bidinost <ebidinost@swpspce.net>.  (I've talked with Elio -- he said just send him the link and the question.)  (Some folks have pointed to iPhone apps, but the iPhones are too big and overdetermined.)  However, we can use our overhead cameras to get a sense of where the person is / is headed.  We have several on the grid thanks to MF and Tyr.  Julien has hooked the Elmo up to his optical flow sampler.

For reference, this is a well-known series of works by artist Jim Campbell.
Portrait of Claude Shannon shows the effect most clearly.
White Circle is a large-scale installation.  It'd be more interesting to blow fog through such an array of lights and see images and movement appear and disappear as the fog thickens and thins. 

(Let me cc Spike as theatrical lighting design expert, if that's alright.)

On Jan 15, 2012, at 3:27 PM, Adrian Freed wrote:

(Xin Wei wrote) FYI, on the low end, MM and I are buying a cheap body-worn wireless analog videocam (on the order of a few grams in weight, a 1" cube) to try out mapping optical flow to Nav's instruments in the coming weeks.  I'd like to write some mappings from optical flow to feed Julian and Navid's gesture followers, as well as, more directly, Nav's sound instruments.  I wrote some MSP code in 2004 that worked in the fashion show "jewelry", and it can surely be made much more expressive!

You may want to know where the camera is in space -- a tricky problem, as you know, but this is the best affordable module to get the answer without losing people down the rabbit hole of Kalman filtering etc.

Why Buxton doesn't bother with computer music any more?

Carissimi TML People interested in new matter, temporal textures, movement+media,...

We have a conversation on "material computing" that's beginning to fill out with interesting references.  Thanks to Adrian for this one.

- Xin Wei

Begin forwarded message:

This system already had a lot of what people are still trying to get into systems today, including physical models, Arduino-like processors (6809), DSP processors (TMS3210), visual programming, etc.
http://www.billbuxton.com/katosizer.pdf

For TML agenda Wed 5-6: lighting experiments

(Thanks, Spike, for the board info, which will be useful for the future.)  Spike and I are on the same page about first steps: we can do well by starting as simply as possible and just wiring to what we have in-house -- dumb fixtures and iCue motorized mounts, and some LED components.  Instead of talking endlessly about sophisticated gear in the abstract, I'd like to see actual light modulation installed in our lab, running all the time, so we can hack it live, and so people other than technical experts can participate in the design and evaluation of our lighting modulation apparatus from the get-go.  (I am thinking in particular of Tristana Rubio, Liza Solomonova, David's students, Patrick & his students, Komal, and me :)

It's a challenge, but I want to intercalate tech development finely with live action studies, and minimize programming in the abstract.   Here are the motivating "games" that I want to build as soon as possible, as demos and reality-checks AKA Cruelty-checks (in the spirit of Artaud).

To be concrete, let me pose some feasible first steps.  Who'd like to join us in realizing some of these experiments this term?  Please invite / recommend someone who can work with us on the practical and elementary makings this month.  (Navid or Spike, can you invite Ted to contact me, cc Morgan, please?)

(1) Wire the camera-based tracking to
regular static fixtures via our dimmer (done),
the iCue motor -- to make a tracking spot,
some RGB fixture.

(2) Chase-spot game with kids (XW, M?) -- or we could map Navid's moving virtual sound sources to moving spots.  If video, then we could vary the color and texture according to sonic cues.

(3) Color spots mapped to blobs by rank.  Rank by size or speed.  Devise rules such as the following (see the sketch after this list):
3.1  Intersect => a third color, or
3.2  Same speed (even if different location) => blend colors,
3.3  Same curvature => blend colors.

(4) Map vegetal, solar, building (Tristana, Komal, or Patrick's students), and other non-human temporal patterns to params (i.e. color, intensity) of fixtures mounted behind pillars or plant boxes, or other architectural accents in the room EV 7.725.  I think we should map such slow data to state rather than to actual lighting parameters.  This will take a weekend of collective re-wiring, to be scheduled perhaps in collaboration with Zoe & Katie (Annex / PLSS2 plant project).
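A rough Python sketch of the rule logic in (3); the blob attributes, thresholds, palette, and the bounding-box test standing in for "intersect" are all placeholder assumptions, and in practice this would live in a Max patch feeding the dimmers:

from dataclasses import dataclass

@dataclass
class Blob:
    x: float
    y: float
    size: float
    speed: float
    curvature: float

PALETTE = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]      # one color per rank

def blend(c1, c2):
    """Average two RGB colors."""
    return tuple((a + b) // 2 for a, b in zip(c1, c2))

def colors_for(blobs, dist_tol=0.1, speed_tol=0.05, curv_tol=0.1):
    """Rank blobs by size, assign palette colors by rank, then apply rules 3.1-3.3."""
    ranked = sorted(blobs, key=lambda b: b.size, reverse=True)
    colors = {id(b): PALETTE[min(i, len(PALETTE) - 1)] for i, b in enumerate(ranked)}
    for i, a in enumerate(ranked):
        for b in ranked[i + 1:]:
            near = abs(a.x - b.x) < dist_tol and abs(a.y - b.y) < dist_tol
            if near:                                         # 3.1 intersect -> a third color
                colors[id(a)] = colors[id(b)] = PALETTE[2]
            elif abs(a.speed - b.speed) < speed_tol:         # 3.2 same speed -> blend colors
                colors[id(a)] = colors[id(b)] = blend(colors[id(a)], colors[id(b)])
            elif abs(a.curvature - b.curvature) < curv_tol:  # 3.3 same curvature -> blend colors
                colors[id(a)] = colors[id(b)] = blend(colors[id(a)], colors[id(b)])
    return [colors[id(b)] for b in blobs]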

Quality of light is not important at this stage -- we need to create an entire signal path first and interpolate computational modulation.  "Le mieux est l'ennemi du bien" (the best is the enemy of the good).

Xin Wei

FQRSC proposal, AIS essay Minor Architecture

Hi Harry, Patrick,

I've had the pleasure of chatting with each of you individually.   Shall we move things along by putting together some "lab" notes of experiences over the past year?
We can generate this in three different forms -- each of us has different strands of writing to do anyway...   Let me contribute by re-posting two pieces of writing.   

Instead of artificially making yet another bit of work, I propose to work with what we each have done or need to do anyway.   So for example:

Patrick's got a set of projects with his studio over the past year, whose documentation can serve as material to inspire the next phase.   Links to the project blogs would suffice.   We also talked about a Simondon essay that I'll be happy to look at soon. 

Harry's writing up some thoughts about the construction of apparatus, and the relation between apparatus and experiment, for the prospectus, which could neatly draw from and inform the various installation experiments.

I think at some point we talked about creating a project blog.   We already have two password-protected spaces, the TML private wiki and the Posterous blog, which you can re-format however you like.   They can be restricted as you like; to just the three of us is fine for a start.

Here's the narrative of our FQRSC temporal textures proposal


and here's the Minor Architecture essay for AIS 26.2


Onward toward our joint article(s), I hope!  I'm hoping that we can work toward publication, and toward scientific as well as EU support.  (The first milestone is Aug 15, for a Letter of Interest.)
Xin Wei

__________________________________________________________________________________
Canada Research Chair • Associate Professor • Design and Computation Arts • Concordia University
Director, Topological Media Lab • topologicalmedialab.net/  •  skype: shaxinwei • +1-650-815-9962
__________________________________________________________________________________