Rhythm Studies during TML residency at Synthesis workshop Feb 15 - March 9, 2013

Rhythm Studies
TML and Synthesis
Sha Xin Wei, Dec 2013

Temporal Textures Research Notes: http://textures.posthaven.com


Correlation and Entrainment Questions

Now that Julian Stein has crafted a very tidy package for dealing with rhythm (millisecond resolution, up to scales of minutes and hours), I would like to explore these questions:

• How can two people moving in concert get a sense of each other's presence and dynamics using rhythm alone? 

• How can two people moving in concert anticipate each other's dynamics without superficial representations via sound- or light-image? 

The model for this is Tinikling (a Philippine game): a pair of kids hold two long sticks parallel between them.  As they rhythmically bring the sticks together or apart, a third kid jumps in between.  Often they chant as they bang the sticks together.  The jumper syncs by singing prior to jumping in.  This is a core model for dealing with latency, something we've been poised to do for ten years but have always swerved away from, pulled off by distractions.

PRACTICAL GOAL

• Look for correlative indicators of "intentional", "coordinated" movement, ideally using mappings of intersubjective rather than ego-based data.

• Prepare software / hardware for rhythm experiments at ASU Phoenix in February 2014.

KEY TACTICS

• For data of type (t, v) with values v(t) as a function of time, deliberately ignore the dependent value (colour, acceleration, or whatever sensor data you would ordinarily be attending to).  Instead work only with the onsets t_i and the intervals ∆t_i between them (see the sketches after this list).

• AVOID being seduced into dealing with the "dependent" data v -- whether incident sound, light, pressure or what have you.  The point is to focus on time-data: onsets (zero-crossings), intervals, etc. -- instantaneous quantities as well as interval / integrated (summed) ones, and their derivatives (cf. Sobolev norms).

• Create sound or visual feedback based only on this time data.  I would say drive lighting, or some simple sonification.  The key is to do it yourself so you can rapidly change it yourself, and because it must be simple, not musical.  The feedback should NOT be slaved to only one body's movement; it should be auditory feedback in concert with collective movement.

• Compute cross-correlations on multiple streams of time-series, and map the running values to time-based media as feedback (a minimal sketch follows below).
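
As a concrete starting point for the onset/interval tactic, here is a minimal Python sketch. The function names and the threshold-crossing onset detector are illustrative assumptions (the notes do not prescribe an implementation, and Julian's package may detect onsets differently); it only assumes each sensor stream arrives as sample times t and values v, and it discards the values once the onsets are found.

```python
import numpy as np

def onsets_from_signal(t, v, threshold=0.0):
    """Onset times t_i where the value stream crosses the threshold upward.

    t, v : 1-D arrays of sample times (seconds) and sensor values.
    The dependent value v is discarded after this step; only the
    onset times are kept, per the tactic above.
    """
    t = np.asarray(t, dtype=float)
    v = np.asarray(v, dtype=float)
    crossings = (v[:-1] <= threshold) & (v[1:] > threshold)
    return t[1:][crossings]

def inter_onset_intervals(onsets):
    """Intervals Delta-t_i between successive onsets."""
    return np.diff(np.asarray(onsets, dtype=float))

def interval_derivative(iois):
    """First difference of the intervals: a crude 'derivative' of the rhythm."""
    return np.diff(iois)
```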
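
For the cross-correlation tactic, a companion sketch (again an assumption-laden illustration, not the planned implementation): two movers' onset streams are reduced to inter-onset intervals, step-resampled onto a shared control-rate grid, and compared with a windowed, lag-zero normalized correlation whose running value is rescaled to [0, 1] so it could drive a light level or a simple sonification parameter. The 20 Hz control rate and 2-second window are arbitrary choices for illustration.

```python
import numpy as np

def resample_intervals(onsets, grid):
    """Step-interpolate the current inter-onset interval onto a fixed time grid,
    so two asynchronous onset streams can be compared sample by sample."""
    iois = np.diff(onsets)
    # at grid time g, use the most recently completed interval
    idx = np.searchsorted(onsets[1:], grid, side="right") - 1
    return iois[np.clip(idx, 0, len(iois) - 1)]

def running_correlation(x, y, win):
    """Windowed, lag-zero normalized cross-correlation of two equal-length series."""
    out = np.full(len(x), np.nan)          # NaN until a full window is available
    for i in range(win, len(x) + 1):
        a, b = x[i - win:i], y[i - win:i]
        if a.std() > 0 and b.std() > 0:
            out[i - 1] = np.corrcoef(a, b)[0, 1]
    return out

# Synthetic example: two tappers near 120 BPM on a shared clock
t_a = np.cumsum(0.50 + 0.02 * np.random.randn(60))
t_b = np.cumsum(0.50 + 0.03 * np.random.randn(60))

grid = np.arange(0.0, min(t_a[-1], t_b[-1]), 0.05)   # 20 Hz control rate
x = resample_intervals(t_a, grid)
y = resample_intervals(t_b, grid)

corr = running_correlation(x, y, win=40)             # ~2-second window
feedback = (corr + 1.0) / 2.0   # map [-1, 1] to [0, 1] for a light/sound parameter
```

Computing the running value at a fixed control rate, rather than per onset, keeps the feedback continuous even when one mover pauses.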

References

Helgi-Jon Schweizer, Innsbruck 
Karl Pribram, Stanford

and recent work:

[ PUT LITERATURE SURVEY HERE ]

EMAILS




On Thu, Dec 19, 2013 at 10:49 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
Hi Nikos & Julian,

Just to confirm that I'll come to the TML Friday to define with you a little research plan for the rhythm experiments.

How can two people moving in concert get a sense of each other's presence and dynamics using rhythm alone?

How can two people moving in concert anticipate each other's dynamics without superficial representations via sound- or light-image?

However it is important to understand in what context we are posing those questions.

Context will be two dancers, but as Adrian and I both observed, we are interested in the challenge presented by much less over-determined situations in everyday activity.  But working with dancers in a variety of improvised movement would scaffold the initial approach.  (We need to be sensitive to how that biases the apparatus and our use of it.)

I would like to start with standard cross-correlation measures -- Doug or Nikos can locate them!

Cheers,
Xin Wei


__________________________________________________________________________________
Incoming Professor and Director • Arts, Media + Engineering • Herberger Institute for Design and the Arts / Director • Synthesis Center / ASU
Founding Director, Topological Media Lab (EV7.725) • topologicalmedialab.net/  •  skype: shaxinwei • +1-650-815-9962
__________________________________________________________________________________



On 2013-12-17, at 2:48 PM, Nikolaos Chandolias <nikos.chandolias@gmail.com> wrote:

Of particular interest are the questions raised: 
How can two people moving in concert get a sense of each other's presence and dynamics using rhythm alone? and 
How can two people moving in concert anticipate each other's dynamics without superficial representations via sound- or light-image? 
However, it is important to understand in what context we are posing those questions.

If we are talking about performance, creating rhythm and dynamics between two dancers could require an innate timing structure already in place, instinctual reactions between the two and what they are creating, and, yes, 'representations' through movement, sound, or breath that can create or inform this rhythmic 'pulse'... Thus, I believe that there is space for this in the research we are conducting in collaboration with Margaret and Doug. I also believe that this could be useful for Omar's experiment with the 'glove-space-sensor' [David Morris et al., Patricia Duquette and Zohar Kfir built the glove; Nina Bouchard could improve it?]

In any case, all of this will need to be tested to see how it works. We will be in the BlackBox from the 8th to the 23rd of January, where we could implement and play with the xOSC platform and conduct different 'lightweight' experiments that could afterwards continue in EV7.725. xOSC could also prove useful for other people and for future movement-based workshops with the TML.

I would like to propose that we talk in person tomorrow, Xin Wei, at the TML, with whoever else wants to participate.

Best regards,
Nikos


On Mon, Dec 16, 2013 at 10:06 PM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
Hi Nikos, (Doug, Julian, et al…)

If you and/or colleagues are available to rapidly carry out lightweight & brief experiments along the lines proposed in the attached RTF as part of your research, then I'd be happy to sponsor a small test by authorizing Ozzie Kidane, AME's electronics engineer, to build a set of these xOSC wearables for you.  Lightweight means doable in EV 7.725 24x7; brief means one or two days per experiment.  Once you get this to work, I'd like to use this to establish presence across a distance, with no visual or auditory "image" of the other.


Ozzie recently made this wearable prototype out of the xOSC for Synthesis work.  Garth has used it in his class. 
I will ask Adrian if his + John's normal vector (Grassmannian) package works with this device.

Cheers,
Xin Wei