Navid Navab: Re: correlation is a vast and nebulous space


I agree with Doug’s caution about the problem of ignoring the “dependent” variables — values f[t] — and paying attention only to “zero”-crossings. As Adrian would point out as well, this already encodes many assumptions about what counts as a significant event. For example, that’s the basic problem with the “pluck” detector that Navid has coded and used.

This is a big reduction of how I use plucks and triggers to ornament continuous events. In my first rough draft of GestureBending Principles (found here: http://gesturebending.weebly.com/principles-of-gesture-bending.html) I clearly state that triggers and other event and onset detectors are used solely to modulate continuous data, with the goal of ornamenting their perceived formal structures, which are driven continuously and often by the trigger's dependent variable.

What is being referred to here as "pluck" is, I believe, our trigger detector with hysteresis and debounce... Out of context this is just a very simple element that people in our lab and elsewhere have put to different uses. Miller's bonk~ (onset detector) partially uses it, as do Vangelis's triggers, among others. We have in the recent past used this data to detect onsets and feed the onset times into our rhythm kit. Contextually meaningful modal bracketing starts from thoughtful feature extraction, and the complementary rhythm kit provides a method for viewing, analyzing and manipulating the detected event onsets.
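For concreteness, a trigger detector with hysteresis and debounce of the kind described here can be sketched as follows. This is a plain-Python illustration, not the lab's actual Max/MSP implementation; the threshold and debounce values are made up for the example:

```python
def detect_triggers(samples, t_step, on_thresh=0.6, off_thresh=0.4, debounce=0.05):
    """Return onset times from a stream of normalized sensor values.

    Hysteresis: a trigger fires when the signal rises above on_thresh,
    and re-arms only after it falls back below off_thresh.
    Debounce: onsets closer together than `debounce` seconds are ignored.
    All numeric values here are illustrative assumptions.
    """
    onsets = []
    armed = True
    last_onset = -float("inf")
    for i, v in enumerate(samples):
        t = i * t_step
        if armed and v > on_thresh and (t - last_onset) >= debounce:
            onsets.append(t)       # rising edge crossed the upper threshold
            last_onset = t
            armed = False
        elif not armed and v < off_thresh:
            armed = True           # signal dropped below the lower threshold
    return onsets
```

The onset times this produces are exactly what would be fed into the rhythm kit for interval analysis.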

Quantum Experiment Shows How Time ‘Emerges’ from Entanglement

As I’ve been saying: this is more rhetorical fuel for why I want the Einstein’s Dream as well as the Improvisational Environments workshop to host apparatus that tries to avoid “absolute” time-indexes.

Quantum Experiment Shows How Time ‘Emerges’ from Entanglement

Time is an emergent phenomenon that is a side effect of quantum entanglement, say physicists. And they have the first experimental results to prove it



https://medium.com/the-physics-arxiv-blog/d5d3dc850933

Re: correlation is a vast and nebulous space || was Re: Intel Announces Edison and a Wearables Contest

I agree with Doug’s caution about the problem of ignoring the “dependent” variables — values f[t] — and paying attention only to “zero”-crossings. As Adrian would point out as well, this already encodes many assumptions about what counts as a significant event. For example, that’s the basic problem with the “pluck” detector that Navid has coded and used.

(More precisely, for a fixed y, the intervals in the inverse image of y under f: f^(-1)[y], assuming f is C^0.)

But I have a fundamental reason, which is to deliberately lever us away from mono-sense-modality-ness. It’s a very crude but hopefully effective method to get us to pay attention to the phenomenology of temporality.

Keeping in mind the modal bracketing that’s being performed by looking at intervals as Julian’s kit provides.

There are more sophisticated approaches — as Pavan pointed out in an AME seminar last month: well known from signal processing 101 as passing to the frequency domain. That raises other fundamental issues when the signal cannot be assumed to have a significant periodic component.
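The frequency-domain move can be illustrated minimally as taking the peak of the magnitude spectrum; the caveat above is exactly that this peak is only meaningful when the signal really does have a periodic component. A numpy sketch:

```python
import numpy as np

def dominant_frequency(signal, rate):
    """Return the frequency (Hz) of the largest magnitude-spectrum peak.

    Meaningful only for signals with a genuine periodic component;
    for aperiodic movement data the 'peak' is essentially arbitrary.
    """
    mag = np.abs(np.fft.rfft(signal - np.mean(signal)))  # remove DC, take spectrum
    return float(np.fft.rfftfreq(len(signal), 1.0 / rate)[np.argmax(mag)])
```

For a clean oscillation this recovers the rate of the movement; for free, unconstrained movement it will still return a number, which is the trap.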

And so it goes. Meanwhile I say, let’s get crude and palpably relevant experiments working first, palaver later! Xin Wei

Doug van Nort: correlation is a vast and nebulous space

I am interested in how this study progresses, and must admit I'm more into gestural/temporal analysis than the hardware side of things (though it is nice to keep abreast of the latest developments…).  Especially ideas that could also be applied to music/movement coordinations.

Definitely agreed that the simple approach of onsets/intervals is the way to go with this one. I just wanted to note that, in such unconstrained movement situations, the continuous signal is still an important friend when one cares about defining segments or onsets. In Max/MSP parlance, bonk~ is right for some situations and not for others. Conversely, continuous cross-correlation can be an excellent tool for finding onsets and the lag between coordinated onset actions, or a misleading one, depending on the ensemble of signals. Really it should be tried, though, along with the inverse: discrete correlation of pre-extracted onset data. Weighting, warping and normalization of the correlation function can be applied as the situation receives more constraints from the movement context (e.g. compare the YIN algorithm to standard autocorrelation in the case of pitch detection).

All this to say, allowing a pre-defined package to uncritically handle the layer between continuous input and output onset data could be another type of over-determination of the problem : )
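Doug's suggestion of continuous cross-correlation for finding the lag between coordinated onset actions can be sketched like this. It is a toy numpy sketch under stated assumptions: real sensor envelopes would want filtering and normalization first, and, per the caveat above, the peak can mislead when the signals are not actually coordinated:

```python
import numpy as np

def lag_between(a, b, rate):
    """Estimate how many seconds stream b trails stream a, via the peak
    of the full cross-correlation (positive result: b lags a)."""
    a = np.asarray(a, float) - np.mean(a)    # remove DC offset
    b = np.asarray(b, float) - np.mean(b)
    xc = np.correlate(a, b, mode="full")     # shifts -(len(b)-1) .. len(a)-1
    return ((len(b) - 1) - int(np.argmax(xc))) / rate
```

The inverse approach (discrete correlation of pre-extracted onset data) would run the same computation on binned onset trains instead of the raw envelopes.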

best,

Doug

(1) rhythm correlation and (2) state transition experiments for TML-Synthesis improvisatory environments residency Feb 15 - March 8 @ ASU

Hi Julian, Katie, (and Nina):

Yes, I need to define the experiments in consultation with the experienced folks.  See http://improvisationalenvironments.weebly.com 

Assuming that we can clear her paperwork with ASU, Katie will do logistics, scheduling.

My colleague Prof. Chris Ziegler, an expert in the domain of movement and technology from ZKM,
graciously agreed to be the Faculty point person for this workshop on the AME side.
So, soon I would like to hand this sort of communication over to Chris.

Then I would like to step back and focus on the design and execution of
phenomenological and scientific experiments on 
(1) rhythm / temporal textures
and 
(2) state transitions
which are my main foci during this workshop. 
(1) rhythm / temporal textures involves working with Julian, Nikos, Ozzie, and talking with Adrian and Doug. Re-read http://textures.posthaven.com especially.
(2) state transitions involves working with Navid, Julian, Evan, and talking with Garth. Re-read the Ozone paper (ACM Multimedia 2010).

Julian, Katie (and Navid, Chris and Nina), I’ve invited you as Admins on weebly.com so you can edit.

I have transferred info from the following text into the website

Cheers,
Xin Wei
__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts / Director • Synthesis Center / ASU
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

Fwd: Intel Announces Edison and a Wearables Contest

Hi Ozzie, (...Julian, Nikos, Adrian),

xOSC — correlation, rhythm experiments 

Yes, please go ahead and make two sets of wearables based on the xOSC device. As discussed, we can lay the battery and the board flat, add a switch as you recommended, with a bit of curvature & rounding in the housing. If they are as flat as possible, then we can strap them with store-bought sport cuffs to the arm or leg. One we’ll give to TML, one to AME/Synthesis. (Who will be the grad student here @ AME to work in parallel with Nikos, Julian, and Doug on the correlation tests? See Adrian’s suggestions recorded on http://textures.posthaven.com.)

I would like to suggest leaving the capacity to do the following : to attach a small number (up to 12, 24?) of photocells.   Someday I’d like to be able to wire dispersed, isolated photocells into a shirt / skirt / pants and find clever ways to interpret the non-photographic time-varying data from those dispersed light-sensing points.  (Hence I add Pavan to this thread.)


INTEL wearable

As for other devices, I’d like to get us on the gravy train for some Intel gear such as the Galileo or Edison board. I will do so as soon as I can turn attention back to cultivating Intel. Please advise me on what to ask for, and why, so we can brainstorm on whether we (TML + Synthesis, CNMAT?) want to go for that Intel Wearables contest.

Perhaps some video documenting a dancer controlling DMX lights via on-the-body inertial sensor would be nice to have ready to hand.

Cheers,
Xin Wei

__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts / Director • Synthesis Center / ASU
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________




On Dec 16, 2013, at 2:51 PM, Assegid Kidane <Assegid.Kidane@asu.edu> wrote:

Here are some pictures and video with xOsc device. We could just as easily integrate it with Evan's video patches to add other sensor effects. I am waiting for your Ok to order the other 2.

From: Assegid Kidane
Sent: Sunday, December 15, 2013 9:57 PM
To: Sha Xin Wei
Subject: RE: xOsc devices

Hi Xin Wei,

I think we should get 2 for TML as they are useful as a quick way to add IMU data or data from any external sensor to an interactive installation or Responsive Environment. The cost is about $620 for 2 sets which includes a LIPoly battery and USB charger. I will then make the housing ready here. Let me know if I should wait until I hear from you to review the pictures I will send to you.

Ozzie

From: Sha Xin Wei [shaxinwei@gmail.com]
Sent: Sunday, December 15, 2013 7:12 AM
To: Assegid Kidane
Subject: Re: xOsc devices

Dear Ozzie,

Yes, whenever you can, please send some pictures.  I will share them with the researchers in the Topological Media Lab who wanted to try out some of those devices for movement + media experiments.   Maybe we can investigate the costs for making say 4 sets, in order to have two pairs one in Montreal, one in Tempe, to test co-movement…   

Thank you very much for this initiative,
Xin Wei

__________________________________________________________________________________
Canada Research Chair • Associate Professor • Design and Computation Arts • Concordia University
Director, Topological Media Lab (EV7.725) • topologicalmedialab.net/  •  skype: shaxinwei • +1-650-815-9962
__________________________________________________________________________________









On 2013-12-14, at 9:12 AM, Assegid Kidane <Assegid.Kidane@asu.edu> wrote:

Actually, I am not. I can send you a few pictures on Monday, will that work?


Best Regards,
Assegid Kidané


-------- Original message --------
From: Sha Xin Wei 
Date:12/14/2013 8:59 AM (GMT-07:00) 
To: Assegid Kidane 
Subject: Re: xOsc devices 

Good morning Ozzie, 
Are you anywhere close to campus?  My plane leaves at 1 PM…
Regards,
Xin Wei


On 2013-12-14, at 8:45 AM, Assegid Kidane <Assegid.Kidane@asu.edu> wrote:

That was one of the goals of the enclosure design. I have added a foot long hook and loop to strap it easily to the arm  or legs. For larger parts of the body it is a matter of using a readily available longer hook and loop. We tested it to control  the lights yesterday. Just as easy to integrate it in one of Evan's video patches.


Best Regards,
Assegid Kidané


-------- Original message --------
From: Sha Xin Wei 
Date:12/14/2013 5:00 AM (GMT-07:00) 
To: Assegid Kidane 
Subject: Re: xOsc devices 

Hi Ozzie,

I would love to see what you have built.   How wearable is it?  We want it to be comfortable worn on the body for very vigorous dance, athletics.

Cheers,
Xin Wei

__________________________________________________________________________________
Canada Research Chair • Associate Professor • Design and Computation Arts • Concordia University
Director, Topological Media Lab (EV7.725) • topologicalmedialab.net/  •  skype: shaxinwei • +1-650-815-9962
__________________________________________________________________________________









Begin forwarded message:

From: "Somoza, John A" <john.a.somoza@intel.com>
Subject: Intel Announces Edison and a Wearables Contest
Date: January 7, 2014 at 4:14:58 PM MST

Some exciting news!!

 

Intel announced a new product for wearables yesterday at CES. The Intel Edison board features a low-power 22nm 400MHz Intel® Quark processor with two cores, integrated Wi-Fi and Bluetooth*, and much more.

Intel also announced a Wearables contest that, as more details emerge, I am optimistic this network of design schools will be able to capitalize on.
__________________________

John Somoza, Program Manager

University Program Office

Intel Corporation

Hillsboro, Oregon USA

Cell: 971-998-8490

 

Adrian Freed Re: rhythm / correlation research using xOsc devices from ASU

On 2013-12-28, at 7:03 PM, <adrian@adrianfreed.com> wrote:


Some of the researchers at the TML have also designed wearables with an eye to comfort and visual design. How flat can these enclosures be (and still do their job)? Can we halve the thickness somehow by re-arrangement, so that the dancer can wear this, for example, on a flatter, concave part of her/his body? Can a dancer roll over it comfortably and safely?

The Sensestage wireless sensor platform that Nikos has used is much cheaper -- about 1/10 the price, including radio + software to map data to Max/MSP/Jitter. One bottleneck is that each wireless device connects via Bluetooth to only one base computer at a time. Another is probably low sample frequency. (We could not find numbers.)

I don't know what you are measuring when you say 1/10 of the price. Are you including any labor costs -- all the setup time the Canadian government is paying for to configure the radios etc.? Zigbee radios are paired to a master radio, which has to be accounted for. Also, the old Sensestage didn't have a full 9DOF IMU -- just an unpopulated accelerometer. These devices are significantly different enough that I think we may be in apples/oranges territory.

Try a setup where you have them both connected to the same object.

Flatness of fit? I have found some smooth boxes to put things in: avoid stacking battery and device. Put them side by side in narrow boxes with a good hinged connection between. Silicone wire used by RC hobbyists is the magic material you need for strain relief and to deliver enough power. x-OSC gets hot due to its higher-power radio (and longer distances).


The xOSC is pricey but technically better on key specs -- so the remaining question is how physically wearable can it be….

The key thing for me is that you know when each measurement was made. x-OSC has accurate time tags, and the firmware is starting to use this on various input sources (output control will be done next year). x-OSC is actually using an uncalibrated medium-grade 9DOF IMU. Vangelis can fill you in on the sources and prices of calibrated systems. Extra-credit homework is to self-calibrate based on the correlations rather than use conventional "calibrate-to-a-reference" scientific and engineering practice.

Experiment:

Can we entrust this discussion to TML RAs, if available: Julian Stein + Nikos Chandolias? (Nikos is an MA student who is both an electrical engineer and a dancer by training.) Talk / confer with Adrian Freed @ CNMAT Berkeley, Dr. Doug van Nort @ TML, Ozzie and Prof. Chris Ziegler @ AME. Let's not second-guess or over-engineer -- please get sound deterministically coupled to co-movement happening as soon as possible -- CRUDE BUT PALPABLE is good. Then you'll refine rapidly.

Agreed. Sound is a great medium for this work and also for detecting problems in the timing and resolution of the sensing. Map parameters whose noise and dynamic range you are worried about to PITCH. Match timing to short, percussive HITS. Have the performer wear the sound output, otherwise you are modulating the delay structure with the movement in the room. Sound travels at about 1 ms/foot.
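Adrian's "map it to PITCH" advice can be as simple as a linear mapping from a sensor parameter's range onto a pitch range, so the parameter's noise shows up as audible pitch jitter. A minimal sketch; the MIDI note range is an arbitrary illustrative choice:

```python
def to_pitch(value, lo, hi, midi_lo=48, midi_hi=84):
    """Map a sensor value in [lo, hi] linearly onto a MIDI pitch range,
    clamping out-of-range input."""
    x = (min(max(value, lo), hi) - lo) / (hi - lo)  # normalize to 0..1
    return midi_lo + x * (midi_hi - midi_lo)

def midi_to_hz(m):
    """Equal-tempered conversion, A4 = 440 Hz."""
    return 440.0 * 2.0 ** ((m - 69) / 12.0)
```

Driving a simple oscillator with `midi_to_hz(to_pitch(...))` makes sensor dropouts and quantization immediately audible.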

Rhythm Studies during TML residency at Synthesis workshop Feb 15 - March 9, 2014

Rhythm Studies
TML and Synthesis
Sha Xin Wei, Dec 2013

Temporal Textures Research Notes: http://textures.posthaven.com


Correlation and Entrainment Questions

Now that Julian Stein has crafted a very tidy package for dealing with rhythm (msec resolution, up to minutes and hours scales) I would like to explore these questions:

• How can two people moving in concert get a sense of each other's presence and dynamics using rhythm alone? 

• How can two people moving in concert anticipate each other's dynamics without superficial representations via sound- or light-image? 

The model for this is Tinikling (a Philippine game): a pair of kids holds two long sticks parallel between them. As they rhythmically bring the sticks together or apart, a third kid jumps in between. Often they chant as they bang the sticks together. The jumper syncs by singing prior to jumping in. This is a core model for dealing with latency, which we've been poised to do for 10 years but have always swerved aside, distracted.

PRACTICAL GOAL

• Look for correlative indicators of "intentional", "coordinated" movement, ideally using mapping of intersubjective, not ego-based data.

• Prepare software / hardware for rhythm experiments at ASU Phoenix in February 2014.

KEY TACTICS

• For data of type (t, v), with values v(t) as a function of time, deliberately ignore the dependent value (colour, accelerometer, or whatever sensor data you would ordinarily attend to). Instead work only with onsets t_i and intervals ∆t_i.

• AVOID being seduced by dealing with the "dependent" data v -- whether incident sound, light, pressure or what have you.   The point of this is to focus on time-data: onsets (zero-crossings), or intervals, etc. -- instantaneous, as well as interval / integrated (summed), as well as their derivatives (i.e. Sobolev norms).

• Create sound or visual feedback based only on this time data. I would say drive lighting, or some simple sonification. The key is to do it yourself so you can rapidly change it yourself, and because it must be simple, not musical. This should NOT be slaved only to one body's movement, but should be auditory feedback in concert with collective movement.

• Compute cross-correlation on multiple streams of time-series, and map the running values to time-based media as feedback.
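The last tactic above can be sketched as a sliding-window correlation of two interval streams, whose running value is the thing mapped to time-based media. This is a pure-Python illustration; the window size and the zero-lag restriction are assumptions, not prescriptions:

```python
import math

def running_correlation(xs, ys, window=8):
    """Running Pearson correlation over a sliding window of two
    interval streams (e.g. inter-onset intervals from two dancers).

    Yields one value in [-1, 1] per step once the window is full;
    that value could drive lamp brightness or a sonification parameter.
    """
    out = []
    for i in range(window, min(len(xs), len(ys)) + 1):
        wx, wy = xs[i - window:i], ys[i - window:i]
        mx, my = sum(wx) / window, sum(wy) / window
        cov = sum((x - mx) * (y - my) for x, y in zip(wx, wy))
        vx = math.sqrt(sum((x - mx) ** 2 for x in wx))
        vy = math.sqrt(sum((y - my) ** 2 for y in wy))
        # guard against constant windows (zero variance)
        out.append(cov / (vx * vy) if vx and vy else 0.0)
    return out
```

Note that this deliberately uses only the intervals, never the sensors' dependent values, per the tactics above.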

References

Helgi-Jon Schweizer, Innsbruck 
Karl Pribram, Stanford

and recent work:

[ PUT LITERATURE SURVEY HERE ]

EMAILS




On Thu, Dec 19, 2013 at 10:49 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
Hi Nikos & Julian,

Just to confirm that I'll come to the TML Friday to define with you a little research plan for the rhythm experiments

How can two people moving in concert get a sense of each other's presence and dynamics using rhythm alone?

 

How can two people moving in concert anticipate each other's dynamics without superficial representations via sound- or light-image? 
However it is important to understand in what context are we posing those questions.

Context will be two dancers, but as Adrian and I both observed -- we are interested in the challenge presented by much less over-determined situations in everyday activity.   But working with dancers in a variety of improvised movement would scaffold the initial approach.   (We need to be sensitive to how that biases the apparatus and our use of it.)

I would like to start with standard cross-correlation measures -- Doug or Nikos can locate them!

Cheers,
Xin Wei


__________________________________________________________________________________
Incoming Professor and Director • Arts, Media + Engineering • Herberger Institute for Design and the Arts / Director • Synthesis Center / ASU
Founding Director, Topological Media Lab (EV7.725) • topologicalmedialab.net/  •  skype: shaxinwei • +1-650-815-9962
__________________________________________________________________________________



On 2013-12-17, at 2:48 PM, Nikolaos Chandolias <nikos.chandolias@gmail.com>
 wrote:

Of particular interest are the questions raised: 
How can two people moving in concert get a sense of each other's presence and dynamics using rhythm alone? and 
How can two people moving in concert anticipate each other's dynamics without superficial representations via sound- or light-image? 
However it is important to understand in what context are we posing those questions.

If we are talking about performance, creating rhythm and dynamics between two dancers could require an innate timing structure already in place, instinctual reactions between the two and what they are creating, and yes, ‘representations’ through movement and sound-breath that can create or inform this rhythmic 'pulse'... Thus, I believe that there is space for this in the research that we are conducting in collaboration with Margaret and Doug. I also believe that this could be useful for Omar’s experiment with the 'glove-space-sensor' [David Morris et al., Patricia Duquette and Zohar Kfir built the glove; Nina Bouchard could improve it?].

In any case, all of this will need to be tested to see how it works. We will be in the BlackBox from the 8th to the 23rd of January, where we could implement and play with the xOSC platform and conduct different 'lightweight' experiments that could afterwards continue in EV7.725. xOSC could also prove useful for other people and for future movement-based workshops with the TML.

I would like to propose that we talk in person tomorrow at the TML, Xin Wei, with whoever else wants to participate.

Best regards,
Nikos


On Mon, Dec 16, 2013 at 10:06 PM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
Hi Nikos, (Doug, Julian, et al…)

If you and/or colleagues are available to rapidly carry out lightweight & brief experiments along the lines proposed in the attached RTF as part of your research, then I'd be happy to sponsor a small test by authorizing Ozzie Kidane, AME's electronics engineer, to build a set of these xOSC wearables for you.  Lightweight means doable in EV 7.725 24x7, brief means one or two days per experiment.  Once you get this to work, I'd like to use this to establish presence across a distance, with no visual or auditory "image" of the other.


Ozzie made this wearable prototype out of the xOSC recently for Synthesis work.  Garth's used this in his class. 
I will ask Adrian if his + John's normal vector (Grassmannian) package works with this device.

Cheers,
Xin Wei

rhythm / correlation research using xOsc devices from ASU


Dear Julian, Nikos, Ozzie,

I'm addressing the people who I hope can actively help advance research on correlation-based indicators of "intentional", "coordinated" movement for studies of rhythm, ideally using mapping of intersubjective, not ego-based data. Julian has already put together MnM / FTM code for this, so things are well underway.

In prep for rhythm experiments at ASU in February… and following up with Ozzie's initiative at ASU.

Finally, after 6 years, a chance to inch forward on this low-hanging fruit and simple exercise!  (May it not Tantalize.)



Practical wearability question:

Some of the researchers at the TML have also designed wearables with an eye to comfort and visual design. How flat can these enclosures be (and still do their job)? Can we halve the thickness somehow by re-arrangement, so that the dancer can wear this, for example, on a flatter, concave part of her/his body? Can a dancer roll over it comfortably and safely?

The Sensestage wireless sensor platform that Nikos has used is much cheaper -- about 1/10 the price, including radio + software to map data to Max/MSP/Jitter. One bottleneck is that each wireless device connects via Bluetooth to only one base computer at a time. Another is probably low sample frequency. (We could not find numbers.)

The xOSC is pricey but technically better on key specs -- so the remaining question is how physically wearable can it be….

Experiment:

Can we entrust this discussion to TML RAs, if available: Julian Stein + Nikos Chandolias? (Nikos is an MA student who is both an electrical engineer and a dancer by training.) Talk / confer with Adrian Freed @ CNMAT Berkeley, Dr. Doug van Nort @ TML, Ozzie and Prof. Chris Ziegler @ AME. Let's not second-guess or over-engineer -- please get sound deterministically coupled to co-movement happening as soon as possible -- CRUDE BUT PALPABLE is good. Then you'll refine rapidly.

This can be linked to the theme of transition that Garth and I are both interested in as a motivation for this workshop.

This is subject to people's availability and the strategic needs of the TML and Synthesis Center. But we needed to define the February workshop a month ago, so let's commit in the new year ASAP. Then you can organize housing...

Cheers,
Xin Wei

__________________________________________________________________________________
Canada Research Chair • Associate Professor • Design and Computation Arts • Concordia University
Director, Topological Media Lab (EV7.725) • topologicalmedialab.net/  •  skype: shaxinwei • +1-650-815-9962
__________________________________________________________________________________

a research purpose for xOsc devices from ASU ?

Hi Nikos, (Doug, Julian, et al…)

If you and/or colleagues are available to rapidly carry out lightweight & brief experiments along the lines proposed in the attached RTF as part of your research, then I'd be happy to sponsor a small test by authorizing Ozzie Kidane, AME's electronics engineer, to build a set of these xOSC wearables for you.  Lightweight means doable in EV 7.725 24x7, brief means one or two days per experiment.  Once you get this to work, I'd like to use this to establish presence across a distance, with no visual or auditory "image" of the other.

Ozzie made this wearable prototype out of the xOSC recently for Synthesis work.  Garth's used this in his class. 
I will ask Adrian if his + John's normal vector (Grassmannian) package works with this device.

Cheers,
Xin Wei



__________________________________________________________________________________
Incoming Professor and Director • Arts, Media + Engineering • Herberger Institute for Design and the Arts / Director • Synthesis Center / ASU
Founding Director, Topological Media Lab (EV7.725) • topologicalmedialab.net/  •  skype: shaxinwei • +1-650-815-9962
__________________________________________________________________________________