sensorimotor observations of collective movement, Helms Tillery / Ingalls experiment

Hi Todd,

Thanks.  Can we talk with Pavan, and then Steve Helms Tillery?

Yes I’d be very interested in such collective movement experiments.  But then it is urgent that we really prep our own measurement methods and team (Garrett?, ___ ? assisted by Julian).

As you know, I would want to measure correlations not (only) in the brain but across much more of the event.   
It is far more direct (and scientifically rigorous) to measure as many of the global aspects of collective movement as we can than to zero in on only one part of the body, and in fact a part whose functions are only very indirectly related to corporeal kinetics, in ways that are quite poorly understood.

That’s why I’ve asked Julian and our students to build out the rhythm kit to use all modalities of sensing intervallic rhythm.
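Whatever the sensing modality (contact mic, camera, IMU), intervallic rhythm reduces at minimum to event onset times and the intervals between them. As a small illustrative sketch (the names and data here are mine, not the rhythm kit's), the inter-onset intervals and their coefficient of variation give a crude tempo and regularity measure that is modality-agnostic:

```python
import statistics

def inter_onset_intervals(onsets):
    """Intervals (seconds) between successive event onsets, any modality."""
    return [b - a for a, b in zip(onsets, onsets[1:])]

def rhythm_summary(onsets):
    """Mean interval (a crude tempo) and coefficient of variation (regularity)."""
    iois = inter_onset_intervals(onsets)
    mean_ioi = statistics.mean(iois)
    cv = statistics.stdev(iois) / mean_ioi if len(iois) > 1 else 0.0
    return {"mean_ioi": mean_ioi, "cv": cv}

# hypothetical footstep onsets near a 0.5 s pulse; cv near 0 means a steady pulse
print(rhythm_summary([0.00, 0.52, 1.01, 1.49, 2.02, 2.50]))
```

The same summary can be computed per stream and compared across modalities, which is one way to ask whether the different sensors are hearing the same rhythm.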


in particular: 


and as an aside:

Can we talk with Pavan, and then with Steve?

Thanks,
Xin Wei


On Dec 4, 2015, at 12:41 PM, Todd Ingalls <TestCase@asu.edu> wrote:

Could this be tied to rhythm?  I think we are both skeptical of brain imaging, but this could still be interesting.

todd from my phone

Begin forwarded message:

From: Stephen Helms Tillery <stillery@asu.edu>
Date: December 4, 2015 at 12:26:56 PM MST
To: Todd Ingalls <TestCase@asu.edu>
Subject: Back to music and brain

Hey Todd,

Hope you’re good.

I have been working and thinking about a couple of issues a lot lately in group neuroscience .. the two key topics are joint action and entrainment.   Joint action is just multiple actors working together to accomplish some task … like two people carrying a table together, or a couple of soccer players moving the ball down the field.   These are interesting problems because they require the actors to have some sense of what their partners are trying to accomplish and how they are going about that.   Entrainment is an entirely hypothesized process in which two brains come into “synchrony” in order to communicate .. this is thought to be important in language, but obviously is also important in music performance.

Entrainment, however, is pretty loosely defined at the moment … we have an idea for getting at entrainment using musicians.    The notion is to get an ensemble together, a good ensemble … and record simultaneous EEGs from the players as they work a piece.

To some extent this has been done before:   with saxophones (ugh!)   The focus of that paper was on EEG markers of empathy (even more ugh), and the usual expected changes in EEG associated with listening to and motor outputs for music.

What I’d like to do is do real analysis across multiple brains during performance, and see if we can see electrical signs of entrainment as they are working.   In a dream world, as the ensemble locks into “togetherness” … the brains will entrain.   Or vice versa.  
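In practice, "electrical signs of entrainment" between two recordings are often quantified with a phase-locking value over instantaneous phases (obtained, e.g., by band-passing each channel and taking a Hilbert transform). A minimal sketch of the statistic itself on synthetic phases; `plv` and everything below are my illustration, not anything from this thread:

```python
import numpy as np

def plv(phase_a, phase_b):
    """Phase-locking value: length of the mean phase-difference vector, in [0, 1].
    1 = constant phase lag between the two signals; 0 = no phase relationship."""
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 500)
phase_a = 2 * np.pi * 10 * t                        # phase of a 10 Hz oscillation
phase_b_locked = phase_a + 0.8                      # constant lag: "entrained"
phase_b_random = rng.uniform(0, 2 * np.pi, t.size)  # unrelated phases

print(plv(phase_a, phase_b_locked), plv(phase_a, phase_b_random))
```

Computed in sliding windows across performer pairs, a rising PLV as the ensemble "locks in" would be one operationalization of the dream-world scenario above.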

Anyway, to go after this we will need to synch up multiple EEGs, and more importantly, find a good ensemble that might be up for this.    

I thought of AME, and wondered if there would be somebody there interested in devoting a little bit of time and nominal resources to chasing this down.

In any case, have good holidays,

Steve

AME research and graduate proseminar: the problem with explaining things in terms of "'parts' of the brain"

Hardcastle and Stewart succinctly point out a fundamental problem at the heart of the methodology of neuroscience (and of cognitive science): the modularity thesis.

Neuroscience did not “discover” modules — loci of functions —  in brains.   Rather “they don’t even have a good way of accessing the appropriate evidence. It is a bias in neuroscience to localize and modularize brain functions.”

The problem with scientistic methodology is that you see what you expect to see.



There’s much more in play: Noah Brender’s work questions the modularity thesis underlying much of technoscience. 
However, another world is possible :)

Xin Wei

Synthesis rhythm: IMU's etc.

Dear Rhythm people: Garrett, Gabby, Julian,

Thanks for being on the demo team!  Now we can get back to steady-state work, like rhythmanalysis.

Can you please check out the IMU’s that we bought last year as an input for our rhythm test platform?
Ask Ozzie, or perhaps one of Prof. Turaga’s students who has used them, for permission, and see if you can stream them into Max.

I’d like to assemble a suite of inputs:
contact mic
air mic
camera (Julian)
IMU (Pavan’s group?)
xOSC gyros (Mike —> Julian)

and record them in parallel
with some movement scenarios to get multiple streams of time data.
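One practical requirement for recording these inputs in parallel is a shared clock: each stream arrives at its own rate, so analysis usually starts by resampling everything onto one master timeline. A minimal zero-order-hold alignment sketch (illustrative only, not Synthesis code):

```python
def align(stream, times):
    """Zero-order hold: for each query time, take the latest sample at or
    before it. stream is a list of (timestamp, value) sorted by timestamp;
    assumes the stream starts at or before the first query time."""
    out, i = [], 0
    for t in times:
        while i + 1 < len(stream) and stream[i + 1][0] <= t:
            i += 1
        out.append(stream[i][1])
    return out

# two hypothetical streams at different rates, aligned to a 10 Hz master clock
imu = [(0.00, 0.1), (0.04, 0.3), (0.08, 0.2), (0.13, 0.5)]
mic = [(0.00, 0.0), (0.10, 1.0)]
clock = [0.0, 0.1]
print(align(imu, clock), align(mic, clock))  # [0.1, 0.2] [0.0, 1.0]
```

With all streams on the same clock, cross-stream correlation and rhythm comparison become straightforward array operations.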

Please define some scenarios, e.g. assembling blocks from small to giant size, cutting and washing.  Try seated, upper-body, and locomotive movement.  Varsha’s done some movement scenarios with Grisha, but in very specialized contexts.  How about quotidian ones?

Let’s try some out on Monday Nov 30?

Cheers,
Xin Wei

cc Pavan

________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
Fellow: ASU-Santa Fe Consortium for Biosocial Complex Systems
Affiliate Professor: Future of Innovation in Society; Computer Science; English
Founding Director, Topological Media Lab
skype: shaxinwei • mobile: +1-650-815-9962
_________________________________________________________________________________________________

example of synthesis research: Naccarato and MacCallum, "From Representation to Relationality: Bodies, Biosensors, and Mediated Environments" JDSP 8.1 (2015)

Here’s a journal article published by a couple of researchers hosted at Synthesis last year that may be interesting to folks working on movement and responsive media, somatic experience, experimental dance and experimental technology, critical studies of technoscience, or philosophy of movement:

Teoma Naccarato, John MacCallum, “From Representation to Relationality: Bodies, Biosensors, and Mediated Environments,”  in Embodiment, Interactivity and Digital Performance, Journal of Dance and Somatic Practices, 8.1, 2015.

Teoma is starting a PhD with the Centre for Dance Research (C-DaRE), Coventry University, UK,
and John is a postdoc at the Centre for New Music and Audio Technologies (CNMAT), Department of Music, University of California, Berkeley.

John and Teoma’s extended journal article is a good example of a durable outcome from the research cluster hosted by Synthesis in the Heartbeat Residency: Choreography and Composition of Internal Time.  This was a residency on temporality — sense of dynamic, change, rhythm — held February 15-20, 2015, in the AME iStage, Matthews Center, ASU.




Ambient color changes according to whether the dancer’s heart is beating faster or slower than some reference rate in the rhythm accompaniment software.  Synthesis Residency Jan 2015.  (The overhead tube lamps from Ziegler’s “forest2” were not used in this particular experiment.)


Improvisation with dancer Naccarato, composer / system creator MacCallum, Synthesis team and members of ASU laptop orchestra (Lorkas). Synthesis Residency Jan 2015.



HMM in Max

On Fri, Apr 24, 2015 at 5:12 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Where can we get the best publicly available HMM external for Max,
as a general-purpose HMM package?

Should we extend / modify gf (which we have via IRCAM license),
and can we use it easily for non-audio data?  People claim to have tried it on video.
It seems that the real work is the preliminary feature extraction, where a lot of interpretation happens.
What are examples of code that do this in interesting ways?

Xin Wei

Navid Navab wrote:

While FTM is somewhat discontinued, this all is being moved to IRCAM's free Mubu package:
download the package and quickly check some of their example patches.

poster: 


It contains optimized algorithms building on gf, FTM, cataRT, pipo, etc. While mubu is audio-centric, it is not necessarily audio-specific. mubu buffers can work with multiple data modalities and use a variety of correlation methods to move between these layers... This makes for a fairly complete platform without the need to move back and forth between gf, FTM, concatenative synthesis instruments, multimodal data handling, analysis, etc.

As with most current IRCAM releases, it is highly under-documented. Besides gf, which is distributed with their package, the mubu.hhmm object might be a good place to start for what you are looking for:


also their xmm object might be of interest:
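Whatever external we settle on, the core computation a gesture follower performs is easy to state: the forward algorithm gives the likelihood of an observation sequence under a hidden Markov model, and the feature extraction Xin Wei mentions sits in front of it. A generic discrete-HMM sketch for orientation only; this is not IRCAM's gf / mubu.hhmm / xmm code:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM.
    pi: initial state probs (S,); A: state transitions (S, S);
    B: emission probs (S, num_symbols). Scaled to avoid underflow."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    loglik = np.log(c)
    alpha = alpha / c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        loglik += np.log(c)
        alpha = alpha / c
    return loglik

# toy model: state 0 mostly emits symbol 0, state 1 mostly emits symbol 1
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],
              [0.3, 0.7]])
print(forward_loglik([0, 0, 1, 1], pi, A, B))
```

Running this per gesture model and taking the argmax is the skeleton of recognition; the interesting interpretive work is deciding what the symbols (features) are.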

o4.track (video +sensor osc data), gyro (Was: Synthesis Center / Inertial Sensor Fusion)

Great!

Mike, can you generate data in Julian’s data structure and store it in a shared directory for us all,
along with the journaled video?  Julian Stein wrote the object for journaling data.

On Nov 10, 2014, at 1:07 AM, Julian Stein <julian.stein@gmail.com> wrote:
Also included in the O4.rhyth_abs is a folder labeled o4.track. This features a simple system for recording and playing video with a synchronized osc data stream.
I’ll cc this to the SC team so they can point out those utilities on our github.

It’d be great if you can give the Signal Processing group some Real Live Data to matlab offline this week, as a warmup to Teoma + John’s data the week of Feb 15.

We must have video journaled as well, always.

I’d be interested in seeing an informal brownbag talk about Lyapunov exponents one of those mornings of the week of Feb 15, together with some analysis of the data. 
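As a warm-up for that brownbag: for a map whose derivative is known, the largest Lyapunov exponent is simply the orbit average of log |f'(x)|, and the logistic map at r = 4 (exponent ln 2) makes a handy sanity check before trying divergence-based estimators such as Rosenstein's on real movement data. A sketch of my own, not anything from the group:

```python
import math

def logistic_lyapunov(r=4.0, x0=0.2, n=100_000, burn=1_000):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
    estimated as the mean of log |f'(x)| along the orbit."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))  # log |f'(x)| at the current point
        x = r * x * (1 - x)
    return total / n

print(logistic_lyapunov())  # theory predicts ln 2 ≈ 0.693 for r = 4
```

A positive exponent certifies sensitive dependence on initial conditions; applying this idea to sensor streams requires the embedding and nearest-neighbor machinery that a talk could walk through.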

Let’s cc Adrian Freed and John MacCallum on this “gyro” thread —
Adrian’s got the most insight into this and could help us make some actual scientific headway
toward publishable results.

My question is: by doing some stats on clouds of orientation measurements
can we get some measure of collective intention (coordinated attention)
not necessarily at any one instant of time (a meaningless notion in a relativistic world like ours) — 
but in some generalized (collective) specious present?
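One concrete reading of "stats on clouds of orientation measurements": treat each performer's heading as a point on the circle and compute the mean resultant length from circular statistics. Tracked over a sliding window rather than at an instant, it gives a collective-coordination measure. A sketch (my illustration, with hypothetical names, not the group's method):

```python
import cmath, math

def coherence(headings):
    """Mean resultant length of heading angles (radians), in [0, 1]:
    near 1 when the group is oriented together, near 0 when scattered."""
    return abs(sum(cmath.exp(1j * h) for h in headings)) / len(headings)

aligned = [0.10, 0.05, -0.08, 0.02]                       # roughly shared facing
scattered = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]  # four opposed facings
print(coherence(aligned), coherence(scattered))
```

The same statistic windowed over a few seconds would be one candidate for "coordinated attention" in a generalized specious present.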

Let’s plan John and Teoma’s workshop schedule, hour by hour, at a tea this coming week?

Kristi or Garrett, or __: please let us know when the “heartbeat” workshop weebly site is posted and linked to the Synthesis research site, OK?

Cheers,
Xin Wei

On Feb 6, 2015, at 12:13 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

I translated Seb's sensor fusion algorithm into Javascript to be used within Max/MSP:


There was still quite a bit of drift when I tested it, but I was only using a 100 Hz sample rate, which I suspect may have been the main issue.

Mike
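Drift like Mike describes usually comes from integrating the gyro alone; the fix in any fusion scheme is to let the accelerometer's gravity reference pull the estimate back. A one-axis complementary filter shows the principle (a much simpler cousin of Madgwick's quaternion filter, not a translation of Seb's algorithm; the bias numbers are hypothetical):

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """One-axis complementary filter. alpha near 1 trusts the integrated
    gyro short-term; the (1 - alpha) accelerometer term anchors the
    estimate to gravity, suppressing drift."""
    angle = accel_angles[0]
    estimates = []
    for rate, accel in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel
        estimates.append(angle)
    return estimates

# hypothetical case: sensor held still, but the gyro reads a 0.05 rad/s bias.
# Pure integration would drift to 0.05 * 0.01 * 2000 = 1.0 rad over 20 s;
# the filter instead settles near a small constant offset.
est = complementary_filter([0.05] * 2000, [0.0] * 2000)
print(est[-1])  # ≈ 0.0245 rad, bounded, not growing
```

At 100 Hz this structure is already stable; the Madgwick filter refines it to full 3-D orientation with magnetometer correction.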


On Sat, Jan 31, 2015 at 3:45 PM, Adrian Freed <adrian@adrianfreed.com> wrote:
  Thanks Xin Wei.
It would indeed be good to at least develop a road map for this important work. We should bring the folks from x-io
into the discussion because they have moved their considerable energies and skills further into this space in 2015.
 
  I also want to clarify my relative silence on this front. As well as weathering some perfect storms last year, I found
  the following project attractive from the perspective of separating concerns for this orientation work: http://store.sixense.com/collections/stem
They are still unfortunately in pre-order land with a 2-3 month shipping time. Such a system would complement commercial and inertial measuring systems well
  by providing a "ground truth" ("ground fib") anchored to their beacon transmitter.  The sixense system has limited range for many of our applications
  which brings up the question (again as a separation of concerns not for limiting our perspectives) of scale. Many folk are thinking about orientation and inertial
  sensing for each digit of the hand (via rings).
 
  For the meeting we should prepare to share something about our favored use scenarios.

On Jan 31, 2015, at 1:37 PM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
 
 
  Can you — Adrian and Mike —  Doodle a Skype to talk about who should do what when to get gyro information from multiple (parts of ) bodies
  into our Max platform so Mike and the signal processing maths folks can look at the data?
 
  This Skype should include at least one of our signal processing  Phd’s as well ?
 
Mike can talk about what he’s doing here, and get your advice on how we should proceed:
- write our own gyro (orientation) feature accumulator
- get a pre-alpha version of xOSC hw + sw from Seb Madgwick that incorporates that data
- adapt something from the odot package that we can use now
- WAIT till orientation data can be integrated easily (when, 2015?)
 
  Half an hour should suffice.
  I don’t have to be at this Skype as long as there’s a precise outcome and productive decision that’ll lead us to computing some (cor)relations on streams of orientations as a start...
 
  Cheers,
  Xin Wei
 
  __________


On Jan 31, 2015, at 1:27 PM, Vangelis <vl_artcode@yahoo.com> wrote:

 Hello!
Yes, there is great demand for something that works in sensor fusion for inertial sensors, but I think the best way to do it is as part of o., so as to benefit every inertial setup out there. It would take ages for Seb to implement it for x-OSC, and that would be an exclusive benefit. Seb's PhD is out there, and I am sure he will help by sharing new code for solving the problem. The question is: can we do this? :)
  My warm regards to everyone!
  v


On Jan 30, 2015 6:45 PM, Adrian Freed <adrian@adrianfreed.com> wrote:

  Hi.
The experts on your question work at x-io. Seb Madgwick wrote the code a lot of people around the world are using for sensor
  fusion in IMU's.
  Are you using their IMU (x-OSC) as a source of inertial data?
 
  We started to integrate Seb's code into Max/MSP but concluded it would be better to wait for Seb
  to build it into x-OSC itself. There are some important reasons that this is a better approach, e.g.,
  reasoning about sensor fusion in a context with packet loss is difficult.
 
It is possible Vangelis persisted with the Max/MSP route.
 
 
On Jan 30, 2015, at 3:01 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:
 
  Hi Adrian,
 
I am a PhD student at ASU and I work with Xin Wei at the Synthesis Center. We are interested in fusing inertial sensor data (accel/gyro/mag) to give us reliable orientation (and possibly position) information. Do you have an implementation of such an algorithm that we can use in (or port to) Max/MSP?
 
  Thanks,
  Mike

Physis, poiesis in the highest sense

Not only handcraft manufacture, not only artistic and poetical bringing into appearance and concrete imagery, is a bringing-forth, poiesis. Physis also, the arising of something from out of itself, is a bringing-forth, poiesis. Physis is indeed poiesis in the highest sense. For what presences by means of physis has the bursting open belonging to bringing-forth, e.g., the bursting of a blossom into bloom, in itself (en heautoi). In contrast, what is brought forth by the artisan or the artist, e.g. the silver chalice, has the bursting open belonging to bringing-forth not in itself, but in another (en alloi), in the craftsman or artist.

[Heidegger, Question Concerning Technology, 11]

[Synthesis] rhythm research: a self-organizing map (SOM) (jit.robosom)-->

Dear Garrett, Mike, Julian, Omar, Chris Z,

Swiss-French artist Robin Meier used self-organizing maps
Paper about Max Jitter patch: jit.robosom

Self-organizing map abstraction for MaxMSP Jitter.

Don’t know if jit.robosom works or is very interesting in effect, but it may be worth a try for rhythm experiments driving our lighting instruments.
This is a relatively trivial application of linked oscillators.
We should be able to achieve much more interesting behaviour, 
especially with live action in the loop.
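For anyone who wants to see the mechanism before opening the patch: a self-organizing map is only a few lines. Nodes compete for each input, and the winner drags its index-neighbors along, so topology emerges from local updates. A minimal 1-D sketch in Python (a generic SOM of my own, not jit.robosom; the "interval" data is hypothetical):

```python
import math, random

random.seed(1)

def train_som(data, n_nodes=8, epochs=100, lr=0.3, sigma=2.0):
    """Minimal 1-D self-organizing map over scalar inputs."""
    nodes = [random.random() for _ in range(n_nodes)]
    for e in range(epochs):
        decay = 1.0 - e / epochs               # anneal the learning rate...
        s = max(sigma * decay, 0.05)           # ...and the neighborhood width
        for x in data:
            bmu = min(range(n_nodes), key=lambda i: abs(nodes[i] - x))
            for i in range(n_nodes):
                h = math.exp(-((i - bmu) ** 2) / (2 * s * s))
                nodes[i] += lr * decay * h * (x - nodes[i])
    return nodes

# two clusters of hypothetical inter-onset intervals (seconds)
data = ([random.gauss(0.25, 0.01) for _ in range(30)] +
        [random.gauss(0.75, 0.01) for _ in range(30)])
nodes = train_som(data)
print(sorted(nodes))  # nodes should spread to cover both clusters
```

Putting live movement features in place of the toy data, and node activations out to the lighting instruments, is where the behaviour could become less trivial than linked oscillators.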

A presentation by an authority in SOM’s: Timo Honkela (Finland)



HUNCH: Mike K’s correlation-based method should yield more interesting temporal textures.

Xin Wei


Adrian Freed: fuelling imagination for inventing entrainments

Here’s a note relevant to entrainment and co-ordinated rhythm, as the Lighting and Rhythm workshop looms.

Adrian Freed’s collected a list of what he calls “semblance typology of entrainments.”
Notice that he does NOT say “types” but merely a typology of semblances, which helps us avoid reification errors.

Let’s think of this as a way to enrich our vocabulary for rhythm, entrainment, temporality, processuality in movement and sound.
Let’s not use this — or any other list of categories — as an absolute universal set of categories sans context. 
See the comments below.


On Nov 8, 2014, at 11:54 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
I would like to enrich our imagination before we lock too much into talking about “synchronization” 
and regular periodic clocks in the Lighting and Rhythm workshop.




On Nov 8, 2014, at 10:01 PM, Adrian Freed <Adrian.Freed@asu.edu> wrote:
I haven't thought about this in a while, but the move to organize the words using the term "together", which I did for the talk at Oxford, is interesting because it allows a formalization in mereotopology à la Whitehead. I would have to provide an interpretation of enclosure and overlap that involves correlation metrics in some structure, for example CPCA
(Correlational Principal Component Analysis): http://www.lv-nus.org/papers%5C2008%5C2008_J_6.pdf



On Nov 9, 2014, at 7:05 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:





Thanks Adrian,

Then I wonder if rank statistics — ordering vs. cardinal metrics — could be a compromise.
David Tinapple and Loren Olson here have invented a web system for peer critique called CritViz
that has students rank each other’s projects.  It’s an attention-orienting thing…

Of course there are all sorts of problems with it, the most serious being herding toward mediocrity or, at best, herding toward spectacle;
and it is a bandage invented by the necessity of dealing with high student/teacher ratios in studio classes.

The theoretical question is: can we approximate a mereotopology on a space of
Whiteheadian or Simondonian processes using rank ordering,
which may do away with the requirement for coordinate loci?
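To make the rank-statistics idea concrete: Spearman's rho compares two orderings using only ranks, with no cardinal scale or coordinates, which is exactly what CritViz-style peer rankings provide. A self-contained sketch with illustrative names and data:

```python
def rankdata(xs):
    """Ranks starting at 1; ties receive the average of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman's rho: Pearson correlation of the ranks (ordering only)."""
    ra, rb = rankdata(a), rankdata(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra) ** 0.5
    vb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (va * vb)

# two judges ranking five projects: orderings agree except for one swap
print(spearman([1, 2, 3, 4, 5], [1, 2, 3, 5, 4]))  # 0.9: high but not 1.0
```

Agreement between many such orderings, rather than any numeric score, is the kind of coordinate-free structure the mereotopology question is gesturing at.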

The Axiom of Choice gives us a well-ordering on any set, so that’s a start,
but there is no effectively decidable way to compute such an ordering for an arbitrary set.
I think that’s a good thing.   And it should be drilled into every engineer.
This means that the responsibility for ordering 
shifts to ensembles in milieu rather than individual people or computers.

Hmmm, so where does that leave us?

We turn to our anthropologists, historians, 
and to the canaries in the cage — artists and poets…

There’s a group of faculty here, including Cynthia Selin, who are doing what they call scenario [ design | planning | imagining ]
as a way for people to deal with wicked, messy situations like climate change or developing economies.  They seem very prepared for
narrative techniques applied to ensemble events, but don’t know anything about theater or performance.
It seems like a situation ripe for exploration, if we can get past the naive phase of slapping conventional narrative genres from
community theater or gamification or info-visualization onto this.

Very hard to talk about, so I want to build examples here.

Xin Wei