BRAWN — the full “un-inspiring” version of the Standard Model’s Lagrangian as hacked together by physicists:
Connes and Marcolli’s formulation using and producing deep insight:
Brains over brawn.
Thanks to Adrian Freed for drawing attention to this analog machine + human rhythm jam session https://www.facebook.com/guanitoweb/videos/10206485558432152/ "Spending Christmas Eve in a Sevillian corner, here I show you something new, the compás machine, and I wish you all a merry Christmas, with harmony, feeling, and compás."
On Dec 4, 2015, at 12:41 PM, Todd Ingalls <TestCase@asu.edu> wrote:
Could this be tied to rhythm? I think we are both skeptical of brain imaging, but this could still be interesting.
todd from my phone
Begin forwarded message:
Hey Todd,

Hope you're good. I have been working on and thinking about a couple of issues a lot lately in group neuroscience. The two key topics are joint action and entrainment. Joint action is just multiple actors working together to accomplish some task, like two people carrying a table together, or a couple of soccer players moving the ball down the field. These are interesting problems because they require the actors to have some sense of what their partners are trying to accomplish and how they are going about it. Entrainment is an entirely hypothesized process in which two brains come into "synchrony" in order to communicate. This is thought to be important in language, but it is obviously also important in music performance.

Entrainment, however, is pretty loosely defined at the moment. We have an idea for getting at entrainment using musicians. The notion is to get an ensemble together, a good ensemble, and record simultaneous EEGs from the players as they work a piece. To some extent this has been done before: with saxophones (ugh!). The focus of that paper was on EEG markers of empathy (even more ugh), and the usual expected changes in EEG associated with listening to music and the motor outputs for playing it.

What I'd like to do is real analysis across multiple brains during performance, to see if we can see electrical signs of entrainment as the players are working. In a dream world, as the ensemble locks into "togetherness," the brains will entrain. Or vice versa.

Anyway, to go after this we will need to sync up multiple EEGs and, more importantly, find a good ensemble that might be up for it. I thought of AME, and wondered if there would be somebody there interested in devoting a little bit of time and nominal resources to chasing this down.

In any case, have good holidays,
Steve
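The "electrical signs of entrainment" Steve mentions are commonly quantified as inter-brain phase synchrony. As one concrete starting point, the phase-locking value (PLV) between two EEG channels can be sketched like this; this is a minimal offline illustration assuming NumPy/SciPy (not Max/MSP tooling), with invented signals standing in for real recordings:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    # Instantaneous phase via the analytic (Hilbert) signal
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    # PLV: magnitude of the mean phase-difference vector
    # (1 = perfectly locked phases, ~0 = unrelated phases)
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

fs = 256                                     # Hz, a common EEG sampling rate
t = np.arange(0, 4, 1 / fs)
alpha_a = np.sin(2 * np.pi * 10 * t)         # 10 Hz "alpha" from player A
alpha_b = np.sin(2 * np.pi * 10 * t + 0.3)   # same rhythm, constant phase lag
noise = np.random.default_rng(0).normal(size=t.size)

print(phase_locking_value(alpha_a, alpha_b))  # near 1: entrained
print(phase_locking_value(alpha_a, noise))    # near 0: unrelated
```

In practice the channels would first be band-pass filtered to the frequency band of interest, and the PLV computed in sliding windows as the ensemble plays.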
Where can we get the best publicly available HMM external for Max,
as a general-purpose HMM package?
Should we extend/modify gf (which we have via IRCAM license)?
And can we use it easily for non-audio data? People claim to have tried it on video.
It seems that the real work is the preliminary feature extraction, where a lot of the interpretation happens.
What are examples of code that do this in interesting ways?
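For reference, the decoding step an HMM external performs is small enough to prototype outside Max. Below is a minimal Viterbi decoder for a discrete-emission HMM, sketched in NumPy; the two gesture states and their probability tables are invented for illustration, not taken from gf or any existing external:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete-emission HMM.
    pi: initial state probs (S,), A: transition probs (S, S),
    B: emission probs (S, O), obs: sequence of observation symbols."""
    S, T = len(pi), len(obs)
    delta = np.zeros((T, S))            # best log-prob of paths ending in each state
    back = np.zeros((T, S), dtype=int)  # backpointers for path recovery
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)   # (from_state, to_state)
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Two hypothetical gesture states ("still", "moving") observed through
# a quantized feature symbol (0 or 1) from some feature-extraction front end.
pi = np.array([0.6, 0.4])
A = np.array([[0.8, 0.2],
              [0.3, 0.7]])
B = np.array([[0.9, 0.1],    # "still" mostly emits symbol 0
              [0.2, 0.8]])   # "moving" mostly emits symbol 1
print(viterbi([0, 0, 1, 1, 1, 0], pi, A, B))  # → [0, 0, 1, 1, 1, 0]
```

This also makes the point in the thread concrete: the HMM machinery itself is generic; everything interesting happens in how raw sensor or video data is turned into the observation symbols.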
Navid Navab wrote
Also included in O4.rhyth_abs is a folder labeled o4.track. This features a simple system for recording and playing back video with a synchronized OSC data stream.
There was still quite a bit of drift when I tested it, but I was only using a 100 Hz sample rate, which I suspect may have been the main issue.

Mike
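One plausible source of such drift is clock skew between the video and sensor timelines: if playback assumes both tick at exactly their nominal rates, error accumulates over the recording. Resampling the data stream onto the frame timestamps sidesteps this. A hypothetical sketch (the 0.05% skew figure and the sine "sensor" signal are invented):

```python
import numpy as np

# Sensor stream at ~100 Hz whose clock runs 0.05% fast, and a 30 fps video timeline.
sensor_t = np.arange(0, 10, 0.01) * 1.0005   # recorded sensor timestamps (s)
sensor_v = np.sin(sensor_t)                  # stand-in for an OSC data channel
frame_t = np.arange(0, 10, 1 / 30)           # video frame times (s)

# Linear interpolation onto frame times: alignment error no longer accumulates.
aligned = np.interp(frame_t, sensor_t, sensor_v)
print(np.abs(aligned - np.sin(frame_t)).max())  # interpolation error stays tiny
```

The key requirement is recording real timestamps with both streams rather than relying on sample counts.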
Thanks Xin Wei.
It would indeed be good to at least develop a road map for this important work. We should bring the folks from x-io
into the discussion, because they have moved their considerable energies and skills further into this space in 2015.
I also want to clarify my relative silence on this front. As well as weathering some perfect storms last year, I found
the following project attractive from the perspective of separating concerns for this orientation work: http://store.sixense.com/collections/stem
They are still, unfortunately, in pre-order land with a 2-3 month shipping time. Such a system would complement commercial and inertial measuring systems well
by providing a "ground truth" ("ground fib") anchored to its beacon transmitter. The Sixense system has limited range for many of our applications,
which brings up the question (again as a separation of concerns, not to limit our perspectives) of scale. Many folks are thinking about orientation and inertial
sensing for each digit of the hand (via rings).
For the meeting we should prepare to share something about our favored use scenarios.
Can you (Adrian and Mike) Doodle a Skype to talk about who should do what, and when, to get gyro information from multiple bodies (and parts of bodies)
into our Max platform, so Mike and the signal-processing maths folks can look at the data?
This Skype should include at least one of our signal-processing PhDs as well.
Mike can talk about what he's doing here and get your advice on how we should proceed:
1. write our own gyro (orientation) feature accumulator;
2. get a pre-alpha version of the x-OSC hardware + software from Seb Madgwick that incorporates that data;
3. adapt something from the odot package that we can use now;
4. wait till orientation data can be integrated easily (when? 2015?).
Half an hour should suffice.
I don’t have to be at this Skype as long as there’s a precise outcome and productive decision that’ll lead us to computing some (cor)relations on streams of orientations as a start...
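The "(cor)relations on streams of orientations" could start as simply as a windowed Pearson correlation between two performers' orientation channels. A minimal sketch, assuming each stream has already been reduced to a single angle (e.g. yaw) sampled on a common clock; the signals and lag here are invented:

```python
import numpy as np

def windowed_correlation(a, b, win=64, hop=16):
    """Pearson correlation between two equal-length streams, per sliding window."""
    out = []
    for start in range(0, len(a) - win + 1, hop):
        wa, wb = a[start:start + win], b[start:start + win]
        out.append(np.corrcoef(wa, wb)[0, 1])
    return np.array(out)

t = np.linspace(0, 10, 500)
yaw_a = np.sin(t)         # one performer's yaw angle (radians)
yaw_b = np.sin(t - 0.1)   # a second performer tracking with a small lag
print(windowed_correlation(yaw_a, yaw_b).mean())  # high when movements co-vary
```

Full orientations (quaternions) would need a distance measure rather than raw channel correlation, but this is enough to get a first number out of two gyro streams.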
Yes, there is great demand for something that works for sensor fusion of inertial sensors, but I think the best way to do it is as part of o., so as to benefit every inertial setup out there. It would take ages for Seb to implement it for x-OSC, and that would be an exclusive benefit. Seb's PhD is out there, and I am sure he will help share new code for solving the problem. The question is: can we do this? :)
My warm regards to everyone!
The experts on your question work at x-io. Seb Madgwick wrote the sensor-fusion code that a lot of people
around the world are using for IMUs.
Are you using their IMU (x-OSC) as a source of inertial data?
We started to integrate Seb's code into Max/MSP but concluded it would be better to wait for Seb
to build it into x-OSC itself. There are some important reasons that this is a better approach, e.g.,
reasoning about sensor fusion in a context with packet loss is difficult.
It is possible Vangelis persisted with the Max/MSP route.
On Jan 30, 2015, at 3:01 PM, Michael Krzyzaniak <firstname.lastname@example.org> wrote:
I am a PhD student at ASU and I work with Xin Wei at the Synthesis Center. We are interested in fusing inertial sensor data (accel/gyro/mag) to give us reliable orientation (and possibly position) information. Do you have an implementation of such an algorithm that we can use in (or port to) Max/MSP?
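For context on what such an algorithm does: Madgwick's filter itself is more involved, but the core idea of inertial fusion (smooth-but-drifting gyro integration corrected by a noisy-but-drift-free gravity reference) can be illustrated with a simple single-axis complementary filter. This is a hypothetical sketch, not Madgwick's algorithm and not Max/MSP code:

```python
import math

def complementary_filter(gyro_pitch_rates, accel_xz, dt, alpha=0.98):
    """Fuse gyro pitch rates (rad/s) and accelerometer samples (x, z in g)
    into a pitch estimate (radians). alpha weights trust in the gyro."""
    pitch = 0.0
    out = []
    for gy, (ax, az) in zip(gyro_pitch_rates, accel_xz):
        accel_pitch = math.atan2(ax, az)  # gravity-derived pitch (noisy, no drift)
        # Blend integrated gyro (fast, drifts) with the gravity reference (slow, stable)
        pitch = alpha * (pitch + gy * dt) + (1 - alpha) * accel_pitch
        out.append(pitch)
    return out

# Stationary sensor with a small gyro bias: pure integration would drift to
# 0.01 rad/s * 20 s = 0.2 rad, but the accelerometer term holds the estimate near 0.
n = 2000
gyro = [0.01] * n                 # constant 0.01 rad/s bias, no true rotation
accel = [(0.0, 1.0)] * n          # gravity straight down: true pitch = 0
est = complementary_filter(gyro, accel, dt=0.01)
print(abs(est[-1]))               # settles around ~0.005 rad instead of drifting
```

Madgwick's published filter generalizes this blending to full 3-D orientation as a quaternion, with gradient-descent correction from the accelerometer (and optionally magnetometer), which is why packet loss matters: the correction assumes a steady stream of samples.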