Affiliate Professor: Future of Innovation in Society; Computer Science; English
Founding Director, Topological Media Lab
Where can we get the best publicly available HMM external for Max, to use as a general-purpose HMM package?
Should we extend/modify gf (which we have via an IRCAM license), and can we use it easily for non-audio data? People claim to have tried it on video.
It seems that the real work is in the preliminary feature extraction, where a lot of interpretation happens. What are examples of code that do this in interesting ways?
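To make the two stages concrete, here is a minimal sketch in plain JavaScript (runnable in Max's js object or in Node); none of this is gf's API, and every name is an illustrative placeholder. The point is that the interpretation lives in the feature-extraction step, while the HMM forward pass itself is boilerplate:

// 1. Feature extraction: the interpretive step. Here, windowed mean
// magnitude is quantized into 3 symbols; changing the window size or
// thresholds changes what the model can "see" in the gesture.
function extractSymbols(samples, win) {
  var symbols = [];
  for (var i = 0; i + win <= samples.length; i += win) {
    var mean = 0;
    for (var j = i; j < i + win; j++) mean += Math.abs(samples[j]);
    mean /= win;
    symbols.push(mean < 0.1 ? 0 : mean < 0.5 ? 1 : 2); // still / slow / fast
  }
  return symbols;
}

// 2. Discrete-HMM forward pass: likelihood of a symbol sequence given
// transition matrix A (NxN), emission matrix B (NxM), and prior pi (N).
// (No log-scaling here, so very long sequences would underflow.)
function forward(A, B, pi, obs) {
  var N = A.length;
  var alpha = pi.map(function (p, i) { return p * B[i][obs[0]]; });
  for (var t = 1; t < obs.length; t++) {
    var next = [];
    for (var j = 0; j < N; j++) {
      var s = 0;
      for (var i = 0; i < N; i++) s += alpha[i] * A[i][j];
      next[j] = s * B[j][obs[t]];
    }
    alpha = next;
  }
  return alpha.reduce(function (a, b) { return a + b; }, 0);
}

// Usage: score a sensor buffer against two gesture models and pick the
// more likely one, e.g. forward(A1, B1, pi1, extractSymbols(buf, 32)).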
Xin Wei
Navid Navab wrote:
Also included in O4.rhyth_abs is a folder labeled o4.track, which features a simple system for recording and playing video with a synchronized OSC data stream.
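For readers without the patch at hand, the general shape of such a record/playback mechanism can be sketched in plain JavaScript. This is not Navid's o4.track code, just a guess at the mechanism; it uses Node-style timers (inside Max's js object one would schedule with a Task instead):

// Stamp each incoming OSC message relative to the start of recording,
// so playback can re-emit the stream in sync with the video transport.
var recording = [];
var t0 = 0;

function startRecording() { t0 = Date.now(); recording = []; }

function recordMessage(address, args) {
  recording.push({ t: Date.now() - t0, address: address, args: args });
}

// Re-emit each message at its recorded offset, scaled by the video
// playback rate so the data stream stays locked to the picture.
function play(rate, emit) {
  recording.forEach(function (msg) {
    setTimeout(function () { emit(msg.address, msg.args); }, msg.t / rate);
  });
}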
I translated Seb's sensor fusion algorithm into JavaScript to be used within Max/MSP:
There was still quite a bit of drift when I tested it, but I was only using a 100 Hz sample rate, which I suspect may have been the main issue.
Mike
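Mike's port is not reproduced in the thread; as a stand-in, here is a minimal single-axis complementary filter in plain JavaScript, which at least shows where the sample rate enters the drift question. It is far simpler than Seb's algorithm (no magnetometer, no quaternions), so treat it as an illustration under those assumptions, not a substitute:

// Fuse gyro rate (fast but drifting) with the accelerometer's gravity
// estimate (noisy but driftless) for a single axis (pitch, radians).
// Each update integrates dt = 1/sampleRate seconds of gyro error, which
// is why a low sample rate lets more drift accumulate between the
// accelerometer corrections.
function makeComplementaryFilter(sampleRateHz, alpha) {
  var dt = 1 / sampleRateHz; // 0.01 s at the 100 Hz Mike mentions
  var pitch = 0;
  return function update(gyroRateRadS, accelY, accelZ) {
    var gyroPitch = pitch + gyroRateRadS * dt;   // integrate gyro
    var accelPitch = Math.atan2(accelY, accelZ); // gravity direction
    pitch = alpha * gyroPitch + (1 - alpha) * accelPitch;
    return pitch;
  };
}

// Usage: var update = makeComplementaryFilter(100, 0.98);
// then call update(gx, ay, az) once per incoming sample.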
On Sat, Jan 31, 2015 at 3:45 PM, Adrian Freed <adrian@adrianfreed.com> wrote:
Thanks Xin Wei.
It would indeed be good to at least develop a road map for this important work. We should bring the folk from x-io into the discussion because they have moved their considerable energies and skills further into this space in 2015.
I also want to clarify my relative silence on this front. As well as weathering some perfect storms last year, I found the following project attractive from the perspective of separating concerns for this orientation work: http://store.sixense.com/collections/stem
They are still unfortunately in pre-order land with a 2-3 month shipping time. Such a system would complement commercial and inertial measuring systems well by providing a "ground truth" ("ground fib") anchored to their beacon transmitter. The Sixense system has limited range for many of our applications, which brings up the question (again as a separation of concerns, not for limiting our perspectives) of scale. Many folk are thinking about orientation and inertial sensing for each digit of the hand (via rings).
For the meeting we should prepare to share something about our favored use scenarios.
On Jan 31, 2015, at 1:37 PM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
Can you — Adrian and Mike — Doodle a Skype to talk about who should do what, and when, to get gyro information from multiple (parts of) bodies into our Max platform, so Mike and the signal-processing maths folks can look at the data?
This Skype should include at least one of our signal-processing PhDs as well?
Mike can talk about what he’s doing here, and get your advice on how we should proceed:
- write our own gyro (orientation) feature accumulator
- get a pre-alpha version of xOSC hw + sw from Seb Madgwick that incorporates that data
- adapt something from the odot package that we can use now
- WAIT till orientation data can be integrated easily (when, 2015?)
Half an hour should suffice.
I don’t have to be at this Skype as long as there’s a precise outcome and productive decision that’ll lead us to computing some (cor)relations on streams of orientations as a start...
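For concreteness, one minimal reading of "(cor)relations on streams of orientations" is a windowed Pearson correlation between two streams of orientation angles (or angular velocities) from two bodies; the sketch below, in plain JavaScript, is one assumed starting point, not a design decision:

// Pearson correlation of two equal-length arrays.
function pearson(xs, ys) {
  var n = xs.length;
  var mx = xs.reduce(function (a, b) { return a + b; }, 0) / n;
  var my = ys.reduce(function (a, b) { return a + b; }, 0) / n;
  var sxy = 0, sxx = 0, syy = 0;
  for (var i = 0; i < n; i++) {
    sxy += (xs[i] - mx) * (ys[i] - my);
    sxx += (xs[i] - mx) * (xs[i] - mx);
    syy += (ys[i] - my) * (ys[i] - my);
  }
  return sxy / (Math.sqrt(sxx * syy) || 1); // 0 if either stream is flat
}

// Slide a window over two orientation streams; values near +1 or -1
// mark moments when the two bodies' rotations move together or in
// opposition.
function windowedCorrelation(streamA, streamB, win) {
  var out = [];
  var len = Math.min(streamA.length, streamB.length);
  for (var i = 0; i + win <= len; i++) {
    out.push(pearson(streamA.slice(i, i + win), streamB.slice(i, i + win)));
  }
  return out;
}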
Cheers,
Xin Wei
__________
Hello!
Yes, there is great demand for something that works for sensor fusion of inertial sensors, but I think the best way to do it is as part of o., so as to benefit every inertial setup out there. It will take ages for Seb to implement it for x-OSC, and that would be an exclusive benefit. Seb's PhD thesis is out there, and I am sure he will help share new code for solving the problem. The question is: can we do this? :)
My warm regards to everyone!
v
On Jan 30, 2015 6:45 PM, Adrian Freed <adrian@adrianfreed.com> wrote:
Hi.
The experts on your question work at x-io. Seb Madgwick wrote the code a lot of people around the world are using for sensor fusion in IMUs.
Are you using their IMU (x-OSC) as a source of inertial data?
We started to integrate Seb's code into Max/MSP but concluded it would be better to wait for Seb
to build it into x-OSC itself. There are some important reasons that this is a better approach, e.g.,
reasoning about sensor fusion in a context with packet loss is difficult.
It is possible Vangelis persisted with the Max/MSP route.
On Jan 30, 2015, at 3:01 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:
Hi Adrian,
I am a PhD student at ASU and I work with Xin Wei at the Synthesis Center. We are interested in fusing inertial sensor data (accel/gyro/mag) to give us reliable orientation (and possibly position) information. Do you have an implementation of such an algorithm that we can use in (or port to) Max/MSP?
Thanks,
Mike
Not only handcraft manufacture, not only artistic and poetical bringing into appearance and concrete imagery, is a bringing-forth, poiesis. Physis also, the arising of something from out of itself, is a bringing-forth, poiesis. Physis is indeed poiesis in the highest sense. For what presences by means of physis has the bursting open belonging to bringing-forth, e.g., the bursting of a blossom into bloom, in itself (en heautoi). In contrast, what is brought forth by the artisan or the artist, e.g. the silver chalice, has the bursting open belonging to bringing forth not in itself, but in another (en alloi), in the craftsman or artist.
[Heidegger, Question Concerning Technology, 11]
http://robinmeier.net/?p=1234
On Nov 8, 2014, at 10:01 PM, Adrian Freed <Adrian.Freed@asu.edu> wrote:
On Nov 9, 2014, at 7:05 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Then I wonder if rank statistics — ordering vs cardinal metrics — could be a compromise approach.
David Tinapple and Loren Olson here have invented a web system for peer critique called CritViz that has students rank each other’s projects. It’s an attention-orienting thing…
Of course there are all sorts of problems with it — the most serious one being herding toward mediocrity, or at best herding toward spectacle — and it is a bandage invented by the necessity of dealing with high student/teacher ratios in studio classes.
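As a toy illustration of ordering vs. cardinal metrics (hypothetical, not CritViz's actual method): discard the scores, keep only the ranks, and compare two raters with a Spearman rank correlation, which is just Pearson computed on ranks:

// Convert cardinal scores to ranks (1 = highest); ties not handled.
function toRanks(scores) {
  var order = scores.map(function (s, i) { return [s, i]; })
                    .sort(function (a, b) { return b[0] - a[0]; });
  var ranks = [];
  order.forEach(function (pair, rank) { ranks[pair[1]] = rank + 1; });
  return ranks;
}

// Spearman rank correlation via the classic 1 - 6*sum(d^2)/(n(n^2-1)).
function spearman(scoresA, scoresB) {
  var ra = toRanks(scoresA), rb = toRanks(scoresB), n = ra.length;
  var d2 = 0;
  for (var i = 0; i < n; i++) d2 += (ra[i] - rb[i]) * (ra[i] - rb[i]);
  return 1 - (6 * d2) / (n * (n * n - 1));
}

// e.g. spearman([9, 7, 3], [8, 8.5, 2]) === 0.5: the raters agree on
// the bottom project but swap the top two.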
The theoretical question is: can we approximate a mereotopology on a space of Whiteheadian or Simondonian processes using rank ordering, which may do away with the requirement for coordinate loci?
The Axiom of Choice gives us a well-ordering on any set, so that’s a start, but there is no effective, decidable way to compute an ordering for an arbitrary set.
I think that’s a good thing. And it should be drilled into every engineer.
This means that the responsibility for ordering
shifts to ensembles in milieu rather than individual people or computers.
Hmmm, so where does that leave us?
We turn to our anthropologists, historians,
and to the canaries in the cage — artists and poets…
There’s a group of faculty here, including Cynthia Selin, who are doing what they call scenario [ design | planning | imagining ] as a way for people to deal with wicked, messy situations like climate change or developing economies. They seem very prepared for narrative techniques applied to ensemble events but don’t know anything about theater or performance.
It seems like a situation ripe for exploration, if we can get past the naive phase of slapping conventional narrative genres from community theater or gamification or info-visualization onto this.
Very hard to talk about, so I want to build examples here.
Xin Wei
From: Adrian Freed <adrian@cnmat.berkeley.edu>
Subject: Re: Automatic projector calibration
Date: October 5, 2014 at 1:08:24 PM MST
The technical notion of lag does not jibe very well with the multiple temporal structures involved in experience.
Using it as a ground truth produces some ugly theories, e.g., http://en.wikipedia.org/wiki/Interaural_time_difference
Notice the frequency-dependent hacks added to the theory and the vagueness about delay/phase. Also notice that the detailed anatomical and signal-flow analysis says nothing to support the theory other than that there are places where the information from the two ears meets. I encourage everyone to think this through carefully and to build and explore speculatively.
We have been down this hole at CNMAT for pitch detection on guitars. People think you can synthesize sounds that tightly track the pitch of a string. You can't. There are no interesting definitions of a low guitar string's pitch that would make this possible. One "solution" is to track with a constant lag, i.e., a sort of echo. This conditions and constrains the space considerably. A few artists have done amazing things within these constraints (https://www.youtube.com/watch?v=VYCG5wZ9op8, https://www.youtube.com/watch?v=1X5qDeK3siw) but the apparatus has strong agency and may well interfere with other goals.
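To make the lag floor concrete: a generic autocorrelation pitch estimator (a sketch, not CNMAT's detector) needs roughly two periods of signal before any estimate exists, so a low E string near 82 Hz cannot be reported until about 24 ms have elapsed; tracking "with a constant lag" concedes that floor up front:

// Estimate pitch by picking the lag with maximal autocorrelation.
// samples should hold at least ~2 * maxLag values; the estimate is
// inherently ~maxLag samples "late", which is the constant lag above.
function estimatePitch(samples, sampleRate, fMin, fMax) {
  var minLag = Math.floor(sampleRate / fMax);
  var maxLag = Math.floor(sampleRate / fMin);
  var bestLag = minLag, best = -Infinity;
  for (var lag = minLag; lag <= maxLag; lag++) {
    var sum = 0;
    for (var i = 0; i + lag < samples.length; i++) {
      sum += samples[i] * samples[i + lag];
    }
    if (sum > best) { best = sum; bestLag = lag; }
  }
  return sampleRate / bestLag;
}

// At 44100 Hz with fMin = 80, maxLag is ~551 samples, i.e. ~12.5 ms per
// period; a robust window of two periods puts the floor near 25 ms.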
On Oct 5, 2014, at 10:44 AM, Evan Montpellier <evan.montpellier@gmail.com> wrote:
Pages 36-39 in the thesis deal with tracking moving projection surfaces at near real-time rates. The short of it is that the maximum refresh rate Dr. Lee was able to achieve was 12 Hz; the thesis notes:
"feedback latency places a substantial constraint on the usage of
alternative patterns that may utilize recent sensor data to improve
tracking performance. Tracking algorithms that require instantaneous or
near instantaneous feedback from sensors are not likely to be executable
in practice."
Perhaps the lag would be acceptable, though, within some of the visual movement experiments that already play with time delay.
Evan
On 2014-09-30, 4:47 PM, Byron Lahey wrote:
From my perspective, the real value that would come from implementing an auto-calibration system would be the potential for dynamic projection surfaces: surfaces that enter or exit a space, expand and contract, morph into different shapes, etc.
I'm interested but don't have much bandwidth to devote to this.
Byron
On Fri, Sep 26, 2014 at 1:36 PM, Evan Montpellier <Evan.Montpellier@asu.edu> wrote:
For projects such as the Table of Content/Portals, automatic projector calibration would save a considerable amount of work and time. Here's an attractive-looking solution from Dr. Johnny Chung Lee, presently of Microsoft:
http://johnnylee.net/projects/thesis/
Is anyone interested in attempting to implement an analogous system
as part of the Synthesis-TML portal network?