lag; auto-calibration of video projection

From: Adrian Freed <adrian@cnmat.berkeley.edu>
Subject: Re: Automatic projector calibration
Date: October 5, 2014 at 1:08:24 PM MST

The technical notion of lag does not jibe very well with the multiple temporal structures involved in experience.
Using it as a ground truth produces some ugly theories, e.g., http://en.wikipedia.org/wiki/Interaural_time_difference
Notice the frequency-dependent hacks added to the theory and the vagueness about delay/phase. Also notice that the detailed
anatomical and signal-flow analysis says nothing to support the theory other than that there are places where the information from the two ears meets. I encourage everyone to think this through carefully and to build and explore speculatively.

We have been down this hole at CNMAT for pitch detection on guitars. People think you can synthesize sounds that tightly track the pitch of a string. You can't. There is no interesting definition of the pitch of a low guitar string that would make this possible. One "solution" is to track with a constant lag, i.e., a sort of echo. This conditions and constrains the space considerably. A few artists have done amazing things within these constraints (https://www.youtube.com/watch?v=VYCG5wZ9op8, https://www.youtube.com/watch?v=1X5qDeK3siw), but the apparatus has strong agency and may well interfere with other goals.
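To make the constant-lag point concrete, here is a minimal autocorrelation sketch (not CNMAT's tracker; the frame and hop sizes are arbitrary choices). The estimate can only describe a frame of sound that has already happened, so the output necessarily trails the string by roughly the frame length.

import numpy as np

def pitch_with_constant_lag(x, sr=44100, frame=4096, hop=512, fmin=40.0, fmax=400.0):
    """Autocorrelation pitch estimates over a sliding frame.

    Each estimate summarizes the past `frame` samples, so it is only
    available ~frame/sr seconds after the sound it describes: tracking
    with a constant lag, i.e., a sort of echo.
    """
    lag_min = int(sr / fmax)                 # shortest period considered
    lag_max = int(sr / fmin)                 # longest period (low strings need long frames)
    times, f0s = [], []
    for start in range(0, len(x) - frame, hop):
        w = x[start:start + frame]
        w = w - w.mean()
        ac = np.correlate(w, w, mode="full")[frame - 1:]   # autocorrelation, lags >= 0
        period = lag_min + int(np.argmax(ac[lag_min:lag_max]))
        times.append((start + frame) / sr)   # moment the estimate becomes available
        f0s.append(sr / period)
    return np.array(times), np.array(f0s)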

On Oct 5, 2014, at 10:44 AM, Evan Montpellier <evan.montpellier@gmail.com> wrote:

Pages 36-9 in the thesis deal with tracking moving projection surfaces
at near real-time rates. The short of it is that the maximum refresh rate
Dr. Lee was able to achieve was 12 Hz; he notes:

"feedback latency places a substantial constraint on the usage of
alternative patterns that may utilize recent sensor data to improve
tracking performance. Tracking algorithms that require instantaneous or
near instantaneous feedback from sensors are not likely to be executable
in practice."

Perhaps the lag would be acceptable, though, within some of the
visual movement experiments that already play with time delay.

Evan

On 2014-09-30, 4:47 PM, Byron Lahey wrote:
From my perspective, the real value that would come from implementing an
auto-calibration system would be the potential for dynamic projection
surfaces: surfaces that enter or exit a space, expand and contract,
morph into different shapes, etc.

I'm interested but don't have much bandwidth to devote to this.

Byron

On Fri, Sep 26, 2014 at 1:36 PM, Evan Montpellier
<Evan.Montpellier@asu.edu <mailto:Evan.Montpellier@asu.edu>> wrote:

  For projects such as the Table of Content/Portals, automatic
  projector calibration would save a considerable amount of work and
  time. Here's an attractive looking solution from Dr. Johnny Chung
  Lee, presently of Microsoft:

  http://johnnylee.net/projects/thesis/

  Is anyone interested in attempting to implement an analogous system
  as part of the Synthesis-TML portal network?
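For anyone who wants to experiment before committing to hardware: the core of Lee's approach is projecting binary structured-light patterns so that a light sensor (or camera pixel) can decode its own projector coordinate. Below is a minimal sketch of Gray-code pattern generation and decoding; the 1024x768 resolution is a placeholder, and this is the generic structured-light technique rather than Lee's embedded-sensor implementation.

import numpy as np

def gray_code_patterns(width=1024, height=768):
    """Full-screen Gray-code patterns for structured-light projector calibration.

    A sensor (or camera pixel) that records which patterns lit it up can
    decode the projector (column, row) shining on it.
    """
    gx = np.arange(width) ^ (np.arange(width) >> 1)     # Gray code of each column index
    gy = np.arange(height) ^ (np.arange(height) >> 1)   # Gray code of each row index
    patterns = []
    for bit in reversed(range(int(np.ceil(np.log2(width))))):
        col = ((gx >> bit) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(col, (height, 1)))           # vertical stripes, one per column bit
    for bit in reversed(range(int(np.ceil(np.log2(height))))):
        row = ((gy >> bit) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(row[:, None], (1, width)))   # horizontal stripes, one per row bit
    return patterns

def gray_to_binary(g):
    """Recover an index from its decoded Gray-code value."""
    b = g
    while g > 0:
        g >>= 1
        b ^= g
    return b

Displaying the patterns in sequence and thresholding each sensor's readings gives one Gray-code bit per pattern; gray_to_binary then recovers the projector pixel illuminating that sensor, from which a warp or homography can be fitted.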

[Synthesis] Protentive and retentive temporality (Was: Notes for Lighting and Rhythm Residency Nov (13)17-26)

Casting the net widely to AME + TML + Synthesis, here follow very raw notes that will turn into the plans for Synthesis Center’s Lighting and Rhythm Residency Nov (13)17-26. Forgive me for the roughness, but I wanted to give as early a note as possible about what we are trying to do here. Think of the work here as live, experientially rich yet expertly built experiments on temporality -- a sense of change, dynamic, rhythm, or more generally temporal texture.

Please propose experiential provocations relevant to temporality, especially those that use modulated, animate lighting. 

I draw special attention to the phenomenological, Husserlian proposition of:

“… something more ambitious attempted along the lines of things we have played with using sound. By tracking feet we can produce the anticipatory sound of a footfall, which messes with the neat and tidy
notion of retentions and protentions. Can you rework a visual representation of oneself (shadow, image, silhouette, ghost) to move in anticipation of where one moves?“

This would be an apt application of bread-and-butter statistical DSP methods!
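As a concrete (and deliberately simple) example of what such a method could look like: an alpha-beta (constant-velocity) filter over the tracked foot position, extrapolated a fraction of a second ahead. The 30 fps frame rate, filter gains, and 200 ms look-ahead below are assumptions for illustration, not a spec.

import numpy as np

class AlphaBetaPredictor:
    """Constant-velocity alpha-beta filter over a tracked 2-D position.

    `update` smooths incoming tracker samples; `anticipate` extrapolates
    the position a short horizon ahead, so a footfall sound (or shadow)
    can be triggered slightly before the foot actually lands.
    """
    def __init__(self, alpha=0.85, beta=0.05, dt=1 / 30.0):
        self.alpha, self.beta, self.dt = alpha, beta, dt
        self.x = None               # position estimate
        self.v = np.zeros(2)        # velocity estimate

    def update(self, measurement):
        z = np.asarray(measurement, dtype=float)
        if self.x is None:
            self.x = z
            return self.x
        predicted = self.x + self.v * self.dt     # predict one frame ahead
        residual = z - predicted
        self.x = predicted + self.alpha * residual
        self.v = self.v + (self.beta / self.dt) * residual
        return self.x

    def anticipate(self, horizon=0.2):
        """Expected position `horizon` seconds from now."""
        return self.x + self.v * horizon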

TODOS:

• Chris Z. and I are reaching toward a strand of propositions that are both artistically relevant and phenomenologically informative. We can start with a double strand of fragments of prior art and prior questions relevant to “first person” / geworfen temporality (versus spectatorship as a species of vorhanden attitude, which is not in Synthesis’ scope, by mandate).
We need to seed this with some micro-etudes, but we expect that we’ll discover more as we go. This requires that we do all the tech development prior to the event: all gear acquisition, installation, and software engineering should be done prior to Nov 13. The micro-etude format is a way to fight the urge to make a performance or an installation out of this scratch work.

Informed by conversation with Chris Z and all parties interested in contributing ideas on what we do during the LRR Nov 17-26, Chris R (and I) will:
• Agree on outcomes
• Set a timeline
• Organize teams
• Plan publicity and documentation


Begin forwarded message:

From: Xin Wei Sha <Xinwei.Sha@asu.edu>
Subject: Re: [Synthesis] Notes for Lighting and Rhythm Residency Nov (13)17-26
Date: October 4, 2014 at 2:31:10 PM MST

Please please please before we dive into more gear specs

What are the experiential provocations being proposed?

For example, Omar, everyone, can you please write into some common space more example micro-studies similar to Adrian’s examples?
(See the movement exercises that MM has drawn up for past experiments for more examples.)
Here at Synthesis, I must insist on this practice, prior to buying gear, so that we have a much greater ratio of
propositions : gadgets.

Thank you, let’s play.
Xin Wei

_________________________________________________________________________________________________

On Oct 4, 2014, at 12:53 PM, Omar Faleh <omar@morscad.com> wrote:

I got the chance lately to work with the Philips Nitro strobes, which are considerably more intense than the Atomic 3000, for example. It is an LED strobe, so you can pulse, flicker, and keep it on for quite a while without having to worry about discharge and recharge; and being an all-LED strobe, it isn't as power-hungry as the Atomics.

The LED surface is split into six sub-rectangles that you can address individually or animate with preset effects, which allows for nice play with shadows from only one light (all DMX-controlled),
and there is an RGB version of it too, so there is no need for gels and colour changers.

I am also looking into some individually addressable RGB LED strips. I am placing the order today, so I will hopefully be able to test and report the findings soon.


_________________________________________________________________________________________________

On 2014-10-04, at 3:30 PM, Adrian Freed <adrian@adrianfreed.com> wrote:

Sounds like a fun event!

Does the gear support simple temporal displacement modulations, e.g., delaying one's shadow or a projected image of oneself?

This is rather easy to do with the right gear.
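For reference, a minimal sketch of the delayed-self effect with a camera, a ring buffer, and a projector window (OpenCV; the camera index and the half-second delay are assumptions):

import collections
import cv2

DELAY_FRAMES = 15                      # ~0.5 s at 30 fps; tune to taste
buffer = collections.deque(maxlen=DELAY_FRAMES)

cap = cv2.VideoCapture(0)              # camera watching the participant
while True:
    ok, frame = cap.read()
    if not ok:
        break
    buffer.append(frame)
    if len(buffer) == DELAY_FRAMES:    # once full, the leftmost frame is DELAY_FRAMES old
        cv2.imshow("delayed self", buffer[0])
    if cv2.waitKey(1) & 0xFF == 27:    # Esc quits
        break
cap.release()
cv2.destroyAllWindows()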

I would like to see something more ambitious attempted along the lines of things we have played with using sound. By tracking feet we can produce the anticipatory sound of a footfall, which messes with the neat and tidy
notion of retentions and protentions. Can you rework a visual representation of oneself (shadow, image, silhouette, ghost) to move in anticipation of where one moves?

It would also be interesting to modulate a scrambling of oneself and connect its intensity to movement intensity. Navid has done things similar to this with sound. The experience was rather predictable but might well be different visually.
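One way to prototype the visual version (a sketch only, not Navid's audio method): measure motion as the mean inter-frame difference and shuffle image blocks in proportion to it. The block size and motion-to-intensity scaling are arbitrary choices.

import cv2
import numpy as np

def scramble(frame, intensity, block=32, rng=np.random.default_rng()):
    """Swap randomly chosen block-sized squares; more intensity (0..1), more swaps."""
    h, w = frame.shape[:2]
    by, bx = h // block, w // block
    out = frame.copy()
    for _ in range(int(intensity * by * bx)):
        y1, x1 = rng.integers(0, by), rng.integers(0, bx)
        y2, x2 = rng.integers(0, by), rng.integers(0, bx)
        a = out[y1*block:(y1+1)*block, x1*block:(x1+1)*block].copy()
        out[y1*block:(y1+1)*block, x1*block:(x1+1)*block] = out[y2*block:(y2+1)*block, x2*block:(x2+1)*block]
        out[y2*block:(y2+1)*block, x2*block:(x2+1)*block] = a
    return out

cap = cv2.VideoCapture(0)
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = 0.0 if prev is None else float(np.mean(cv2.absdiff(gray, prev))) / 255.0
    prev = gray
    cv2.imshow("scrambled self", scramble(frame, min(1.0, motion * 10)))
    if cv2.waitKey(1) & 0xFF == 27:    # Esc quits
        break
cap.release()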


_________________________________________________________________________________________________

On Oct 4, 2014, at 11:53 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:

Chris, Garrett, Julian, Omar, Chris Z, Evan, Byron, Prashan, Althea, Mike B, Ian S, Aniket, et al.

This is a preliminary note to sound out who is down for what for the coming residency on Lighting and Rhythm (LRR).  

The goal is to continue work on temporality from the IER last Feb-March, and this time to really, seriously, experimentally muck with your sense of time by modulating lighting or your vision as you physically move. First-person experience, NOT designing for the spectator.

We need to identify a more rigorous scientific direction for this residency. I have been asking people for ideas — I’ll go ahead and decide soon!

Please think carefully about:
Core Questions to extend:  http://improvisationalenvironments.weebly.com/about.html
Playing around with lights: https://vimeo.com/tml/videos/search:light/sort:date
Key Background:  http://textures.posthaven.com


The idea is to invite Chris and his students to work [richly] on site in the iStage and have those of us who are hacking time via lighting play in parallel with Chris. Pavan & students and interested scientists/engineers should be explicitly invited to kibitz.




• Lighting and Rhythm
The way things are shaping up — we are gathering some gadgets to prepare for it.

Equipment requested (some already installed thanks to Pete Ozzie and TML)
Ozone media system in iStage
Chris Ziegler’s Wald Forest system (MUST be able to lift off out of the way as necessary within minutes — can an inexpensive motorized solution be installed?)
3 x 6 ? grid of light fixtures with RGB gels, beaming onto floor
IR illuminators and IR-pass camera for tracking
Robe Robin MiniMe Moving Light / Projector
Hazer (?)
Strobe + diffuser (bounce?)
+ Oculus DK1 (Mike K knows)
+ Google Glass (Chris R can ask Cooper, Ruth @ CSI)

We need to make sure we have a few rich instruments (NOT one-off hacked tableaux!) coded up ahead of time -- hence the call to Max-literate students who would like to try out what we have in order to adapt them for playing in the LRR by November.

Note 1:
Let’s be sure to enable multiplexing of the iStage to permit two other groups:
• Video portal - windows: Prashan, Althea Pergakis, Jen Weiler
• Shadow puppetry: Prashan working with Byron

Note 2:
Garth’s Singing Bowls are there. Think about how to integrate such field effects.
Mike, can you provide a Max patch to control them — ideally via OSC — but at least to fade up/down without having to physically touch any of the SB hardware?
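Until a proper Max patch exists, here is a rough sketch of the fade logic over OSC (python-osc). The host, port, and /bowls/level address are placeholders; the Singing Bowls' actual control interface would need to be confirmed.

import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.50", 9000)   # placeholder host/port for the bowls' controller

def fade(start, end, seconds=5.0, steps=50, address="/bowls/level"):
    """Ramp a control level from start to end, one OSC message per step."""
    for i in range(steps + 1):
        client.send_message(address, start + (end - start) * i / steps)
        time.sleep(seconds / steps)

fade(0.0, 1.0)   # fade up
fade(1.0, 0.0)   # fade down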

Note 3:
This info should go on the lightingrhythm.weebly.com experiment website that the LRR leads should create Monday unless someone has a better solution — it must be editable by the researchers and experiment leads themselves. Clone from http://improvisationalenvironments.weebly.com!

Xin Wei

_________________________________________________________________________________________________


On Sep 4, 2014, at 8:53 AM, Adrian Freed <adrian@adrianfreed.com> wrote:

I am afraid I can't be very helpful here. I don't do MIR work myself. The field for the most part does
offline analyses of large data sets using musicologically naive Western musical concepts of pitch and rhythm.

One exception to the realtime/offline choice is from our most recent graduate student to work on the
beat tracking problem, Eric Battenberg. Here is his dissertation: http://escholarship.org/uc/item/6jf2g52n#page-3
There is interesting machine learning going on in that work, but it presumes that one can make a reliable
onset detector, which is a reasonable (but narrow) assumption for certain percussion sounds and drumming practice.

The questions of phase and "in sync" raised below interest me greatly. There is no ground truth to the beat
(up or down or on the "beat"). I remember being shocked recently to discover that a bunch of research on dance/music entrainment relied, as a reference, on hand-labeled visual beat markings from "expert listeners in the computer music lab next door". Various concepts such as "perceptual onset time" have been developed to complicate this question appropriately and to explain the difficulty people have reaching consensus on musical event timing and relating a particular beat measurement to features of the acoustic signals.
Even a "simple" case, bass and drums, is extremely difficult to unravel. The bass, being a low-frequency instrument, complicates the question of "onset" or the moment of the beat. The issue of who in this pair is determining the tempo
is challenging, and the usual handwaving that the tempo is an emergent coproduction of the performers is not very helpful in itself in elaborating the process or identifying which features of the action and sound are relevant to the entrainment. My guess is that we will find models like the co-orbital arrangement of Saturn's moons Epimetheus and Janus.
What are the system identification tools to reveal these sorts of entrainment structures? Can this be done from the sound
alone, or do we have to model the embodied motions that produce the sounds?
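As a point of reference for how narrow the standard assumption is, here is a minimal spectral-flux onset detector of the kind that works tolerably on drums and fails on soft-attack, low-frequency material like bass. Frame sizes and the threshold are arbitrary.

import numpy as np

def spectral_flux_onsets(x, sr=44100, frame=1024, hop=512, thresh=1.5):
    """Onset times via positive spectral flux with a local-median threshold."""
    win = np.hanning(frame)
    mags = np.array([np.abs(np.fft.rfft(x[s:s + frame] * win))
                     for s in range(0, len(x) - frame, hop)])
    flux = np.maximum(mags[1:] - mags[:-1], 0.0).sum(axis=1)   # positive spectral changes only
    onsets, half = [], 8                                       # ~16-frame median window
    for i in range(1, len(flux) - 1):
        lo, hi = max(0, i - half), min(len(flux), i + half)
        peak = flux[i] >= flux[i - 1] and flux[i] > flux[i + 1]
        if peak and flux[i] > thresh * np.median(flux[lo:hi]):
            onsets.append((i + 1) * hop / sr)                  # flux[i] compares frames i and i+1
    return onsets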


NOTE from Adrian, XW, and Mike Krzyzaniak on the Percival-Tzanetakis Tempo Estimator:

_________________________________________________________________________________________________


On Sep 3, 2014, at 6:38 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Phase: I’m interested both in the convention of syncing on peaks
and in the larger range of temporal entrainment phenomena that Adrian has identified with suggestive terminology.
In practice, I would apply several different measures in parallel.

Yes, it would be great to have a different measure. For example, one that detects when a moderate number (dozens to 100) of irregular rhythms have a large number of simultaneous peaks. This is a weaker criterion than being in phase, and does not require periodicity.
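A sketch of what such a measure could look like, assuming each rhythm has already been reduced to a list of peak times in seconds (the 50 ms tolerance bin is an assumption):

import numpy as np

def coincidence_counts(peak_lists, tol=0.05, t_start=0.0, t_end=60.0):
    """For each time bin of width `tol`, count how many streams have a peak in it.

    High counts flag moments when many irregular rhythms peak together,
    without requiring periodicity or a common phase.
    """
    edges = np.arange(t_start, t_end + tol, tol)
    counts = np.zeros(len(edges) - 1, dtype=int)
    for peaks in peak_lists:
        hit = np.unique(np.digitize(peaks, edges)) - 1       # bin indices this stream occupies
        hit = hit[(hit >= 0) & (hit < len(counts))]
        counts[hit] += 1                                      # each stream counted once per bin
    return edges[:-1], counts

# e.g., moments when at least a third of ~60 streams peak within the same 50 ms:
# times, counts = coincidence_counts(all_peak_times)
# flagged = times[counts >= len(all_peak_times) // 3]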

Xin Wei


_________________________________________________________________________________________________


On Sep 2, 2014, at 5:07 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

Of course we could reduce the 6-second lag by reducing the window sizes and increasing the hop sizes, at the expense of resolution. Also, rather than using the OSS calculation provided, perhaps we could just use a standard amplitude follower that sums the absolute value of the signal with the absolute value of the Hilbert transform of the signal and then filters the result. This would save us from decimating the signal on input and reduce the amount of time needed to gather enough samples for autocorrelation (at the expense of accuracy, particularly for slow tempi).
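A sketch of that amplitude follower using SciPy (the 10 Hz smoothing cutoff is an assumption):

import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def amplitude_follower(x, sr=44100, cutoff_hz=10.0):
    """|x| + |Hilbert(x)|, smoothed by a low-pass filter, as a cheap envelope
    to feed the autocorrelation stage in place of the OSS calculation."""
    analytic = hilbert(x)                           # x + j * (Hilbert transform of x)
    rough = np.abs(x) + np.abs(np.imag(analytic))   # sum of absolute values, as described
    b, a = butter(2, cutoff_hz / (sr / 2))          # 2nd-order low-pass
    return filtfilt(b, a, rough)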

What are you ultimately using this algorithm for? Percival-Tzanetakis also doesn't keep track of phase. If you plan on using it to take some measure of metaphorical rhythm between, say, humans as they interact with each other or the environment, then it seems like phase would be highly important. Are we in sync or syncopated? Am I on your upbeats or do we together make a flam on the downbeats?

Mike

_________________________________________________________________________________________________


On Tue, Sep 2, 2014 at 4:09 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Hi Adrian,

Mike pointed out what for me is a serious constraint in the Percival-Tzanetakis tempo estimator: it is not real-time.
I wonder if you have any suggestions on how to modify the algorithm to run more nearly in real time, with less buffering, if that’s the right word for it…

Anyway, I’d trust Mike to talk with you, since this is more your competence than mine. Cc me for my edification and interest!

Xin Wei

_________________________________________________________________________________________________

On Sep 2, 2014, at 12:06 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

Hi Xin Wei,

I read the paper last night and downloaded the Marsyas source, but only the MATLAB implementation is there. I can work on getting the C++ version and porting it, but the algorithm has some serious caveats that I want to run by you before I get my hands too dirty.

The main caveat is that it was not intended to run in real time. The implementations they provide take an audio file, process the whole thing, and spit back one number representing the overall tempo.

"our algorithm is more accurate when these estimates are accumulated for an entire audio track"

It could be adapted to run in sort-of real time, but at 44.1 kHz the tempo estimation will always lag by 6 seconds, and at a control rate of 30 ms (i.e., the rate TouchOSC uses to send accelerometer data from an iPhone) the algorithm as described will have to gather data for over 2 hours to make an initial tempo estimate and will only update once every 5 minutes.
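A back-of-the-envelope check of those figures, assuming the analysis constants I read out of the Percival-Tzanetakis paper (a 128-sample hop for the onset strength signal, and 2048 onset-strength samples per tempo estimate); these numbers should be verified against the paper before being relied upon:

# Assumed Percival-Tzanetakis constants (from my reading of the paper; please verify)
OSS_HOP = 128      # input samples per onset-strength-signal (OSS) sample
BP_FRAME = 2048    # OSS samples accumulated per beat-period estimate

def initial_latency_seconds(input_rate_hz):
    """Seconds of input needed before the first tempo estimate appears."""
    return BP_FRAME * OSS_HOP / input_rate_hz

print(initial_latency_seconds(44100))      # ~5.9 s  -> the "6 second" lag at audio rate
print(initial_latency_seconds(1 / 0.030))  # ~7864 s -> over two hours at a 30 ms control rate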

Once I get the c++ source I can give an estimation of how difficult it might be to adapt (in the worst-case scenario it would be time-consuming but not terribly difficult to re-implement the whole thing in your language of choice).

If you would still like me to proceed let me know and I will contact the authors about the source.

Mike

________________________________________________________________________________________________



On Mon, Sep 1, 2014 at 3:45 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
beat~ hasn't worked well for our research purposes so I'm looking for a better instrument.

I'm no expert, but P & T carefully analyze the extant techniques.
The keyword is 'streamlined'.

Read the paper.  Ask Adrian and John.

Xin Wei


Heartbot, Pulse Park, MacCallum+Naccarato (CNMAT, IRCAM) "Heart rate data from contemporary dancers"

(1) Heart Bot Turns Heartbeats Into Personalized Illustrations

(2) Rafael Lozano-Hemmer, Pulse Park (2008), Madison Square Park, NYC

(3) BUT John MacCallum and Teoma Naccarato’s challenge is subtler:

Project Title: "Heart rate data from contemporary dancers"
Abstract:
The composer John MacCallum and choreographer Teoma Naccarato propose a collaborative project that examines the use of real-time heart rate data from contemporary dancers to drive a polytemporal composition for instrumental ensemble with live electronics.
During our residency, we will:

  1. develop and expand robust software tools that facilitate the composition and performance of polytemporal work, in which tempos are driven by real-time interaction—in the case of our project, heart rates of dancers, and
  2. examine strategies for heart rate manipulation via internal and external stimuli, including entrainment between bodily processes and music.

Designing a facile environment within which to explore this type of compositional and performative complexity will bring together a number of current research interests at IRCAM including recent developments in Antescofo, OpenMusic, and gesture following, as well as extensive work on polytemporal music conducted by MacCallum at CNMAT.

In collaboration with Musical Representations Teams as part of the EFFICAC Project

who can add the Percival-Tzanetakis Tempo Estimator into our O2014 toolkit?

Hi,

For our rhythm research we need a decent tempo estimator as a fundamental tool.

Adrian sent this:

Streamlined Tempo Estimation Based on Autocorrelation and Cross-correlation With Pulses
Graham Percival, George Tzanetakis (Trans. Audio, Speech, and Language Processing, 22.12, Dec. 2014)

It’s implemented in Marsyas (C++), Python and Matlab.

Is this available as an efficient Max/MSP external so we can incorporate it into our apparatus?

If not, who can do this, this Fall,  for Synthesis' RHYTHM research stream?

Synthesis lighting research cluster / responsive environments

Dear Chris, Omar,

In the responsive environments research area:

Let’s start gathering our notes into a Posthaven — for now use 

Kristi can help summarize once a fortnight or so...

__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts • Synthesis • ASU • +1-480-727-2146
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

[Synthesis] Portals needed

Hi!

We need portals supporting concurrent conversation via common spaces like tabletops + audio… (no video!),
not talking heads. It may be useful to have audio muffle as a feature — continuous streaming audio, but the default is to “content-filter” the speech. (Research in the 1970s… showed which spectral filters to apply to speech to remove “semantics” but keep enough affect…)
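A sketch of one possible muffle filter, assuming the classic low-pass approach (a cutoff of a few hundred hertz removes most intelligibility while preserving pitch contour and rhythm); the 450 Hz figure is an assumption, not the value from that 1970s literature:

from scipy.signal import butter, sosfilt, sosfilt_zi

class Muffler:
    """Streaming low-pass 'content filter' for portal audio: words become
    indistinct while prosody (and so much of the affect) survives."""
    def __init__(self, sr=48000, cutoff_hz=450.0, order=6):
        self.sos = butter(order, cutoff_hz / (sr / 2), btype="low", output="sos")
        self.zi = sosfilt_zi(self.sos)                 # filter state carried across blocks

    def process(self, block):
        """Filter one block of mono samples from the audio callback."""
        y, self.zi = sosfilt(self.sos, block, zi=self.zi)
        return y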

Maybe we can invite Omar to work with Garrett or Byron or Ozzie to install Evan’s version in the Brickyard and Stauffer and iStage as a side effect of the Animated spaces: Amorphous lighting network workshop with Chris Ziegler and Synthesis researchers.

BUT we should have portals running now, ideally on my desk and on a Brickyard surface.
And that workshop remains to be planned (October??).
And possibly also running on the two panel displays repurposed from Il Y A — now moved to Stauffer...

Xin Wei


__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts • Synthesis • ASU • +1-480-727-2146
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

Bill Forsythe: Nowhere and Everywhere at the same time No. 2 (pendulums)

Two works by two choreographers, Dimitris Papaioannou and Bill Forsythe,
with very different and interesting approaches to causality and temporal texture…

- Xin Wei

On Jul 20, 2014, at 12:55 AM, Michael Montanaro <michael.montanaro@concordia.ca> wrote:

A beautifully choreographed work: NOWHERE (2009) / central scene / for Pina,
from Dimitris Papaioannou



Begin forwarded message:

From: "Vangelis Lympouridis" <vl_artcode@yahoo.com>
Date: July 22, 2014 at 8:39:27 AM GMT+2
To: "Adrian Freed" <Adrian.Freed@asu.edu>, "'Sha Xin Wei'" <shaxinwei@gmail.com>, "'John MacCallum'" <john@cnmat.berkeley.edu>

When you have a second, please watch this 2-minute video with Forsythe’s piece Nowhere and Everywhere at the same time No. 2.

I think it is SO to the core of what we are reasoning about…

Vangelis Lympouridis, PhD
Visiting Scholar, School of Cinematic Arts
University of Southern California

Senior Research Consultant, Creative Media & Behavioral Health Center
University of Southern California

Whole Body Interaction Designer
Tel: +1 (415) 706-2638

PDF of: calibration etc.; rhythm; Synthesis CFP

Hi, since the mail server mangled my diagrams' positions, let me re-send the email trail as a PDF, and to our research notebook: http://textures.posthaven.com — Xin Wei

__________________________________________________________________________________
http://improvisationalenvironments.weebly.com  Feb 15 - March 7, 2014, Matthews iStage
__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts / Director • Synthesis Center / ASU
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________