Re: who can add Percival-Tzanetakis Tempo Estimator into our O2014 toolkit?

Hi Xin Wei,

This is related to what I have been working on for the drum robot. If there is not an existing object, I can make one, either using Marsyas or implementing it directly in C. Do you need it to run on audio signals gathered with [adc~], or on regular numeric data (e.g. from sensors) coming from, say, [udpreceive]?
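
For scoping, here's a rough sketch of the paper's core idea (onset-strength autocorrelation, then scoring candidate lags against pulse trains) in Python/numpy. This is my paraphrase, not the Marsyas reference code; the window sizes, BPM range, and scoring are illustrative guesses:

```python
# Sketch of autocorrelation + pulse-train tempo estimation, in the
# spirit of Percival-Tzanetakis. NOT their reference implementation.
import numpy as np

def onset_strength(x, n_fft=1024, hop=128):
    """Half-wave-rectified spectral flux of a mono signal x."""
    frames = [x[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(x) - n_fft, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.maximum(0.0, np.diff(mags, axis=0)).sum(axis=1)

def estimate_tempo(x, sr, bpm_range=(50, 210), n_fft=1024, hop=128):
    oss = onset_strength(x, n_fft, hop)
    oss = oss - oss.mean()
    oss_sr = sr / hop                                        # OSS sample rate
    ac = np.correlate(oss, oss, mode="full")[len(oss) - 1:]  # lags >= 0
    lags = np.arange(1, len(ac))
    bpms = 60.0 * oss_sr / lags
    valid = lags[(bpms >= bpm_range[0]) & (bpms <= bpm_range[1])]
    candidates = valid[np.argsort(ac[valid])[-10:]]  # top autocorrelation lags
    def pulse_score(lag):
        # Cross-correlate rectified OSS with an impulse train at this
        # lag, taking the best phase.
        rect = np.maximum(oss, 0.0)
        return max(rect[phase::lag].sum() for phase in range(lag))
    best = max(candidates, key=pulse_score)
    return 60.0 * oss_sr / best
```

On a steady drum loop this should land near the true tempo or an octave of it; the paper's later stages handle the doubling/halving decision, which this sketch omits.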

Mike


On Mon, Sep 1, 2014 at 2:29 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Hi,

For our rhythm research we need a decent tempo estimator as a fundamental tool.

Adrian sent this:


Streamlined Tempo Estimation Based on Autocorrelation and Cross-correlation With Pulses
Graham Percival and George Tzanetakis (IEEE/ACM Trans. Audio, Speech, and Language Processing, 22(12), Dec. 2014)

It’s implemented in Marsyas (C++), Python and Matlab.

Is this available as an efficient Max/MSP external so we can incorporate it into our apparatus?

If not, who can do this this Fall for Synthesis' RHYTHM research stream?

Heartbot, Pulse Park, MacCallum+Naccarato (CNMAT, IRCAM) "Heart rate data from contemporary dancers"

(1) Heart Bot Turns Heartbeats Into Personalized Illustrations



(2) Rafael Lozano-Hemmer, Pulse Park (2008)

Madison Square Park, NYC

(3) BUT John MacCallum and Teoma Naccarato’s challenge is subtler:

Project Title: "Heart rate data from contemporary dancers"
Abstract:
The composer John MacCallum and choreographer Teoma Naccarato propose a collaborative project that examines the use of real-time heart rate data from contemporary dancers to drive a polytemporal composition for instrumental ensemble with live electronics.
During our residency, we will:

  1. develop and expand robust software tools that facilitate the composition and performance of polytemporal work, in which tempos are driven by real-time interaction—in the case of our project, heart rates of dancers, and
  2. examine strategies for heart rate manipulation via internal and external stimuli, including entrainment between bodily processes and music.

Designing a facile environment within which to explore this type of compositional and performative complexity will bring together a number of current research interests at IRCAM including recent developments in Antescofo, OpenMusic, and gesture following, as well as extensive work on polytemporal music conducted by MacCallum at CNMAT.

In collaboration with the Musical Representations Team as part of the EFFICAC Project


Synthesis lighting research cluster / responsive environments

Dear Chris, Omar,

In the responsive environments research area:

Let’s start gathering our notes into a Posthaven — for now use 

Kristi can help summarize once a fortnight or so...

__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts • Synthesis • ASU • +1-480-727-2146
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

[Synthesis] Portals needed

Hi!

We need portals supporting concurrent conversation via common spaces like tabletops + audio… (no video!), not talking-heads. It may be useful to have audio muffle as a feature: continuous stream audio, but the default is to “content-filter” the speech. (Research in the 1970s … showed which spectral filters to apply to speech to remove “semantics” but keep enough affect…)
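
A minimal sketch of such a content filter, assuming a plain low-pass around 400 Hz (the cutoff and the use of scipy are my assumptions, not the parameters of those 1970s studies):

```python
# Sketch: low-pass "muffle" that removes most speech intelligibility
# while keeping prosody/affect. 400 Hz cutoff is a guess; tune by ear.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

def muffle(infile, outfile, cutoff_hz=400.0, order=8):
    rate, x = wavfile.read(infile)
    x = x.astype(np.float64)
    sos = butter(order, cutoff_hz, btype="low", fs=rate, output="sos")
    y = sosfilt(sos, x, axis=0)                # filter each channel
    y /= max(1e-9, float(np.max(np.abs(y))))   # normalize to avoid clipping
    wavfile.write(outfile, rate, (y * 32767).astype(np.int16))

muffle("speech.wav", "muffled.wav")   # hypothetical filenames
```

For the portals this would of course run as a stream process (e.g. [biquad~] in Max) rather than on files, but the filter shape is the point here.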

Maybe we can invite Omar to work with Garrett or Byron or Ozzie to install Evan’s version in the Brickyard, Stauffer, and the iStage as a side effect of the “Animated Spaces: Amorphous Lighting Network” workshop with Chris Ziegler and Synthesis researchers.

BUT we should have portals running now, ideally on my desk and on a Brickyard surface.
And that workshop remains to be planned (October??).
And possibly running also on the two panel displays re-purposed from Il Y A — now moved to Stauffer...

Xin Wei


__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts • Synthesis • ASU • +1-480-727-2146
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

Bill Forsythe: Nowhere and Everywhere at the same time No. 2 (pendulums)

Two works by two choreographers, Dimitris Papaioannou and Bill Forsythe,
with very different and interesting approaches to causality and temporal texture…

- Xin Wei

On Jul 20, 2014, at 12:55 AM, Michael Montanaro <michael.montanaro@concordia.ca> wrote:

A beautiful choreographed work: NOWHERE (2009) / central scene / for Pina
from Dimitris Papaioannou



Begin forwarded message:

From: "Vangelis Lympouridis" <vl_artcode@yahoo.com>
Date: July 22, 2014 at 8:39:27 AM GMT+2
To: "Adrian Freed" <Adrian.Freed@asu.edu>, "'Sha Xin Wei'" <shaxinwei@gmail.com>, "'John MacCallum'" <john@cnmat.berkeley.edu>

When you have a second, please watch this 2-minute video of Forsythe’s piece Nowhere and Everywhere at the Same Time, No. 2.

I think it is SO to the core of what we are reasoning about… :)

Vangelis Lympouridis, PhD
Visiting Scholar, School of Cinematic Arts
University of Southern California

Senior Research Consultant, Creative Media & Behavioral Health Center
University of Southern California

Whole Body Interaction Designer
Tel: +1 (415) 706-2638

PDF of: calibration etc.; rhythm; Synthesis CFP

Hi, since the mail server mangled my diagrams’ positions, let me re-send the email trail as a PDF, and to our research notebook: http://textures.posthaven.com - Xin Wei

__________________________________________________________________________________
http://improvisationalenvironments.weebly.com / Feb 15 - March 7, 2014, Matthews iStage
__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts / Director • Synthesis Center / ASU
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

Navid Navab: Re: correlation is a vast and nebulous space


I agree with Doug’s caution about the problem of ignoring the “dependent” variables — the values f[t] — and paying attention only to “zero”-crossings. As Adrian would point out as well, this already encodes many assumptions about what counts as a significant event. For example, that’s the basic problem with the “pluck” detector that Navid has coded and used.

This is a big reduction of how I use plucks and triggers to ornament continuous events. In my first rough draft of GestureBending Principles (found here: http://gesturebending.weebly.com/principles-of-gesture-bending.html) I clearly stated that triggers and other event and onset detectors are used solely to modulate continuous data, with the goal of ornamenting their perceived formal structures, which are driven continuously and often by the trigger’s dependent variable.

What is being referred to here as “pluck” is, I believe, our trigger detector with hysteresis and debounce... Out of context this is just a very, very simple element that people in our lab and elsewhere have put to different uses. Miller’s bonk~ (onset detector) partially uses this, as do Vangelis’s triggers, etc. We have in the recent past used this data to detect onsets and feed the onset times into our rhythm kit. Contextually meaningful modal bracketing starts from thoughtful feature extraction, and the complementary rhythm kit provides a method for viewing, analyzing, and manipulating the detected event onsets.
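
For readers outside the lab, a minimal sketch of a trigger with hysteresis and debounce of the kind described above (the thresholds and timings are illustrative assumptions, not our actual settings):

```python
# Sketch: onset trigger with hysteresis and debounce.
# Fires when the signal rises above `high`; re-arms only after it falls
# below `low`; ignores re-fires within `debounce_s` seconds.
class Trigger:
    def __init__(self, low=0.2, high=0.5, debounce_s=0.05):
        self.low, self.high, self.debounce_s = low, high, debounce_s
        self.armed = True
        self.last_fire = float("-inf")

    def step(self, value, t):
        """Feed one sample `value` at time `t` (seconds); True on onset."""
        if self.armed and value > self.high and t - self.last_fire > self.debounce_s:
            self.armed = False          # disarm until signal falls below `low`
            self.last_fire = t
            return True
        if value < self.low:
            self.armed = True           # hysteresis: re-arm only below `low`
        return False
```

The onset times something like this emits are what get fed into the rhythm kit; everything interpretive happens downstream.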

Quantum Experiment Shows How Time ‘Emerges’ from Entanglement

As I’ve been saying: this is more rhetorical fuel for why I want the Einstein’s Dreams workshop as well as the Improvisational Environments workshop to host apparatus that tries to avoid “absolute” time-indexes.

Quantum Experiment Shows How Time ‘Emerges’ from Entanglement

Time is an emergent phenomenon that is a side effect of quantum entanglement, say physicists. And they have the first experimental results to prove it



https://medium.com/the-physics-arxiv-blog/d5d3dc850933

Re: correlation is a vast and nebulous space || was Re: Intel Announces Edison and a Wearables Contest

I agree with Doug’s caution about the problem of ignoring the “dependent” variables — the values f[t] — and paying attention only to “zero”-crossings. As Adrian would point out as well, this already encodes many assumptions about what counts as a significant event. For example, that’s the basic problem with the “pluck” detector that Navid has coded and used.

(More precisely, for a fixed y, the intervals in the inverse image of y under f, i.e. f^(-1)[y], assuming f is C^0.)

But I have a fundamental reason, which is to deliberately lever us away from mono-sense-modality-ness. It’s a very crude but hopefully effective method to get us to pay attention to the phenomenology of temporality.

Keep in mind the modal bracketing that’s being performed by looking at intervals, as Julian’s kit provides.

There are more sophisticated approaches — as Pavan pointed out in an AME seminar last month, this is well known from signal processing 101 as passing to the frequency domain. That raises other fundamental issues when the signal cannot be assumed to have a significant periodic component.
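
To make both the move and the caveat concrete, a small sketch (plain numpy; the synthetic signals are made-up examples): the spectral peak is meaningful for a quasi-periodic signal and essentially arbitrary for an aperiodic one.

```python
# Sketch: estimate a dominant period by passing to the frequency domain.
# Meaningful only when the signal has a significant periodic component.
import numpy as np

def dominant_period(x, rate):
    """Return the period (s) of the strongest spectral component of x."""
    x = x - np.mean(x)                      # remove DC so bin 0 doesn't win
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / rate)
    peak = np.argmax(spectrum[1:]) + 1      # skip the DC bin
    return 1.0 / freqs[peak]

rate = 100.0                                     # 100 samples/s
t = np.arange(0.0, 10.0, 1.0 / rate)
periodic = np.sin(2 * np.pi * 1.25 * t)          # 1.25 Hz, i.e. 0.8 s period
print(dominant_period(periodic, rate))           # ~0.8
aperiodic = np.cumsum(np.random.randn(len(t)))   # random walk: no true period
print(dominant_period(aperiodic, rate))          # prints a number; means little
```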

And so it goes. Meanwhile I say, let’s get crude and palpably relevant experiments working first, palaver later! Xin Wei