Re: [Synthesis] re. LIGHTING and RHYTHM: > 100-dimensional soundscape

Here are my 2 cents, for what they're worth from a student of musicology (~1.2 cents). 

I’m fortunate enough to have seen this piece (Poème Symphonique) live before; the experience was very rich. Imagine the musicologist’s problem, then: how to analyze this music? How to elucidate something of the structure? of the timbre? of the space?

Indeed, this kind of work shows the limitations of graphic analysis (and, even more so, of spectrograms). This is an unsolved (and seemingly often unnoticed) problem in music theory / musicology. Although advocates of spectrographic analysis have successfully escaped the lattice-based structures of traditional Western notation and opened up analysis to fine nuances of timbral texture, they remain chained to their x-axis. What’s the musicologist’s solution? (What’s the art researcher’s solution?)

More sonic imaging? If we were to experiment with mapping other parameters to the x- and y-axes, or with producing self-similarity matrices, we might get some interesting results. Then the task is to determine whether this actually elucidates anything about a composition (or just about that realization of the composition? or just about a recording of a realization of the composition? . . . “music is work” [Cage]).
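To make this concrete, here is a rough sketch (Python, assuming librosa and matplotlib; the audio file name and the choice of MFCCs are placeholders, not a recommendation) of the kind of self-similarity matrix I mean. Both axes are analysis frames, i.e. time against time, so the image shows where material recurs rather than how the spectrum evolves along a single time axis.

# Sketch: self-similarity matrix of one recording of one realization.
import librosa
import numpy as np
import matplotlib.pyplot as plt

y, sr = librosa.load("realization.wav", mono=True)   # placeholder file name

# MFCCs as a rough timbral descriptor; chroma or spectral contrast would
# foreground different aspects of the sound.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)

# Normalize each frame, then compare every frame with every other frame.
mfcc = mfcc / (np.linalg.norm(mfcc, axis=0, keepdims=True) + 1e-9)
ssm = mfcc.T @ mfcc   # cosine similarity between all pairs of frames

plt.imshow(ssm, origin="lower", cmap="magma")
plt.xlabel("frame")
plt.ylabel("frame")
plt.title("Self-similarity of one recording")
plt.show()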

For the purpose of documentation and the proliferation of this way of (non-teleological?) thinking to scholars and artists who can’t simply read Max, C, Processing (etc.), I wonder whether “open-form” compositions are most effectively analyzed textually. I start thinking about Garth’s paper, "Pools, Pixies, and Potentials" (http://www.activatedspace.com/Research/papers/files/poolsandpixies_isea.pdf), wherein he uses flowcharts and hand-drawn figures to let the reader parse out the various permutations of “places” within the vector field he constructs. Certainly, the text there is clear and the figures intuitive. The form of the piece is reflected well; still, we are mostly in the dark about how the music sounds (this is not a dig at the author, of course; I don’t think that was the intent of the paper). An easy solution would be to do a spectrographic analysis of one performance, identify the vectors on the spectrogram, and present that in tandem with the flowcharts.

I am finding there is a lot to be unpacked here (or maybe I’m trying to unpack too much). The goals of the music theorist (or media-art theorist) may differ in scope from Synthesis’ goals (epistemology vs. phenomenology?). Wehinger, who created the graphic score of Artikulation, wanted to make a fixed-media piece more accessible. However, if we go back to Earle Brown (to whom Garth looks in his paper), his graphic SCORES (inspired by Calder) do not necessarily display time on the x-axis. Why, then, do most instances of (musical) graphic analysis?

(Humorously, from Walter Levin’s “ten commandments” for choosing a piece, number 4: “If [a piece] uses a new notation, try to transcribe it into conventional notation; if you succeed, forget it.”)

Certainly, there is more to be understood from Artikulation if we subtract time from our analysis, as we might do in experience. Synthesis largely has the advantage of giving people experiences instead of having to explain them. It goes without saying that no formal or textural analysis can stand in for an aesthetic, embodied experience (and it isn’t MEANT to). My question (and the motive for this large wall of text) concerns the times when Synthesis (or the TML, or other media-arts labs) doesn’t have the opportunity to share these media-art experiences but wants to communicate something subcutaneous about the art.

What tools do we have available to us? I haven’t mentioned simply using video: a mediated but experientially centered form of documentation (which, I’ve noted, is commonplace in media art and art research). In that case, to what extent is this a problem in new media? Is it only a problem in music? Is there one single tool (am I sounding too positivist?), or do we rely on giving as many angles as possible (spectrograms, flowcharts, philosophy, poetry) to create a mediated mosaic (heuristic?) of our aesthetic experience?


Garrett L. Johnson
Musicology MA candidate @ ASU  
Synthesis Center graduate research 
LOrkAs (Laptop Orchestra of Arizona State), director

re. LIGHTING and RHYTHM: > 100-dimensional soundscape


Here is a 100-dimensional soundscape

György Ligeti - Poème Symphonique for 100 metronomes


So even in the most reductive senses, we need to get out of the habit of thinking about engineering and composing time-based media as functions of one scalar parameter, nominally labelled “time.” (For an example of a waste of time, see this graphic animation of Ligeti’s Artikulation.)

But still in a materially and machinically* reproducible way.

* As Deleuze and Guattari observed, machinic ≠ mechanical.

Xin Wei

Re: who can add Percival-Tzanetakis Tempo Estimator into our O2014 toolkit?

Is there any reason to use the algorithm described in the attached paper rather than [beat~]?


which, if I remember correctly, is based on this seminal paper:


Mike


On Mon, Sep 1, 2014 at 2:29 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Hi,

For our rhythm research we need a decent tempo estimator as a fundamental tool.

Adrian sent this:


Streamlined Tempo Estimation Based on Autocorrelation and Cross-correlation With Pulses
Graham Percival, George Tzanetakis (IEEE/ACM Trans. Audio, Speech, and Language Processing, 22.12, Dec. 2014)

It’s implemented in Marsyas (C++), Python and Matlab.

Is this available as an efficient Max/MSP external so we can incorporate it into our apparatus?

If not, who can do this, this Fall, for Synthesis' RHYTHM research stream?

Re: who can add Percival-Tzanetakis Tempo Estimator into our O2014 toolkit?

Hi Xin Wei,

This is related to what I have been working on for the drum robot. If there is not an existing object then I can make one, either using Marsyas or just implementing it directly in C. Do you need it to run on audio signals gathered with [adc~], or on regular numeric data (e.g. from sensors) coming from, say, [udpreceive]?
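For orientation, here is a heavily reduced sketch (Python) of the general idea: estimate tempo from the autocorrelation of an onset-strength envelope, or of any regularly sampled stream such as sensor data. This is only in the spirit of Percival-Tzanetakis; the lag range, rates, and the synthetic test signal are illustrative assumptions, not the published algorithm or its parameters.

# Reduced sketch: tempo from autocorrelation of a regularly sampled
# "strength" signal (onset strength from audio, or a sensor stream).
import numpy as np

def estimate_tempo(envelope, frame_rate, bpm_min=50, bpm_max=210):
    """envelope: 1-D array sampled at frame_rate Hz; returns tempo in BPM."""
    x = envelope - np.mean(envelope)
    # Periodic pulses show up as autocorrelation peaks at lags equal to
    # the beat period (and its multiples).
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lag_min = int(frame_rate * 60.0 / bpm_max)
    lag_max = int(frame_rate * 60.0 / bpm_min)
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    return 60.0 * frame_rate / lag

# Example: a synthetic pulse train at ~120 BPM, sampled at 100 Hz.
fr = 100.0
t = np.arange(0, 30, 1.0 / fr)
pulses = (np.sin(2 * np.pi * 2.0 * t) > 0.99).astype(float)
print(estimate_tempo(pulses, fr))   # ~120.0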

Mike



Heartbot, Pulse Park, MacCallum+Naccarato (CNMAT, IRCAM) "Heart rate data from contemporary dancers"

(1) Heart Bot Turns Heartbeats Into Personalized Illustrations



(2)

Rafael Lozano-Hemmer, Pulse Park (2008)


Madison Square Park, NYC

(3) BUT John MacCallum and Teoma Naccarato’s challenge is subtler:

Project Title: "Heart rate data from contemporary dancers"
Abstract:
The composer John MacCallum and choreographer Teoma Naccarato propose a collaborative project that examines the use of real-time heart-rate data from contemporary dancers to drive a polytemporal composition for instrumental ensemble with live electronics.
During our residency, we will:

  1. develop and expand robust software tools that facilitate the composition and performance of polytemporal work in which tempos are driven by real-time interaction (in the case of our project, the heart rates of dancers), and
  2. examine strategies for heart rate manipulation via internal and external stimuli, including entrainment between bodily processes and music.

Designing a facile environment within which to explore this type of compositional and performative complexity will bring together a number of current research interests at IRCAM including recent developments in Antescofo, OpenMusic, and gesture following, as well as extensive work on polytemporal music conducted by MacCallum at CNMAT.
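As a purely hypothetical illustration of the basic plumbing such a project needs, here is a sketch (Python with python-osc) that turns a stream of heartbeat timestamps into a smoothed tempo and forwards it as an OSC message. The address "/dancer/1/tempo", the host/port, and the smoothing constant are assumptions made for the example, not MacCallum and Naccarato's actual design.

# Hypothetical sketch: heartbeat timestamps -> smoothed BPM -> OSC.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # placeholder host/port

ALPHA = 0.2          # exponential smoothing factor (assumption)
smoothed_bpm = None
last_beat = None

def on_heartbeat(timestamp):
    """Call once per detected heartbeat, with its time in seconds."""
    global smoothed_bpm, last_beat
    if last_beat is not None:
        ibi = timestamp - last_beat              # inter-beat interval, s
        bpm = 60.0 / ibi
        smoothed_bpm = bpm if smoothed_bpm is None else (
            ALPHA * bpm + (1 - ALPHA) * smoothed_bpm)
        client.send_message("/dancer/1/tempo", smoothed_bpm)
    last_beat = timestamp

# Example: simulate a heart rate drifting upward from ~60 BPM.
t = 0.0
for i in range(60):
    on_heartbeat(t)
    t += 60.0 / (60 + 0.5 * i)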

In collaboration with the Musical Representations Team as part of the EFFICAC Project

who can add Percival-Tzanetakis Tempo Estimator into our O2014 toolkit?

Hi,

For our rhythm research we need a decent tempo estimator as a fundamental tool.

Adrian sent this:

Streamlined Tempo Estimation Based on Autocorrelation and Cross-correlation With Pulses
Graham Percival, George Tzanetakis (IEEE/ACM Trans. Audio, Speech, and Language Processing, 22.12, Dec. 2014)

It’s implemented in Marsyas (C++), Python and Matlab.

Is this available as an efficient Max/MSP external so we can incorporate it into our apparatus?

If not, who can do this, this Fall, for Synthesis' RHYTHM research stream?

Synthesis lighting research cluster / responsive environments

Dear Chris, Omar,

In the responsive environments research area:

Let’s start gathering our notes into a Posthaven — for now use 

Kristi can help summarize once a fortnight or so...

__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts • Synthesis • ASU • +1-480-727-2146
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

[Synthesis] Portals needed

Hi!

We need portals supporting concurrent conversation via common spaces like tabletops + audio… (no video!),
not talking heads. It may be useful to have audio muffle as a feature: continuous streamed audio, but the default is to “content-filter” the speech. (Research in the 1970s … showed which spectral filters to apply to speech to remove “semantics” but keep enough affect…)
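One way to prototype such a content filter offline is simply to low-pass the speech so that the words become unintelligible while pitch contour and loudness, which carry much of the affect, survive. A minimal sketch (Python/SciPy) follows; the 400 Hz cutoff and the file names are assumptions for illustration, not values from the thread or from that research.

# Sketch: "muffle" speech by low-pass filtering, keeping prosody.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, speech = wavfile.read("portal_voice.wav")   # placeholder file
speech = speech.astype(np.float64)

# 4th-order Butterworth low-pass at an assumed 400 Hz cutoff.
sos = butter(4, 400.0, btype="lowpass", fs=rate, output="sos")
muffled = sosfiltfilt(sos, speech, axis=0)

# Normalize and write out for listening tests.
wavfile.write("portal_voice_muffled.wav", rate,
              (muffled / np.max(np.abs(muffled))).astype(np.float32))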

Maybe we can invite Omar to work with Garrett or Byron or Ozzie to install Evan’s version in the Brickyard, Stauffer, and the iStage, as a side effect of the Animated Spaces: Amorphous Lighting Network workshop with Chris Ziegler and Synthesis researchers.

BUT we should have portals running now, ideally on my desk and on a Brickyard surface.
And that workshop remains to be planned (October??).
And possibly also running on the two panel displays re-purposed from Il Y A, now moved to Stauffer...

Xin Wei


__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts • Synthesis • ASU • +1-480-727-2146
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

Bill Forsythe: Nowhere and Everywhere at the same time No. 2 (pendulums)

Two works by two choreographers, Dimitris Papaioannou and Bill Forsythe,
with very different and interesting approaches to causality and temporal texture…

- Xin Wei

On Jul 20, 2014, at 12:55 AM, Michael Montanaro <michael.montanaro@concordia.ca> wrote:

A beautiful choreographed work: NOWHERE (2009) / central scene / for Pina
from Dimitris Papaioannou



Begin forwarded message:

From: "Vangelis Lympouridis" <vl_artcode@yahoo.com>
Date: July 22, 2014 at 8:39:27 AM GMT+2
To: "Adrian Freed" <Adrian.Freed@asu.edu>, "'Sha Xin Wei'" <shaxinwei@gmail.com>, "'John MacCallum'" <john@cnmat.berkeley.edu>

When you have a second, please watch this 2-minute video of Forsythe’s piece Nowhere and Everywhere at the same time No. 2.

I think it is SO to the core of what we are reasoning about… :)

Vangelis Lympouridis, PhD
Visiting Scholar, School of Cinematic Arts, University of Southern California
Senior Research Consultant, Creative Media & Behavioral Health Center, University of Southern California
Whole Body Interaction Designer
Tel: +1 (415) 706-2638