Notes for the Lighting and Rhythm Residency, Nov (13)17-26

Chris, Garrett, Julian, Omar, Chris Z, Evan, Byron, Prashan, Althea, Mike B, Ian S, Aniket, et al.

This is a preliminary note to sound out who is down for what for the coming residency on Lighting and Rhythm (LRR).  

The goal is to continue the work on temporality from the IER last February–March, and this time to seriously and experimentally muck with your sense of time by modulating lighting or your vision as you physically move. The emphasis is first-person experience, NOT designing for a spectator.

We need to identify a more rigorous scientific direction for this residency. I have been asking people for ideas, and I'll go ahead and decide soon!

Please think carefully about the following.

The idea is to invite Chris and his students to work on site in the iStage, and to have those of us who are hacking time via lighting play in parallel with Chris. Pavan & students and interested scientists/engineers should be explicitly invited to kibitz.


Lighting and Rhythm
The way things are shaping up — we are gathering some gadgets to prepare.

Equipment requested (some already installed thanks to Pete Ozzie and TML):
• Ozone media system in iStage
• Chris Ziegler's Wald (Forest) system (MUST be liftable out of the way within minutes as necessary — can an inexpensive motorized solution be installed?)
• 3 × 6 (?) grid of light fixtures with RGB gels, beaming onto the floor
• IR illuminators and IR-pass camera for tracking
• Robe Robin MiniMe moving light / projector
• Hazer (?)
• Strobe + diffuser (bounce?)
• Oculus DK1 (Mike K knows)
• Google Glass (Chris R can ask Cooper, Ruth @ CSI)

We need to make sure we have a few rich instruments (NOT one-off hacked tableaux!) coded up ahead of time — hence the call for Max-literate students who would like to try out what we have and adapt it for play in the LRR by November.

Note 1:
Let's be sure to enable multiplexing of the iStage to permit two other groups to work in parallel:
• Video portals / windows: Prashan, Althea Pergakis, Jen Weiler
• Shadow puppetry: Prashan working with Byron

Note 2:
Garth's Singing Bowls are there. Think about how to integrate such field effects.
Mike, can you provide a Max patch to control them — ideally via OSC, but at least to fade them up/down without having to physically touch any of the Singing Bowl hardware?
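For example, something along these lines (a minimal sketch, assuming the patch exposes OSC; the host, port, and /bowl/<id>/gain address scheme are hypothetical placeholders, with pythonosc standing in for whatever transport we settle on):

# Hypothetical OSC fade control for the Singing Bowls. The address
# scheme /bowl/<id>/gain and port 7400 are placeholders, not a real API.
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)  # assumed host/port of the Max patch

def fade(bowl_id, start, end, seconds, steps=50):
    """Ramp one bowl's gain linearly from start to end over `seconds`."""
    for i in range(steps + 1):
        gain = start + (end - start) * i / steps
        client.send_message("/bowl/%d/gain" % bowl_id, gain)
        time.sleep(seconds / steps)

fade(1, 0.0, 0.8, seconds=5.0)  # fade bowl 1 up without touching the hardware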

Note 3:
This info should go on the lightingrhythm.weebly.com experiment website, which the LRR leads should create on Monday unless someone has a better solution — it must be editable by the researchers and experiment leads themselves. Clone it from http://improvisationalenvironments.weebly.com !

Xin Wei

On Sep 4, 2014, at 8:53 AM, Adrian Freed <adrian@adrianfreed.com> wrote:

I am afraid I can't be very helpful here. I don't do MIR work myself. The field for the most part does
offline analyses of large data sets using musicologically naive Western musical concepts of pitch and rhythm.

One exception to the realtime/offline choice is from our most recent graduate student to work on the beat-tracking problem, Eric Battenberg. Here is his dissertation: http://escholarship.org/uc/item/6jf2g52n#page-3
There is interesting machine learning going on in that work, but it presumes that one can make a reliable onset detector, which is a reasonable (but narrow) assumption for certain percussion sounds and drumming practices.

The questions of phase and "in sync" raised below interest me greatly. There is no ground truth to the beat (up or down or on the "beat"). I remember being shocked recently to discover that a bunch of research on dance/music entrainment relied, as a reference, on hand-labeled visual beat markings from "expert listeners in the computer music lab next door". Various concepts such as "perceptual onset time" have been developed to sufficiently complicate this question and to explain the difficulty people have reaching consensus on musical event timing and relating a particular beat measurement to features of the acoustic signals.
Even a "simple" case, bass and drums, is extremely difficult to unravel. The bass being a low-frequency instrument complicates the question of "onset", or the moment of the beat. The issue of who in this pair is determining the tempo is challenging, and the usual handwaving that the tempo is an emergent coproduction of the performers is not very helpful in itself in elaborating the process or identifying which features of the action and sound are relevant to the entrainment. My guess is that we will find models like the co-orbital arrangement of Saturn's moons Epimetheus and Janus.
What are the system identification tools to reveal these sorts of entrainment structures? Can this be done from the sound alone, or do we have to model the embodied motions that produce the sounds?
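One first-pass tool (an editorial sketch, not Adrian's proposal): cross-correlate the two players' onset-strength envelopes and read the lag of the correlation peak as a lead/lag estimate, assuming the envelopes are already extracted on a common frame clock:

# Minimal sketch: estimate which of two onset-strength envelopes leads,
# via the lag of their cross-correlation peak. env_a and env_b are
# assumed to be 1-D numpy arrays sampled at frame_rate frames/second.
import numpy as np

def lead_lag(env_a, env_b, frame_rate):
    """Lag of env_a relative to env_b in seconds; positive means a lags b."""
    a = (env_a - env_a.mean()) / (env_a.std() + 1e-12)
    b = (env_b - env_b.mean()) / (env_b.std() + 1e-12)
    xc = np.correlate(a, b, mode="full")       # all lags, -(len(b)-1)..len(a)-1
    lag_frames = np.argmax(xc) - (len(b) - 1)  # index of zero lag is len(b)-1
    return lag_frames / frame_rate

This says nothing yet about who is determining the tempo (a single global lag is far too crude for the Epimetheus/Janus picture), but a windowed version of it gives a time-varying lead/lag signal to start from.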


NOTE from Adrian, XW, and Mike Krzyzaniak on the Percival-Tzanetakis Tempo Estimator:

On Sep 3, 2014, at 6:38 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Phase: I'm interested both in the convention of syncing on peaks
and in the larger range of temporal entrainment phenomena that Adrian has identified with suggestive terminology.
In practice, I would apply several different measures in parallel.

Yes, it would be great to have a different measure. For example, one that detects when a moderate number (dozens to 100) of irregular rhythms have a large number of simultaneous peaks. This is a weaker criterion than being in phase, and it does not require periodicity.
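A minimal sketch of that weaker criterion (the window and count threshold are illustrative choices, and the streams are assumed to be onset-strength arrays on a shared frame clock):

# Count how many of N irregular onset streams peak inside the same short
# window, and flag the frames where that count crosses a threshold.
import numpy as np
from scipy.signal import find_peaks

def coincidence(streams, window=3, min_count=20):
    """Frames where at least min_count streams peak within +/- window frames."""
    n_frames = len(streams[0])
    counts = np.zeros(n_frames, dtype=int)
    for s in streams:
        peaks, _ = find_peaks(s, prominence=s.std())  # peaks of one irregular rhythm
        near = np.zeros(n_frames, dtype=bool)
        for p in peaks:
            near[max(0, p - window):p + window + 1] = True
        counts += near                                # each stream adds at most 1 per frame
    return np.flatnonzero(counts >= min_count)

Note that nothing here assumes the streams are periodic or mutually phase-locked; the measure only asks how often many of them happen to peak together.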

Xin Wei

Re: [Synthesis] Transmutations Online

Hi,

In principle, I do know how to code this in HTML5 using the Web Audio API. However, I don't think I have much free time to work on it at the moment, although I would be willing to explain it to someone with good enough JavaScript skills to implement it.

Mike

On Wed, Sep 24, 2014 at 7:59 PM, Sha Xin Wei <synthesis.operations@gmail.com> wrote:
YES YES!

hence: Transmutations Online. I attach the proposal.

Re: [Synthesis] Transmutations Online

Thanks Garth, Todd, and Xin Wei for these perspectives. 

What can be said about analysis as subjective (possibly experiential) readings of music? Or as suggested modes of listening?
I'm thinking now towards Western tonal music. I forget exactly who it was (maybe it was just my undergraduate set-theory prof), but this person suggested that an analysis be considered one suggestive impression. The age-old question of how to view passing 6/4 cadences (is it a tonic chord in second inversion, or V with a 6-4 → 5-3 suspension?) comes to mind.

Todd, I really like your fifth point. Is spectrographic analysis an algorithmic experience of a music/sound recording? Fourier's? Maybe it is important to note that there are discrepancies between how different entities see music — humans and computers included. The spectrographic paradigm can be novel (and maybe unhelpful) to the Roman-numeral or Schenkerian analyst; conversely, a consideration of the relationship between consonance/dissonance and tension/resolution might be a pleasant reminder to the sound artist (or a nostalgic flashback to undergrad).

I'm curious to hear your opinions, as I'm contemplating this in relation to my thesis research. I'm contextualizing, analyzing, and theorizing the early improvisations in EEG music made by Richard Teitelbaum, Alvin Lucier, and David Rosenboom. My thesis is that, in their descriptions of the performance practice of this music, each composer suggests a different philosophy of embodiment.

My approach to analyzing these works is currently mosaic in nature:

I am thinking that I will present the original electronic circuit/schematic published with each of these composers' systems (and transcribe it into Max, for the Millennials) to show the sonic possibilities of each musical system. I also want to incorporate spectrographic renderings of different recordings to show that each realization is unique. I do feel I have to consider my audience, who might benefit from a demonstrative representation of the different performative possibilities within "system" composition.

These approaches cover showing the piece as an open-form system, and they also speak (from at least an algorithmic perspective) to variations in timbre, form, etc. This is important background for knowing something about the music. What is lacking, of course, is an analysis which actually supports my thesis — which leads me back to one of my initial questions:

Is (in this case, the performer's) first-person text the best method for expressing/analyzing the phenomenology of experiential art? "Off-line" prose seems to de-stratify time in much the same way that Ligeti does, as Xin Wei and Garth pointed out.

Any thoughts are of course welcome — apologies for diverging from the Synthesis thread into my own work, but I hope that this discussion will feed back into related Synthesis work. 

Garrett L. Johnson
Musicology MA candidate @ ASU
Synthesis Center graduate research
LOrkAs (Laptop Orchestra of Arizona State), director

Re: [Synthesis] Transmutations Online

This is very interesting — what this tells me is that there’s a discrepancy between how composers think musical material and how most computer programs represent sound.   This discrepancy may be as significant as the discrepancy between how mathematicians think Riemannian geometry and how computer programs represent 3D graphics.  The former is impossible to represent in the latter, even in principle.  It’s possible to demonstrate this fairly rigorously in the case of differential geometry and analysis, but maybe the discrepancy for music is not obvious to non-practitioners.

Xin Wei


__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts • Synthesis • ASU • +1-480-727-2846
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

Re: [Synthesis] Transmutations Online

Hi

I have been reading this discussion and wanted to pose some thoughts. They may not be fully formed yet…

1. To say that one should avoid the habit of  "composing time-based media as functions of one scalar parameter, nominally labelled 'time'" is not the same as saying time can somehow be removed from experience.

2. While many composers may talk as if they are "composing time-based media as functions of one scalar parameter, nominally labelled 'time'," they are in fact not doing so, and much of the music "analysis" (at least of the past 200 or so years) to which music is subjected in the West does not in fact look at music as experienced in time.

3. Music analysis has at best a value in providing people with metaphors. As Nicholas Cook has argued "it is not an account of how people actually hear pieces of music, but a way of imagining them." He further states that "a musical culture is, in essence, a repertoire of means for imagining music; it is the specific pattern of divergences between the experience of music on the one hand, and the images by means of which it is represented on the other, that gives a musical culture its identity." 

4. In part because of 3, I would be wary of relying too heavily on musically derived theories of rhythm and time. Many are quite complex and intricate, but that should not be mistaken for how they map to experience.

5. Spectrograms are to timbre what MRI scans are to thinking. 



On Sep 24, 2014, at 7:59 PM, Sha Xin Wei <synthesis.operations@gmail.com> wrote:

YES YES!

hence: Transmutations Online. I attach the proposal.

--
Research Stream: http://synthesis@posterous.com
Home: http://synthesis.ame.asu.edu

[Synthesis] Transmutations Online

YES YES!

hence: Transmutations Online. I attach the proposal.


I'm talking with various possible allies, or sibling projects, in Copenhagen, Moscow, and Beijing…

But Dehlia and I would like to push forward our own issue #2, which can take forms appropriate to the theme and contributions, quite different from the inaugural issue:


We wanted to use Jhave Johnston’s MUPS code, which Jhave graciously offered years ago.  Now it may be good to recode it in HTML 5. (Who is good enough to do that — Jen?)



__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts • Synthesis • ASU • +1-480-727-2846
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

Re: [Synthesis] re. LIGHTING and RHYTHM: > 100-dimensional soundscape

Here are my 2 cents, for what they're worth from a student of musicology (~1.2 cents). 

I'm fortunate enough to have seen this piece (Poème symphonique) performed live; the experience was very rich. Imagine the musicologist's problem, then: how to analyze this music? How to elucidate something of the structure? Of the timbre? Of the space?

Indeed, this kind of work shows the limitations of graphic analysis (and, furthermore, of spectrographs). This is an unsolved (and seemingly often unobserved) problem in music theory / musicology. Although advocates of spectrographic analysis have successfully escaped the lattice-based structures of traditional Western notation and opened up analysis to fine nuances of timbral texture, they remain chained to their x-axis. What's the musicologist's solution? (What's the art researcher's solution?)

More sonic imaging? If we were to experiment with mapping other parameters to the x and y axes, or with producing self-similarity matrices, we might get some interesting results. Then the task is to determine whether this actually elucidates anything about a composition (or just that realization of a composition? or just an analysis of a recording of a realization of a composition? . . . "music is work" [Cage]).
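For what it's worth, a self-similarity matrix is cheap to try. A minimal sketch, assuming librosa and a recording of one realization (the filename and the choice of MFCC features are illustrative):

# Build a frame-by-frame self-similarity matrix from MFCC features.
import numpy as np
import librosa

y, sr = librosa.load("realization.wav")             # hypothetical recording
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # one feature vector per frame

# Cosine similarity between every pair of frames: repeated material shows
# up as off-diagonal stripes, and no single time axis is privileged.
feats = mfcc / (np.linalg.norm(mfcc, axis=0, keepdims=True) + 1e-12)
S = feats.T @ feats                                 # S[i, j] compares frames i and j

(librosa also ships a more careful variant of this as librosa.segment.recurrence_matrix.)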

For the purpose of documenting and proliferating this way of (non-teleological?) thinking to scholars and artists who can't simply read Max, C, Processing (etc.), I wonder if "open-form" compositions are most effectively analyzed textually. I start thinking about Garth's paper, "Pools, Pixies, and Potentials" (http://www.activatedspace.com/Research/papers/files/poolsandpixies_isea.pdf), wherein he uses flowcharts and hand-drawn figures to let the reader parse out the various permutations of "places" within his constructed vector field. Certainly, the text here is clear and the figures intuitive. The form of the piece is reflected here; still, we are mostly in the dark about how the music sounds (this is not a dig on the author, of course; I don't think that was the intent of the paper). An easy solution would be to do a spectrographic analysis of one performance, identify the vectors on the spectrograph, and present that in tandem with the flowcharts.

I am finding there is a lot to be unpacked here (or maybe I'm trying to unpack too much). The goals of the music theorist (or media art theorist) may be different in scope from Synthesis' goals (epistemology vs. phenomenology?). Wehinger, who created the graphic score of Artikulation, wanted to make a fixed-media piece more accessible. However, if we go back to Earle Brown (to whom Garth looks in his paper), his graphic SCORES (inspired by Calder) do not necessarily display time on the x-axis. Why, then, do most instances of (musical) graphic analysis?

(humorously - from Walter Levin’s “ten commandments” for choosing a piece — number 4: “If [a piece] uses a new notation, try to transcribe it into conventional notation; if you succeed, forget it”) 

Certainly, there is more to be understood from Artikulation if we subtract time from our analysis, as we might do in experience. Synthesis largely has the advantage of giving people experiences instead of having to explain them. It goes without saying that no formal or textural analysis can stand in for an aesthetic, embodied experience (and isn't MEANT to). My question (and the motive for this large wall of text) concerns the times when Synthesis (or the TML, or other media arts labs) doesn't have the opportunity to share these media-art experiences but wants to communicate something subcutaneous about the art.

What tools do we have available to us? I haven't mentioned simply using videos, a mediated but experientially centered form of documentation (which, I've noted, is commonplace in media art and art research). In that case, to what extent is this a problem in new media? Is this only a problem in music? Is there one single tool (am I sounding too positivist?), or do we rely on giving as many angles as possible (spectrographs, flowcharts, philosophy, poetry) to create a mediated mosaic (heuristic?) of our aesthetic experience?


Garrett L. Johnson
Musicology MA candidate @ ASU
Synthesis Center graduate research
LOrkAs (Laptop Orchestra of Arizona State), director

re. LIGHTING and RHYTHM: > 100-dimensional soundscape


Here is a 100-dimensional soundscape

György Ligeti - Poème Symphonique for 100 metronomes


So even in the most reductive sense, we need to get out of the habit of thinking about engineering and composing time-based media as functions of one scalar parameter, nominally labelled "time." (For an example of a waste of time, see this graphic animation of Ligeti's Artikulation.)

But still in a materially and machinically* reproducible way.

* As Deleuze and Guattari observed, machinic ≠ mechanical.

Xin Wei

Re: who can add Percival-Tzanetakis Tempo Estimator into our O2014 toolkit ?

Is there any reason to use the algorithm described in the attached paper rather than [beat~]


which, if I remember correctly, is based on this seminal paper:


Mike


On Mon, Sep 1, 2014 at 2:29 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Hi,

For our rhythm research we need a decent tempo estimator as a fundamental tool.

Adrian sent this:


Streamlined Tempo Estimation Based on Autocorrelation and Cross-correlation With Pulses
Graham Percival, George Tzanetakis (Trans. Audio, Speech, and Language Processing, 22.12, Dec. 2014)

It’s implemented in Marsyas (C++), Python and Matlab.

Is this available as an efficient Max/MSP external so we can incorporate it into our apparatus?

If not, who can do this, this fall, for Synthesis' RHYTHM research stream?
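In the meantime, here is a minimal sketch of the core autocorrelation stage, for orientation only. The full Percival-Tzanetakis pipeline adds a specific onset-strength front end, cross-correlation with pulse trains, and octave correction; librosa and the 50-210 BPM search range below are illustrative choices, not the paper's:

# Rough tempo estimate from the autocorrelation of an onset-strength
# envelope. NOT the full Percival-Tzanetakis algorithm.
import numpy as np
import librosa

y, sr = librosa.load("drums.wav")               # hypothetical input recording
env = librosa.onset.onset_strength(y=y, sr=sr)  # onset-strength signal
fps = sr / 512                                  # frames/second at librosa's default hop

ac = np.correlate(env, env, mode="full")[len(env) - 1:]  # keep lags >= 0

lo, hi = int(fps * 60 / 210), int(fps * 60 / 50)  # lags spanning 210 down to 50 BPM
lag = lo + int(np.argmax(ac[lo:hi]))
print("estimated tempo: %.1f BPM" % (60.0 * fps / lag))

A real Max/MSP external would of course run incrementally on the live signal rather than on a loaded file.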