[Synthesis] Protentive and retentive temporality (Was: Notes for Lighting and Rhythm Residency Nov (13)17-26)

Casting the net widely to AME + TML + Synthesis, here follow very raw notes that will turn into the plans for Synthesis Center’s Lighting and Rhythm Residency, Nov (13)17-26.  Forgive the roughness, but I wanted to give as early a note as possible about what we are trying to do here.  Think of the work here as live, experientially rich yet expertly built experiments on temporality -- a sense of change, dynamic, rhythm, or more generally temporal texture.

Please propose experiential provocations relevant to temporality, especially those that use modulated, animate lighting. 

I draw special attention to the phenomenological, Husserlian proposition of:

“… something more ambitious attempted along the lines of things we have played with using sound. By tracking feet we can produce the anticipatory sound of a footfall, which messes with the neat and tidy notion of retentions and protentions. Can you rework a visual representation of oneself (shadow, image, silhouette, ghost) to move in anticipation of where one moves?”

This would be an apt application of bread and butter statistical DSP methods!
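A minimal sketch of what such a "bread and butter" predictor could look like: constant-velocity extrapolation of a tracked position, so a projected ghost lands where the mover is about to be. The function, the frame-tuple format, and the lead time are illustrative assumptions, not part of any existing patch; a Kalman or alpha-beta filter would be the natural next step.

```python
import numpy as np

def predict_ahead(positions, lead_frames=5):
    """Extrapolate a tracked 2D position `lead_frames` into the future
    using a constant-velocity model (linear prediction on the last two
    samples). All names here are hypothetical illustration."""
    positions = np.asarray(positions, dtype=float)
    if len(positions) < 2:
        return positions[-1]
    velocity = positions[-1] - positions[-2]   # per-frame displacement
    return positions[-1] + lead_frames * velocity

# A walker moving steadily right: the ghost lands ahead of the foot.
track = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
ghost = predict_ahead(track, lead_frames=5)   # -> array([7., 0.])
```

In practice the tracker's noise would make raw two-sample velocity jittery; smoothing the velocity estimate is where the statistical DSP earns its keep.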

TODOS:

• Chris Z. and I are reaching toward a strand of propositions that are both artistically relevant and phenomenologically informative.  We can start with a double strand of fragments of prior art and prior questions relevant to “first person” / geworfen temporality (versus spectatorship as a species of vorhanden attitude, which is not in Synthesis’ scope, by mandate).
We need to seed this with some micro-etudes, but we expect that we’ll discover more as we go.  This requires that we do all the tech development prior to the event: all gear acquisition, installation, and software engineering should be done prior to Nov 13.  The micro-etude is a way to fight the urge to make a performance or an installation out of this scratch work.

Informed by conversation with Chris Z and all parties interested in contributing ideas on what we do during the LRR Nov 17-26, Chris R (and I) will:
• Agree on outcomes
• Set a timeline
• Organize teams
• Plan publicity and documentation


Begin forwarded message:

From: Xin Wei Sha <Xinwei.Sha@asu.edu>
Subject: Re: [Synthesis] Notes for Lighting and Rhythm Residency Nov (13)17-26
Date: October 4, 2014 at 2:31:10 PM MST

Please please please before we dive into more gear specs

What are the experiential provocations being proposed?

For example, Omar, everyone: can you please write into some common space more example micro-studies similar to Adrian’s examples?
(See the movement exercises that MM has drawn up for past experiments for more examples.)
Here at Synthesis, I must insist on this practice prior to buying gear, so that we have a much greater ratio of
propositions : gadgets.

Thank you, let’s play.
Xin Wei

_________________________________________________________________________________________________

On Oct 4, 2014, at 12:53 PM, Omar Faleh <omar@morscad.com> wrote:

I recently got the chance to work with the Philips Nitro strobes, which are considerably stronger than the Atomic 3000, for example. It is an LED strobe, so you can pulse, flicker, and keep it on for quite a while without having to worry about discharge and re-charge; and being an all-LED strobe, it isn't as power-hungry as the Atomics.

The LED surface is split into six sub-rectangles that you can address individually or animate with preset effects, which allows for a nice play of shadows with only one light (all DMX-controlled).
There is an RGB version of it too, so no need for gels and colour changers.

I am also looking into some individually addressable RGB LED strips. I am placing the order today, so I will hopefully be able to test and report findings soon.
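To give a flavor of how six individually addressable segments could be animated from code, here is a hypothetical chase pattern. The one-dimmer-channel-per-segment layout is an assumption standing in for the fixture's actual DMX personality; the real Nitro channel map would need to be read off its manual.

```python
def chase_frame(t, segments=6, width=1):
    """Intensity values (0-255) for one animation frame of a six-segment
    LED strobe: a single lit block of `width` segments sweeping across
    the face. The channel layout is a hypothetical stand-in for the
    fixture's real DMX personality."""
    return [255 if (i - t) % segments < width else 0
            for i in range(segments)]

# One full sweep: frame 0 lights segment 0, frame 1 lights segment 1, ...
frames = [chase_frame(t) for t in range(6)]
```

Sending each frame as six DMX channel values at, say, 30 Hz would walk a hard-edged shadow across anything lit by the fixture.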


_________________________________________________________________________________________________

On 2014-10-04, at 3:30 PM, Adrian Freed <adrian@adrianfreed.com> wrote:

Sounds like a fun event!

Does the gear support simple temporal displacement modulations, e.g., delaying one's shadow or a projected image of oneself?

This is rather easy to do with the right gear.
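As a sketch of the buffering such a temporal displacement needs (assuming frames arrive at a fixed rate; capture and projection are left to the rig):

```python
from collections import deque

class FrameDelay:
    """Fixed-latency frame delay: feed live camera/silhouette frames in,
    get back the frame from `delay` frames ago. At 30 fps a delay of 30
    trails the mover by one second. A sketch of the buffering only."""
    def __init__(self, delay):
        self.buf = deque(maxlen=delay + 1)

    def process(self, frame):
        self.buf.append(frame)
        return self.buf[0]   # oldest retained frame

d = FrameDelay(delay=2)
out = [d.process(f) for f in ["f0", "f1", "f2", "f3"]]
# -> ["f0", "f0", "f0", "f1"]: the output repeats the first frame until
# the buffer fills, then trails the input by exactly two frames.
```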

I would like to see something more ambitious attempted along the lines of things we have played with using sound. By tracking feet we can produce the anticipatory sound of a footfall, which messes with the neat and tidy notion of retentions and protentions. Can you rework a visual representation of oneself (shadow, image, silhouette, ghost) to move in anticipation of where one moves?

It would also be interesting to modulate a scrambling of oneself and connect its intensity to movement intensity. Navid has done similar things with sound. The experience was rather predictable but might well be different visually.


_________________________________________________________________________________________________

On Oct 4, 2014, at 11:53 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:

Chris, Garrett, Julian, Omar, Chris Z, Evan, Byron, Prashan, Althea, Mike B, Ian S, Aniket, et al.

This is a preliminary note to sound out who is down for what for the coming residency on Lighting and Rhythm (LRR).  

The goal is to continue work on temporality from the IER last Feb-March, and this time seriously, experimentally muck with your sense of time by modulating lighting or your vision as you physically move.  First-person experience, NOT designing for a spectator.

We need to identify a more rigorous scientific direction for this residency.  I have been asking people for ideas — I’ll go ahead and decide soon!

Please think carefully about:
Core Questions to extend:  http://improvisationalenvironments.weebly.com/about.html
Playing around with lights: https://vimeo.com/tml/videos/search:light/sort:date
Key Background:  http://textures.posthaven.com


The idea is to invite Chris and his students to work [richly] on site in the iStage and have those of us who are hacking time via lighting play in parallel with Chris.  Pavan & students and interested scientists/engineers should be explicitly invited to kibitz.




• Lighting and Rhythm
The way things are shaping up — we are gathering some gadgets in preparation.

Equipment requested (some already installed, thanks to Pete Ozzie and TML):
Ozone media system in iStage
Chris Ziegler’s Wald Forest system (MUST be able to lift out of the way as necessary within minutes — can an inexpensive motorized solution be installed?)
3 x 6 (?) grid of light fixtures with RGB gels, beaming onto floor
IR illuminators and IR-pass camera for tracking
Robe Robin MiniMe moving light / projector
Hazer (?)
Strobe + diffuser (bounce?)
+ Oculus DK1 (Mike K knows)
+ Google Glass (Chris R can ask Cooper, Ruth @ CSI)

We need to make sure we have a few rich instruments (NOT one-off hacked tableaux!) coded up ahead of time -- hence the call for Max-literate students who would like to try out what we have and adapt it for playing in the LRR by November.

Note 1:
Let’s be sure to enable multiplexing of the iStage to permit two other groups:
• Video portal - windows: Prashan, Althea Pergakis, Jen Weiler
• Shadow puppetry: Prashan working with Byron

Note 2:
Garth’s Singing Bowls are there.  Think about how to integrate such field effects.
Mike, can you provide a Max patch to control them — ideally via OSC — but at least to fade up/down without having to physically touch any of the SB hardware?
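Whatever patch Mike builds, the wire format is just OSC over UDP, which a Max [udpreceive] + [route] pair can parse. A minimal hand-encoded sketch follows; "/bowl/gain" is a made-up address standing in for whatever the Singing Bowls rig actually listens to.

```python
import struct

def osc_message(address, value):
    """Encode a single-float OSC message by hand: address string, then
    the ",f" type-tag string, each null-terminated and padded to a
    multiple of 4 bytes, then the big-endian float32 argument."""
    def pad(b):
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# A 10-step fade-down for the bowls ("/bowl/gain" is hypothetical).
packets = [osc_message("/bowl/gain", 1.0 - i / 9) for i in range(10)]
```

Each packet would be sent over UDP (e.g. `socket.sendto`) to the patch's listening port; no library beyond the standard library is needed for this much.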

Note 3:
This info should go on the lightingrhythm.weebly.com experiment website that the LRR leads should create Monday, unless someone has a better solution — it must be editable by the researchers and experiment leads themselves.  Clone it from http://improvisationalenvironments.weebly.com !

Xin Wei

_________________________________________________________________________________________________


On Sep 4, 2014, at 8:53 AM, Adrian Freed <adrian@adrianfreed.com> wrote:

I am afraid I can't be very helpful here. I don't do MIR work myself. The field for the most part does
offline analyses of large data sets using musicologically naive Western musical concepts of pitch and rhythm.

One exception to the realtime/offline choice is from our most recent graduate student to work on the
beat-tracking problem, Eric Battenberg. Here is his dissertation: http://escholarship.org/uc/item/6jf2g52n#page-3
There is interesting machine learning going on in that work, but it presumes that one can make a reliable
onset detector, which is a reasonable (but narrow) assumption for certain percussion sounds and drumming practices.

The questions of phase and "in sync" raised below interest me greatly. There is no ground truth to the beat
(up or down or on the "beat"). I remember being shocked recently to discover that a bunch of research on dance/music entrainment relied, as a reference, on hand-labeled visual beat markings from "expert listeners in the computer music lab next door". Various concepts such as "perceptual onset time" have been developed to sufficiently complicate this question and explain the difficulty people have reaching consensus on musical event timing and relating a particular beat measurement to features of the acoustic signals.
Even a "simple" case, bass and drums, is extremely difficult to unravel. The bass, being a low-frequency instrument, complicates the question of "onset" or the moment of the beat. The issue of who in this pair is determining the tempo
is challenging, and the usual handwaving that the tempo is an emergent coproduction of the performers is not very helpful in itself in elaborating the process or identifying which features of the action and sound are relevant to the entrainment. My guess is that we will find models like the co-orbital arrangement of Saturn's moons Epimetheus and Janus.
What are the system identification tools to reveal these sorts of entrainment structures? Can this be done from the sound alone, or do we have to model the embodied motions that produce the sounds?


NOTE from Adrian, XW, Mike Krzyzaniak on the Percival-Tzanetakis tempo estimator:

_________________________________________________________________________________________________


On Sep 3, 2014, at 6:38 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Phase: I’m interested both in the convention of syncing on peaks
and in the larger range of temporal entrainment phenomena that Adrian has identified with suggestive terminology.
In practice, I would apply several different measures in parallel.

Yes, it would be great to have a different measure.  For example, one that detects when a moderate number (dozens to 100) of irregular rhythms have a large number of simultaneous peaks.  This is a weaker criterion than being in phase, and does not require periodicity.
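A sketch of such a coincidence measure, assuming each rhythm has been reduced to a list of peak times in seconds; the 50 ms tolerance window is an arbitrary illustrative choice:

```python
def coincidence_count(peak_times, t, window=0.05):
    """How many of N irregular event streams have a peak within
    `window` seconds of time t. Flagging moments when this count is
    unusually high is weaker than requiring phase alignment and
    assumes nothing about periodicity."""
    return sum(any(abs(p - t) <= window for p in stream)
               for stream in peak_times)

# Three irregular streams; all three happen to peak near t = 1.0 s.
streams = [[0.0, 1.01, 2.5], [0.98, 3.0], [1.0, 1.9]]
n = coincidence_count(streams, t=1.0)   # -> 3
```

Sweeping `t` over time and thresholding the count (say, above what chance coincidence would predict) would mark the "simultaneity events" without ever estimating a tempo.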

Xin Wei


_________________________________________________________________________________________________


On Sep 2, 2014, at 5:07 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

Of course we could reduce the 6-second lag by reducing the window sizes and increasing the hop sizes, at the expense of resolution. Also, rather than using the OSS calculation provided, we could perhaps just use a standard amplitude follower that sums the absolute value of the signal with the absolute value of its Hilbert transform and filters the result. This would save us from decimating the signal on input and reduce the amount of time needed to gather enough samples for autocorrelation (at the expense of accuracy, particularly for slow tempi).
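For reference, the analytic-signal form of the envelope follower Mike describes can be sketched in a few lines. This is the textbook FFT construction of the Hilbert envelope, not project code; the smoothing filter that would normally follow is omitted.

```python
import numpy as np

def envelope(x):
    """Amplitude follower via the analytic signal: magnitude of
    x + j*Hilbert(x), computed with the usual FFT one-sided-spectrum
    trick (zero the negative frequencies, double the positive ones)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.abs(np.fft.ifft(X * h))

# Envelope of a 50 Hz sine of amplitude 2 over exactly 50 cycles:
t = np.linspace(0, 1, 1000, endpoint=False)
env = envelope(2.0 * np.sin(2 * np.pi * 50 * t))
# env hovers at the amplitude, 2.0, across the whole window
```

`scipy.signal.hilbert` computes the same analytic signal if SciPy is available; the point here is only that the follower needs no decimation stage.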

What are you ultimately using this algorithm for? Percival-Tzanetakis also doesn't keep track of phase. If you plan on using it to take some measure of metaphorical rhythm between, say, humans as they interact with each other or the environment, then it seems like phase would be highly important. Are we in sync or syncopated? Am I on your upbeats or do we together make a flam on the downbeats?

Mike

_________________________________________________________________________________________________


On Tue, Sep 2, 2014 at 4:09 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Hi Adrian,

Mike pointed out what is for me a serious constraint in the Percival-Tzanetakis tempo estimator: it is not realtime.
I wonder if you have any suggestions on how to modify the algorithm to run more “realtime,” with less buffering, if that’s the right word for it…

Anyway, I’d trust Mike to talk with you, since this is more your competence than mine.  Cc me for my edification and interest!

Xin Wei

_________________________________________________________________________________________________

On Sep 2, 2014, at 12:06 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

Hi Xin Wei,

I read the paper last night and downloaded the Marsyas source, but only the MATLAB implementation is there. I can work on getting the C++ version and porting it, but the algorithm has some serious caveats that I want to run by you before I get my hands too dirty.

The main caveat is that it was not intended to run in real-time. The implementations they provide take an audio file, process the whole thing, and spit back one number representing the overall tempo.

"our algorithm is more accurate when these estimates are accumulated for an entire audio track"

It could be adapted to run in sort-of real time, but at 44.1k the tempo estimation will always lag by 6 seconds, and at a control rate of 30 ms (i.e., the rate TouchOSC uses to send accelerometer data from an iPhone) the algorithm as described will have to gather data for over 2 hours to make an initial tempo estimate and will only update once every 5 minutes.
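To make the window-length/latency trade-off concrete, here is a toy autocorrelation tempo estimator. It is emphatically not Percival-Tzanetakis, just an illustration that the estimate can only be as fresh as the window it autocorrelates: a 10-second window of onset strength means roughly 10 seconds of latency, whatever the sample rate.

```python
import numpy as np

def tempo_from_onsets(oss, rate):
    """Crude tempo estimate from one window of an onset-strength
    signal: pick the autocorrelation peak whose lag falls in the
    60-180 BPM range. A toy, not the Percival-Tzanetakis algorithm."""
    oss = oss - np.mean(oss)
    ac = np.correlate(oss, oss, mode="full")[len(oss) - 1:]
    lo, hi = int(rate * 60 / 180), int(rate * 60 / 60)  # lag bounds
    lag = lo + np.argmax(ac[lo:hi + 1])
    return 60.0 * rate / lag

rate = 100.0                      # onset-strength frames per second
clicks = np.zeros(1000)           # a 10 s window -> ~10 s of latency
clicks[::50] = 1.0                # an onset every 0.5 s = 120 BPM
bpm = tempo_from_onsets(clicks, rate)   # -> 120.0
```

Shrinking the window shrinks the latency but coarsens the lag resolution, which is exactly the trade-off Mike describes for window and hop sizes.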

Once I get the C++ source I can give an estimate of how difficult it might be to adapt (in the worst case it would be time-consuming, but not terribly difficult, to re-implement the whole thing in your language of choice).

If you would still like me to proceed let me know and I will contact the authors about the source.

Mike

________________________________________________________________________________________________



On Mon, Sep 1, 2014 at 3:45 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
beat~ hasn't worked well for our research purposes so I'm looking for a better instrument.

I'm no expert, but P & T carefully analyze the extant techniques; the keyword is 'streamlined'.

Read the paper.  Ask Adrian and John.

Xin Wei

Re: [Synthesis] Transmutations Online

Hi,

In principle, I do know how to code this in HTML5 using the Web Audio API. However, I don't think I have much free time to work on it at the moment, though I would be willing to explain it to someone with good enough JavaScript skills to implement it.

Mike

On Wed, Sep 24, 2014 at 7:59 PM, Sha Xin Wei <synthesis.operations@gmail.com> wrote:
YES YES!

hence: Transmutations Online:  I attach the  proposal

Re: [Synthesis] Transmutations Online

Thanks Garth, Todd, and Xin Wei for these perspectives. 

What can be said about analysis as subjective (possibly experiential) readings of music? Or as suggested modes of listening?
I’m thinking now of Western tonal music.  I forget exactly who (maybe it was just my undergraduate set-theory prof), but someone suggested that analysis be considered one suggestive impression. The age-old question of how to view the cadential 6/4 (is it a tonic chord in second inversion, or V with a 6-4 → 5-3 suspension?) comes to mind.

Todd, I really like your fifth point. Is spectrographic analysis an algorithmic experience of music/sound recording? Fourier’s? Maybe it is important to note that there are discrepancies between how different entities perceive music — humans and computers included. The spectrographic paradigm can be novel (and maybe unhelpful) to the Roman-numeral or Schenkerian analyst; conversely, a consideration of the relationship between consonance/dissonance and tension/resolution might be a pleasant reminder to the sound artist (or a nostalgic flashback to undergrad).

I’m curious to hear your opinions, as I’m contemplating this in relation to my thesis research. I’m contextualizing, analyzing, and theorizing the early improvisations in EEG music as made by Richard Teitelbaum, Alvin Lucier, and David Rosenboom. My thesis is that, in their descriptions of the performance practice of this music, each composer suggests a different philosophy of embodiment.

 My approach to analyzing these works is currently mosaic in nature:

I am thinking that I will present the original electronic circuit/schematic published with each of these composers’ systems (and transcribe that into Max for Millennials) to show the sonic possibilities of each musical system. I also want to incorporate spectrographic renderings of different recordings to show that each realization is unique. I do feel I have to consider my audience, who might benefit from a demonstrative representation of the different performative possibilities within “system” composition.

These approaches show the piece as an open-form system, and also speak (from at least an algorithmic perspective) to variations in timbre, form, etc. This is important background for knowing something about the music. What is lacking, of course, is an analysis which actually supports my thesis — which leads me back to one of my initial questions:

Is (in this case, the performer’s) first-person text the best method for expressing/analyzing the phenomenology of experiential art? “Off-line” prose seems to de-stratify time in much the same way that Ligeti does, as Xin Wei and Garth pointed out.

Any thoughts are of course welcome — apologies for diverging from the Synthesis thread into my own work, but I hope that this discussion will feed back into related Synthesis work. 

Garrett L. Johnson
Musicology MA candidate @ ASU  
Synthesis Center graduate research 
LOrkAs (laptop orchestra of arizona state),director 

Re: [Synthesis] Transmutations Online

This is very interesting — what it tells me is that there’s a discrepancy between how composers think musical material and how most computer programs represent sound.  This discrepancy may be as significant as the one between how mathematicians think Riemannian geometry and how computer programs represent 3D graphics.  The former is impossible to represent in the latter, even in principle.  It’s possible to demonstrate this fairly rigorously in the case of differential geometry and analysis, but maybe the discrepancy for music is not obvious to non-practitioners.

Xin Wei


__________________________________________________________________________________
Professor and Director • School of Arts, Media and Engineering • Herberger Institute for Design and the Arts • Synthesis • ASU • +1-480-727-2846
Founding Director, Topological Media Lab / topologicalmedialab.net/  /  skype: shaxinwei / +1-650-815-9962
__________________________________________________________________________________

Re: [Synthesis] Transmutations Online

Hi

I have been reading this discussion and wanted to pose some thoughts. They may not be fully formed....

1. To say that one should avoid the habit of  "composing time-based media as functions of one scalar parameter, nominally labelled 'time'" is not the same as saying time can somehow be removed from experience.

2. While many composers may talk as if they are "composing time-based media as functions of one scalar parameter, nominally labelled 'time'", they are in fact not doing so, and much of the music 'analysis' (at least of the past 200 or so years) to which music is subjected in the West is in fact not looking at music as experienced in time.

3. Music analysis has at best a value in providing people with metaphors. As Nicholas Cook has argued "it is not an account of how people actually hear pieces of music, but a way of imagining them." He further states that "a musical culture is, in essence, a repertoire of means for imagining music; it is the specific pattern of divergences between the experience of music on the one hand, and the images by means of which it is represented on the other, that gives a musical culture its identity." 

4. In part because of 3, I would be wary of relying too heavily on musically derived theories of rhythm and time. Many are quite complex and intricate, but that should not be mistaken for how they map to experience.

5. Spectrograms are to timbre what MRI scans are to thinking. 



On Sep 24, 2014, at 7:59 PM, Sha Xin Wei <synthesis.operations@gmail.com> wrote:

YES YES!

hence: Transmutations Online:  I attach the  proposal


[Synthesis] Transmutations Online

YES YES!

hence: Transmutations Online:  I attach the  proposal


I'm talking with various possible allies, or sibling projects in Copenhagen, Moscow, and Beijing…

But Dehlia and I would like to push forward our own issue #2, which can take forms appropriate to the theme and contributions, quite different from the inaugural issue:


We wanted to use Jhave Johnston’s MUPS code, which Jhave graciously offered years ago.  Now it may be good to recode it in HTML5. (Who is good enough to do that — Jen?)




Re: [Synthesis] re. LIGHTING and RHYTHM: > 100-dimensional soundscape

Here are my 2 cents, for what they're worth from a student of musicology (~1.2 cents). 

I’m fortunate enough to have seen this piece (Poème symphonique) live; the experience was very rich. Imagine the musicologist’s problem, then — how to analyze this music? How to elucidate something of the structure? Of the timbre? Of the space?

Indeed, this kind of work shows the limitations of graphic analysis (and, furthermore, of spectrographs). This is an unsolved (and seemingly often unobserved) problem in music theory / musicology. Although advocates of spectrographic analysis have successfully escaped the lattice-based structures of traditional Western notation and opened up analysis to fine nuances of timbral texture, they remain chained to their x-axis. What’s the musicologist's solution? (What’s the art researcher's solution?)

More sonic imaging? If we were to experiment with mapping other parameters to the x and y axes, or with producing self-similarity matrices, we might get some interesting results. Then the task is to determine whether this actually elucidates anything about a composition (or just that realization of a composition? Or is it just an analysis of a recording of a realization of a composition? . . . "music is work” [Cage]).
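For readers who haven't met them: a self-similarity matrix is cheap to compute once a recording is reduced to a sequence of feature frames. A minimal sketch, with toy two-dimensional frames standing in for real spectral features:

```python
import numpy as np

def self_similarity(frames):
    """Self-similarity matrix over a sequence of feature frames
    (e.g. spectral frames): cosine similarity of every frame pair.
    Repeated material shows up as bright off-diagonal stripes."""
    F = np.asarray(frames, dtype=float)
    norms = np.linalg.norm(F, axis=1, keepdims=True)
    U = F / np.where(norms == 0, 1, norms)   # guard silent frames
    return U @ U.T

# An A-B-A form in miniature: frames 0 and 2 are the same material.
ssm = self_similarity([[1, 0], [0, 1], [1, 0]])
# ssm[0, 2] == 1.0 (A recurs); ssm[0, 1] == 0.0 (A vs B)
```

Because both axes are time, the SSM sidesteps the single-x-axis complaint only partially: it still parameterizes by time, but it makes recurrence, rather than succession, the visible structure.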

For the purpose of documentation, and to proliferate this (non-teleological?) way of thinking to scholars and artists who can’t simply read Max, C, Processing (etc.), I wonder if “open-form” compositions are most effectively analyzed textually. I think of Garth’s paper, "Pools, Pixies, and Potentials" (http://www.activatedspace.com/Research/papers/files/poolsandpixies_isea.pdf), wherein he uses flowcharts and hand-drawn figures to allow the reader to parse the various permutations of “places” within his constructed vector field. Certainly, the text here is clear and the figures intuitive. The form of the piece is reflected here; still, we are mostly in the dark about how the music sounds (this is not a dig at the author, of course; I don’t think that was the intent of the paper). An easy solution would be to do a spectrographic analysis of one performance, identify the vectors on the spectrograph, and present that in tandem with the flowcharts.

I am finding there is a lot to be unpacked here (or maybe I’m trying to unpack too much). The goals of the music theorist (or media-art theorist) may be different in scope from Synthesis’ goals (epistemology vs. phenomenology?). Wehinger, who created the graphic score of Artikulation, wanted to make a fixed-media piece more accessible. However, if we go back to Earle Brown (to whom Garth looks in his paper), his graphic SCORES (inspired by Calder) do not necessarily display time on the x-axis. Why do most instances of (musical) graphic analysis?

(humorously - from Walter Levin’s “ten commandments” for choosing a piece — number 4: “If [a piece] uses a new notation, try to transcribe it into conventional notation; if you succeed, forget it”) 

Certainly, there is more to be understood from Artikulation if we subtract time from our analysis, as we might do in experience. Synthesis largely has the advantage of giving people experiences instead of having to explain them. It goes without saying that probably no formal or textural analysis can stand in for an aesthetic, embodied experience (and isn’t MEANT to). My question (and the motive for this large wall of text) concerns the times when Synthesis (or TML, or other media-arts labs) doesn’t have the opportunity to share these media-art experiences but wants to communicate something subcutaneous about the art.

What tools do we have available to us? I haven’t mentioned simply using videos… a mediated but experientially centered form of documentation (which, I’ve noted, is commonplace in media art and art research). In that case, to what extent is this a problem in new media? Is this only a problem in music? Is there one single tool (am I sounding too positivist?), or do we rely on giving as many angles as possible (spectrographs, flowcharts, philosophy, poetry) to create a mediated mosaic (heuristic?) of our aesthetic experience?


Garrett L. Johnson
Musicology MA candidate @ ASU  
Synthesis Center graduate research 
LOrkAs (laptop orchestra of arizona state),director 

re. LIGHTING and RHYTHM: > 100-dimensional soundscape


Here is a 100-dimensional soundscape

György Ligeti - Poème Symphonique for 100 metronomes


So even in the most reductive senses, we need to get out of the habit of thinking about engineering and composing time-based media as functions of one scalar parameter, nominally labelled “time.”  (For an example of a waste of time, see this graphic animation of Ligeti’s Artikulation.)

But still in a materially and machinically* reproducible way.

As Deleuze and Guattari observed, machinic ≠ mechanical .

Xin Wei