tag:textures.posthaven.com,2013:/posts temporal textures 2016-11-18T07:11:25Z Xin Wei Sha tag:textures.posthaven.com,2013:Post/1067531 2016-06-27T14:21:51Z 2016-06-27T14:21:52Z AI's "white guy problem"
AI's "white guy problem" is a special case of a more primordial problem : in general, a priori classification-based algorithms such as learning algorithms all tend to perpetuate existing categories.  Iterated, they  accentuate existing prioritizations.

http://mobile.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html?_r=0&referer=&login=email
]]>
tag:textures.posthaven.com,2013:Post/1059299 2016-06-03T10:32:54Z 2016-06-03T10:32:54Z [Synthesis] readings for rhythmanalysis group
(1) The Inert vs. the Living State of Matter: Extended Criticality, Time Geometry, Anti-Entropy – An Overview 
Giuseppe Longo and Maël Montévil
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3286818/

(2) Emmy Noether
Invariants and symmetry theorems

Simplified version:

Full discussion (book) by Yvette Kosmann-Schwarzbach
http://www.math.cornell.edu/~templier/junior/The-Noether-theorems.pdf
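For the group's notes, the content of Noether's first theorem in its simplest point-mechanics form (a standard textbook statement, added here for reference):

```latex
% If the Lagrangian L(q, \dot q, t) is invariant under a one-parameter
% variation q -> q + \epsilon\,\delta q, the associated charge
\[
Q \;=\; \frac{\partial L}{\partial \dot q}\,\delta q
\]
% is conserved along solutions: dQ/dt = 0. In particular, invariance
% under time translation yields conservation of the energy
\[
E \;=\; \dot q\,\frac{\partial L}{\partial \dot q} \;-\; L .
\]
```

The slogan for rhythmanalysis: every continuous symmetry of a dynamics, including invariance under shifts of time itself, pairs with an invariant of the motion.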
]]>
tag:textures.posthaven.com,2013:Post/1044971 2016-05-01T17:15:13Z 2016-05-01T17:15:13Z [rhythm] constantly shifting
Just as culture is underdetermined by (vulgar) economics, and metaphor underdetermined by syntax, rhythm, as a special case of ontogenesis and individuation, is underdetermined by meter. This is too abstract. So: listen for the constantly shifting rhythms between the sound, the breathing, and the tensions in the performing body, which you feel if you've learned to play the instrument, against the regular meter.

• Sainkho Namtchylak 
Night Birds (1992)
http://epc.buffalo.edu/sound/mp3/ethno/sainkho/mp3/01.mp3
The longing and elasticity of this is not the same as metric regularity.


• A moving interpretation of the Bach Chaconne performed recently by a friend’s daughter Taiga (Ultan) on flute (starting at 3:08)


Listen for how she must negotiate her breathing against the uninterrupted singing line. The Chaconne is a masterwork of implied voices. For the violinist it's already a great challenge to suggest and carry multiple implied voices sliding across each other with shifting forces (rhythm!). It's an even greater challenge for the breathing flutist. This young performer accomplishes it with sensitivity.

(To hear one of the most wonderful performances of this in canonical form — for solo violin: Itzhak Perlman.)


• Brahms: String Quartet No. 1 in C Minor, Op. 51 No. 1 - 1. Allegro (Emerson String Quartet)

Listen for the elasticity which emerges from the interplay of swelling and fading voices, and the constantly shifting accenting (on top of the metric pulse). The meter is a background grid that does not (and ought not) constrain the continuous multivalent shifts to fixed discrete choices.

• Example: in Mathematica, something like Play[Abs[Zeta[1/2 + 400 I t]] - 1, {t, 0, 5}] (the constants are illustrative) renders |ζ(1/2 + it)| along the critical line as an audio waveform.
This sounds totally otherworldly: not self-similar, not predictable, and not random!


Multiscale self-similarity — “fractals” in pop literature — is just recursive regularity. Pavan has some subtle ideas, based on dynamical systems theory, about signatures of human intention in movement that are neither simple sums of periodic functions nor uniformly random processes. There are connections between those ideas and the spectral theory of operators, I think. Worth discussion at multiple registers: artistic/expressive, philosophical, as well as mathematical!

Adam, Pavan, PM me if you’d be ok with being added to this personal rhythm email list…

Xin Wei
]]>
tag:textures.posthaven.com,2013:Post/1040259 2016-04-25T02:07:55Z 2016-04-25T02:07:55Z Goldsmiths : April 23: Rhythm as Pattern and Variation -- Political, Social and Artistic Inflections
For our Rhythm scrapbook:

TALK SLIDES (video): https://www.academia.edu/24710149/Rhythm_and_Textural_Temporality_slides_

http://www.gold.ac.uk/calendar/?id=9756


Rhythm as Pattern and Variation -- Political, Social and Artistic Inflections

April 23, 2016
Goldsmiths London
http://www.gold.ac.uk/calendar/?id=9756


Organizers: 
Paola Crespi and Eleni Ikoniadou

Participants included

Pascal Michon (KEYNOTE)
“Could Rhythm Become a New Scientific Paradigm for the Humanities?"
http://rhuthmos.eu/



RHYTHM and ART
Dee Reynolds
"Rhythmic Seascapes and the Art of Waves"
Paola Crespi
"'Time is Measurable and It's NOT Measurable': Polyrhythmicity in Rudolf Laban's Unpublished Notes and Drawings" 
Bruno Duarte
“Rhythm and Structure: Brecht's Rewriting of Hölderlin's 'Antigone'"


RHYTHM and THE SOCIAL
Ewan Jones
"How the Nineteenth Century Socialised Rhythm"
Mickey Vallee
"Notes Towards a Social Syncopation: Rhythm, History and the Matter of Black Lives"
John Habron
“Rhythm and the Asylum: Priscilla Barclay and the Development of Dalcroze's Eurhythmics as a Form of Music Therapy"


RHYTHM and MEDIA 
Simon Yuill 
and
Bev Skeggs
"Conflicted Rhythms of Value and Capital: Rhythmanalysis and Algorhythmic Analysis of Facebook" 
Sven Raeymaekers
“Silence as Structural Element in Hollywood Films"


RHYTHM and THE BODY 
Laura Potrovic
"Body-Flow: Co-Composing the Passage of Rhythmical Becoming(s)"
Mihaela Brebenel
"What Could Possibly Still Get Us Going: Rhythm and the Unresolved"
Eilon Morris 
“Rhythm and the Ecstatic Performer"



RHYTHM and NUMBER (Topology Research Unit Panel)
Peggy Reynolds
"Rhythms All the Way Down"
Julian Henriques
"Rhythmanalysis Weaponised"
Vesna Petresin
"Being Rhythmic"
Sha Xin Wei
“Rhythm and Textural Temporality: An Approach to Experience Without a Subject and Duration as an Effect"


RHYTHM and PHILOSOPHY 
Steve Tromans
"Rhythmicity, Improvisation and the Musical-Philosophical: Practice-as-Research in Jazz Performance"
Eliza Robertson
"Rhythm in Prose: Bergson's Duree and the Grammatical Verbal"
Yi Chen
“Rhythmanalysis: Using the Concept of Rhythm for Cultural Enquiry"


Sound Installation 
Annie Goh and Lendl Barcelos’ ‘DisqiETUDE'
St Hatcham Church G01
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/1040206 2016-04-25T00:49:01Z 2016-04-25T00:49:01Z proposition 0.4 how about this as a working proposition:

0.4
rhythm is not a thing, not a form, not even a pattern, but a sense?
(thus, a special case of temporality, which is the sense of dynamic, change, …)

added to
0.1
rhythm is not sonic

0.2
rhythm is not unidimensional

0.3 
rhythm is not metrically regular, or metric at all.



after Goldsmiths talk: Rhythm and Textural Temporality: An Approach to Experience Without a Subject and Duration as an Effect 
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/1011552 2016-03-11T08:11:37Z 2016-03-11T08:11:37Z from teraswarm to continuum mechanics, and rheology? Could RHEOLOGY and continuum mechanics be a source of insight for the continuum limit from internet of things to teraswarm and beyond?

That, plus a form of general relativity that has to take into account the interactions peculiar to media?

Rheology (/riːˈɒlədʒi/; from Greek ῥέω rhéō, "flow" and -λoγία, -logia, "study of") is the study of the flow of matter, primarily in a liquid state, but also as 'soft solids' or solids under conditions in which they respond with plastic flow rather than deforming elastically in response to an applied force.[1] It applies to substances which have a complex microstructure, such as muds, sludges, suspensions, polymers and other glass formers (e.g., silicates), as well as many foods and additives, bodily fluids (e.g., blood) and other biological materials or other materials which belong to the class of soft matter.
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/1008001 2016-03-06T02:33:23Z 2016-03-06T02:33:23Z another reason for 3d printing
Yet another reason for 3d printing, a math-poetic one:

http://laughingsquid.com/a-3d-printed-sundial-that-displays-the-time-in-digital-format-without-the-use-of-electronics/

this joins the aerogel
http://www.sciencealert.com/you-can-now-3d-print-one-of-the-world-s-lightest-materials-aerogel
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/1002756 2016-02-28T02:23:09Z 2016-02-28T02:23:09Z rhythm: hand jive++ Julio Pimentel, Brazilian, percussion jive
https://www.facebook.com/OfficialJulioPimentel/videos/767821399959209/
(thanks Adrian)
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/999345 2016-02-23T05:42:11Z 2016-02-23T05:42:11Z Henriques, Julian F.; Tiainen, Milla and Valiaho, Pasi. 2014. Rhythm Returns: Movement and Cultural Theory. Body and Society, 20(3/4), pp. 3-29 http://research.gold.ac.uk/10747/

Henriques, Julian F.; Tiainen, Milla and Valiaho, Pasi. 2014. Rhythm Returns: Movement and Cultural Theory. Body and Society, 20(3/4), pp. 3-29. [Article]

No full text available
Official URL: http://bod.sagepub.com/

Abstract or Description


This introduction charts several of rhythm's various returns as a way of laying out the theoretical and methodological field in which the articles of this special issue find their place. While Henri Lefebvre’s rhythmanalysis is perhaps familiar to many, rhythm has appeared in a wide repertoire of guises, in many disciplines over the decades and indeed the centuries. This introduction attends to the particular roles of rhythm in the formation of modernity ranging from the processes of industrialization and the proliferation of new media technologies to film and literary aesthetics as well as conceptualizations of human psychology, social behaviour and physiology. These are some of the historical antecedents to the contemporary understandings of rhythm within body studies to which most of the contributions to this issue are devoted. In this respect, the introduction outlines recent approaches to rhythm as vibration, a force of the virtual, and an intensive excess outside consciousness.

body culture
modernity
phenomenology
psychology
rhythmanalysis
vibration
the virtual


Item Type: Article
Identification Number (DOI): 10.1177/1357034X14547393
Departments, Centres and Research Units: Media and Communications
Item ID: 10747
Date Deposited: 13 Oct 2014 10:00
Last Modified: 16 Jun 2015 12:42
URI: http://research.gold.ac.uk/id/eprint/10747
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/997899 2016-02-20T19:45:12Z 2016-02-20T19:45:13Z complex systems phenomena of critical slowing down, and flickering
Hi Felix,

On Feb 20, 2016, at 8:00 AM, Fel Reb <rebfel@gmail.com> wrote:

I don't have access to respond to the posthaven blog, so I'm sending it directly to you....

Your questions made me think of meta-stability and Simondon... I don't know if I'm off in left field, but here are my two cents' worth... Gotta say though that f is not just any (differentiable) scalar function… it's a nice way of "inducing" continuity where the underlying may not have it.

Yes, right — in fact distribution theory: representing a function by convolving against approximations to the identity, with a kernel that converges to the Dirac delta function, is a well-known and beautiful way to densely approximate any integrable function — a much vaster set of functions, which includes wildly non-differentiable and even discontinuous functions — by infinitely differentiable functions.
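In symbols, the standard mollifier construction (spelled out for reference):

```latex
\[
f_\varepsilon(x) \;=\; (f * \eta_\varepsilon)(x)
\;=\; \int f(y)\,\eta_\varepsilon(x - y)\,dy ,
\qquad
\eta_\varepsilon(x) \;=\; \tfrac{1}{\varepsilon}\,\eta\!\left(\tfrac{x}{\varepsilon}\right),
\]
% where \eta is smooth, nonnegative, supported near 0, with \int\eta = 1.
% Then f_\varepsilon is infinitely differentiable, f_\varepsilon -> f in L^1
% as \varepsilon -> 0, and \eta_\varepsilon -> \delta (the Dirac delta).
```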

If the second time-derivative is going to zero, one would be approaching a steady state of no change, i.e. no new energy entering or leaving the system.

The second time-derivative of what: prices of Apple stock, immigrant flows through Ellis Island? That is only the case when we're talking about position (potential energy mass * dx).

If the second time-derivative is positive, why would that induce flickering? If the second derivative is positive at a point, are you not only providing half the story? Wouldn't you need to see how the change is changing over time, rather than just how it is tending?

Yes, exactly: that's why I speak of the second time-derivative f''. Change is f', and change of change is f''.

If the potential is locally a quadratic with nonzero second derivative, then it looks like a parabola (in potential space). The classic dynamic (solution) subject to that sort of potential (differential equation) is harmonic oscillation.
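Spelled out (the standard Taylor-expansion step):

```latex
\[
V(x) \;\approx\; V(x_0) \;+\; \tfrac{1}{2}\,V''(x_0)\,(x - x_0)^2 ,
\qquad
m\ddot{x} \;=\; -V'(x) \;\approx\; -V''(x_0)\,(x - x_0),
\]
% so near the minimum the motion is harmonic:
\[
x(t) \;=\; x_0 + A\cos(\omega t + \varphi),
\qquad
\omega \;=\; \sqrt{V''(x_0)/m} .
\]
% As V''(x_0) -> 0 the frequency vanishes and the relaxation time
% diverges: precisely the "critical slowing down" of the subject line.
```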


For the thing to flicker, one would need discontinuities in the system, or very tight oscillations of the changing system: tending positive to infinity, finding another “plateau” of zero (or near zero), then tending negative to infinity, and repeating.
This flickering effect feels like a cycling of meta-stability, where contributing factors within the system impede it from settling one way or the other, or from exiting that meta-stable state... The correlation lengths would depend on the energy dynamics of the system: how rough the cycling is, i.e. how much energy is required to get out of the troughs of the meta-stability, yet not enough to break away from the cycling and revert to the meta-stable trough. Experimentally, to break the spell one needs to introduce ever larger amounts of energy, heightening the amplitude of the energy dynamics as roughness in the cycling, so that one overwhelms the threshold boundary and breaks free from the prevalent dynamic onto another regime. You gotta introduce some rough stuff into the system, i.e. introduce difference or change, to mix it up and break free from the toxic stability...

Does this make sense?

Not clear what you mean by all this. Are we speaking of the base space, or the state space of configurations, or the space of potential energy (a functional on configurations)?
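One concrete reading of the cycling picture: overdamped motion in a double-well potential driven by noise, where the state flickers between two metastable troughs. A minimal sketch (all constants arbitrary):

```python
import numpy as np

# Overdamped Langevin dynamics in the double well V(x) = x^4/4 - x^2/2,
# which has metastable troughs at x = -1 and x = +1. Moderate noise makes
# the trajectory "flicker" between wells; raising sigma (more energy, more
# roughness) shortens the residence times, as described above.
rng = np.random.default_rng(0)
dt, sigma, steps = 1e-2, 0.45, 200_000
x = np.empty(steps)
x[0] = -1.0
for i in range(steps - 1):
    drift = x[i] - x[i]**3        # -V'(x)
    x[i+1] = x[i] + drift*dt + sigma*np.sqrt(dt)*rng.standard_normal()

hops = np.count_nonzero(np.diff(np.sign(x)) != 0)
print("well-to-well flickers (crude count):", hops)
```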

I hope this doesn’t land like a hair in the soup, like they say in Qc French.

hahaha, what's that in québécois?

Best, Felix

P.S. I found this reference on my way to somewhere else... thought it might be an interesting comment to the death scenario of the . 

From Nature:

Jianxi Gao, Baruch Barzel & Albert-László Barabási, “Universal resilience patterns in complex networks,” Nature 530, 307–312 (18 February 2016), doi:10.1038/nature16948.

Félix


Félix Rebolledo

Email: rebfel@gmail.com

Fone: 51 9110 9920


]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/985843 2016-02-04T16:32:17Z 2016-02-04T16:32:17Z 4D printing, ripping off plants https://www.technologyreview.com/s/546126/gorgeous-new-4-d-printing-process-makes-more-than-just-eye-candy/]]> Xin Wei Sha tag:textures.posthaven.com,2013:Post/980386 2016-01-28T15:27:12Z 2016-01-28T15:27:12Z RHUTHMOS / Janvier 2016 Worth subscribing to RHUTHMOS!
And worth learning français!
Bien sûr!
Xin Wei

_________________________________________________________________________________________________

Begin forwarded message:

From: Pascal Michon <pascal.michon1@sfr.fr>
Subject: Dernières publications sur RHUTHMOS / Janvier 2016
Date: January 27, 2016 at 12:51:55 PM MST
To: 'Pascal Michon' <pascal.michon1@sfr.fr>

Latest publications on RHUTHMOS / January 2016
International and transdisciplinary research platform
on rhythms in the sciences, the philosophies, and the arts

January 2016

Latest articles published
* F. Bisson, Ainsi marche Anna Cruz

From EDITIONS RHUTHMOS
In bookstores
* M. Salgaro (ed.), M. Vangi (ed.), Mythos Rhythmus. Wissenschaft, Kunst und Literatur um 1900
* J.-L. Evard, Du sensible au sensé

Galleries
* Hommage à Daniel Buren
* Seeing in the Rain – Chris Gallagher (1981)
* Variations on a Cellophane Wrapper – David Rimmer (1972)
* Savage Messiah – Ken Russell (1972)
* La jetée – Chris Marker (1962)
* Ring-o-graphy – Alexandra Savina (2011)
* Meilleurs Vœux 2016 – Marie Paccou et al., École des Métiers du Cinéma d’Animation
* My Ship – Kurt Weill (1942)
* Les yeux noirs – Django Reinhardt (1947)
* Mélodie au crépuscule – Django Reinhardt (1947)
* Sonnet 130 (1609) – William Shakespeare – Read by Alan Rickman
* Daffodils – William Wordsworth (1804)
News
* TRANSDISCIPLINAIRES – Call For Papers For the Conference : « Rhythm as Pattern and Variation : Political, Social and Artistic Inflections »
Debates



]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/976680 2016-01-23T10:13:06Z 2016-01-23T10:13:06Z standard model Lagrangian re-understood in terms of spectral action




BRAWN — the full “un-inspiring” version of the Standard Model’s Lagrangian as hacked together by physicists (image not preserved in this export):




Connes and Marcolli’s formulation, which both uses and produces deep insight (image not preserved in this export):
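For the record, since the images did not survive the export: the formulation in question is the Chamseddine-Connes spectral action, schematically (my transcription, not the missing figure):

```latex
\[
S \;=\; \operatorname{Tr}\, f\!\left(\frac{D}{\Lambda}\right)
\;+\; \big\langle \psi ,\, D\,\psi \big\rangle ,
\]
% D: the Dirac operator of the noncommutative geometry,
% \Lambda: a cutoff (energy) scale, f: a positive even cutoff function.
% The asymptotic expansion of Tr f(D/\Lambda) in powers of \Lambda
% recovers the Einstein-Hilbert action together with the bosonic
% Standard Model terms; the fermions enter through <psi, D psi>.
```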



Brains over brawn.
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/961851 2016-01-02T08:38:43Z 2016-01-02T08:38:43Z sounds matter, architecture http://www.nytimes.com/interactive/2015/12/29/arts/design/sound-architecture.html?src=me&_r=2]]> Xin Wei Sha tag:textures.posthaven.com,2013:Post/959283 2015-12-29T05:54:01Z 2015-12-29T05:54:01Z analog rhythm jam session

Thanks to Adrian Freed for drawing attention to this analog machine + human rhythm jam session: https://www.facebook.com/guanitoweb/videos/10206485558432152/ “Viviendo la nochebuena en un rincón sevillano, aquí os muestro algo nuevo, la máquina de compás, y os deseo una feliz navidad a todos, con armonía, sentimiento y compás” (“Spending Christmas Eve in a Sevillian corner, here I show you something new, the compás machine, and I wish you all a merry Christmas, with harmony, feeling, and compás”).

]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/945870 2015-12-08T10:19:24Z 2015-12-08T10:19:24Z sensorimotor observations of collective movement, Helms Tillery + Ingalls experiment Hi Todd,

Thanks. Can we talk with Pavan, and then Steve Helms Tillery?

Yes I’d be very interested in such collective movement experiments.  But then it is urgent that we really prep our own measurement methods and team (Garrett?, ___ ? assisted by Julian).

As you know, I would want to measure correlations not (only) in the brain but across much more of the event.
It is far more direct (scientifically rigorous) to measure as many of the global aspects of collective movement as we can than to zero in on only one part of the body, and in fact a part whose functions are extremely indirectly related to corporal kinetics, in ways that are quite ill understood.

That’s why I’ve asked Julian and our students to build out the rhythm kit to use all modalities of sensing intervallic rhythm.


in particular: 


and as an aside:

Can we talk with Pavan, and then with Steve?

Thanks,
Xin Wei


On Dec 4, 2015, at 12:41 PM, Todd Ingalls <TestCase@asu.edu> wrote:

Could this be tied to rhythm? I think we are both skeptical of brain imaging, but this could still be interesting.

todd from my phone

Begin forwarded message:

From: Stephen Helms Tillery <stillery@asu.edu>
Date: December 4, 2015 at 12:26:56 PM MST
To: Todd Ingalls <TestCase@asu.edu>
Subject: Back to music and brain

Hey Todd,

Hope you’re good.

I have been working and thinking about a couple of issues a lot lately in group neuroscience .. the two key topics are joint action and entrainment.   Joint action is just multiple actors working together to accomplish some task … like two people carrying a table together, or a couple of soccer players moving the ball down the field.   These are interesting problems because they require the actors to have some sense of what their partners are trying to accomplish and how they are going about that.   Entrainment is an entirely hypothesized process in which two brains come into “synchrony” in order to communicate .. this is thought to be important in language, but obviously is also important in music performance.

Entrainment, however, is pretty loosely defined at the moment … we have an idea for getting at entrainment using musicians.    The notion is to get an ensemble together, a good ensemble … and record simultaneous EEGs from the players as they work a piece.

To some extent this has been done before:   with saxophones (ugh!)   The focus of that paper was on EEG markers of empathy (even more ugh), and the usual expected changes in EEG associated with listening to and motor outputs for music.

What I’d like to do is do real analysis across multiple brains during performance, and see if we can see electrical signs of entrainment as they are working.   In a dream world, as the ensemble locks into “togetherness” … the brains will entrain.   Or vice versa.  

Anyway, to go after this we will need to synch up multiple EEGs, and more importantly, find a good ensemble that might be up for this.    

I thought of AME, and wondered if there would be somebody there interested in devoting a little bit of time and nominal resources to chasing this down.

In any case, have good holidays,

STeve
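A standard first pass at quantifying the entrainment Steve describes is the phase-locking value (PLV) between band-passed channels. A minimal sketch (scipy assumed; the band, rates, and signals are placeholders, not anything the lab has committed to):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(8.0, 12.0)):
    """PLV between two equal-length signals within a narrow band:
    1.0 = phases locked across the recording, 0.0 = no consistent relation."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    phx = np.angle(hilbert(filtfilt(b, a, x)))
    phy = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phx - phy))))

# toy check: two noisy copies of one 10 Hz rhythm with a fixed phase offset
fs = 256
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.7) + 0.5 * np.random.randn(t.size)
print(phase_locking_value(x, y, fs))   # close to 1
```

Computed pairwise across players over sliding windows, PLV(t) gives one time-resolved index of the ensemble "locking into togetherness".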
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/942125 2015-12-02T15:52:13Z 2015-12-02T15:52:14Z AME research and graduate proseminar: the problem with explaining things in terms of "'parts' of the brain"
Hardcastle and Stewart succinctly point out a fundamental problem at the heart of the methodology of neuroscience (and of cognitive science): the modularity thesis.

Neuroscience did not “discover” modules — loci of functions —  in brains.   Rather “they don’t even have a good way of accessing the appropriate evidence. It is a bias in neuroscience to localize and modularize brain functions.”

The problem with scientistic methodology is that you see what you expect to see.



There’s much more in play: Noah Brender’s work questions the modularity thesis underlying much of technoscience. 
However, another world is possible :)

Xin Wei
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/938587 2015-11-24T13:10:44Z 2015-11-24T13:10:44Z Synthesis rhythm: IMU's etc. Dear Rhythm people: Garrett, Gabby, Julian,

Thanks for being on the demo team! Now we can get back to steady-state work, like rhythmanalysis:
textures.posthaven.com/
rhythmanalysis.weebly.com

Can you please check out the IMU’s that we bought last year as an input for our rhythm test platform?
Ask Ozzie, or perhaps one of Prof. Turaga's students who's used them, for permission, and see if you can stream them into Max.

I’d like to assemble a suite of inputs:
contact mic
air mic
camera (Julian)
IMU (Pavan’s group?)
xOSC gyros (Mike —> Julian)

and record them in parallel
with some movement scenarios to get multiple streams of time data.

Please define some scenarios: e.g. assembling blocks from small to giant size, cutting and washing. Try seated, upper-body, and locomotive movement. Varsha's done some movement scenarios with Grisha, but in very specialized contexts. How about the quotidian?
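For the record-in-parallel step, here is a minimal catch-all OSC logger that can run alongside Max (a sketch assuming the python-osc package; the port is a placeholder, and the addresses are whatever each sensor patch sends):

```python
import csv, time
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# One timestamped CSV for every incoming stream (contact mic features,
# air mic, camera, IMU, xOSC gyros ...) so they can be re-aligned later.
out = csv.writer(open("streams.csv", "w", newline=""))
out.writerow(["t_unix", "osc_address", "values"])

def record(address, *args):
    out.writerow([time.time(), address, list(args)])

dispatcher = Dispatcher()
dispatcher.set_default_handler(record)   # log every address we receive

BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()
```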

Let’s try some out on Monday Nov 30?

Cheers,
Xin Wei

cc Pavan

________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
Fellow: ASU-Santa Fe Consortium for Biosocial Complex Systems
Affiliate Professor: Future of Innovation in Society; Computer Science; English
Founding Director, Topological Media Lab
skype: shaxinwei • mobile: +1-650-815-9962
_________________________________________________________________________________________________
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/926515 2015-11-03T02:53:09Z 2015-11-03T02:53:09Z irreversible processes: ink on paper
Maybe useful: videos of irreversible processes, to provide an arrow of time so it will be clear when we run elastic time or reverse time.
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/918591 2015-10-18T14:03:58Z 2015-10-18T14:03:58Z example of synthesis research: Naccarato and MacCallum, "From Representation to Relationality: Bodies, Biosensors, and Mediated Environments" JDSP 8.1 (2015)
Here’s a journal article published by a couple of researchers hosted at Synthesis last year that may be interesting to folks working on movement and responsive media, somatic experience, experimental dance and experimental technology, critical studies of technoscience, or philosophy of movement:

Teoma Naccarato, John MacCallum, “From Representation to Relationality: Bodies, Biosensors, and Mediated Environments,”  in Embodiment, Interactivity and Digital Performance, Journal of Dance and Somatic Practices, 8.1, 2015.

Teoma is starting a PhD with the Centre for Dance Research (C-DaRE), Coventry University UK
and John is a postdoc at the Centre for New Music and Audio Technologies (CNMAT) Department of Music, University of California at Berkeley. 

John and Teoma’s extended journal article is a good example of a durable outcome from the research cluster hosted by Synthesis in the Heartbeat Residency: Choreography and Composition of Internal Time. This was a residency on temporality — sense of dynamic, change, rhythm — held February 15-20, 2015, in the AME iStage, Matthews Center, ASU.




Ambient color changes according to whether the dancer’s heart is faster or slower than some rate in the rhythm accompaniment software. Synthesis Residency, Jan 2015. (The overhead tube lamps from Ziegler’s “forest2" were not used in this particular experiment.)


Improvisation with dancer Naccarato, composer / system creator MacCallum, Synthesis team and members of ASU laptop orchestra (Lorkas). Synthesis Residency Jan 2015.


________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
Fellow: ASU-Santa Fe Center for Biosocial Complex Systems
Affiliate Professor: Future of Innovation in Society; Computer Science; English
Founding Director, Topological Media Lab
skype: shaxinwei • mobile: +1-650-815-9962
_________________________________________________________________________________________________
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/848564 2015-04-29T02:19:06Z 2015-04-29T02:19:31Z HMM in Max On Fri, Apr 24, 2015 at 5:12 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Where can we get the best publicly available HMM external for Max,
as a general-purpose HMM package?

Should we extend / modify gf (which we have via IRCAM license),
and can we use it easily for non-audio data? People claim to have tried it on video.
It seems that the real work is the preliminary feature extraction, where a lot of interpretation happens.
What are examples of code that do this in interesting ways?

Xin Wei

Navid Navab wrote:

While FTM is somewhat discontinued, all of this is being moved to IRCAM's free MuBu package:
http://forumnet.ircam.fr/product/mubu/
download the package and quickly check some of their example patches.

poster: 


It contains optimized algorithms building on gf, FTM, cataRT, pipo, etc. While MuBu is audio-centric, it is not necessarily audio-specific: mubu buffers can work with multiple data modalities and use a variety of correlation methods to move between these layers... This makes up a fairly wholesome platform without the need to move back and forth between gf, FTM, concatenative synthesis instruments, multimodal data handling, analysis, etc.

As with most current IRCAM releases, it is highly under-documented. Besides gf, which is distributed with their package, the mubu.hhmm object might be a good place to start for what you are looking for:


also their xmm object might be of interest:
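For quick offline experiments on non-audio data, the same idea can be prototyped outside Max with the hmmlearn package. A sketch on made-up movement features (an illustration only, not a substitute for gf or MuBu):

```python
import numpy as np
from hmmlearn import hmm

# Pretend the feature extraction has already happened: each row is one
# frame of non-audio features (e.g. IMU energy, velocity magnitude).
rng = np.random.default_rng(1)
still  = rng.normal(0.0, 0.1, size=(200, 2))
moving = rng.normal(1.0, 0.3, size=(200, 2))
X = np.vstack([still, moving, still])     # still -> moving -> still

model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=50)
model.fit(X)
states = model.predict(X)                 # Viterbi state per frame
print(states[:5], states[250:255], states[-5:])
```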
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/808642 2015-02-08T05:33:39Z 2015-02-08T05:33:40Z o4.track (video +sensor osc data), gyro (Was: Synthesis Center / Inertial Sensor Fusion) Great!

Mike, can you generate data in Julian’s data structure and store it in a shared directory for us all,
along with the journaled video? Julian Stein wrote the object for journaling data.

On Nov 10, 2014, at 1:07 AM, Julian Stein <julian.stein@gmail.com> wrote:
Also included in O4.rhyth_abs is a folder labeled o4.track. This features a simple system for recording and playing video with a synchronized OSC data stream.
I’ll cc this to the SC team so they can point out those utilities on our github.

It’d be great if you can give the Signal Processing group some Real Live Data to matlab offline this week, as a warmup to Teoma + John’s data the week of Feb 15.

We must have video journaled as well, always.

I’d be interested in seeing an informal brownbag talk about Lyapunov exponents on one of those mornings of the week of Feb 15, together with some analysis of the data.

Let’s cc Adrian Freed and John MacCallum on this “gyro" thread —
Adrian’s got the most insight into this  and could help us make some actual scientific headway
toward publishable results.

My question is: by doing some stats on clouds of orientation measurements
can we get some measure of collective intention (coordinated attention)
not necessarily at any one instant of time (a meaningless notion in a relativistic world like ours) — 
but in some generalized (collective) specious present?
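As a first stab at stats on clouds of orientations, circular statistics give a one-number coherence index per moment. A sketch (made-up yaw angles; full 3-D orientation would need quaternion averaging instead):

```python
import numpy as np

def coherence(headings_rad):
    """Mean resultant length R of a set of heading angles.
    R ~ 1: the group is oriented together; R ~ 0: orientations scattered."""
    return np.abs(np.mean(np.exp(1j * np.asarray(headings_rad))))

# one frame of yaw angles from five performers' IMUs (invented numbers)
print(coherence(np.deg2rad([10, 14, 8, 12, 11])))     # ~ 1, aligned
print(coherence(np.deg2rad([0, 72, 144, 216, 288])))  # ~ 0, scattered
```

Windowed over time, R(t) is one crude candidate for the "generalized specious present" measure: not an instantaneous sync test, but a statistic over a whole cloud of recent orientations.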

Let’s plan John and Teoma’s workshop hour by hour schedule this coming week at a tea?

Kristi or Garrett, or __: please let us know when the “heartbeat”  workshop weebly site is posted and linked to the Synthesis research ok?

Cheers,
Xin Wei

On Feb 6, 2015, at 12:13 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

I translated Seb's sensor fusion algorithm into JavaScript, to be used within Max/MSP:


There was still quite a bit of drift when I tested it, but I was only using 100Hz sample rate which I suspect may have been the main issue.

Mike
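For intuition about why fusion kills that drift: the simplest member of this family is a complementary filter, where the gyro term tracks fast motion and a small gravity-referenced correction keeps pulling the integrated angle back. A one-axis sketch (not Seb's algorithm):

```python
import math

def complementary_filter(angle, gyro_rate, ax, az, dt, k=0.98):
    """One tilt axis: blend the integrated gyro rate (rad/s) with the
    accelerometer's gravity-referenced tilt estimate."""
    accel_angle = math.atan2(ax, az)   # tilt implied by gravity alone
    return k * (angle + gyro_rate * dt) + (1 - k) * accel_angle

# usage: call once per sample. Gyro bias no longer accumulates forever;
# it is bounded by the (1 - k) accelerometer correction.
angle = 0.0
for gyro_rate, ax, az in [(0.02, 0.1, 0.99)] * 500:   # fake 100 Hz samples
    angle = complementary_filter(angle, gyro_rate, ax, az, dt=0.01)
print(angle)
```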


On Sat, Jan 31, 2015 at 3:45 PM, Adrian Freed <adrian@adrianfreed.com> wrote:
Thanks Xin Wei.
It would indeed be good to at least develop a road map for this important work. We should bring the folks from x-io
into the discussion, because they have moved their considerable energies and skills further into this space in 2015.
 
  I also want to clarify my relative silence on this front. As well as weathering some perfect storms last year, I found
  the following project attractive from the perspective of separating concerns for this orientation work: http://store.sixense.com/collections/stem
They are still unfortunately in pre-order land with a 2-3 month shipping time. Such a system would complement commercial and inertial measuring systems well
  by providing a "ground truth" ("ground fib") anchored to their beacon transmitter.  The sixense system has limited range for many of our applications
  which brings up the question (again as a separation of concerns not for limiting our perspectives) of scale. Many folk are thinking about orientation and inertial
  sensing for each digit of the hand (via rings).
 
  For the meeting we should prepare to share something about our favored use scenarios.

On Jan 31, 2015, at 1:37 PM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
 
 
Can you — Adrian and Mike — Doodle a Skype to talk about who should do what, and when, to get gyro information from multiple (parts of) bodies
into our Max platform, so Mike and the signal-processing maths folks can look at the data?
 
  This Skype should include at least one of our signal processing  Phd’s as well ?
 
Mike can talk about what he’s doing here, and get your advice on how we should proceed:
write our own gyro (orientation) feature accumulator
get a pre-alpha version of xOSC hw + sw from Seb Madgwick that incorporates that data
adapt something from the odot package that we can use now
WAIT till orientation data can be integrated easily (when, 2015?)
 
  Half an hour should suffice.
  I don’t have to be at this Skype as long as there’s a precise outcome and productive decision that’ll lead us to computing some (cor)relations on streams of orientations as a start...
 
  Cheers,
  Xin Wei
 
  __________


On Jan 31, 2015, at 1:27 PM, Vangelis <vl_artcode@yahoo.com> wrote:

 Hello!
Yes, there is great demand for something that works in sensor fusion for inertial sensors, but I think the best way to do it is as part of o., so as to benefit every inertial setup out there. It would take ages for Seb to implement it for x-OSC, and that would be an exclusive benefit. Seb's PhD is out there, and I am sure he will help by sharing new code for solving the problem. The question is: can we do this? :)
  My warm regards to everyone!
  v


On Jan 30, 2015 6:45 PM, Adrian Freed <adrian@adrianfreed.com> wrote:

  Hi.
The experts on your question work at x-io. Seb Madgwick wrote the code a lot of people around the world are using for sensor
fusion in IMUs.
  Are you using their IMU (x-OSC) as a source of inertial data?
 
  We started to integrate Seb's code into Max/MSP but concluded it would be better to wait for Seb
  to build it into x-OSC itself. There are some important reasons that this is a better approach, e.g.,
  reasoning about sensor fusion in a context with packet loss is difficult.
 
  It is possible Vangelis persisted with the Max/MSP route
 
 
On Jan 30, 2015, at 3:01 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:
 
  Hi Adrian,
 
I am a PhD student at ASU and I work with Xin Wei at the Synthesis Center. We are interested in fusing inertial sensor data (accel/gyro/mag) to give us reliable orientation (and possibly position) information. Do you have an implementation of such an algorithm that we can use in (or port to) Max/MSP?
 
  Thanks,
  Mike
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/790294 2015-01-02T13:14:47Z 2015-01-02T13:14:48Z Physis, poiesis in the highest sense

Not only handcraft manufacture, not only artistic and poetical bringing into appearance and concrete imagery, is a bringing-forth, poiesis. Physis also, the arising of something from out of itself, is a bringing-forth, poiesis. Physis is indeed poiesis in the highest sense. For what presences by means of physis has the bursting open belonging to bringing-forth, e.g., the bursting of a blossom into bloom, in itself (en heautoi). In contrast, what is brought forth by the artisan or the artist, e.g. the silver chalice, has the bursting open belonging to bringing­ forth not in itself, but in another (en alloi), in the craftsman or artist.

[Heidegger, Question Concerning Technology, 11]

]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/788287 2014-12-27T09:48:25Z 2014-12-27T09:48:25Z [Synthesis] rhythm research: a self-organizing map (SOM) (jit.robosom)-->
Dear, Garrett, Mike, Julian, Omar, Chris Z,

Swiss-French artist Robin Meier used self-organizing maps
to animate LED lighting as a network of synced oscillators in Firefly Sync in Russia
http://robinmeier.net/?p=1234
Paper about Max Jitter patch: jit.robosom

Self-organizing map abstraction for MaxMSP Jitter.
http://robinmeier.net/?p=25

Don’t know if jit.robosom works or is very interesting in effect, but it may be worth a try for rhythm experiments driving our lighting instruments.
This is a relatively trivial application of linked oscillators.
We should be able to achieve much more interesting behaviour, 
especially with live action in the loop.
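For comparison, the firefly behaviour itself, a field of linked oscillators pulling into sync, is a few lines of the Kuramoto model. A sketch (not Meier's patch; constants arbitrary):

```python
import numpy as np

# Kuramoto model: N oscillators, each nudged toward the mean phase.
# With coupling K above threshold, the "fireflies" lock together.
rng = np.random.default_rng(0)
N, K, dt = 50, 1.5, 0.01
omega = rng.normal(1.0, 0.1, N)         # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)    # initial phases

for _ in range(5000):
    mean_field = np.mean(np.exp(1j * theta))
    r, psi = np.abs(mean_field), np.angle(mean_field)
    theta += (omega + K * r * np.sin(psi - theta)) * dt

print("order parameter r =", np.abs(np.mean(np.exp(1j * theta))))  # near 1
```

Mapping each phase to an LED's brightness reproduces the firefly effect; feeding live sensor data into the natural frequencies or the coupling K is one way to put live action in the loop.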

A presentation by an authority on SOMs: Timo Honkela (Finland)

23.05.2014 [Metalithikum K5] Timo Honkela - Self Organizing Map as a means for gaining perspectives


HUNCH: Mike K’s correlation-based method should yield more interesting temporal textures.

Xin Wei

________________________________________________________________________________________
Sha Xin Wei • Professor and Director • School of Arts, Media and Engineering + Synthesis
Herberger Institute for Design and the Arts + Fulton Schools of Engineering • ASU
skype: shaxinwei • mobile: +1-650-815-9962
Founding Director, Topological Media Lab •  topologicalmedialab.net/
_________________________________________________________________________________________________
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/767335 2014-11-09T14:39:19Z 2014-11-09T14:39:30Z Adrian Freed: fuelling imagination for inventing entrainments
Here’s a note relevant to entrainment and co-ordinated rhythm, as the Lighting and Rhythm workshop looms.

Adrian Freed’s collected a list of what he calls “semblance typology of entrainments.”
Notice that he does NOT say “types” but merely a typology of semblances, which helps us avoid reification errors.

Let’s think of this as a way to enrich our vocabulary for rhythm, entrainment, temporality, processuality in movement and sound.
Let’s not use this — or any other list of categories — as an absolute universal set of categories sans context. 
See the comments below.


On Nov 8, 2014, at 11:54 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:
I would like to enrich our imagination before we lock too much into talking about “synchronization” 
and regular periodic clocks in the Lighting and Rhythm workshop.


On Nov 8, 2014, at 9:41 PM, Adrian Freed <Adrian.Freed@asu.edu> wrote:
http://adrianfreed.com/content/semblance-typology-entrainments



On Nov 8, 2014, at 10:01 PM, Adrian Freed <Adrian.Freed@asu.edu> wrote:
I haven't thought about this in a while but the move to organize the words using the term "together" which I did for the talk at Oxford is interesting because it allows a formalization in mereotopology a la Whitehead but I would have to provide an interpretation of enclosure and overlap that involves  correlation metrics in some structure, for example, CPCA 
(Correlational Principal Component Analysis): http://www.lv-nus.org/papers%5C2008%5C2008_J_6.pdf



On Nov 9, 2014, at 7:05 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:


__________________________________________________________________________________
Sha Xin Wei, Ph.D. • xinwei@mindspring.com • skype: shaxinwei • +1-650-815-9962
__________________________________________________________________________________



Thanks Adrian,

Then I wonder if rank statistics — ordering vs. cardinal metrics — could be a compromise way.
David Tinapple and Loren Olson here have invented a web system for peer-critique called CritViz
that has students rank each other’s projects.  It’s an attention orienting thing…

Of course there are all sorts of problems with it — the most serious one being 
herding toward mediocrity or at best herding toward spectacle

and it is a bandage invented by the necessity of dealing with high student / teacher ratios in studio classes.

The theoretical question is: can we approximate a mereotopology on a space of
Whiteheadian or Simondonian processes using rank ordering,
which may do away with the requirement for coordinate loci?

The Axiom of Choice gives us a well-ordering on any set, so that’s a start,
but there is no effective, decidable way to compute an ordering for an arbitrary set.
I think that’s a good thing. And it should be drilled into every engineer.
This means that the responsibility for ordering 
shifts to ensembles in milieu rather than individual people or computers.

Hmmm, so where does that leave us?

We turn to our anthropologists, historians, 
and to the canaries in the cage — artists and poets…

There’s a group of faculty here, including Cynthia Selin, who are doing what they call scenario [ design | planning | imagining ]
as a way for people to deal with wicked, messy situations like climate change or developing economies. They seem very prepared for
narrative techniques applied to ensemble events, but don’t know anything about theater or performance.
It seems like a situation ripe for exploration, if we can get past the naive phase of slapping conventional narrative genres from
community theater or gamification or info-visualization onto this.

Very hard to talk about, so I want to build examples here.

Xin Wei
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/751301 2014-10-05T21:33:35Z 2014-10-05T21:33:35Z lag; auto-calibration of video projection
From: Adrian Freed <adrian@cnmat.berkeley.edu>
Subject: Re: Automatic projector calibration
Date: October 5, 2014 at 1:08:24 PM MST

The technical notion of lag does not jibe very well with the multiple temporal structures involved in experience.
Using it as a ground truth produces some ugly theories, e.g., http://en.wikipedia.org/wiki/Interaural_time_difference
Notice the frequency-dependent hacks added to the theory, and the vagueness about delay/phase. Also notice that the detailed
anatomical and signal-flow analysis says nothing to support the theory other than that there are places where the information from the two ears meets. I encourage everyone to think this through carefully and to build and explore speculatively.

We have been down this hole at CNMAT for pitch detection on guitars. People think you can synthesize sounds that tightly track the pitch of a string. You can't. There are no interesting definitions of a low guitar string's pitch that would make this possible. One "solution" is to track with a constant lag, i.e. a sort of echo. This conditions and constrains the space considerably. A few artists have done amazing things within these constraints (https://www.youtube.com/watch?v=VYCG5wZ9op8, https://www.youtube.com/watch?v=1X5qDeK3siw) but the apparatus has strong agency and may well interfere with other goals.

On Oct 5, 2014, at 10:44 AM, Evan Montpellier <evan.montpellier@gmail.com> wrote:

Pages 36-9 in the thesis deal with tracking moving projection surfaces
at near real-time rates. The short of it is that the max refresh rate
Dr. Lee was able to achieve was 12 Hz, noting:

"feedback latency places a substantial constraint on the usage of
alternative patterns that may utilize recent sensor data to improve
tracking performance. Tracking algorithms that require instantaneous or
near instantaneous feedback from sensors are not likely to be executable
in practice."

Perhaps the lag would be acceptable, though, within some of the
visual movement experiments that already play with time delay.

Evan

On 2014-09-30, 4:47 PM, Byron Lahey wrote:
From my perspective, the real value that would come from implementing an
auto-calibration system would be the potential for dynamic projection
surfaces: surfaces that enter or exit a space, expand and contract,
morph into different shapes, etc.

I'm interested but don't have much bandwidth to devote to this.

Byron

On Fri, Sep 26, 2014 at 1:36 PM, Evan Montpellier
<Evan.Montpellier@asu.edu <mailto:Evan.Montpellier@asu.edu>> wrote:

  For projects such as the Table of Content/Portals, automatic
  projector calibration would save a considerable amount of work and
  time. Here's an attractive looking solution from Dr. Johnny Chung
  Lee, presently of Microsoft:

  http://johnnylee.net/projects/thesis/

  Is anyone interested in attempting to implement an analogous system
  as part of the Synthesis-TML portal network?
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/751040 2014-10-05T06:23:01Z 2016-11-18T07:11:25Z [Synthesis] Protentive and retentive temporality (Was: Notes for Lighting and Rhythm Residency Nov (13)17-26) Casting the net widely to AME + TML + Synthesis, here follows very raw notes that will turn into the plans for Synthesis Center’s Lighting and Rhythm Residency Nov (13)17-26.  Forgive me for the roughness, but I wanted to give as early a note as possible about what we are trying to do here.   Think of the work here as live, experientially rich yet expertly built experiments on temporality -- a sense of change, dynamic, rhythm or more generally temporal texture.

Please propose experiential provocations relevant to temporality, especially those that use modulated, animate lighting. 

I draw special attention to the phenomenological, Husserlian proposition:

“… something more ambitious attempted along the lines of things we have played with using sound. By tracking feet we can produce the anticipatory sound of a foot fall which messes with the neat and tidy
notion of retentions and protentions. Can you rework a visual representation of oneself (shadow, image, silhouette, ghost) to move in anticipation of where one moves?“

This would be an apt application of bread and butter statistical DSP methods!
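For instance, the anticipatory footfall is n-step-ahead linear prediction on a tracked coordinate. A sketch (least-squares AR model; every parameter here is illustrative):

```python
import numpy as np

def predict_ahead(x, order=4, steps=10):
    """Fit a least-squares AR(order) predictor to a 1-D track (e.g. a
    foot's vertical position) and extrapolate `steps` samples ahead."""
    X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)   # AR coefficients
    buf = list(x[-order:])
    for _ in range(steps):
        buf.append(float(np.dot(a, buf[-order:])))
    return buf[order:]

# toy: anticipate the next 10 frames of a noisy periodic gait track
t = np.arange(300)
gait = np.sin(2 * np.pi * t / 60) + 0.05 * np.random.randn(300)
print(predict_ahead(gait, steps=10))
```

Triggering the footfall sound from the predicted trajectory, a few frames before the tracked foot actually lands, is what lets the sound arrive in anticipation of the movement.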

TODOS:

• Chris Z. and I are reaching toward a strand of propositions that are both artistically relevant and phenomenologically informative. We can start with a double strand of fragments of prior art and prior questions relevant to “first person” / geworfen temporality (versus spectatorship as a species of vorhanden attitude, which is not in Synthesis’ scope, by mandate).
We need to seed this with some micro-etudes, but we expect that we’ll discover more as we go. This requires that we do all the tech development prior to the event; all gear acquisition, installation, and sw engineering should be done prior to Nov 13. The micro is a way to fight the urge to make a performance or an installation out of this scratch work.

Informed by conversation with Chris Z and all parties interested in contributing ideas on what we do during the LRR Nov 17-26, Chris R (and I) will:
• Agree on outcomes
• Set a timeline
• Organize teams
• Plan publicity and documentation


Begin forwarded message:

From: Xin Wei Sha <Xinwei.Sha@asu.edu>
Subject: Re: [Synthesis] Notes for Lighting and Rhythm Residency Nov (13)17-26
Date: October 4, 2014 at 2:31:10 PM MST

Please please please before we dive into more gear specs

What are the experiential provocations being proposed?

For example, Omar, everyone, can you please write into some common space more example micro-studies  similar to Adrian’s examples?
(See the movement exercises that MM has drawn up for past experiments for more examples.)
Here at Synthesis, I must insist on this practice, prior to buying gear, so that we have a much greater ratio of
propositions : gadgets.

Thank you, let’s play.
Xin Wei

_________________________________________________________________________________________________

On Oct 4, 2014, at 12:53 PM, Omar Faleh <omar@morscad.com> wrote:

I got the chance lately to work with the Philips Nitro strobes, which are intensely stronger than the Atomic 3000, for example. It is an LED strobe, so you can pulse, flicker, and keep it on for quite a while without having to worry about discharge and recharge... and being an all-LED strobe, it isn't as voltage-hungry as the Atomics...

The LED surface is split into 6 sub-rectangles that you can address individually or animate by preset effects, which allows for a nice play with shadows with only one light (all DMX-controlled),
and there is an RGB version of it too... so no need for gels and colour changers.

I am also looking into some individually-addressable RGB LED strips. Placing the order today, so I will hopefully be able to test and report the findings soon.


_________________________________________________________________________________________________

On 2014-10-04, at 3:30 PM, Adrian Freed <adrian@adrianfreed.com> wrote:

Sounds like a fun event!

Does the gear support simple temporal displacement modulations, e.g., delaying one's shadow or a projected image of oneself?

This is rather easy to do with the right gear.

I would like to see something more ambitious attempted along the lines of things we have played with using sound. By tracking feet we can produce the anticipatory sound of a foot fall which messes with the neat and tidy
notion of retentions and protentions. Can you rework a visual representation of oneself (shadow, image, silhouette, ghost) to move in anticipation of where one moves?

It would also be interesting to modulate a scrambling of oneself and connect its intensity to movement intensity. Navid has done things similar to this with sound. The experience was rather predictable but might well be different visually.


_________________________________________________________________________________________________

On Oct 4, 2014, at 11:53 AM, Xin Wei Sha <Xinwei.Sha@asu.edu> wrote:

Chris, Garrett, Julian, Omar, Chris Z, Evan, Byron, Prashan, Althea, Mike B, Ian S, Aniket, et al.

This is a preliminary note to sound out who is down for what for the coming residency on Lighting and Rhythm (LRR).  

The goal is to continue work on temporality from the IER last Feb-March, and this time really seriously, experimentally mucking with your sense of time by modulating lighting or your vision as you physically move. First-person experience, NOT designing for a spectator.

We need to identify a more rigorous scientific direction for this residency.  Having been asking people for ideas — I’ll go ahead and decide soon!

Please think carefully about:
Core Questions to extend:  http://improvisationalenvironments.weebly.com/about.html
Playing around with lights: https://vimeo.com/tml/videos/search:light/sort:date
Key Background:  http://textures.posthaven.com


The idea is to invite Chris and his students to work [richly] on site in the iStage, and have those of us who are hacking time via lighting play in parallel with Chris. Pavan & students and interested scientists/engineers should be explicitly invited to kibitz.




• Lighting and Rhythm
The way things are shaping up — we are gathering some gadgets to prepare for .

Equipment requested (some already installed thanks to Pete Ozzie and TML)
Ozone media system in iStage
Chris Ziegler’s Wald Forest system (MUST be able to lift off out of way as necessary within minutes — can an inexpensive motorized solution be installed ?)
3 x 6 ? grid of light fixtures with RGB gels, beaming onto floor
IR illuminators and IR-pass camera for tracking
Robe Robin MiniMe Moving Light/ Projector
Hazer (?)
Strobe + diffuser (bounce?)
+ Oculus DK1, (Mike K knows )
+ Google Glass (Chris R can ask Cooper , Ruth @ CSI)

We need to make sure we have a few rich instruments (NOT one-off hacked tableaux!) coded up ahead of time -- hence the call to Max-literate students who would like to try out what we have in order to adapt them for playing in the LRR by November.

Note 1:
Let’s be sure to enable multiplex of iStage to permit two other groups:
• Video portal - windows : Prashan, Althea Pergakis, Jen Weiler
•  shadow puppetting, Prashan working with Byron

Note 2:
Garth’s Singing Bowls are there.  Think about how to integrate such field effects.
Mike can you provide a Max patch to control them — ideally OSC -- but at least to fade up/down without having to physically touch any of the SB hardware.

Note 3:
This info should go on the lightingrhythm.weebly.com  experiment website that the LRR leads should create Monday unless someone has a better solution — it must be editable by the researchers and experiment leads themselves.  Clone from http://improvisationalenvironments.weebly.com !

Xin Wei

_________________________________________________________________________________________________


On Sep 4, 2014, at 8:53 AM, Adrian Freed <adrian@adrianfreed.com> wrote:

I am afraid I can't be very helpful here. I don't do MIR work myself. The field for the most part does
offline analyses of large data sets using musicologically naive Western musical concepts of pitch and rhythm.

One exception to the realtime/offline choice is from our most recent graduate student to work on the beat tracking problem, Eric Battenberg. Here is his dissertation: http://escholarship.org/uc/item/6jf2g52n#page-3
There is interesting machine learning going on in that work, but it presumes that one can build a reliable onset detector, which is a reasonable (but narrow) assumption for certain percussion sounds and drumming practices.
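For orientation, a minimal sketch of the kind of onset detector being presumed: a plain spectral-flux detector, the usual baseline that behaves tolerably for percussive material and poorly elsewhere. Frame size, hop size, and threshold are illustrative assumptions, not values from Battenberg's work.

import numpy as np

def spectral_flux_onsets(x, sr, frame=1024, hop=512, thresh=0.1):
    """Return onset times (in seconds): frames where half-wave-rectified
    spectral flux rises above `thresh` times its maximum.
    (No local-max picking: a sketch, not a robust detector.)"""
    mags = []
    for start in range(0, len(x) - frame, hop):
        win = x[start:start + frame] * np.hanning(frame)
        mags.append(np.abs(np.fft.rfft(win)))
    mags = np.array(mags)
    flux = np.maximum(mags[1:] - mags[:-1], 0.0).sum(axis=1)  # rectified rise
    peaks = np.where(flux > thresh * flux.max())[0] + 1
    return peaks * hop / sr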

The questions of phase and "in sync" raised below interest me greatly. There is no ground truth to the beat (up or down or on the "beat"). I remember being shocked recently to discover that a bunch of research on dance/music entrainment relied as a reference on hand-labeled visual beat markings from "expert listeners in the computer music lab next door". Various concepts such as "perceptual onset time" have been developed to sufficiently complicate this question and explain the difficulty people have reaching consensus on musical event timing and relating a particular beat measurement to features of the acoustic signals.
Even a "simple" case, bass and drums, is extremely difficult to unravel. The bass being a low-frequency instrument complicates the question of "onset" or the moment of the beat. The issue of who in this pair is determining the tempo is challenging, and the usual handwaving that the tempo is an emergent coproduction of the performers is not very helpful in itself in elaborating the process or identifying which features of the action and sound are relevant to the entrainment. My guess is that we will find models like the co-orbital arrangement of Saturn's moons Epimetheus and Janus.
What are the system identification tools to reveal these sorts of entrainment structures? Can this be done from the sound alone, or do we have to model the embodied motions that produce the sounds?
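One concrete candidate for such a tool, sketched here as a hedged example rather than anything from Adrian's lab: the phase-locking value (PLV) between two signals (sound envelopes, movement traces), computed from Hilbert-transform phases. A PLV near 1 indicates a stable phase relation without privileging either performer as the source of "the" beat. The signal choice and the toy oscillators below are assumptions.

import numpy as np
from scipy.signal import hilbert

def phase_locking_value(a, b):
    """|mean of exp(i * (phase_a - phase_b))| over the whole recording:
    1.0 for a fixed phase relation, near 0 for drifting phases."""
    phi_a = np.angle(hilbert(a))
    phi_b = np.angle(hilbert(b))
    return np.abs(np.mean(np.exp(1j * (phi_a - phi_b))))

# Two toy oscillators: a fixed offset keeps PLV ~ 1; detuning drops it.
t = np.linspace(0, 10, 5000)
print(phase_locking_value(np.sin(2*np.pi*2.0*t), np.sin(2*np.pi*2.0*t + 0.5)))  # ~1.0
print(phase_locking_value(np.sin(2*np.pi*2.0*t), np.sin(2*np.pi*2.3*t)))        # << 1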


NOTE from Adrian, XW, and Mike Krzyzaniak on the Percival-Tzanetakis tempo estimator:

_________________________________________________________________________________________________


On Sep 3, 2014, at 6:38 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Phase: I’m interested both in the convention of syncing on peaks and in the larger range of temporal entrainment phenomena that Adrian has identified with suggestive terminology. In practice, I would apply several different measures in parallel.

Yes, it would be great to have a different measure.  For example, one that detects when a moderate number (dozens to 100) of irregular rhythms have a large number of simultaneous peaks.  This is a weaker criterion than being in phase, and does not require periodicity.
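A hedged sketch of that weaker criterion: given peak times from many irregular streams, count how many streams have a peak inside the same small window, with no periodicity or phase assumed. The 50-millisecond window and the toy streams are illustrative choices.

import numpy as np

def coincidence_curve(peak_times, t_grid, window=0.05):
    """For each time t, count how many streams have at least one peak
    within `window` seconds of t."""
    return np.array([
        sum(np.any(np.abs(np.asarray(p) - t) <= window) for p in peak_times)
        for t in t_grid
    ])

# 50 irregular streams, each given one shared peak at t = 5 s.
rng = np.random.default_rng(0)
streams = [list(rng.uniform(0, 10, 20)) + [5.0] for _ in range(50)]
t_grid = np.linspace(0, 10, 200)
curve = coincidence_curve(streams, t_grid)
print(t_grid[curve.argmax()])  # ~ 5.0: the moment when many rhythms align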

Xin Wei


_________________________________________________________________________________________________


On Sep 2, 2014, at 5:07 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

Of course we could reduce the 6 second lag by reducing the window sizes and increasing the hop sizes, at the expense of resolution. Also, rather than using the OSS calculation provided, perhaps we could just use a standard amplitude follower that sums the absolute value of the signal with the absolute value of the Hilbert transform of the signal and filters the result. This would save us from decimating the signal on input and reduce the amount of time needed to gather enough samples for autocorrelation (at the expense of accuracy, particularly for slow tempi).
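A sketch of that amplitude follower, with illustrative cutoff and filter order; note that scipy's FFT-based Hilbert transform is itself offline, so a genuinely real-time version would substitute an FIR Hilbert approximator.

import numpy as np
from scipy.signal import hilbert, butter, lfilter

def amplitude_follower(x, sr, cutoff=20.0):
    """Envelope via |x| + |H(x)|, low-pass filtered."""
    h = np.imag(hilbert(x))              # Hilbert transform of x
    raw = np.abs(x) + np.abs(h)          # cheap stand-in for sqrt(x^2 + H^2)
    b, a = butter(2, cutoff / (sr / 2))  # 2nd-order low-pass
    return lfilter(b, a, raw)            # causal, so usable frame-by-frame

# A 440 Hz tone with a 2 Hz tremolo: the follower recovers the tremolo
# shape (up to a constant scale factor).
sr = 8000
t = np.arange(0, 2.0, 1 / sr)
x = (0.5 + 0.5 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 440 * t)
env = amplitude_follower(x, sr)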

What are you ultimately using this algorithm for? Percival-Tzanetakis also doesn't keep track of phase. If you plan on using it to take some measure of metaphorical rhythm between, say, humans as they interact with each other or the environment, then it seems like phase would be highly important. Are we in sync or syncopated? Am I on your upbeats or do we together make a flam on the downbeats?

Mike

_________________________________________________________________________________________________


On Tue, Sep 2, 2014 at 4:09 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Hi Adrian,

Mike pointed out what is for me a serious constraint in the Percival-Tzanetakis tempo estimator: it is not realtime.
I wonder if you have any suggestions on how to modify the algorithm to run more nearly in real time, with less buffering, if that’s the right word for it…

Anyway, I’d trust Mike to talk with you, since this is more your competence than mine. Cc me, for my edification and interest!

Xin Wei

_________________________________________________________________________________________________

On Sep 2, 2014, at 12:06 PM, Michael Krzyzaniak <mkrzyzan@asu.edu> wrote:

Hi Xin Wei,

I read the paper last night and downloaded the Marsyas source, but only the MATLAB implementation is there. I can work on getting the C++ version and porting it, but the algorithm has some serious caveats that I want to run by you before I get my hands too dirty.

The main caveat is that it was not intended to run in real time. The implementations they provide take an audio file, process the whole thing, and spit back one number representing the overall tempo.

"our algorithm is more accurate when these estimates are accumulated for an entire audio track"

It could be adapted to run in sort-of real time, but at 44.1k the tempo estimate will always lag by 6 seconds, and at a control rate of 30 ms (i.e. the rate TouchOSC uses to send accelerometer data from an iPhone) the algorithm as described will have to gather data for over 2 hours to make an initial tempo estimate and will only update once every 5 minutes.
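Those figures are consistent with the analysis sizes published in the Percival-Tzanetakis paper; a quick arithmetic check, assuming the paper's defaults (the constants below are taken from the paper, not from Mike's message):

HOP = 128       # input samples per onset-strength frame
WINDOW = 2048   # onset-strength frames per autocorrelation window
span = HOP * WINDOW             # samples needed per tempo estimate: 262,144

print(span / 44100)             # ~5.9 s: the "6 second" lag at audio rate
print(span * 0.030 / 3600)      # ~2.2 h: the "over 2 hours" at 30 ms rate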

Once I get the C++ source I can give an estimate of how difficult it might be to adapt (in the worst case it would be time-consuming, but not terribly difficult, to re-implement the whole thing in your language of choice).

If you would still like me to proceed let me know and I will contact the authors about the source.

Mike

________________________________________________________________________________________________



On Mon, Sep 1, 2014 at 3:45 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:
beat~ hasn't worked well for our research purposes so I'm looking for a better instrument.

I'm no expert, but P & T carefully analyze the extant techniques; the keyword is 'streamlined'.

Read the paper.  Ask Adrian and John.

Xin Wei
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/750890 2014-10-04T18:54:07Z 2014-10-04T18:54:08Z Notes for Lighting and Rhythm Residency Nov (13)17-26
Chris, Garrett, Julian, Omar, Chris Z, Evan, Byron, Prashan, Althea, Mike B, Ian S, Aniket, et al.

This is a preliminary note to sound out who is down for what for the coming residency on Lighting and Rhythm (LRR).  

The goal is to continue work on temporality from the IER last Feb-March, and this time to seriously, experimentally muck with your sense of time by modulating lighting or your vision as you physically move. First-person experience, NOT designing for a spectator.

We need to identify a more rigorous scientific direction for this residency. I've been asking people for ideas, and I'll go ahead and decide soon!

Please think carefully about:
Core Questions to extend:  http://improvisationalenvironments.weebly.com/about.html
Playing around with lights: https://vimeo.com/tml/videos/search:light/sort:date
Key Background:  http://textures.posthaven.com


The idea is to invite Chris and his students to work on site in the iStage and have those of us who are hacking time via lighting play in parallel with Chris. Pavan & students and interested scientists/engineers should be explicitly invited to kibitz.


Lighting and Rhythm 
The way things are shaping up, we are gathering some gadgets to prepare for it.

Equipment requested (some already installed thanks to Pete Ozzie and TML)
Ozone media system in iStage
Chris Ziegler’s Wald Forest system (MUST be able to lift out of the way within minutes as necessary; can an inexpensive motorized solution be installed?)
3 x 6 ? grid of light fixtures with RGB gels, beaming onto floor
IR illuminators and IR-pass camera for tracking
Robe Robin MiniMe Moving Light/ Projector 
Hazer (?)
Strobe + diffuser (bounce?)
+ Oculus DK1 (Mike K knows)
+ Google Glass (Chris R can ask Cooper, Ruth @ CSI)

We need to make sure we have a few rich instruments (NOT one-off hacked tableaux!) coded up ahead of time -- hence the call to Max-literate students who would like to try out what we have in order to adapt them for playing in the LRR by November.

Note 1:
Let’s be sure to enable multiplexing of the iStage to permit two other groups:
• Video portal - windows: Prashan, Althea Pergakis, Jen Weiler
• Shadow puppetry: Prashan working with Byron

Note 2:
Garth’s Singing Bowls are there.  Think about how to integrate such field effects.
Mike, can you provide a Max patch to control them, ideally over OSC, but at least to fade up/down without having to physically touch any of the SB hardware?

Note 3:
This info should go on the lightingrhythm.weebly.com experiment website that the LRR leads should create Monday unless someone has a better solution; it must be editable by the researchers and experiment leads themselves. Clone from http://improvisationalenvironments.weebly.com !

Xin Wei

On Sep 4, 2014, at 8:53 AM, Adrian Freed <adrian@adrianfreed.com> wrote:

I am afraid I can't be very helpful here. I don't do MIR work myself. The field for the most part does
offline analyses of large data sets using musicologically naive Western musical concepts of pitch and rhythm.

One exception to the realtime/offline choice is from our most recent graduate student to work on the beat tracking problem, Eric Battenberg. Here is his dissertation: http://escholarship.org/uc/item/6jf2g52n#page-3
There is interesting machine learning going on in that work, but it presumes that one can build a reliable onset detector, which is a reasonable (but narrow) assumption for certain percussion sounds and drumming practices.

The questions of phase and "in sync" raised below interest me greatly. There is no ground truth to the beat (up or down or on the "beat"). I remember being shocked recently to discover that a bunch of research on dance/music entrainment relied as a reference on hand-labeled visual beat markings from "expert listeners in the computer music lab next door". Various concepts such as "perceptual onset time" have been developed to sufficiently complicate this question and explain the difficulty people have reaching consensus on musical event timing and relating a particular beat measurement to features of the acoustic signals.
Even a "simple" case, bass and drums, is extremely difficult to unravel. The bass being a low-frequency instrument complicates the question of "onset" or the moment of the beat. The issue of who in this pair is determining the tempo is challenging, and the usual handwaving that the tempo is an emergent coproduction of the performers is not very helpful in itself in elaborating the process or identifying which features of the action and sound are relevant to the entrainment. My guess is that we will find models like the co-orbital arrangement of Saturn's moons Epimetheus and Janus.
What are the system identification tools to reveal these sorts of entrainment structures? Can this be done from the sound alone, or do we have to model the embodied motions that produce the sounds?


NOTE from Adrian, XW, and Mike Krzyzaniak on the Percival-Tzanetakis tempo estimator:

On Sep 3, 2014, at 6:38 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Phase: I’m interested both in the convention of syncing on peaks and in the larger range of temporal entrainment phenomena that Adrian has identified with suggestive terminology. In practice, I would apply several different measures in parallel.

Yes, it would be great to have a different measure.  For example, one that detects when a moderate number (dozens to 100) of irregular rhythms have a large number of simultaneous peaks.  This is a weaker criterion than being in phase, and does not require periodicity.

Xin Wei
]]>
Xin Wei Sha
tag:textures.posthaven.com,2013:Post/746867 2014-09-26T01:55:44Z 2014-09-26T01:55:44Z Re: [Synthesis] Transmutations Online
Hi,

In principle, I do know how to code this in HTML5 using the Web Audio API. However, I don't think I have much free time to work on it at the moment, although I would be willing to explain it to someone with good enough JavaScript skills to implement it.

Mike

On Wed, Sep 24, 2014 at 7:59 PM, Sha Xin Wei <synthesis.operations@gmail.com> wrote:
YES YES!

hence: Transmutations Online:  I attach the  proposal
]]>
Michael Krzyzaniak