PLOrk Beat Science 2.0
PLOrk Beat Science (PBS) arose from the desire to explore the intersection of traditional and new music paradigms. We experiment with the fusion of divergent genres, foraging through the tensions and complementarities between improvisation and automation, acoustic and electronic, spontaneity and control, human and machine. PBS 2.0 explores the effective production of laptop chamber music and the interaction of human and machine intelligence.

Sound emanates from the individual laptops via five omni-directional hemispherical speakers of the kind used in the Princeton Laptop Orchestra (PLOrk) and the Stanford Laptop Orchestra (SLOrk), together providing a distributed array of 30 independently addressable audio channels. These speakers facilitate inter-performer communication and collaboration, much as in traditional chamber settings, and the coupling of speakers to localized sound sources spatially informs the listener's relationship to the music.

The software system is implemented in the ChucK audio programming language. A server process maintains a shared, precise time grid across all machines, which are connected via a closed local area network. The machines serve as distributed soundbanks whose contents are remotely triggered in real time by the performers. Tabla and drum sounds are sampled; all other sounds are synthesized in real time.

Layers of human and machine interaction arise and evolve throughout the piece. A live, unprocessed flute solo gradually becomes digitally mediated and dissected, only to evolve into a mediator of the computer output itself, its timbral and pitch content used to drive synthesis parameters in real time. The nature of human and machine control over the sound also develops through the mapping function from the various gestural controllers to synthesis and compositional parameters: it begins as a simplistic, static function and develops into an increasingly complex and novel relationship between gestural expression and sonic space. This real-time evolution of the mapping is a joint effort between human and machine, the result of an interactive machine learning process. Our on-the-fly machine learning approach, used both for real-time evolution of the controller mapping and for the construction of the flute timbral and pitch control, is described in a paper submitted to NIME 2009 by Rebecca Fiebrink, Dan Trueman, and Perry Cook.
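As a rough illustration of the distributed-soundbank architecture described above, the following ChucK sketch shows one way a performer's machine might trigger sampled sounds on another machine over the local network via OSC. This is a minimal sketch, not the actual PBS code: the address pattern /pbs/trigger, the port number, the hostname, and the sample filenames are all hypothetical.

    // --- receiver.ck (runs on each soundbank machine) ---
    // listens for triggers and plays the requested sample
    OscRecv recv;
    6449 => recv.port;                        // port is an assumption
    recv.listen();
    recv.event( "/pbs/trigger, i" ) @=> OscEvent evt;

    fun void play( int which )
    {
        SndBuf buf => dac;
        // hypothetical sample files: tabla0.wav, tabla1.wav, ...
        "tabla" + Std.itoa( which ) + ".wav" => buf.read;
        0 => buf.pos;
        buf.length() => now;                  // let the sample finish
    }

    while( true )
    {
        evt => now;                           // wait for a trigger
        while( evt.nextMsg() != 0 )
            spork ~ play( evt.getInt() );     // one shred per hit
    }

    // --- sender.ck (runs on a performer's machine) ---
    // remotely fires sample 2 on a soundbank machine
    OscSend xmit;
    xmit.setHost( "plork-5.local", 6449 );    // hypothetical hostname
    xmit.startMsg( "/pbs/trigger, i" );
    2 => xmit.addInt;

In the piece itself such triggers would be scheduled against the server's shared time grid; here they simply fire immediately.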
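The idea of the flute's timbral content driving synthesis parameters can be suggested with ChucK's unit analyzers. This is a minimal sketch under stated assumptions, not the piece's actual analysis chain: it tracks the spectral centroid of the live input and maps it onto a filter cutoff, and the saw/filter voice and the frequency range are arbitrary choices for illustration.

    // track the spectral centroid of the live (flute) input
    adc => FFT fft =^ Centroid cent => blackhole;
    1024 => fft.size;
    Windowing.hann( 1024 ) => fft.window;

    // a stand-in synthesis voice whose brightness follows the flute
    SawOsc s => LPF f => dac;
    220 => s.freq;
    0.2 => s.gain;
    2 => f.Q;

    while( true )
    {
        cent.upchuck();                  // run the analysis
        cent.fval( 0 ) => float c;       // centroid (assumed normalized to [0,1])
        100 + c * 8000 => f.freq;        // map timbre onto cutoff (Hz)
        fft.size()::samp => now;         // advance one analysis frame
    }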
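Finally, the starting point of the controller mapping, the simplistic and static function mentioned above, might look like the linear HID mapping sketched below; the interactive machine learning process then replaces such a fixed map with a learned, evolving one (the learning itself is described in the cited NIME paper, not here). The joystick device and the choice of mapped parameter are assumptions.

    // a simplistic, static mapping: one joystick axis linearly
    // controls one synthesis parameter (vibrato depth, chosen arbitrarily)
    Hid hi;
    HidMsg msg;
    if( !hi.openJoystick( 0 ) ) me.exit();    // device 0 is an assumption

    SinOsc vib => blackhole;                  // low-frequency modulator
    SinOsc voice => dac;
    5 => vib.freq;
    440 => voice.freq;
    0.3 => voice.gain;
    0.0 => float depth;

    fun void vibrato()
    {
        while( true )
        {
            440 + depth * vib.last() => voice.freq;
            1::ms => now;
        }
    }
    spork ~ vibrato();

    while( true )
    {
        hi => now;                            // wait for controller input
        while( hi.recv( msg ) )
            if( msg.isAxisMotion() && msg.which == 0 )
                msg.axisPosition * 20 => depth;   // static linear map
    }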