As many of you know firsthand, integrating the digital and analog realms is a real work of art. It's also, quite literally, a new art form. I recently had a chance to experience the premiere of Motion-e, a dance/motion-capture fusion that uses real-time motion capture to enhance live performance, with artificial intelligence (AI) interpreting dancers' movement and generating digital graphics and sound. While I'm not a dance aficionado, I love the nexus of arts and technology, and this work explores the divide between the digital and physical worlds and new ways of bridging the two.
Renowned choreographer Trisha Brown premiered "how long does the subject linger on the edge of the volume..." last month in New York City as part of Lincoln Center's "New Visions" series. (The name of the piece is based on a quote from a technician during show development.)
Geometric animation on a transparent screen in front of the dancers and a postmodern electronic score create a human/electronics loop. Music and visuals respond to the dancers, who, in turn, learn and sense the interrelationship between their movement and the sounds and images their motion generates.
Brown's piece, co-commissioned by Arizona State University and Lincoln Center, was part of the Motion-e project at ASU's Arts, Media and Engineering (AME) program, co-sponsored by the ASU Herberger College of Fine Arts and the Fulton School of Engineering. AME is one of the few U.S. programs combining arts, media, and engineering in a graduate degree concentration.
Motion-e represents three years of collaboration between well-known choreographers, visual artists, and engineers brought together by AME. The team developed a library of gestures and movements, correlated those to a sequence of motion-capture events, and wrote corresponding "gesture-capture" algorithms that could be used to trigger real-time output. The engineers also developed solutions for mounting motion-capture equipment in traditional theater spaces, overcoming height and width limitations, lighting interference, and floor reflectivity.
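The gesture-trigger idea described above can be sketched in a few lines of code. This is purely an illustrative sketch, not AME's actual system (which, as noted later, was built in a Java-based environment); the gesture names, matching rule, and output cues here are all hypothetical.

```python
# Hypothetical sketch of a gesture-capture pipeline: match each
# incoming motion-capture frame against a library of known gestures
# and fire a real-time output cue when one is recognized.

from dataclasses import dataclass

@dataclass
class Gesture:
    name: str
    matches: callable      # frame (list of (x, y) marker points) -> bool
    on_trigger: callable   # () -> output cue string

def arm_raise(frame):
    # Toy matching rule: any marker rises above y = 1.8 (meters)
    return any(y > 1.8 for _, y in frame)

library = [
    Gesture("arm-raise", arm_raise, lambda: "cue: swell sound"),
]

def process_frame(frame, gestures):
    """Return the output cues triggered by one mocap frame."""
    return [g.on_trigger() for g in gestures if g.matches(frame)]

events = process_frame([(0.5, 1.9), (0.2, 1.1)], library)
print(events)  # ['cue: swell sound']
```

In a real-time system a loop like this would run on every frame, with cues routed to the sound and graphics engines.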
The Trisha Brown piece was unique in that it took a more abstract approach to motion-capture analysis, incorporating AI programming that gave the animation and music a "mind of its own." Marc Downey, a doctoral candidate at the Media Lab at MIT, developed the AI programming.
The underlying motion-capture technology was similar to that used for the creation of computerized characters in the Lord of the Rings films and The Polar Express. In typical applications, a motion-capture participant wears a set of more than 40 reflective markers on all joints and appendages. The motion capture is carefully tuned to provide a digital "skeletal model" of the human movement, capturing labeled data fitted to the human form.
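The conventional pipeline just described, labeled markers fitted to a skeletal model, can be illustrated with a minimal sketch. The joint names and flat child-to-parent dictionary below are my own simplified assumptions; production skeletal solvers are far more involved.

```python
# Hypothetical sketch: in conventional motion capture, each labeled
# marker is mapped onto a joint of a skeletal model, so downstream
# code works with a named skeleton rather than anonymous points.

SKELETON = {"hip": None, "knee": "hip", "ankle": "knee"}  # child -> parent

def fit_frame(labeled_markers):
    """Attach one frame of labeled marker positions to the skeleton."""
    return {joint: labeled_markers[joint] for joint in SKELETON}

frame = fit_frame({"hip": (0.0, 1.0), "knee": (0.05, 0.55), "ankle": (0.1, 0.1)})
print(frame["knee"])  # (0.05, 0.55)
```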
A more abstract approach was pursued for the Brown piece: 50 markers were distributed across four dancers. Instead of skeletal models, Downey looked at "raw marker data, just points in 2D space that were sampled at 100 times per second." The raw data was then parsed to decipher patterns—"new ways of looking at the dance"—and the relationship of positions of dancers on the stage.
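One simple pattern this raw-point approach enables is tracking the spatial relationship between dancers, for example noticing when two dancers' marker clusters converge. Here is a minimal sketch; grouping markers by dancer and the proximity threshold are my own illustrative assumptions, not Downey's actual analysis.

```python
# Hypothetical sketch: detect which dancers are near each other
# by comparing the centroids of their raw 2D marker points.

import math

def centroid(points):
    """Mean position of one dancer's markers (2D points)."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def close_pairs(dancers, threshold=1.0):
    """Indices of dancer pairs whose centroids are within threshold."""
    cents = [centroid(p) for p in dancers]
    pairs = []
    for i in range(len(cents)):
        for j in range(i + 1, len(cents)):
            if math.dist(cents[i], cents[j]) < threshold:
                pairs.append((i, j))
    return pairs

# Two dancers near each other, one far away:
dancers = [
    [(0.0, 0.0), (0.2, 0.1)],   # dancer 0
    [(0.5, 0.1), (0.4, 0.0)],   # dancer 1
    [(5.0, 5.0), (5.2, 5.1)],   # dancer 2
]
print(close_pairs(dancers))  # [(0, 1)]
```

At 100 samples per second, an analysis like this yields a continuous stream of stage relationships rather than a per-joint skeleton.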
Visual artists Paul Kaiser, Shelley Eshkar, and Michael Girard and composer Curtis Bahn collaborated with Trisha Brown to create image and sound palettes and to define the interrelationships among movement, sound, and image. For example, when a certain pattern between dancers is recognized, digital graphics develop "branches" that reach between the dancers to connect them. Such graphics patterns, explains Downey, have established triggers but then move and morph according to complex AI algorithms, creating abstract correlations to the movement.
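The trigger-then-autonomy behavior Downey describes, a recognized pattern spawns a graphic that then evolves on its own, could be sketched as follows. The random-walk "growth" below is a stand-in for the piece's far more complex AI algorithms, and every name here is hypothetical.

```python
# Hypothetical sketch: a "branch" graphic is spawned by a recognized
# dancer pattern, then morphs autonomously instead of mirroring the
# dancers directly.

import random

class Branch:
    def __init__(self, start, end, seed=0):
        self.points = [start, end]          # reaches between two dancers
        self.rng = random.Random(seed)      # seeded for reproducibility

    def morph(self):
        # Autonomous growth: extend the branch by a small random offset,
        # standing in for the actual AI-driven behavior.
        x, y = self.points[-1]
        self.points.append((x + self.rng.uniform(-0.1, 0.1),
                            y + self.rng.uniform(-0.1, 0.1)))

# Trigger: a pattern between two dancers is recognized, spawning a
# branch between their positions; it then grows with a "mind of its own."
branch = Branch(start=(0.1, 0.05), end=(0.45, 0.05), seed=42)
for _ in range(3):
    branch.morph()
print(len(branch.points))  # 5
```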
The movement was captured by 16 near-infrared motion-capture cameras from Motion Analysis Corp. To write the AI code, Downey created a Java-based graphical programming environment using the MIT Media Lab tool kit he's helped develop over the last six years. The program runs on a prerelease version of Mac OS X Tiger on two Mac G5s, with another Mac G5 for backup.
In addition to incubating some very cool art, the ASU program aims to improve the accuracy of motion detection. The Interdisciplinary Research Environment for Motion Analysis (IREMA) integrates researchers from 10 disciplines via a five-year Research Infrastructure grant from the National Science Foundation. IREMA students have founded a company called Motion Ease to develop products for the sporting equipment and security industries, as well as for gait recognition, movement rehabilitation, and assistive technology for the blind. Motion-capture lessons learned via Motion-e will likely enhance both the "real" world and the digital realms.