Editors: Alexander Refsum Jensenius, Anders Tveit, Rolf Inge Godøy, Dan Overholt

Table of Contents

-Tellef Kvifte: Keynote Lecture 1: Musical Instrument User Interfaces: the Digital Background of the Analog Revolution - page 1
-David Rokeby: Keynote Lecture 2: Adventures in Phy-gital Space - page 2
-Sergi Jordà: Keynote Lecture 3: Digital Lutherie and Multithreaded Musical Performance: Artistic, Scientific and Commercial Perspectives - page 3

Paper session A — Monday 30 May 11:00–12:30
-Dan Overholt: The Overtone Fiddle: an Actuated Acoustic Instrument - page 4
-Colby Leider, Matthew Montag, Stefan Sullivan and Scott Dickey: A Low-Cost, Low-Latency Multi-Touch Table with Haptic Feedback for Musical Applications - page 8
-Greg Shear and Matthew Wright: The Electromagnetically Sustained Rhodes Piano - page 14
-Laurel Pardue, Christine Southworth, Andrew Boch, Matt Boch and Alex Rigopulos: Gamelan Elektrika: An Electronic Balinese Gamelan - page 18
-Jeong-Seob Lee and Woon Seung Yeo: Sonicstrument: A Musical Interface with Stereotypical Acoustic Transducers - page 24

Poster session B — Monday 30 May 13:30–14:30
-Scott Smallwood: Solar Sound Arts: Creating Instruments and Devices Powered by Photovoltaic Technologies - page 28
-Niklas Klügel, Marc René Frieß and Georg Groh: An Approach to Collaborative Music Composition - page 32
-Nicolas Gold and Roger Dannenberg: A Reference Architecture and Score Representation for Popular Music Human-Computer Music Performance Systems - page 36
-Mark Bokowiec: V’OCT (Ritual): An Interactive Vocal Work for Bodycoder System and 8 Channel Spatialization - page 40
-Florent Berthaut, Haruhiro Katayose, Hironori Wakama, Naoyuki Totani and Yuichi Sato: First Person Shooters as Collaborative Multiprocess Instruments - page 44
-Tilo Hähnel and Axel Berndt: Studying Interdependencies in Music Performance: An Interactive Tool - page 48
-Sinan Bokesoy and Patrick Adler: 1city 1001vibrations: development of an interactive sound installation with robotic instrument performance - page 52
-Tim Murray-Browne, Di Mainstone, Nick Bryan-Kinns and Mark D. Plumbley: The medium is the message: Composing instruments and performing mappings - page 56
-Seunghun Kim, Luke Keunhyung Kim, Songhee Jeong and Woon Seung Yeo: Clothesline as a Metaphor for a Musical Interface - page 60
-Pietro Polotti and Maurizio Goina: EGGS in action - page 64
-Berit Janssen: A Reverberation Instrument Based on Perceptual Mapping - page 68
-Lauren Hayes: Vibrotactile Feedback-Assisted Performance - page 72
-Daichi Ando: Improving User-Interface of Interactive EC for Composition-Aid by means of Shopping Basket Procedure - page 76
-Ryan McGee, Yuan-Yi Fan and Reza Ali: BioRhythm: a Biologically-inspired Audio-Visual Installation - page 80
-Jon Pigott: Vibration, Volts and Sonic Art: A practice and theory of electromechanical sound - page 84
-George Sioros and Carlos Guedes: Automatic Rhythmic Performance in Max/MSP: the kin.rhythmicator - page 88
-Andre Goncalves: Towards a Voltage-Controlled Computer — Control and Interaction Beyond an Embedded System - page 92
-Tae Hun Kim, Satoru Fukayama, Takuya Nishimoto and Shigeki Sagayama: Polyhymnia: An automatic piano performance system with statistical modeling of polyphonic expression and musical symbol interpretation - page 96
-Juan Pablo Carrascal and Sergi Jordà: Multitouch Interface for Audio Mixing - page 100
-Nate Derbinsky and Georg Essl: Cognitive Architecture in Mobile Music Interactions - page 104
-Benjamin D. Smith and Guy E. Garnett: The Self-Supervising Machine - page 108
-Aaron Albin, Sertan Senturk, Akito Van Troyer, Brian Blosser, Oliver Jan and Gil Weinberg: Beatscape, a mixed virtual-physical environment for musical ensembles - page 112
-Marco Fabiani, Gaël Dubus and Roberto Bresin: MoodifierLive: Interactive and collaborative expressive music performance on mobile devices - page 116
-Benjamin Schroeder, Marc Ainger and Richard Parent: A Physically Based Sound Space for Procedural Agents - page 120
-Francisco Garcia, Leny Vinceslas, Esteban Maestre and Josep Tubau: Acquisition and study of blowing pressure profiles in recorder playing - page 124
-Anders Friberg and Anna Källblad: Experiences from video-controlled sound installations - page 128
-Nicolas d’Alessandro, Roberto Calderon and Stefanie Müller: ROOM#81 — Agent-Based Instrument for Experiencing Architectural and Vocal Cues - page 132

Demo session C — Monday 30 May 13:30–14:30
-Yasuo Kuhara and Daiki Kobayashi: Kinetic Particles Synthesizer Using Multi-Touch Screen Interface of Mobile Devices - page 136
-Christopher Carlson, Eli Marschner and Hunter McCurry: The Sound Flinger: A Haptic Spatializer - page 138
-Ravi Kondapalli and Benzhen Sung: Daft Datum – an Interface for Producing Music Through Foot-Based Interaction - page 140
-Charles Martin and Chi-Hsia Lai: Strike on Stage: a percussion and media performance - page 142

Paper session D — Monday 30 May 14:30–15:30
-Baptiste Caramiaux, Patrick Susini, Tommaso Bianco, Frédéric Bevilacqua, Olivier Houix, Norbert Schnell and Nicolas Misdariis: Gestural Embodiment of Environmental Sounds: an Experimental Study - page 144
-Sebastian Mealla, Aleksander Valjamae, Mathieu Bosi and Sergi Jordà: Listening to Your Brain: Implicit Interaction in Collaborative Music Performances - page 149
-Dan Newton and Mark Marshall: Examining How Musicians Create Augmented Musical Instruments - page 155

Paper session E — Monday 30 May 16:00–17:00
-Zachary Seldess and Toshiro Yamada: Tahakum: A Multi-Purpose Audio Control Framework - page 161
-Dawen Liang, Guangyu Xia and Roger Dannenberg: A Framework for Coordination and Synchronization of Media - page 167
-Edgar Berdahl and Wendy Ju: Satellite CCRMA: A Musical Interaction and Sound Synthesis Platform - page 173

Paper session F — Tuesday 31 May 09:00–10:50
-Nicholas J. Bryan and Ge Wang: Two Turntables and a Mobile Phone - page 179
-Nick Kruge and Ge Wang: MadPad: A Crowdsourcing System for Audiovisual Sampling - page 185
-Patrick O’Keefe and Georg Essl: The Visual in Mobile Music Performance - page 191
-Ge Wang, Jieun Oh and Tom Lieber: Designing for the iPad: Magic Fiddle - page 197
-Benjamin Knapp and Brennon Bortz: MobileMuse: Integral Music Control Goes Mobile - page 203
-Stephen Beck, Chris Branton, Sharath Maddineni, Brygg Ullmer and Shantenu Jha: Tangible Performance Management of Grid-based Laptop Orchestras - page 207

Poster session G — Tuesday 31 May 13:30–14:30
-Smilen Dimitrov and Stefania Serafin: Audio Arduino — an ALSA (Advanced Linux Sound Architecture) audio driver for FTDI-based Arduinos - page 211
-Seunghun Kim and Woon Seung Yeo: Musical control of a pipe based on acoustic resonance - page 217
-Anne-Marie Hansen, Hans Jørgen Andersen and Pirkko Raudaskoski: Play Fluency in Music Improvisation Games for Novices - page 220
-Izzi Ramkissoon: The Bass Sleeve: A Real-time Multimedia Gestural Controller for Augmented Electric Bass Performance - page 224
-Ajay Kapur, Michael Darling, James Murphy, Jordan Hochenbaum, Dimitri Diakopoulos and Trimpin: The KarmetiK NotomotoN: A New Breed of Musical Robot for Teaching and Performance - page 228
-Adrian Barenca Aliaga and Giuseppe Torre: The Manipuller: Strings Manipulation and Multi-Dimensional Force Sensing - page 232
-Alain Crevoisier and Cécile Picard-Limpens: Mapping Objects with the Surface Editor - page 236
-Jordan Hochenbaum and Ajay Kapur: Adding Z-Depth and Pressure Expressivity to Tangible Tabletop Surfaces - page 240
-Andrew Milne, Anna Xambó, Robin Laney, David B. Sharp, Anthony Prechtl and Simon Holland: Hex Player — A Virtual Musical Controller - page 244
-Carl Haakon Waadeland: Rhythm Performance from a Spectral Point of View - page 248
-Josep M Comajuncosas, Enric Guaus, Alex Barrachina and John O’Connell: Nuvolet: 3D gesture-driven collaborative audio mosaicing - page 252
-Erwin Schoonderwaldt and Alexander Refsum Jensenius: Effective and expressive movements in a French-Canadian fiddler’s performance - page 256
-Daniel Bisig, Jan Schacher and Martin Neukom: Flowspace – A Hybrid Ecosystem - page 260
-Marc Sosnick and William Hsu: Implementing a Finite Difference-Based Real-time Sound Synthesizer using GPUs - page 264
-Axel Tidemann: An Artificial Intelligence Architecture for Musical Expressiveness that Learns by Imitation - page 268
-Luke Dahl, Jorge Herrera and Carr Wilkerson: TweetDreams: Making music with the audience and the world using real-time Twitter data - page 272
-Lawrence Fyfe, Adam Tindale and Sheelagh Carpendale: JunctionBox: A Toolkit for Creating Multi-touch Sound Control Interfaces - page 276
-Andrew Johnston: Beyond Evaluation: Linking Practice and Theory in New Musical Interface Design - page 280
-Phillip Popp and Matthew Wright: Intuitive Real-Time Control of Spectral Model Synthesis - page 284
-Pablo Molina, Martin Haro and Sergi Jordà: BeatJockey: A new tool for enhancing DJ skills - page 288
-Jan Schacher and Angela Stoecklin: Traces – Body, Motion and Sound - page 292
-Grace Leslie and Tim Mullen: MoodMixer: EEG-based Collaborative Sonification - page 296
-Ståle A. Skogstad, Kristian Nymoen, Yago de Quay and Alexander Refsum Jensenius: OSC Implementation and Evaluation of the Xsens MVN suit - page 300
-Lonce Wyse, Norikazu Mitani and Suranga Nanayakkara: The effect of visualizing audio targets in a musical listening and performance task - page 304
-Adrian Freed, John MacCallum and Andrew Schmeder: Composability for Musical Gesture Signal Processing using new OSC-based Object and Functional Programming Extensions to Max/MSP - page 308
-Kristian Nymoen, Ståle A. Skogstad and Alexander Refsum Jensenius: SoundSaber — A Motion Capture Instrument - page 312
-Øyvind Brandtsegg, Sigurd Saue and Thom Johansen: A modulation matrix for complex parameter sets - page 316

Demo session H — Tuesday 31 May 13:30–14:30
-Yu-Chung Tseng, Che