Technology

Since I began making music, I have been interested in computers and electronic music.  At the end of my Bachelor of Music degree at the University of South Florida, I started learning interactive electronics with Baljinder Sekhon in the PureData environment.  After moving to Montréal to study percussion, I came upon a field whose surface I had only begun to scratch: music technology.  Since then I have attended many seminars and learned Max/MSP, SuperCollider, Arduino, OpenMusic, and much more! This section presents my programming, compositions, and performances involving music technology.

Unsounding Objects

The goal of Unsounding Objects is to create digital musical instruments which use the timbral qualities of pre-existing objects to drive sound synthesis and compositional structure during a musical performance. Non-musical objects are frequently integrated into contemporary percussion performance practice. Through audio feature extraction, these objects can be used as intuitive interfaces for the control of sound synthesis. Percussionists are accustomed to intimate control of timbre using a wide variety of performance techniques, and our research will leverage this expert technique in order to allow for the intuitive control of digital musical instruments in a solo percussion composition.

Piezo contact microphones are commonly used to amplify acoustically inert objects for musical performance. For this composition, a platform was built and equipped with piezo-based contact mics. A variety of found objects are placed upon this platform, and the sounds generated as these objects are played are used to control sound synthesis. The sound of the miked objects is not itself amplified; instead, perceptually relevant audio features are extracted from it.

Many different perceptual audio features can be extracted (spectral centroid, roughness, Bark coefficients, harmonic flux, etc.), and perceptual parameters have previously been extracted from live audio to control sound synthesis (cf. Tod Machover, Sparkler). To allow real-time control of sound synthesis, it is necessary to determine the fewest possible features which characterize the timbre of an object. The available features will therefore need to be evaluated to determine which are the most perceptually relevant for a given source object.
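
To make this concrete, here is a minimal SuperCollider sketch of the analysis stage, assuming a contact mic on the first input bus; the bus names, feature choices, and FFT size are my own placeholders, not the project’s final design:

    (
    s.waitForBoot {
        ~centroidBus = Bus.control(s, 1);
        ~flatnessBus = Bus.control(s, 1);

        SynthDef(\featureExtract, {
            var in = SoundIn.ar(0);           // piezo contact mic signal
            var chain = FFT(LocalBuf(1024), in);
            Out.kr(~centroidBus, SpecCentroid.kr(chain));
            Out.kr(~flatnessBus, SpecFlatness.kr(chain));
            // no Out.ar here: the object's sound is analyzed, never amplified
        }).play;
    };
    )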

The perceptual features will then be mapped to two intermediate mapping layers. The first layer will extract characteristics of the performers’ gestures from the audio analysis and will be developed in tandem with the choice of instrumental gestures used for performance. The second layer will be for collaborative control, in which performers will share joint control of synthesis. The parameters of this layer will be determined by the development of a compositional strategy. The varying timbral characteristics of the objects mean that identical mapping strategies and sound synthesis algorithms will produce different sonic results when played on different objects. One compositional approach will be to develop musical motives which are transformed by the objects upon which they are played. Percussion compositions frequently employ open instrumentation (e.g., Xenakis’ Psappha); we will adopt this strategy: which objects are played is left undetermined, while the mapping strategies and sound synthesis algorithms are predetermined.
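
As an illustration of such a mapping (an assumed one, not the layer we will ultimately compose with), the centroid bus from the sketch above could drive a resonant filter, so that the same gesture produces different sonic results on different objects:

    (
    SynthDef(\mappedSynth, {
        // smooth the tracked centroid and use it as a filter cutoff
        var centroid = In.kr(~centroidBus).lag(0.05);
        var cutoff = centroid.clip(100, 8000);
        var sig = RLPF.ar(Saw.ar(60), cutoff, 0.2, 0.1);
        Out.ar(0, sig ! 2);
    }).play;
    )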

One of our primary goals is the interdisciplinary development of the instrument (interface, feature extraction, mapping, sound synthesis) with the performance practice and the composition. To this end, we will hold weekly workshop meetings which will inform our individual research during the course of the project. At the end of our research, we will present conclusions regarding optimal audio feature extraction algorithms, mapping strategies for the control of sound synthesis with perceptual audio features, and a composition for percussion ensemble utilizing non-musical objects for control of sound synthesis and compositional structure.

SuperCollider

SuperCollider is a text-based programming environment for music that allows real-time control of sound synthesis and data manipulation.  I have created some interesting improvisations and patches in this environment.
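
Here is a small example in that spirit (an illustrative patch, not a transcription of an actual improvisation): a pattern walks randomly through a pentatonic scale while a noise LFO sweeps each note’s timbre.

    (
    SynthDef(\blip, { |freq = 440, amp = 0.2, pan = 0|
        var env = EnvGen.kr(Env.perc(0.01, 0.4), doneAction: 2);
        var sig = LPF.ar(Pulse.ar(freq, 0.3), freq * LFNoise1.kr(2).range(2, 8));
        Out.ar(0, Pan2.ar(sig * env * amp, pan));
    }).add;

    Pbind(
        \instrument, \blip,
        \scale, Scale.minorPentatonic,
        \degree, Pbrown(0, 14, 2, inf),
        \dur, Prand([0.125, 0.25, 0.5], inf),
        \pan, Pwhite(-0.7, 0.7, inf)
    ).play;
    )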

Karlax

The Karlax is a new digital musical interface created by the DaFact company in Paris, France.  Think of a MIDI keyboard: you might have 10-20 knobs and sliders, but in reality you only move a couple at a time.  Because the Karlax uses accelerometers, pistons, keys, and twist, you can easily control ten parameters at the same time (maybe even more, depending on how you program it!).  Since the McGill music technology department acquired one, I’ve been experimenting with this instrument and organized the CIRMMT Karlax Workshop and Concert on May 4th, 2015 at McGill University. Below are some videos from the concert and others from my duo for Karlax (myself) and oboe (Krisjana Thorsteinson) called N[i]Quest.
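
To give a sense of how such a mapping might look in code, here is a hedged SuperCollider sketch, assuming the Karlax is configured to send its continuous controls as MIDI CCs; the CC numbers below are placeholders, not DaFact’s defaults:

    (
    MIDIClient.init;
    MIDIIn.connectAll;

    // a simple drone whose timbre and tremolo the Karlax will steer
    ~drone = { |cutoff = 800, rate = 0.1|
        LPF.ar(Saw.ar([55, 55.3]), cutoff) * SinOsc.kr(rate).range(0.05, 0.2);
    }.play;

    MIDIdef.cc(\karlaxBend, { |val|
        ~drone.set(\cutoff, val.linexp(0, 127, 200, 8000));  // bend to filter
    }, ccNum: 1);

    MIDIdef.cc(\karlaxPiston, { |val|
        ~drone.set(\rate, val.linlin(0, 127, 0.05, 8));      // piston to tremolo
    }, ccNum: 2);
    )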

Max/MSP

I have been using Max/MSP to create interactive compositions for a few years now.  Below is the result of my 2015 fellowship at the Atlantic Music Festival with Mari Kimura.  In this piece, I use pitch tracking and pattern recognition to move through the form and manipulate the sounds using various electronic music techniques.  The score and patch are available by contacting me directly.
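
The Max patch itself is graphical, but the core pitch-tracking idea can be sketched in SuperCollider; this omits the pattern-recognition and score-advancing logic, and the threshold value is illustrative only:

    (
    {
        var in = SoundIn.ar(0);
        var freq, hasFreq;
        // track the player's fundamental frequency from the live input
        # freq, hasFreq = Pitch.kr(in, ampThreshold: 0.02);
        // ring-modulate the live sound with its own tracked pitch
        (in * SinOsc.ar(freq)) * hasFreq ! 2;
    }.play;
    )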

Max Metronome