Talks

Mapping Numbers to Sound—From Scientific Exploration to Immersive Musical Experience

Lecture & live demonstration with biophysicist Martin Gruebele, composer/software developer Carla Scaletti, and composer David Rosenboom. Center for Advanced Study, University of Illinois, October 2022

Why Sonification is a Joke

Keynote lecture for the 25th anniversary of the International Conference on Auditory Display (ICAD 2017: Sound in Learning) at Pennsylvania State University, June 2017

2017 SEAMUS Award Acceptance Speech

The SEAMUS Award acknowledges the important contributions of its recipients to the field of electroacoustic music and is presented following a concert by the recipient at the annual conference of the Society for Electro-Acoustic Music in the United States.

Ask, Recombine, Tumble (ART) a lecture for Charles Nichols’ Computer Music course at Virginia Tech, December 2016

Some effective strategies for overcoming writer’s block and blank screen syndrome: tips and tricks in Kyma 7.1

Computer Music’s Prehistoric Roots a lecture for Eric Lyon’s History of Electronic Music course at Virginia Tech, December 2016

From the mastery of fire to the invention of software and the first web browser, and why we started Symbolic Sound

Data sonification ≠ music an invited lecture in the New York University Department of Music Colloquium series, October 2016

Scientific data sonification is a mapping of data from an experiment or a model to one or more parameters of a sound synthesis algorithm for the purpose of interpreting, understanding, or communicating the results of the experiment or the model. Composer Carla Scaletti relates how her experiences working with scientists on mapping data to sound have unexpectedly changed the way she thinks about music.
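
As an illustration of what such a mapping can look like in practice (a minimal sketch, not material from the lecture), the Python fragment below maps a hypothetical data series onto the frequency and amplitude of one short sine tone per data point; the parameter ranges and durations are arbitrary choices made for this example.

```python
import numpy as np
from scipy.io import wavfile

# Hypothetical data series to sonify (e.g. measurements from an experiment or model).
data = np.array([0.1, 0.4, 0.35, 0.9, 0.6, 0.2, 0.75, 0.5])

SR = 44100        # sample rate in Hz
NOTE_DUR = 0.25   # seconds of sound per data point

# Normalize the data, then map it linearly onto a frequency range
# (220-880 Hz) and an amplitude range (0.2-0.8); both ranges are arbitrary.
norm = (data - data.min()) / (data.max() - data.min())
freqs = 220.0 + norm * (880.0 - 220.0)
amps = 0.2 + norm * 0.6

# Render each data point as a short sine tone and concatenate the results.
t = np.arange(int(SR * NOTE_DUR)) / SR
tones = [a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps)]
signal = np.concatenate(tones)

wavfile.write("sonification.wav", SR, (signal * 32767).astype(np.int16))
```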

Music and Technology a panel discussion at the cDACT 50th Anniversary of Experimental Arts Technology Colloquium, Stony Brook University, 2016

Phil Edelstein, Michelle Jaffe, Lauren Hayes, Izzi Ramkissoon, Troy Rogers, and Carla Scaletti; Dan Weymouth, moderator

Emergence a keynote address for KISS2016, the Kyma International Sound Symposium, De Montfort University, Leicester, UK, September 2016

What is emergence? We look at definitions from Steven Strogatz, Jeffrey Goldstein, Stephen Wolfram and others and then, based on those definitions, try to create conditions under which emergence could arise.
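
One concrete way to set up such conditions, in the spirit of Wolfram's elementary cellular automata (an illustration added here, not code from the talk), is to apply a purely local update rule and watch global structure appear:

```python
# Elementary cellular automaton, Wolfram Rule 30: every cell is updated from
# purely local information (itself and its two neighbors), yet complex
# global patterns emerge as the rows are printed.
RULE = 30
WIDTH, STEPS = 64, 32

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start from a single "on" cell in the middle

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    row = [
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```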

What’s new in Kyma 7.1 a lecture demonstration at KISS2016, the Kyma International Sound Symposium in Leicester UK, September 2016

Dynamical systems, spherical panning, galleries everywhere, and other new features in the upcoming release of Kyma 7.1

Design Patterns for Live Performance a masterclass at KISS2016, the Kyma International Sound Symposium in Leicester UK, September 2016

Recognizing some recurring design patterns in live electronics performance and learning how to implement them in Kyma

(Video) New ways to play: visionary designers on their instruments a panel discussion with Carla Scaletti (Kyma), Gerhard Behles (Ableton Live), Roger Linn (LinnStrument) and Stephan Schmitt (Native Instruments) in conversation with Dennis DeSantis (author of Making Music) at the Ableton Loop Conference in Berlin, November 2015

The inventors on this panel are all visionary makers who have been able to create bespoke instruments with commercial appeal. Their work straddles the boundaries between software and hardware, between tools for expressive performance and environments for sophisticated sound design.

Looking Back, Looking Forward a keynote address for the 41st International Computer Music Conference in Denton Texas, September 2015.

It seems that humans are driven to use any new technology to connect with each other, extending our networks of distributed cognition by telling stories, playing games, and making music with one another.

  • Brigham Young University, Spring 2016
  • 41st International Computer Music Conference (ICMC2015), Keynote Fall 2015

Picturing Sound (first contact) a keynote address for KISS2015, the Kyma International Sound Symposium in Bozeman Montana, August 2015

Like the original phonautograph invented in 1857, most of our sound recording and playback devices still rely on vibrating membranes. By using light’s interaction with sound pressure waves in the air, could we overcome our reliance on these physical membranes for picturing (and recording) sound?

Control patterns in Kyma 7 a masterclass at KISS2015, the Kyma International Sound Symposium in Bozeman Montana, August 2015

The more you work with time-varying parameter controls in Kyma, the more you start to notice certain useful models or patterns of control that seem to come up over and over again in different circumstances.

Data-driven Sound: What scientific data sonification has taught me about music

  • Brigham Young University Barlow Lecture, Spring 2016
  • University of California, Santa Cruz Graduate Music Colloquium, Spring 2016
  • Cinéma Spoutnik GVA Sessions 2015, Fall 2015
  • Rensselaer Polytechnic Institute Haas Graduate Colloquium, Spring 2015
  • New York University Lecture, Fall 2014
  • University of Virginia Colloquium Series, Fall 2014
  • University of Illinois Composers’ Forum, Spring 2014

What is the Most Organic Sound? a keynote address for KISS2014, the Kyma International Sound Symposium in Lübeck Germany, September 2014

A pilgrimage on a quest for the most organic sound, by way of syphilis, mitochondria, network motifs, E. coli, morphogens, modularity, evolution, microbiota, perfect adaptation, integral feedback, chemotaxis, flagella.

Morphisms, Maps, Meaning and Magritte a keynote address for KISS2013, the Kyma International Sound Symposium in Brussels, Belgium, September 2013

Reel time, real time a keynote address for KISS2012, the Kyma International Sound Symposium in St Cloud, Minnesota, September 2012

Exploring Sound Space a keynote address for KISS2011, the Kyma International Sound Symposium in Porto Portugal, September 2011

Music is not a language: non-symbolic meaning in sound a keynote address for KISS2010, the Kyma International Sound Symposium in Vienna Austria, September 2010

Recombinance Makes us Human a keynote address for the First International Kyma Symposium in Barcelona Spain, October 2009

The power of our recombinant social networks is that they enable us to learn things we have not directly seen for ourselves: to exponentially expand what we can know, remember, and learn and to engage in what philosopher Mark Johnson calls ‘distributed cognition’. Not only do we possess recombinant-style brains, but we seem driven to extend our recombinant thinking beyond the boundaries of a single brain by trading ideas, knowledge, experiences, and observations, and recombining them with our previous experience to form novel ideas, new ways of looking at things, and new kinds of music.

Metaphor in Mathematics & Sound an invited paper at Matematica e Cultura, Venice, March 2006

Composers have always drawn inspiration from mathematical ideas, and 21st-century composers make extensive use of mathematical tools in their work. More interesting, though, are the similarities between musical and mathematical thinking. Mathematicians and composers make use of many of the same conceptual metaphors for nonverbal reasoning, communication, discovery, and creation.

A Sound is a sound is a sound…

The design and implementation of a language for specifying, manipulating, combining and controlling digital audio signals
Kyma is a language for specifying, manipulating, and combining audio signals that can be controlled in real time by audio inputs, MIDI, or parameter updates from external software. The language is being used for sound design in artistic, scientific, commercial and educational contexts.
Kyma makes use of hardware parallelism, distributed control, and a hierarchical data structure called a Sound to generate, process, and organize the large amounts of data required for real-time digital audio synthesis and signal processing.
Kyma has been continuously evolving since its first implementation in 1986 on a Macintosh 512K computer. Intended for an engineering/computer science audience, this talk outlines and illustrates some of the solutions developed and lessons learned during its (ongoing) development.
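
To suggest what a hierarchical Sound structure means in practice, here is an illustrative analogue in Python; it is a sketch invented for this description, not Kyma's implementation, language, or API. Each node either generates a signal or combines and transforms its subSounds, and evaluating the root of the tree produces audio samples.

```python
import numpy as np

SR = 44100  # sample rate in Hz

class Oscillator:
    """Leaf node: generates a sine wave."""
    def __init__(self, freq, amp=1.0):
        self.freq, self.amp = freq, amp

    def render(self, dur):
        t = np.arange(int(SR * dur)) / SR
        return self.amp * np.sin(2 * np.pi * self.freq * t)

class Mixer:
    """Interior node: combines its subSounds by summing their outputs."""
    def __init__(self, *subsounds):
        self.subsounds = subsounds

    def render(self, dur):
        return sum(s.render(dur) for s in self.subsounds)

class Gain:
    """Interior node: transforms a single subSound by scaling it."""
    def __init__(self, subsound, factor):
        self.subsound, self.factor = subsound, factor

    def render(self, dur):
        return self.factor * self.subsound.render(dur)

# A small tree: two oscillators mixed together, then attenuated at the root.
root = Gain(Mixer(Oscillator(440), Oscillator(660, amp=0.5)), factor=0.3)
samples = root.render(dur=1.0)  # evaluating the hierarchy yields one second of audio
```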

The Body in the Sound: Can non-speech audio convey meaning? (and if so, what does it mean?)

It is clear to listeners, sound designers, and composers that sound and music do “mean” something. But it is just as clear that sound and music do not convey the same kind of logical-propositional meaning as is conveyed by language. An alternative model for meaning comes from a cognitive “data structure” known as an “image schema”. Image schemas might hold the explanation for why music (literally) moves us and how the soundtrack of a film or computer game gives it visceral and emotional impact.

Sounds, Symbols & Cyborgs a lecture at Alte Schmiede, Vienna, March 1996

Reflections on the function and future of music with computers

[Photo: ICMC2015 crowd]