Data sonification

I’ve been working with researchers to help them interpret, analyze, reason about, and communicate aspects of their data sets by mapping those data to sound. Practicing data sonification has changed the way I think about data, about mapping, and about sound synthesis and control, but, somewhat unexpectedly, it has also changed the way I think about music.

There is a widely held misconception that words, and only words, are capable of conveying meaning. But to restrict ourselves to symbolic language alone is to deny ourselves the full range of human expression, thought, and communication. If your definition of creating meaning is limited to making logical assertions in propositional calculus, then yes, you may conclude that non-speech audio is an ineffective way to convey meaning. But in The Meaning of the Body: Aesthetics of Human Understanding, philosopher Mark Johnson reminds us of the myriad other ways that humans create meaning: ways that include spoken and written language but which extend beyond symbolic representation.

Some sonification-related presentations and publications

“Sonification ≠ music” (2016) Book chapter in Alex McLean and Roger Dean (eds.), The Oxford Handbook of Algorithmic Composition. New York: Oxford University Press [forthcoming]

Data sonification is a mapping from data — generated by a model, captured in an experiment, or otherwise gathered through observation — to one or more parameters of an audio signal or sound synthesis model, for the purpose of better understanding, communicating, or reasoning about the original model, experiment, or system. Although data sonification shares techniques and materials with data-driven music, it is in the interests of practitioners of both sound art and data sonification to maintain a distinction between the two fields.
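The mapping described above can be sketched in a few lines of code. The following is a minimal parameter-mapping sonification in Python, using only the standard library; the function names, the pitch range, and the note duration are illustrative choices, not taken from any of the tools or publications listed here. Each data value is mapped linearly to the frequency of a short sine tone, and the resulting tones are written to a WAV file.

```python
import math
import struct
import wave

def map_range(x, lo, hi, out_lo, out_hi):
    """Linearly map x from the interval [lo, hi] to [out_lo, out_hi]."""
    return out_lo + (x - lo) * (out_hi - out_lo) / (hi - lo)

def sonify(data, sr=44100, note_dur=0.25, f_lo=220.0, f_hi=880.0):
    """Map each datum to the pitch of a short sine tone (parameter mapping).

    The full data range [min, max] is mapped onto [f_lo, f_hi] Hz, so
    higher data values sound as higher pitches. Returns float samples in
    the range [-1.0, 1.0].
    """
    lo, hi = min(data), max(data)
    samples = []
    n = int(sr * note_dur)  # samples per note
    for value in data:
        freq = map_range(value, lo, hi, f_lo, f_hi)
        for i in range(n):
            env = 1.0 - i / n  # linear decay envelope to soften note boundaries
            samples.append(env * math.sin(2 * math.pi * freq * i / sr))
    return samples

def write_wav(path, samples, sr=44100):
    """Write mono 16-bit PCM samples to a WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sr)
        frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
        w.writeframes(frames)

# Example: sonify a rising-then-falling series as a rising-then-falling melody
data = [0, 2, 5, 9, 7, 4, 1]
write_wav("sonification.wav", sonify(data))
```

Even this crude mapping makes the shape of the series audible at a glance (or rather, a listen); the tools described below generalize the idea to richer mappings from data space to sound space.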

Sonification of the world

We don’t live on the Earth. We are the Earth. An essay written for Joel Chadabe’s Ear to the Earth on how we can listen to the music of our sphere.

Talks on data-driven sound and the score for Gilles Jobin’s QUANTUM

LHCSound

A website with sound examples and descriptions of work with Lily Asquith on sonifying data from the Large Hadron Collider.

Sonification Tools in Kyma

Tools for mapping data space to sound space, presented at the Kyma International Sound Symposium (KISS2011) in Porto.

An Introduction to Data Sonification (1993) Evans, B., R. Bargar & C. Scaletti.

Course Notes for Tutorial 81 of the SIGGRAPH 20th International Conference on Computer Graphics and Interactive Techniques. New York: Association for Computing Machinery.

Sound Synthesis Methods for Auditory Data Representation (1992)

An invited talk presented at the first International Conference on Auditory Display (ICAD) at the Santa Fe Institute; proceedings published as Auditory Display: Sonification, Audification, and Auditory Interfaces, Gregory Kramer, ed., Santa Fe Institute Studies in the Sciences of Complexity, 1994.

Using Sound to Extract Meaning from Complex Data (1991)

A video produced with Alan Craig at the National Center for Supercomputing Applications to demonstrate some of the ways in which data-driven sound can enhance and extend data-driven visualizations.
Winner of the NICOGRAPH International 1991 Multimedia Award.

Using Sound to Extract Meaning from Complex Data (1991) Scaletti, C. & A. Craig.

A talk presented at the 1991 SPIE Conference in San Jose and published in Extracting Meaning from Complex Data: Processing, Display, Interaction II, Volume 1459, Edward J. Farrell, Chair/Editor, SPIE, the International Society for Optical Engineering, San Jose, February 1991.