Data sonification

I work with researchers to help them interpret, analyze, reason about, and communicate aspects of their data sets by mapping those data to sound. Doing data sonification has changed the way I think about data, about mapping, and about sound synthesis and control, but, somewhat unexpectedly, it’s also changed the way I think about music.

There is a widely held misconception that words, and only words, are capable of conveying meaning. But to restrict ourselves to symbolic language alone is to deny ourselves the full range of human expression, thought, and communication. If your definition of creating meaning is limited to making logical assertions using propositional calculus, then yes, you may conclude that non-speech audio is an ineffective way to convey meaning. But in The Meaning of the Body: Aesthetics of Human Understanding, philosopher Mark Johnson reminds us of the myriad other ways that humans create meaning — ways that include spoken and written language but which extend beyond symbolic representation.

Presentations

Why Sonification is a Joke

Keynote lecture for the 25th anniversary of the International Conference on Auditory Display, ICAD 2017: Sound in Learning, at Pennsylvania State University, June 2017

Mapping Numbers to Sound—From Scientific Exploration to Immersive Musical Experience

Lecture/demonstration with biophysicist Martin Gruebele, composer and software developer Carla Scaletti, and composer David Rosenboom at the Center for Advanced Study, University of Illinois

Other presentations

Podcasts

The Science of Sound (2023). Can sound help us understand the complex patterns in our universe? This question leads Nate to Symbolic Sound in Champaign, Illinois, where composer Carla Scaletti guides him on a journey where sound, music, and data intertwine in captivating and thought-provoking ways…

2022 Science We Missed (2022). Maura Armstrong & Bobby Frankenberger’s All Around Science podcast (starting at timecode 35:43).

Press

Sounds of science: Why just look at your data when you could listen to it? (2023) by Sumeet Kulkarni, Los Angeles Times, Science and Medicine, February 3, 2023

Biochemist Martin Gruebele… uses a software program called Kyma to add a specific sound to each of the numerous bonds that occur as the protein folds. When played back, the sound brings order to the chaos by highlighting which particular interactions dominate.

“You have to think of that sound in the same way that you think about a graph as opposed to a painting,” Gruebele said.

… Scaletti agreed that sound has the power to convey a lot of meaning… That’s why she’s carving a new niche in the human soundscape for science.

Illinois musicians, chemists use sound to better understand science (2022) by Jodi Heckel, Illinois News Bureau, University of Illinois

Musicians join scientists to explore data through sound (2017) by Carolyn Beans, Science Writer for the Proceedings of the National Academy of Sciences of the United States of America (PNAS vol. 114, no. 18, pp. 4563–4565, doi: 10.1073/pnas.1705325114).

Carolyn Beans’ Front Matter article in PNAS gives an overview of how musicians and researchers are working with data sonification, translating data into sound with the end goal of developing “deep insights into data revealed through sound”.

For composer and data sonifier Carla Scaletti, data sonification and music have different goals. Data sonification aims to “discover something about the original phenomenon that produced the data,” she says. “It’s almost like you don’t care that it was conveyed by sound. You’re trying to hear that underlying structure; whereas for music, you do want people to be aware of the sound.”

Scaletti likens aesthetic choices in data sonification to graphic design choices when preparing a chart for a scientific paper. “You choose colors and you choose a font, but all your choices are guided by the goal of wanting to make the data very clear.” When Scaletti isn’t working on scientific projects, she sometimes uses data in compositions, but she calls those works data-driven music, or just music.

Courses and tutorials

Sounds of Busan (2019) C. Scaletti, K. Lee, H. Park

Workshop presented at the KISS2019 conference in Busan, South Korea. In this hands-on session, we will take time series data related to the city of Busan (datasets from the 2017 Pohang earthquake and Busan tidal levels) and map the data to sound. Can we hear patterns in data that we might not otherwise detect?
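
For readers who would like to try a mapping like this on their own data, here is a minimal sketch in Python (the workshop itself used Kyma) of one classic approach to seismic data: audification, in which a long, slowly sampled time series is written out directly as audio so that its slow oscillations shift up into the audible range. The synthetic trace and file names below are hypothetical stand-ins, not the Pohang or Busan datasets.

```python
# Audification sketch: write a slowly sampled data series out directly as audio.
# Hypothetical illustration only; the KISS2019 workshop used Kyma and real datasets.
import wave
import numpy as np

def audify(samples, out_path="audified.wav", rate=44100):
    """Normalize a 1-D data series and write it directly as 16-bit mono audio."""
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()                       # remove any DC offset
    peak = np.max(np.abs(x)) or 1.0        # avoid dividing by zero on a flat series
    pcm = (x / peak * 32767).astype(np.int16)
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)                  # 16-bit samples
        w.setframerate(rate)
        w.writeframes(pcm.tobytes())

# Hypothetical example: one hour of a synthetic "seismogram" sampled at 100 Hz.
# Played back at 44.1 kHz it lasts about 8 seconds, and its 5 Hz energy
# shifts up to roughly 2.2 kHz, well inside the audible range.
fs_data = 100.0
t = np.arange(0, 3600, 1.0 / fs_data)
trace = np.exp(-((t - 600.0) ** 2) / 5000.0) * np.sin(2 * np.pi * 5.0 * t)
audify(trace + 0.01 * np.random.randn(t.size), "synthetic_quake.wav")
```

The playback rate is the main design choice here: it determines how much the data are compressed in time and how far their frequencies are shifted upward.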

Sonification Tools in Kyma (2011)

Tools for mapping data space to sound space, presented at the Kyma International Sound Symposium (KISS2011) in Porto.

An Introduction to Data Sonification (1993) Evans, B., R. Bargar & C. Scaletti.

Course Notes for Tutorial 81 of the SIGGRAPH 20th International Conference on Computer Graphics and Interactive Techniques. New York: Association for Computing Machinery.

Publications

“Sonification-Enhanced Lattice Model Animations for Teaching the Protein Folding Reaction,” J. Chem. Educ. 2022, 99 (3), 1220–1230. Carla Scaletti*, Meredith M. Rickard, Kurt J. Hebel, Taras V. Pogorelov, Stephen A. Taylor, and Martin Gruebele*, February 16, 2022.

Supporting Information (includes links to sonification/animations on YouTube)

The protein folding reaction is one of the most important chemical reactions in the human body. Yet, despite its importance, it is sometimes omitted from undergraduate courses due to the challenging nature of some of the underlying concepts. To help make key concepts of the protein folding reaction accessible to our undergraduate students, we implemented three simplified 2D lattice models of various amino acid chains, and we used these models to generate sound-enhanced animations that allow students to see and hear the dynamics of protein folding in action. In the spring of 2021, we used these videos in remote-learning biophysics and music courses to introduce four key concepts of the folding reaction: solvation and hydrophobicity; energy and conformational entropy; funneled energy landscape; and frustration and traps. Our lattice model animations and sonifications helped provide insight into protein folding dynamics for undergraduate and graduate biophysical chemistry students, undergraduate musicians, and even authors who are experts in this field. We plan to incorporate these and additional animations, along with enhancements to the 2D lattice models, in our future courses.
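
To give a rough sense of what a simplified 2D lattice model involves (a toy illustration only, not one of the paper’s three models, and the pitch mapping below is not the Kyma sonification used in the study), here is a short Python sketch of a hydrophobic-polar (HP) chain on a square lattice: it counts non-bonded hydrophobic contacts to compute an energy and maps that energy to a pitch.

```python
# Toy HP lattice model: the energy function and pitch mapping are illustrative only.
def hp_energy(sequence, coords):
    """Energy of a hydrophobic-polar (HP) chain on a 2D square lattice:
    -1 for every pair of non-bonded H residues on adjacent lattice sites."""
    occupied = {xy: i for i, xy in enumerate(coords)}
    contacts = 0
    for i, (x, y) in enumerate(coords):
        for nbr in ((x + 1, y), (x, y + 1)):         # each unordered pair checked once
            j = occupied.get(nbr)
            if j is not None and abs(i - j) > 1:      # skip covalently bonded neighbors
                if sequence[i] == "H" and sequence[j] == "H":
                    contacts += 1
    return -contacts

def energy_to_pitch(energy, base_hz=220.0):
    """Map each unit drop in energy to a pitch one semitone lower (illustrative)."""
    return base_hz * 2.0 ** (energy / 12.0)

# A 7-residue chain in an extended and a compact conformation:
seq = "HPHPPHH"
extended = [(i, 0) for i in range(7)]
compact = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 2), (1, 2), (2, 2)]
for name, conformation in (("extended", extended), ("compact", compact)):
    e = hp_energy(seq, conformation)
    print(f"{name}: energy {e}, pitch {energy_to_pitch(e):.1f} Hz")
```

In a dynamic simulation the energy changes at every step, so a mapping like this yields a continuous pitch contour rather than two isolated tones.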

“Sonification ≠ music” (2018) Book chapter in Alex McLean and Roger Dean (eds.), The Oxford Handbook of Algorithmic Composition. New York: Oxford University Press.

Data sonification is a mapping from data generated by a model, captured in an experiment, or otherwise gathered through observation to one or more parameters of an audio signal or sound synthesis model for the purpose of better understanding, communicating or reasoning about the original model, experiment or system. Although data sonification shares techniques and materials with data-driven music, it is in the interests of the practitioners of both sound art and data sonification to maintain a distinction between the two fields.
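
As a concrete, minimal illustration of such a mapping (a sketch only; the parameter ranges and function names are illustrative, not drawn from the chapter), the following Python fragment maps each data value onto the frequency of a short sine tone, a simple form of parameter-mapping sonification.

```python
# Parameter-mapping sketch: one data value -> one short sine tone at a mapped pitch.
import wave
import numpy as np

def sonify(data, out_path="sonified.wav", rate=44100,
           tone_dur=0.1, f_lo=220.0, f_hi=880.0):
    """Map each data value linearly onto the frequency of a short sine tone."""
    x = np.asarray(data, dtype=float)
    lo, hi = x.min(), x.max()
    span = (hi - lo) or 1.0                           # guard against a constant series
    freqs = f_lo + (x - lo) / span * (f_hi - f_lo)    # data value -> frequency in Hz
    t = np.linspace(0.0, tone_dur, int(rate * tone_dur), endpoint=False)
    env = np.hanning(t.size)                          # fade each tone in and out
    audio = np.concatenate([env * np.sin(2 * np.pi * f * t) for f in freqs])
    pcm = (audio / np.max(np.abs(audio)) * 32767).astype(np.int16)
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(pcm.tobytes())

# Example: a noisy sine wave becomes an audible rising-and-falling pitch contour.
values = np.sin(np.linspace(0, 6 * np.pi, 120)) + 0.1 * np.random.randn(120)
sonify(values, "noisy_sine.wav")
```

The Hanning envelope on each tone is just one way to avoid clicks at tone boundaries; the mapping range, tone duration, and timbre are exactly the kinds of design choices, guided by the goal of making the data clear, discussed above.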

Sonification of the world (2014 essay)

We don’t live on the earth. We are the Earth. An essay written for Joel Chadabe’s Ear to the Earth on how we can listen to the music of our sphere.

LHCSound (2013)

A website with sound examples and descriptions of work with Lily Asquith on sonifying data from the Large Hadron Collider.

Sound Synthesis Methods for Auditory Data Representation (1992)

An invited talk presented at the first International Conference on Auditory Display (ICAD) at the Santa Fe Institute; proceedings published as Auditory Display: Sonification, Audification, and Auditory Interfaces, Gregory Kramer, ed., Santa Fe Institute Studies in the Sciences of Complexity, 1994.

Using Sound to Extract Meaning from Complex Data (1991)

A video produced with Alan Craig at the National Center for Supercomputing Applications to demonstrate some of the ways in which data-driven sound can enhance and extend data-driven visualizations.
Winner of the NICOGRAPH International 1991 Multimedia Award.

Using Sound to Extract Meaning from Complex Data (1991) Scaletti, C. & A. Craig.

A talk presented at the 1991 SPIE Conference in San Jose and printed in Extracting Meaning From Complex Data: Processing, Display, Interaction II, Volume 1459, Edward J. Farrell, Chair/Editor, SPIE–The International Society for Optical Engineering, San Jose, February 1991.

Advisory panels

“Sonification: a tool for research, outreach and inclusion in space sciences” in support of full and equal participation of persons with disabilities in the space sector, United Nations Office for Outer Space Affairs, April 2023.

“Accessible Oceans: Exploring Ocean Data Through Sound: Building knowledge about effective design and use of auditory display for inclusive inquiry in ocean science,” NSF Award, Amy Bower (PI).