I have an oddity of brain wiring which allows me to see sounds as colors and patterns. It’s particularly strong when I close my eyes. This is a type of synesthesia called chromesthesia, where the brain mixes up the processing signals for sound and color. As brain oddities go, it’s kind of a cool one to have. A car door slam might be red-orange triangles. White noise like hot air from a heating vent might be an indigo wave. I’ve yet to find any sort of practical application for this weird superpower, but it has got me thinking about the ways human beings have attempted to visualize sound.
How do you turn something as abstract as sound into something visual? Last year, I discovered a British electronic musician by the name of Daphne Oram. She, along with some other electronic music enthusiasts, established the BBC Radiophonic Workshop back in the late 1950s. Her process of making electronic music was to paint shapes onto rolls of film, which she would then run through a custom-built machine that could read the shapes and convert them into sounds. Prior to this particularly innovative method of visualizing music, the most common method of notation was writing a series of dots on lines. On paper.
Notation in Western music uses a 12-tone system visualized most commonly by a staff of five lines. Dots on the lines, or in the spaces between them, represent specific pitches, and the stems and flags on the dots represent the duration of each pitch. There are lots of other squiggles representing other things: when not to play a sound, when to repeat a certain phrase, what set of notes the staff represents, what key the notes are in, and so forth.
This style of notation is familiar to anyone who has studied music, but there are many other ways in which music can be visualized. I wrote an article a while ago about Karlheinz Stockhausen’s Studie II, which featured a visual notation score for an electronic music piece. Another method of visualizing sound after the advent of electronics and computers is with the audio waveform. You see this on every piece of sound uploaded to SoundCloud, or in the clips on a digital audio workstation.
It’s the waveform style of sound visualization that reminded me of a short clip from a television series called Connections 2 that I saw many, many years ago. It described a fascinating and ingenious method of getting moving pictures to talk. Sound is little more than a wave vibrating through a medium. A microphone picks up differences in air pressure from a sound wave using a flexible diaphragm and converts the vibration into a varying electric current. That varying current can then drive a light source that exposes a length of film during shooting. On playback, the shapes that the varying light left on the film can be converted back into sound using the same photoelectric technology that makes solar power possible. Notably, these shapes look an awful lot like contemporary waveforms of digital sound data.
So a varying light source creates a varying electric current, which is used to vibrate the surface of a speaker, recreating the air pressure changes that produce sound. That is cool! This is essentially what Daphne Oram was doing with her custom-made Oramics synthesizer back in 1958. Just like the shapes of the soundtrack on an early Hollywood talkie, Daphne Oram’s painted spools of celluloid film could define pitch, timbre, and loudness. When played through her synthesizer, they would produce music. She was drawing music, visualizing sound.
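The round trip described above — sound to light to film and back to sound — can be sketched as a toy simulation. This is a deliberately simplified model of a variable-density optical soundtrack, not Oram’s actual machine or a real film response curve: the function names and the linear brightness mapping are my own assumptions, chosen just to show that an ideal light-and-photocell chain reproduces the waveform exactly.

```python
import math

def record_to_film(samples):
    """'Expose' the film: map audio samples in [-1, 1] to
    light intensities (film densities) in [0, 1]."""
    return [(s + 1.0) / 2.0 for s in samples]

def play_back(densities):
    """'Photocell' reads the light shining through the film
    and converts brightness back into an audio sample."""
    return [d * 2.0 - 1.0 for d in densities]

# A 440 Hz tone sampled at 8 kHz, a few milliseconds long.
rate = 8000
tone = [math.sin(2 * math.pi * 440 * n / rate) for n in range(64)]

film = record_to_film(tone)      # the waveform-like shapes on the film strip
recovered = play_back(film)      # what the speaker would reproduce

# With ideal (perfectly linear) components, the round trip is lossless.
assert all(abs(a - b) < 1e-12 for a, b in zip(tone, recovered))
```

Real optical soundtracks are lossier, of course: film grain, nonlinear exposure, and the slit width of the reader all distort the signal. The point is only that the `film` values, plotted over time, are exactly the waveform shape the article describes.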
With computers, it’s really easy to visualize sound. In 1958, it was a really, really time-consuming process. (You might recall that it took me a month to reproduce Stockhausen’s Studie II, even though I was doing it 100% digitally.) Daphne Oram was ahead of her time as far as the visualization of electronic sound goes. With the advent of electronic technology, the bridge between sight and sound was crossed, and people could finally begin to see sounds kind of the way my own brain lets me see them.