Visualizing Sound

| Blog

I have an oddity of brain wiring which allows me to see sounds as colors and patterns. It’s particularly strong when I close my eyes. This is a type of synesthesia called chromesthesia, where the brain mixes up the processing signals for sound and color. As brain oddities go, it’s kind of a cool one to have. A car door slam might be red-orange triangles. White noise like hot air from a heating vent might be an indigo wave. I’ve yet to find any sort of practical application for this weird superpower, but it has me thinking about the ways human beings have attempted to visualize sound.

How do you turn something as abstract as sound into something visual? Last year, I discovered a British electronic musician by the name of Daphne Oram. She, along with some other electronic music enthusiasts, established the BBC Radiophonic Workshop back in the late 1950s. Her process of making electronic music was to paint shapes onto rolls of film, which she would then run through a custom-built machine that could read the shapes and convert them into sounds. Prior to this particularly innovative method of visualizing music, the most common method of notation was writing a series of dots on lines. On paper.

Notation in Western music uses a 12-tone system visualized most commonly by a staff of five lines. Dots on the lines, or between the lines, represent specific pitches, and stems and flags on the dots indicate the duration of each pitch. There are lots of other squiggles representing other things: when not to play a sound, when to repeat a certain phrase, what set of notes the staff represents, what key the notes are in, and so forth.

This style of notation is familiar to anyone who has studied music, but there are many other ways in which music can be visualized. I wrote an article a while ago about Karlheinz Stockhausen’s Studie II, which featured a visual notation score for an electronic music piece. Another method of visualizing sound after the advent of electronics and computers is with the audio waveform. You see this on every piece of sound uploaded to SoundCloud, or in the clips on a digital audio workstation.

It’s the waveform style of sound visualization that reminded me of a short clip I saw many, many years ago from a television series called Connections 2. It described a fascinating and ingenious method of getting the moving pictures to talk. Sound is little more than a wave vibrating through a medium. A microphone picks up differences in air pressure from a sound wave using a flexible diaphragm and converts the vibration into a varying electrical current. That varying current can then be used to drive a light source that exposes a length of film during shooting. On playback, the shapes that the varying light made on the film can be converted back into sound using the same technology that makes solar power possible: the photoelectric cell. Note that these shapes look an awful lot like contemporary waveforms of digital sound data.

From Wikipedia’s article on Optical Sound

So a varying light source creates a varying electric current, which is used to vibrate the surface of a speaker, recreating the air pressure changes which produce sounds. That is cool! This is basically what Daphne Oram was doing with her custom-made Oramics synthesizer back in 1958. Just like the shapes of the soundtrack on an early Hollywood talkie, Daphne Oram’s painted spools of celluloid film could define pitch, timbre, and loudness. When played through her synthesizer, they would produce music. She was drawing music, visualizing sound.

With computers, it’s really easy to visualize sound. In 1958, it was a really, really time-consuming process. (You might recall that it took me a month to reproduce Stockhausen’s Studie II, even though I was doing it 100% digitally.) Daphne Oram was ahead of her time as far as the visualization of electronic sound goes. With the advent of electronic technology, the bridge between sight and sound was finally crossed, and people could begin to see sounds kind of the way my own brain lets me see them.



The Admin Bar Serves No Booze

| Blog

Ever since I discovered media queries and responsive CSS, I’ve endeavored to make all my websites responsive. Over time, and through trial and error, I’ve modified my site designs to be more minimalist, making the content the focus of the site rather than the container. For my webcomics, the first thing a visiting user should see on the site is … the comic (surprise surprise). Things that have always bugged me about the webcomic themes I’ve seen available are 1) the visual clutter, and 2) the scrolling. I’ve already dealt with decluttering, creating a theme which uses only the barest essentials for displaying an ongoing comic series online. No ads. No comments. Just comics. But I wondered: was there a way to design a webcomic theme that displays the entire comic page within the browser window, without any scrolling?

It turns out, there is. Thanks to modern browsers that interpret CSS rules a little more predictably, it’s possible to have a huge image that fills up the entire browser window, and that scales responsively to a changing window size while maintaining the image’s aspect ratio.

Here’s the CSS:

html, body {
    height: 100%; /* Declare both HTML and body tags as full height */
}
#image {
    padding: 1em;
    height: 100%;
    position: fixed;
    top: 0;
    left: 24em; /* I have a 22em width sidebar with a 1em margin */
    bottom: 0;
    right: 0;
    -moz-box-sizing: border-box;
    -webkit-box-sizing: border-box;
    box-sizing: border-box;
}
#image img {
    max-width: 100%;
    height: auto;
    max-height: 100%;
    -moz-box-sizing: border-box;
    -webkit-box-sizing: border-box;
    box-sizing: border-box;
    -ms-interpolation-mode: bicubic;
}

And here’s the HTML:

<div id="image">
<img src="image.png">
</div><!-- #image -->

Neat! A huge image that scales to fit the browser window. But when I activated the theme, I discovered that the WordPress admin bar’s z-index made it overlap my image. Sad face! Fortunately I figured out a solution which still allowed me to retain a full-height image while accounting for the admin bar height.

When the admin bar is active, WordPress applies an “admin-bar” class to the HTML body tag. So here’s how I tweaked my CSS:

body.admin-bar #image {
    padding: 32px 1em 1em 1em; /* change the top padding to the height of the admin bar */
    border-top: 1em solid #fff; /* make the border the same color as the body background */
}

The border gives back the extra 1em of spacing at the top that gets eaten up by the now-required 32px of top padding. Why do I have to use padding instead of margin? Because both padding and border are included in border-box box sizing, while margin is not. I needed the container to fill the full height of the browser window, and margins throw that off, creating Y-axis overflow and an unwanted scroll bar.
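
To illustrate what I mean, here’s a minimal sketch (not part of the theme; the class names are made up, and I’m assuming a 600px-tall viewport with html and body set to full height):

/* With border-box sizing, padding and border are carved out of the declared height. */
.fills-viewport {
    box-sizing: border-box;
    height: 100%;               /* resolves to 600px */
    padding-top: 32px;          /* counted inside the 600px, so no overflow */
    border-top: 1em solid #fff; /* also counted inside the 600px */
}
/* A margin is always added outside the box, no matter the box-sizing. */
.overflows-viewport {
    box-sizing: border-box;
    height: 100%;               /* still 600px... */
    margin-top: 32px;           /* ...plus 32px outside it = 632px, overflow and a scroll bar */
}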

I feel like none of what I just wrote made any sense. Hopefully other people’s CSS chops are honed and they will get what I’m trying to say. I just needed to document what I did so I could remember how I did it.

Update: 31 May 2014
My CSS was still producing an unwanted scroll bar for some reason. Digging into the page code, I discovered that WordPress injects a 32px top margin on both the body and the html tag. Fortunately there’s a function to eliminate this margin. Put this in your functions.php file:

// Stop WordPress from injecting its admin bar margin styles into the page head.
add_action('get_header', 'my_filter_head');
function my_filter_head() {
    remove_action('wp_head', '_admin_bar_bump_cb');
}

I found this code at CSS-Tricks.

Whew! All fixed now.



Supercolliding Space Music

| Music

I have a sound programming application called SuperCollider which, though it has a steep learning curve, is a brilliant coding application for programmatic and generative music and sound. This recording is my first #sc140 tweet: creating something interesting in SuperCollider in fewer than 140 characters. Here is my code:

{var a = LFNoise0.kr(12).exprange(110,880);Resonz.ar(LPF.ar(CombN.ar(SinOsc.ar([a,a],0,0.2),2,0.2,10),48,16),880,0.5,8)}.play;
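
For anyone squinting at that one-liner, here it is again spread out with comments; nothing is changed except whitespace and the annotations:

{
    // Stepped random values, 12 per second, exponentially mapped to 110-880 Hz
    var a = LFNoise0.kr(12).exprange(110, 880);
    Resonz.ar(                              // resonant band-pass centered at 880 Hz
        LPF.ar(                             // low-pass at 48 Hz, boosted 16x
            CombN.ar(                       // comb delay: 0.2 s delay time, 10 s decay
                SinOsc.ar([a, a], 0, 0.2),  // a stereo pair of sine oscillators following a
                2, 0.2, 10),
            48, 16),
        880, 0.5, 8)
}.play;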

To me, it resembles space music of the 1950s. Ambient space music like the kind of thing playing in the background as spacemonauts explored alien worlds.



Red-1

Red: A Song About Thursday

| Music

Tonight I downloaded the MuseScore notation software. This is the first thing I wrote: an attempt at Phrygian mode, because I like it. (The image attached is not the complete score.)



Stockhausen’s Studie II

| Music

Stanley Kubrick’s decision to scrap the musical score originally written for 2001: A Space Odyssey and go with something different is what introduced me to the fantastic, creepy, and wonderful music of György Ligeti. When I first signed on to Pandora internet radio, one of my first stations was seeded with Ligeti as an artist. Doing this introduced me to pioneers of early electronic music like Tod Dockstader, Morton Subotnick, John Cage, and David Tudor. However, one name which was surprisingly overlooked was Karlheinz Stockhausen. It was only after I created my @mondayazura Twitter account and started connecting with other musicians that I was introduced to his works. Looking back, I now wonder how I managed to go so long without having discovered Stockhausen.

Stockhausen was the first person to publish a score for electronic music. I took it upon myself to realize that score using the technology available to me, namely my computer and audio editing software. I wanted to understand the process Stockhausen used to create a seemingly simple and short piece of electronic music. The piece is only three minutes long, but it took me just over thirty days to realize those three minutes of sound. Here’s how I went about doing it.

Stockhausen created Studie II in 1954 using just a single electronic oscillator, a bit of reverb, and reel-to-reel tape. Since he had only one oscillator, his method for creating chords was quite clever. He recorded a short length of each pure tone, then cut these recordings into 4 cm strips and taped them end-to-end into a loop of five ascending tones. Then he played the loop through a ten-second reverb to produce each of the 193 chords. I wanted to try to reproduce the technique and sound to the best of my ability using digital technology, so my process for creating each chord was quite similar to Stockhausen’s despite the use of a computer.

Frequency
So pretty!

I began in Berna 1.0 by recording 1 s clips of each tone, 81 in total, from 100 Hz all the way up to 17200 Hz. Each tone is separated from the next by a ratio of the twenty-fifth root of five, 5^(1/25), and rounded to the nearest whole number. Stockhausen was limited by the technology of his time, but Berna allows for more precision in tone generation. Nevertheless, I rounded my numbers to the nearest whole number in order to stay true to Stockhausen’s technique. My numbers turned out to be different from his because of my rounding methods. I used the intervals listed on the score (100, 138, 190, 263, 362, 500, 690, 952, 1310, 1810, 2500, 3430, 4760, 6570, 9000, 12500, 17200) and calculated the in-between frequencies using the ratio above.

100 Hz
× 5^(1/25) ≈ 106.649 Hz
× 5^(2/25) ≈ 113.741 Hz
× 5^(3/25) ≈ 121.304 Hz
× 5^(4/25) ≈ 129.370 Hz
× 5^(5/25) ≈ 137.973 Hz

Repeat this process for each interval listed in the parentheses above. The fifth tone in each sequence should be close to the next one listed in the parentheses, in this case, 138 Hz.
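
If you’d rather generate the whole table programmatically, here’s a rough sketch in SuperCollider. It’s not how I actually built my table, and the rounding will come out slightly different from both my numbers and Stockhausen’s:

(
// Intervals printed on the score (the last one, 17200 Hz, is appended at the end)
var bases = [100, 138, 190, 263, 362, 500, 690, 952, 1310, 1810, 2500, 3430, 4760, 6570, 9000, 12500];
var ratio = 5 ** (1/25); // the twenty-fifth root of five
var freqs = bases.collect({ |f0|
    (0..4).collect({ |n| (f0 * (ratio ** n)).round })
}).flatten ++ [17200];
freqs.postln; // 81 frequencies, from 100 Hz up to 17200 Hz
)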

With all the tones generated, the next step was to cut them all into 4 cm lengths. Reel-to-reel tape speed is 30 inches per second, or 76.2 cm/s, making 4 cm about 5 milliseconds. One file at a time, I cropped each 1000 ms clip to 5 ms using Adobe Audition. Then I lined up the tones sequentially in groups of five, the same way Stockhausen did to create his initial loops. Each chord is built from tones spaced 1, 2, 3, 4, or 5 steps apart, grouped together by their base frequency. Hopefully the chart below clarifies what I mean by that.

Studie-II-Chord-Chart
Actually, these aren’t grouped by base frequency. But it’s cool to see the pattern.

I had a bit of trouble with pops appearing in between each 5 ms tone. When I tried running the poppy sequence through the reverb, I wound up with a lot of white noise overwhelming the chord. So I had to figure out how to get rid of the pops. Eventually I settled on applying a linear envelope to each tone, fading in, then fading out, to eliminate the pops in the sequence. Running the cleaned up sequence through the reverb then produced the clean chord I was looking for.

Berna Settings
Plate reverb: 10000 ms.

The reverb I used was in Berna 1.0. Playing each sequence I’d created in a loop through a ten-second reverb in Berna, just as Stockhausen did with the tape loop, generated the chord I needed for the final realization of the piece. Since the longest note in the Studie II score is almost 5.5 seconds, I recorded each chord at 6 seconds. I could then shorten as needed using Adobe Audition.
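
For anyone who would rather hear the general idea than read about it, here’s a very rough SuperCollider sketch of the same technique. It is not my actual Berna/Audition workflow: the five frequencies are just one example group, the triangle envelope is the linear fade-in/fade-out described above, and FreeVerb stands in for Berna’s ten-second plate reverb.

(
{
    // One example group of five ascending tones, 5 ms each, looping forever
    var freqs = [100, 107, 114, 121, 129];
    var freq = Duty.ar(0.005, 0, Dseq(freqs, inf));
    // A linear fade in and out over each 5 ms slot kills the pops between tones
    var env = EnvGen.ar(Env.triangle(0.005), Impulse.ar(200));
    var loop = SinOsc.ar(freq, 0, 0.1) * env;
    // A long, very wet reverb smears the looping tones into a single sustained chord
    FreeVerb.ar(loop ! 2, mix: 0.9, room: 1.0);
}.play;
)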

While Stockhausen had to manually adjust the audio gain to produce the envelopes for each note, and play the tape in reverse to get increasing volume, I was able to use the faders and gain control on each clip to generate the necessary envelopes for each note in the score. This no doubt has an effect on the way my recording sounds compared to Stockhausen’s. Considering how much time I put into this piece doing it digitally, I can only imagine how much longer it must have taken Stockhausen. After a month of recording, fiddling, manipulating, editing, and tweaking, I finally had the three-minute piece completed. Including the extra day I took to write this blog post, I worked on this recording over 33 days, from 29 November 2013 to 2 January 2014. The original piece was written and performed in 1954, and my version comes nearly 60 full years later. Cool!

My previous seven audio pieces on my SoundCloud page were all generated live. None of them has a score to follow, though I did try to document the settings I used to create each piece. This is the first piece of music I’ve made from a score. What I hope to do in the future with my sounds is to write scores for them in much the same way Stockhausen did with his Studie II, so that others can perform my works if they choose, and not merely listen to them. What this creation and performance of Stockhausen’s score has taught me is that electronic music generated from oscillators and tape is just as complex and time-consuming as scoring music on traditional staves for traditional instruments. I have a long way to go before I’m writing music of my own, but I’m one step closer to doing so.



Turanga-Leela

Futurama, French Composers, and Frequencies

| Music

The alluring cyclops pictured is Turanga Leela from Futurama. Leela’s name bears a striking similarity to the title of a musical work by French composer Olivier Messiaen, the Turangalîla-Symphonie, which features a rather unusual electronic instrument capable of producing sounds similar to those of the Theremin.

The same year Léon Theremin patented his namesake electronic instrument in America, the Ondes Martenot featured in the Turangalîla-Symphonie was invented in France by cellist Maurice Martenot. It is essentially an electronically enhanced piano capable of producing a variety of electronic tones, either through the keys or with a ring attached to a device which controls pitch, timbre, and glissando.

I absolutely love the frenetic intensity of the Turangalîla-Symphonie. It reminds me a lot of the dynamism of Igor Stravinsky’s The Rite of Spring. Then again, I’m partial to discordant tonalities in music.

It’s interesting that while the Theremin and the Ondes Martenot are both capable of producing similar sounds, the latter seems to have been taken more seriously as a proper symphonic instrument, while the former was relegated to campy b-movies of the 1950s and 1960s. Perhaps that’s simply because the Ondes Martenot has a more familiar interface with its piano keyboard. The Theremin is a difficult instrument, but when it’s played by an accomplished player, it too sounds simply amazing.

The video above is virtuoso Theremin player Pamelia Kurstin speaking and performing at TED. She explains how a Theremin works and demonstrates to an awed audience just what it’s capable of doing beyond eerie science fiction sound effects.

I’m really glad the folks behind Futurama are such fantastic geeks. Without them, I probably would never have heard of Olivier Messiaen. And while I’m thanking folks for introducing me to French composers, a big shout-out to the band The Art of Noise for introducing me not only to the music of Claude Debussy, but also to Italian futurist Luigi Russolo.

Futurama TM and © Twentieth Century Fox


Space Music

| Music

The Juno spacecraft zipped by Earth recently on its way to Jupiter, and the people of Earth broadcast a collective “hi” at the passing probe in Morse code (.... ..). NASA JPL processed the data from hundreds of ham radio operators broadcasting “hi” nearly simultaneously to reveal the sound in the video above. The extra whooshing noise is the sound of our planet’s magnetosphere. Sounds like space!

One particular tool astronomers use to “look” up at the heavens is the radio telescope. As space itself expands, the energy waves traveling through it get stretched out into lower and lower frequencies, far lower than red or even infrared. The best way to observe these waves is with radio telescopes. These telescopes, such as the Arecibo Observatory in Puerto Rico or the Atacama Large Millimeter Array in Chile, are essentially giant “ears” pointed at the sky in order to “hear” what’s going on out in space.

In 1960, Bell Labs built a big antenna as a receiver for early satellite transmissions. Two employees, Arno Penzias and Robert Wilson, wanted to use the antenna to collect radio astronomy data from nearby galaxies. When they did so, they discovered a consistent and at the time inexplicable static coming from everywhere in the sky. No matter what direction the antenna was pointing, the same uniform static was found. It turned out to be the ancient birth pangs of the universe itself, the cosmic background radiation. NASA would launch satellites in later years, the Cosmic Background Explorer (COBE) in 1989 and the Wilkinson Microwave Anisotropy Probe (WMAP) in 2001, to collect data on the cosmic background radiation in order to better understand the origins of our universe. Some of this data has been sonified (made into sound), and the results are fascinating.

Turning data received from radio telescopes into sound is one way astronomers can better conceptualize the data. Listening to these sonifications of cosmic background radiation, it’s easy to hear how the energy from the early universe got stretched out as space itself expanded. In 1991, astronomer Dr. Fiorella Terenzi released an album of such space music titled Music from the Galaxies. This album is generated from real astronomical data, but it sounds like something out of a science fiction movie from the 1950s.

Speaking of science fiction movies from the 1950s… In 1956, the film Forbidden Planet was released. It was the first feature film to have its musical soundtrack generated entirely electronically. The pioneers behind this score, Louis and Bebe Barron, were electronic music enthusiasts who used oscilloscopes, patches, filters, reel-to-reel tape recorders, and other equipment to create a symphonic cacophony of electronic noise. The Barrons were credited on the film as the makers of “electronic tonalities” as opposed to a musical orchestration. While their sounds were remarkable, they were deemed too unusual to be considered “proper music”.

Léon Theremin was an inventor, a musician, a professor, and even a spy for the KGB. The instrument which bears his name, the Theremin, is a device which produces electrically generated tones. However, the Theremin is quite capable of more than just the eerie space sounds associated with science fiction, as the clip below of Theremin himself performing music with his invention will demonstrate.

Fiorella Terenzi, Louis and Bebe Barron, Léon Theremin, and more are all musicians who have made unusual music with unusual instruments. But the human fascination with unusual sounds isn’t merely an electronic phenomenon. Other unusual instruments include the musical saw, Luigi Russolo’s Intonarumori, and Benjamin Franklin’s glass armonica. It would be fascinating to hear a “space music” performance made with such non-electronic instruments.

The Space Age doesn’t have a monopoly on unusual sounds, though it has helped to establish the musical genre of “space music”. The noises of the world, and the cosmos, fascinate me and I’m happy that I’m finally creating “space music” of my own.



Pranayama

| Music

Pranayama is the practice of breath control in yoga. The pulsing white noise is reminiscent of breathing, and the slow crescendo of the tones is akin to the OM of the universe, or the vibration of a singing bowl. Made with Berna 1.0.





Phrygian Sea

| Music

I’m currently obsessed with Phrygian mode, so I thought I’d try writing something that uses only the eight natural notes between (and including) E3 and E4. The texture of the repeating eighth notes pulsing throughout the piece was inspired by the works of Terry Riley and Steve Reich.



Klavierstück IX

| Music

Six weeks ago I discovered the work of Karlheinz Stockhausen. In those six weeks, I have done very little further investigation into his music because, life being what it is, I have been otherwise occupied. Today I was reading up on odd time signatures, and Stockhausen’s name came up with this particular piece of music. From what little I understand of it, it’s constructed using varying patterns of the Fibonacci sequence. (Watch Donald in Mathmagic Land for a bit about the Fibonacci sequence.)

Like the phased patterns Steve Reich uses in his compositions to create complex, varying textures of sound, Stockhausen uses the Fibonacci sequence and varied time signatures throughout the piece to create something truly amazing to listen to. I realize that I have a long way to go before I’m composing piano pieces like Reich or Stockhausen (refer to my SoundCloud composition in the previous post), but with influences like these, I hope to be composing much more complex experiments before too long.



Short Piano Composition

| Music

My first attempt to write a piece of music for piano. Written in GarageBand using the on-screen piano and the notation editor.