The [COMPRESSED] history of music mastering

Artwork provided by Michael Zhang.

This episode was written and produced by Casey Emmerling.

Join us on a musical journey from the Golden Age of analog mastering to the digital methods of today. We’ll find out why the music industry became obsessed with loudness, and learn how the digital era transformed the way that music sounds. Featuring Greg Milner and Ian Shepherd.

MUSIC FEATURED IN THIS EPISODE

Isn't it Strange by Spirit City
Stand Up by Soldier Story
Lonely Light Instrumental by Andrew Judah
Who We Are by Chad Lawson
No Limits Instrumental by Royal Deluxe
Crush by Makeup and Vanity Set
Rocket Instrumental by Royal Deluxe
Light Blue by UTAH
Love is Ours Instrumental by Dansu
Shake This Feeling Instrumental by Kaptan
Wrongthink by Watermark High
Rocket Instrumental by Johnny Stimson
Lola Instrumental by Riley and the Roxies
Quail and Robot Convo by Sound of Picture

Twenty Thousand Hertz is produced out of the studios of Defacto Sound, and hosted by Dallas Taylor.

Follow Dallas on Instagram, TikTok, YouTube and LinkedIn.

Join our community on Reddit and follow us on Facebook.

Become a monthly contributor at 20k.org/donate.

If you know what this week's mystery sound is, tell us at mystery.20k.org.

To get your 20K referral link and earn rewards, visit 20k.org/refer.

Check out Ian Shepherd’s podcast The Mastering Show.

Check out Greg Milner’s book, Perfecting Sound Forever.

Consolidate your credit card debt today and get an additional interest rate discount at lightstream.com/20k.

Go to forhims.com/20k for your $5 complete hair kit.

Check out SONOS at sonos.com.

TRANSCRIPT

You’re listening to Twenty Thousand Hertz. I’m Dallas Taylor.

[music in]

Even those of us who know next to nothing about the music industry probably have some idea what mixing is. For instance, we all know mixing involves some sort of leveling— like how loud [SFX], or quiet [SFX] you want something to be. It also involves panning—whether you want an instrument or vocal part to be on the left [SFX], the right [SFX], or somewhere in the middle. And while you might use some effects while recording, a lot of other effects get added during the mixing phase. Maybe you want to add some reverb to the vocals [SFX], double-track to give it a little more oomph [SFX], or autotune those sweet vocals [SFX].

While working on a song, a mixing engineer will make a ton of decisions like these, both big and small. But after being mixed, songs go through a whole other process before they get released. This stage is much harder to explain, and while it’s definitely more subtle than mixing, it still ends up having a huge impact on the final sound. This process is called “mastering,” and even inside the music industry, it’s considered something of a dark art—something that only a small group of elite specialists know how to do.

[music out]

Greg: Mastering is the final step in making a commercial recording.

That’s Greg Milner. He’s written about music and technology for publications like Slate, Wired, Rolling Stone and The New York Times.

Greg: It's taking the fully mixed recording and essentially making it absolutely pristine and correct, to actually make it into something that people will listen to or buy. In the old days, before digital technology, the mastering engineer was the person who would literally make the physical master that the records would be stamped from.

Ian: Back in the day that would've been a vinyl master, then cassette, then CD, and these days for digital files.

This is Ian Shepherd.

Ian: I'm a mastering engineer. I have a podcast called The Mastering Show, and I run the Production Advice website.

Ian says mastering isn’t just about preparing music for public consumption.

Ian: It's also an opportunity to get the music to sound the best that it can be.

Ian: If it's a hard rock song [music in] maybe you want to bring even more aggression and density into the sound [music out]. Or, if it's a gentle ballad, [music in] maybe you want a lovely, soft, sweet, open sound… [music fade out]. So it's very much a collaboration between you and the artist.

[music in]

So how is mastering different from mixing?

Greg: Mixing is when you take all the individual tracks, the separate tracks that go into making a recording and you mix them together. I like to visualize it as if you had a lot of jars of different colored sand and you poured them all into one big jar, and you wanted to control how much of each color was there. You might pour a little of one, more of another, into the big jar, but then the sand would be in that jar permanently. You couldn't actually extract the different colors, so that's a finished recording. That's mixing. And then mastering is maybe taking that jar of sand and doing little things to it, maybe moving stuff around here and there, but it's already mixed. You're not doing any mixing when you're mastering. You're working with a fully mixed recording.

Ian: The other analogy is that mastering is like Photoshop for audio. So, we've all taken photographs, you know, on a mobile phone or a camera, and then maybe you have one that you actually want to print out or put on the wall. And you look at it, and actually you suddenly realize it's not quite as good as you thought it was. So, maybe you want to tweak the color balance, or enhance the contrast and the brightness, or maybe take out some red eye from a flash.

Ian: Mastering is the same thing for audio. So, you might adjust the equalization, which is the overall amount of bass [SFX] and treble [SFX] and mid range [SFX] in the sound, to get the tonal balance as good as it can be. You might want to adjust the balance of loudness and dynamics, which is like adjusting the contrast and the brightness in a picture. You might want to take out clicks [SFX] or thumps [SFX] or hiss [SFX] or buzz [SFX], and that's a bit like fixing red eye in a photograph.

[music out]

Ultimately, the mastering engineer is responsible for making an album sound cohesive, rather than just a random collection of songs.

Ian: Often, if you have a collection of recordings maybe from a bunch of different studios, and over quite a long length of time, it's a chance to balance those against each other, optimize the levels, the overall sound, to get the best possible results.

That includes deciding whether songs have gaps of silence between them, or whether they flow naturally into each other.

Ian: The final thing about mastering is to actually choose all of the starts and ends of the songs, and put them in sequence, and choose the gaps between them. And if you widen out the Photoshop analogy a little bit, that's maybe like doing a presentation of your images, maybe laying them out in a photo book or even a little exhibition, you know, and saying... what frame am I going to put this in? How am I going to light this? Should this be large, should it be small? All those kind of things.

Let’s compare the way a song sounds before and after it’s been mastered. Here’s a clip from the song “Closer” by Nine Inch Nails, before mastering:

[Music clip: Nine Inch Nails Closer Unmastered]

And here’s the mastered version:

[Music clip: Nine Inch Nails Closer Mastered]

Now, they both sound great, but the mastered version sounds fuller, clearer, and noticeably louder. It’s the same song, just...a little better. This shows how subtle the effects of mastering can be.

But mastering engineers don’t just work on new music. It’s also common for older albums to get remastered using newer technology.

Ian: The advantage is quite often you can go back to the original master tapes, you can make a clean transfer with the best possible equipment.

Ian: And the remastering is also an opportunity to maybe correct some faults.

For instance, Ian was once hired to restore and remaster a 1967 song called “Hush,” by the British singer Kris Ife. You may know Deep Purple’s version of the song, from a year later.

[Music clip: “Hush”]

Unfortunately, the original master tape of the track had been lost, so all Ian had to work with was an old vinyl 45. As you’ll hear, the record was in pretty bad shape. But through the magic of mastering, Ian managed to cut out the hiss and crackle. He also tweaked the EQ to make the song sound warmer and punchier. Here’s the original:

[Music clip: Remaster (first section)]

And here’s Ian’s remaster:

[Music clip: Remaster (second section)]

Ian: Sometimes, what was on the vinyl didn't sound as good as what was on the master tapes. And remastering is an opportunity to let people hear that. So that’s the ideal.

But the most controversial part of mastering has to do with loudness.

Ian: Part of the process of mastering is to get a great balance between the dynamics of the music and the loudness. So, the dynamics mean contrast in the music. So, in an orchestral score, you have pianissimo for the quietest moments [music in] and fortissimo for the loudest moments [music up]. And the same thing applies to a rock song [music in], for example. You want the introduction to be quiet and gentle, maybe, and then the verse and the chorus to get louder [music up], and you want the screaming guitar solo to really lift up in level to have the right emotional impact [music up].

[music out]

The natural difference between loud and soft sounds in music is referred to as dynamic range. Loudness is easier to define: it works just like your volume knob. Basically, a mastering engineer will change the overall loudness of each song so they all play nicely together as an album, and you don’t have to reach for the volume knob on your sound system.

In the ‘70s and ‘80s, when vinyl was king and recording was all analog, songs could only be as loud as the equipment would allow.

The machines that physically cut music into vinyl records were especially fragile.

Greg: In an analog system... you're really limited.

Greg: So I think their mindset was a little bit different in the '70s and '80s. The mindset was that there is this limit beyond which we really can't go so we have to be very, very careful about the way we master these recordings.

As a result, music from this period tends to have a very high dynamic range. So, there’s a lot of contrast between the quietest parts of a song and the loudest.

Greg: So many things back then had a great dynamic range. You know, you listen to Abbey Road for example, “Here Comes the Sun.” If you really listen closely you can really hear the range.

Here’s the quietest part of “Here Comes the Sun:”

[Music clip: Here Comes the Sun (intro)]

And here’s the loudest part:

[Music clip: Here Comes the Sun (loud)]

Just to be clear, we didn’t adjust the volume at all between the two clips, that’s the exact dynamic range from the album.

Greg: But you know what? If you listen to a Black Sabbath song that came out about a year later, a lot of those actually have an even greater dynamic range.

The song “Black Sabbath,” from Black Sabbath’s first album, Black Sabbath, shows off its impressive dynamic range within the first minute. At the start, it’s extremely subdued, with nothing but the sounds of rainfall and church bells.

[Music clip: Black Sabbath (intro)]

Suddenly, the song erupts into a monstrous guitar riff.

[Music clip: Black Sabbath (main riff)]

The energy peaks in the final seconds.

[Music clip: Black Sabbath (end riff)]

If you grew up on.

[Music clip: Black Sabbath (actual end riff)]

...I always forget about that. Anyway, if you grew up on classic rock radio, then you’ve heard these songs many times, but you may never have realized how they were affected by mastering.

This also applies to all genres of music, from hip hop to classical. Nearly all music gets mastered before it is released.

If you’re a classic rock fan, you’re probably sick of the song “Stairway to Heaven,” but there’s no denying that the song is a powerful example of dynamic range.

[Music clip: Stairway (intro)]

Greg: There's a reason, I think, that “Stairway to Heaven” was so popular. There's several reasons, but one thing is it just has striking dynamic range…

[Music clip: Stairway (drum verse)]

Greg: You can tell by how rich the drums often sound. Drums and vocals are, I think, the things that benefit most from really strong dynamic range.

[Music clip: Stairway (outro)]

From start to finish, that’s a huge change. We’re not just talking about an increase in energy, but in actual volume. A lot of the most beloved music from this era is just like this.

Ian: Pink Floyd, Wish You Were Here is a classic audiophile album with amazing dynamics.

[Music clip: Wish You Were Here].

Greg: Then of course the Eagles, love 'em or hate 'em, those early Eagles records had really stunning dynamic range, especially when they were mastered on to the Greatest Hits album that became the biggest selling album of all time. There's just a spaciousness to those records.

Like in the song “Witchy Woman.”

[Music clip: Witchy Woman]

Greg: It was really kind of an embarrassment of riches in a way, but you could almost pick and choose, and chances are you'd be listening to something with strong dynamic range.

[music in]

But starting in the late ‘80s, the spread of digital technology caused seismic shifts in the music industry. For one thing, songs could be made louder than ever.

Ian: The new digital technology just allowed people to go even further, push the loudness higher and higher.

One of the main ways they did this was through dynamic range compression. Essentially, this type of compression clamps down the loudest parts of a track so they’re closer to the quiet parts, and once everything is evened out, you can boost the whole thing up. That way, the song stays closer to a maximum level the whole time, with less dynamic range from second to second, or minute to minute.
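To make that idea a bit more concrete, here’s a minimal sketch of dynamic range compression in Python. It’s purely an illustration with made-up threshold, ratio, and makeup gain values, not how any particular mastering tool actually works:

```python
import numpy as np

def compress(samples, threshold=0.5, ratio=4.0, makeup_gain=1.6):
    """Toy hard-knee compressor: squash peaks above `threshold` by `ratio`,
    then boost the whole signal with makeup gain. All values are illustrative."""
    x = np.asarray(samples, dtype=float)
    over = np.abs(x) > threshold
    # Clamp down only the part of each sample that exceeds the threshold.
    squashed = np.where(over, np.sign(x) * (threshold + (np.abs(x) - threshold) / ratio), x)
    # Now that everything is evened out, boost it all back up toward the ceiling.
    return np.clip(squashed * makeup_gain, -1.0, 1.0)

quiet_verse_loud_chorus = np.array([0.1, 0.2, 0.9, 1.0, 0.15, 0.8])
print(compress(quiet_verse_loud_chorus))
# [0.16 0.32 0.96 1.   0.24 0.92] -- the loud and quiet parts end up much closer together
```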

Of course, compressors weren’t invented in the ‘80s.

[music out]

Greg: Compression has been something that's been around at least since the advent of multitrack recording.

Ian: In fact, the reason that The Beatles got Abbey Road to buy the first Fairchild compressor, was to try and compete in terms of loudness with the music that was coming out of Motown.

[Music clip: You Can't Hurry Love]

Like this song, “You Can’t Hurry Love” by The Supremes.

[Music clip continued: You Can't Hurry Love]

But while analog compression had been around for decades, digital compression was a whole new ballgame.

Greg: With the advent of the compact disc it became easier to employ very, very harsh dynamic range compression to make things sound louder.

Ian: But there's also a limit in digital formats as well. There's this ceiling, basically, above which you can't go any higher because, at the end of the day, there is a number that is the largest number you can store in the digital format, and there are no numbers larger than that.

In other words, in a digital format, we can now make the volume max out riiiight before its absolute maximum possible level. With old analog tech, that limit was a lot fuzzier, so mastering engineers had to be much more conservative in their approach.
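For a sense of just how hard that ceiling is, here’s a simplified illustration (not anything specific to mastering software): in 16-bit digital audio, each sample is stored as a signed integer, and there is literally no number bigger than 32,767 available.

```python
import numpy as np

ceiling = np.iinfo(np.int16).max   # 32767, the largest value 16-bit audio can store
floor = np.iinfo(np.int16).min     # -32768

wishful_samples = np.array([10_000, 25_000, 40_000, -50_000])   # hypothetical sample values
stored = np.clip(wishful_samples, floor, ceiling).astype(np.int16)
print(stored)   # [ 10000  25000  32767 -32768] -- anything past the ceiling just flattens out
```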

[music in]

Ian: I have a bit of a crazy analogy to explain this, which is, if you imagine that the music is a person on a trampoline. If they're in a big sports hall with high ceilings, they can bounce as high as they like, and there's no restriction. But if you then think about raising the floor of the room up towards the ceiling, for a while that's no problem, there's plenty of headroom for the person bouncing on the trampoline, or for the music. But as you get closer and closer to the ceiling, the person bouncing is going to have to maybe start ducking their head or curling over, and twisting and turning to avoid crashing into the ceiling. And exactly the same thing happens with music. For a while, you can lift the loudness up with no problems. But as you get towards that digital ceiling, the highest level that can be recorded, you have to start processing the audio, squashing the audio down into a smaller and smaller space to make it fit.

Ian: You can do that quite gently, which can be beneficial and help things sound glued together and dense and powerful. But if you go too far, it can dull things down, and they start to sound lifeless and weak.

And by the time you’re hearing me right now, we’ve slowly compressed Ian’s voice, my voice, and the music. So right now, what you’re hearing is super compressed. Can you tell? [music plays] ...and here it is back with much lighter compression... Ahhhhh [music clip without compression].

So why was the music industry so obsessed with loudness? If hyper compression can degrade the sound quality of a song, why would an artist ever want it? And how did all of this affect the future of the music industry? That’s coming up, after the break.

[music out]

[MIDROLL]

[music in]

In the analog era of recorded music, songs were mastered to be very dynamic. This meant that there could be a lot of contrast between the quietest parts of a song, and the loudest. But once digital technology hit the scene, mastering engineers could make songs louder than ever before. To do so, they used extreme compression, which boosts the volume but reduces dynamic range. So why were artists so eager to make their songs louder?

[music out]

Ian: If I play you two identical pieces of audio, but one of them is just a fraction louder than the other, they will actually sound different to you, even though they're the same and the only difference is the loudness. So the louder one might sound like it's got a little bit more bass, a little bit more treble, and on the whole, people will tend to say that they think the louder one sounds better.

So let’s try it. Here’s a clip from the song “Juice” by Lizzo. Which one sounds better to you? This:

[Music clip: Juice (quiet)]

Or this?:

[Music clip: Juice (loud)]

You probably picked the second one, and if so, you’re not alone.

Ian: Even though the audio is identical…

Greg: Their initial reaction is often going to be, "Oh, the loud one sounds better. It's just fuller. It's, you know, coming out of the speakers."

Ian: And that means that, if you're producing any kind of audio where you want to catch people's attention, there's a benefit to being loud.

And music isn’t the only place where some people think louder is better. There’s one industry in particular where getting people’s attention matters more than anything else.

[SFX clip: Billy Mays: Hi, Billy Mays here for the Grip and Lift, when you need some extra help for those outdoor chores, it’s a must have!]

That’s right: commercials. And just like music, the volume of commercials used to be limited by analog equipment.

[SFX clip: Bounty Ad (60s): That’s why I switched to Bounty paper towels. They absorb faster than any other leading brand. Bounty is the quicker picker upper.]

But as technology improved, commercials kept getting louder and louder.

[SFX clip: Bounty Ad (00s): The quilted quicker picker upper, Bounty!]

Eventually, things got so bad that Congress had to be the noise police for the entire country. In 2010, Congress passed the CALM Act, which stands for Commercial Advertisement Loudness Mitigation. Under this law, ads are prohibited from being broadcast louder than TV shows.

It basically works by measuring loudness over time. TV shows are longer, so they have time for peaks and valleys in the volume. But TV ads are often just a block of maximum loudness for 30 seconds, so they can still feel a lot louder even though they’re technically the same.

Greg: It's still at the same level, it's just that it's hitting those maximum peaks much more often than the TV show before it.
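The actual broadcast rules measure loudness on a perceptually weighted scale (LKFS), but a simplified sketch using plain RMS shows the basic idea: averaged over its whole length, a show with peaks and valleys can measure exactly the same as an ad that just sits near the top the entire time.

```python
import numpy as np

def average_loudness(samples):
    """Very rough stand-in for 'loudness over time': the RMS level of the
    whole program. (Real broadcast metering uses LKFS/LUFS, not plain RMS.)"""
    x = np.asarray(samples, dtype=float)
    return np.sqrt(np.mean(x ** 2))

tv_show = np.concatenate([np.full(900, 0.2), np.full(100, 0.9)])  # mostly quiet, a few loud peaks
ad = np.full(1000, average_loudness(tv_show))                     # flat-out at the show's average

print(round(average_loudness(tv_show), 3))  # 0.342
print(round(average_loudness(ad), 3))       # 0.342 -- same measurement, but the ad feels relentless
```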

Ian: We've actually seen a similar thing happen in music, where people have been using loudness to try and get music to stand out as well. On record, originally, and on the radio, and these days, on CD and online... and it's called the Loudness War, because it's basically a sonic arms race. Because people know that if they can be a little bit louder, maybe they'll stand out a little bit more, or sound a little bit better.

Greg: Imagine a jukebox in a crowded bar [SFX]. It's set at some kind of master volume. If the song that comes before yours has been mastered to sound louder, naturally that's where the volume is going to be set. [Music clip: Love Is Ours - Instrumental - by Dansu] When your song comes on, it's gonna sound kind of weak and wimpy by comparison. Maybe you won't even be able to hear it over the crowd noise. [Music clip: Shake This Feeling - Instrumental - by Kaptan] There was this thought that music really had to just jump out of the speakers and really attack you.

Greg: (What's the Story) Morning Glory? by Oasis… really, really, really aggressively compressed…

[Music clip: Morning Glory - by Oasis]

But on the other hand…

Ian: By modern standards, Nevermind by Nirvana is quite a quiet record. But nobody ever complained that it didn't sound loud enough, because they just crank it up.

[Music clip: Lithium by Nirvana]

Greg: And that's the thing. We have plenty of volume to go around. All we need to do with records if they're not as loud as we want is just turn up the volume.

Still, Nirvana’s Nevermind ended up being something of an outlier, as more and more artists opted for a loud, ultra-compressed sound.

Greg: While this was all going on, the same thing was happening in radio. Radio stations were facing the same sort of problems. You want your radio station to pop out of the speakers, so that someone who turns to it on the dial is less likely to go to someone else's. So, you had this Loudness War in radio and this Loudness War in recordings, and it just combined to be this really crazy morass of loudness and compression.

Ian: Over time, the loudness levels just creep up, and creep up, and creep up.

By the end of the millennium, the Loudness War had spiraled out of control. Music was being hyper-compressed by mastering engineers, and again by radio broadcasters. Just when it seemed like things couldn’t get any worse, mp3s appeared, and music got compressed even more. This time, it was through a process called “data compression.”

Unlike dynamic range compression, which is applied while mixing and mastering, data compression happens when a recording is encoded from one digital format to another, like when you used to rip a CD onto a computer.

[SFX: Vintage CD tray SFX]

So let’s rewind to say 2001, [Show Me the Meaning of Being Lonely - dream sequence-y] and you want to get the music from your new Backstreet Boys CD onto your computer, and then put it on your mp3 player, or most likely you want to share it on Napster. Of course, you shouldn’t be uploading other people’s music to the internet, but it’s 2001, and you don’t know that yet.

So you open a program that turns CDs into mp3s. But you probably didn’t pay attention to the settings. [SFX and Music out] And something most of us don’t realize is that when you turn those CDs into a bunch of mp3 files, you are throwing away a huge amount of the actual sound of the music through data compression.

Greg: When MP3s came on the scene, they figured out that you could apply algorithms that would take out a huge amount of the music, and I'm talking like a gigantic amount of the music, because at any given moment there are certain frequencies that our ears are not going to hear because they're being overwhelmed by other frequencies.

[music in]

Ian: So I actually find it pretty impressive that lossy data compression works at all. When you think that often as much as 90% of the information is being discarded in order to get the file size down, it’s amazing that they sound as good as they do.

At higher-quality settings, most people probably won’t notice any loss in sound quality. But when you compress the file down enough, the sound really starts to suffer.
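If you want to hear this for yourself, here’s one hedged way to do it in Python, assuming you have the pydub library and ffmpeg installed (the file names are just placeholders): encode the same track at a generous bitrate and at a very stingy one, then listen back to back.

```python
from pydub import AudioSegment  # pydub needs ffmpeg on your system for MP3 encoding

track = AudioSegment.from_file("song.wav")  # placeholder file name

# Same audio, two very different data-compression settings.
track.export("song_320k.mp3", format="mp3", bitrate="320k")  # near-transparent for most listeners
track.export("song_64k.mp3", format="mp3", bitrate="64k")    # the wide, spacious stereo image starts to collapse
```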

Ian: So what you tend to get back has similar tonal balance to the original, you can hear all of the instruments, it still sounds like the same piece of music. But when you do a direct comparison, you’ll often find, if you’re listening in stereo, what used to sound wide and spacious and lush collapses down into the center of the stereo image. You get much less of that sense of space and depth, and everything sounds a bit claustrophobic, a bit constrained… And the other thing that you hear as the data rate goes down is extra mulch, to use a technical term. It’s just this kind of squelchy, scrunchy, slightly distorted quality to the sound.

We’ve actually been gradually compressing the data of this audio over the last minute or so. Here’s how it sounded when we started, [Back to normal, high bitrate] and here’s where we ended up [Back to low bitrate]. It’s one of those things that if you don’t know what’s happening, you can’t really pick it out. But when you compare the two, you can definitely hear the difference.

[music out]

Ian: It probably won’t leap out at you, but once you start to hear it, it’s quite distinctive. For me, it just makes things sound duller, less interesting, less involving. I’m less likely to be sucked into a recording, and lose myself in it. It’s much less likely to give me goosebumps.

[music in]

Data compression in audio is still a big issue today. When you stream music, or listen to a podcast, the audio gets encoded down pretty heavily to save bandwidth. This does make sense up to a point, since higher-quality files do take longer to buffer. And of course, a lot of us pay by the gigabyte for our mobile data. On the other hand though, internet speeds are faster than ever these days, and unlimited data plans are pretty common. You can stream 4K video from YouTube and Netflix, so why hasn’t audio caught up?

Unfortunately, audio still often gets treated like a second-class citizen compared to video, and the bar for what’s considered acceptable is significantly lower. Between over-compression at the mastering stage, and over-compression at the encoding stage, most of us have to put up with subpar sound all the time, whether we realize it or not.

[music out]

Ian: It’s quite interesting; because it’s such a subtle effect, if you didn’t do the comparison, you might never notice it. But I think it has quite a profound effect on the way that we feel when we listen to the music, and the way that we’re likely to keep on listening, or switch it off and do something else instead.

[music in]

Here at Twenty Thousand Hertz, we care about sound quality, and we think you do too. If you want to make the music you hear sound a little better, go into the settings of your music streaming app, and turn on “High Quality Streaming.” It’s not going to fix all of the issues we’ve talked about, but it does make a difference.

At this point, things seem pretty dire, but there are some signs of hope. While music has been getting pummeled by the Loudness War, some artists and mastering engineers have been fighting to keep dynamics alive. And while streaming services don’t have a great track record when it comes to sound quality, they might end up being the biggest game changers in the Loudness War. How?

We’ll find out next time.

[music out]

[music in]

Twenty Thousand Hertz is presented by Defacto Sound, a sound design team dedicated to making television, film and games sound insanely cool. Go listen at defactosound.com.

This episode was written and produced by Casey Emmerling and me, Dallas Taylor, with help from Sam Schneble. It was sound edited by Soren Begin, and sound designed and mixed by Nick Spradlin.

Special thanks to our guests Greg Milner and Ian Shepherd.

If you want to dive deeper into these subjects, be sure to check out Ian’s podcast, it’s called The Mastering Show. His website is called Production Advice. And check out Greg Milner’s book, Perfecting Sound Forever. You’ll find links in the show description.

The background music in this episode came from our friends at Musicbed. Visit musicbed.com to explore their huge library of awesome music.

What album captivates you with its amazing sound? I’d love to know. You can get in touch with me and the rest of the 20K team on Twitter, Facebook, or through our website at 20k.org. And if you enjoyed this episode, tell your friends and family. And be sure to support the artists you love by buying their music… preferably in high quality.

Thanks for listening.

[music out]

Recent Episodes

You’ve Got Mail: The voice behind AOL

This episode was produced by Colin DeVarney.

How a simple soundbite on America Online became one of the most recognizable sounds of the internet age, plus the creation of a whole new musical instrument. This episode features Elwood Edwards, the man behind the famous AOL “You’ve Got Mail” soundbite, and Bosco and Maya Kante, inventors of the ElectroSpit.

MUSIC FEATURED IN THIS EPISODE

Dust in Sunlight by Sound of Picture
Fingernail Grit by Sound of Picture
Fives by Sound of Picture
Massive by Sound of Picture
Jack 12 by Sound of Picture
Tipsy Xylo by Sound of Picture
Twinkle Toes by Sound of Picture

Twenty Thousand Hertz is produced out of the studios of Defacto Sound, and hosted by Dallas Taylor.

Follow Dallas on Instagram, TikTok, YouTube and LinkedIn.

Join our community on Reddit and follow us on Facebook.

Become a monthly contributor at 20k.org/donate.

If you know what this week's mystery sound is, tell us at mystery.20k.org.

To get your 20K referral link and earn rewards, visit 20k.org/refer.

Sign up for The Great Courses Plus and get a free month at thegreatcoursesplus.com/20k.

Get a 14 day free trial of Zapier at zapier.com/20k.

Check out and subscribe to Gastropod wherever you get your podcasts.

Check out and subscribe to Just the Beginning wherever you get your podcasts.

TRANSCRIPT

You’re listening to Twenty Thousand Hertz. I’m Dallas Taylor.

[music in]

There are so many stories out there that we wanted to share with you, but some of the stories just don’t need an entire episode. So this show is going to be a little different. I want to present two completely separate bite-sized stories. The first is about a small little phrase that’s become one of the most recognizable sounds in recent history. And later in the show, we’ll hear about the creation of a brand new spin on a modern musical instrument. So without further ado, this is Jack Dearlove reporting on our first story.

[music out]

Jack: So, I'm sitting here, on my phone, and currently I have 17 unread emails. Most are spam, there are a couple of newsletters, there's something from a letting agent that I probably should look at more carefully.

Jack: But it got me thinking, it wasn't always like this.

[music in]

Jack: Do you remember when checking your email was a pretty major process? Turn on your computer, dial up, log in, wait for it all to load. I mean, you probably still had 17 unread emails waiting for you, but you were the boss of when they were looked at, rather than your phone.

Jack: And there was something else about that era of email that was pretty special. After you've gone through all the process of getting online, you were probably greeted by something that's gone down in Internet history. The voice of a guy who'd tell you-

Elwood: You've got mail.

[music out]

Elwood: I've been a television broadcaster since I graduated from high school.

Jack: So, this is the man himself. He's called Elwood Edwards, he's now in his mid-sixties, and the story of how he became the voice of AOL starts the same way a lot of stories do. Boy meets girl.

[music in]

Elwood: I had just purchased a Commodore 64 computer, and in a Christian chat room I started talking with a woman who was KarenJ2. I was in Gaithersburg, Maryland and she was in Fairfax, Virginia. After we had talked for several months, I invited myself over for dinner. She fixed tuna salad, I remember that... and we became inseparable.

Elwood: We were married in December 1988.

Jack: What I love about this story is that we still treat relationships that start online like they're a new thing, but this was the eighties. They're definitely not.

Elwood: She was a customer service rep for the company called Quantum Computer Services, which in 1989 became America Online. She overheard Steve Case, one of the principals of America Online. He was discussing with some programmers the idea of adding a voice to the software.

Elwood: Karen volunteered me, and on a cassette deck in my living room, I recorded, Welcome! You've got mail. File's done, goodbye.

Jack: What did you think of it when you heard it for the first time?

Elwood: Well, I've been an announcer, even though you wouldn't know it by my voice today. Gee whiz. I've been an announcer my entire broadcasting career. I started in radio while I was in high school, then I was always a staff announcer at the various TV stations I worked at.

Elwood: So, it was nothing new to me to hear my voice coming out of a little speaker. I didn't really think anything of it at the time.

Jack: “I didn't really think anything of it at the time.”

[music out]

Jack: It was just an average day in a series of average days. It was one recording, three little words that are still in use today.

Elwood: I don't think anyone had any idea what it would become. Certainly, had I realized it at the time, I would now be retired, but I'm not. Even today, I have an AOL account, email account, but if you go on AOL.com and then you either open your mail or you create an email account, when you sign onto that and you have new mail, you still hear me say, you've got mail.

Jack: I will be honest. The first time I heard that El's voice is still there, I couldn't believe it. I actually went and signed up for an AOL account myself, just to double check, and yup. There he is.

Elwood: You've got mail.

Jack: But his isn't the only voice you could have had over the years.

[music in]

Elwood: Along with the history of all of this, AOL used to have an occasional, I guess it was an annual for a while, celebrity voice contest where users of the system could change from the default voice, mine, to the voice of various celebrities who had recorded the phrases as well.

Elwood: I know Mick Jagger said...

[SFX clip: Mick Jagger: You've got some letters.]

Elwood: But fewer than 20% of the AOL subscribers, throughout the years, had elected to change from my voice.

Jack: El is really proud of this, you can hear it in his voice.

Elwood: I would like to think they like to hear what I sounded like. I don't know for sure, but that's what I like to think.

Jack: It's almost like you've got a secret identity, you know, a bit like a super hero?

Elwood: Yeah, that's sort of true, yeah.

Elwood: It's not something I go around blowing my horn about, you know. My ex-wife used to be my greatest cheerleader. She would be the one who would open up the conversations, and then people would have me perform, if you would.

Elwood: I was on the Tonight Show with Jimmy Fallon.

[music out]

[Clip: Jimmy Fallon: Elwood!

Elwood: They had me do the "Welcome, you've got mail."

Jimmy Fallon: Elwood!

Elwood: Then they had me say some other things.

Jimmy Fallon: Thank you for coming on the show, Elwood. Now, to prove that it's really you, can you say the classic “You've got mail” line?

Elwood: Welcome! You've got mail.

Jimmy Fallon: That's worth the price of admission, right there. That's enough.

Jimmy Fallon: Now, we've got some other phrases we'd love for you to say, so whenever you're ready, read the cue cards.

Elwood: Uptown funk.

Elwood: Adele Dazim

Elwood: File's done. Goodbye.]

Elwood: That was a great deal of fun, and I really appreciated the recognition. I was slightly taken aback by the audience reaction, it was rather thunderous in the studio, which I had not expected.

Jack: This is a guy who has been famous for decades, but he talks about going on a show watched by millions on TV, and online, all around the world like it was just a nice day out. Maybe that's it.

Jack: He could be milking his fame for everything it's worth, but he's not. He's just happy to have been part of your life.

[music in]

Jack: Do you ever get tired of it, at all?

Elwood: Oh, no. No, not at all.

Elwood: If anything, I enjoy the look on people's faces when they realize who I am. At the TV station where I work, I'm a News Editor, I run the studio cameras. I'm really a behind-the-scenes kind of person, I've never been one to really want to be in the limelight, but it's quite gratifying when somebody does realize who I am, and their reaction to that knowledge.

Elwood: Our world is full of people who were in the right place at the right time, and I'm glad to be one of those.

[music out]

[music in]

The decision to add a voice to America Online probably felt pretty insignificant at the time, but that little phrase became a cultural icon. Elwood was only paid $200 and recorded it on a whim. It was a favor. The phrase has gone on to be synonymous with the early days of the internet, so much so that even younger generations know it. It also made Elwood famous in a unique, hidden way. Almost no one would recognize him if they saw him on the street. Case in point: here’s Twitter user Brandee Barker finding out that her Uber driver was Elwood.

[Clip: Twitter video:

Brandee: This is my Uber Driver and he just told me something very special, that he’s the voice behind

Elwood: Welcome you’ve got mail.

Brandee: No way! Do it again! Do it again! Welcome, you’ve got mail. Yay, OK, what’s your name?

Elwood: Elwood Edwards.

Brandee: Elwood Edwards, thank you!

Elwood: You bet!]

After the break, we’ll take a look at another story about sound and technology. It’s about an inventor who combined our oldest instrument with modern technology to create something entirely new. After this.

[music out]

[MIDROLL]

[music in]

The human voice is our oldest instrument. It doesn’t take any sort of gear or technology to use it. It’s sort of the opposite of modern day synthesizers, if you think about it. But naturally, people have tried to blend these two opposites together to create something different and new. Our second story comes from the podcast Just the Beginning, which is about how independent creators bring their ideas to life. This is the story of a husband and wife team who created an instrument called the ElectroSpit. Put simply, it kinda lets you sing like a robot. This story is reported by Michael Garofolo.

[music out]

Maya: A melodic robot. [Laughs]

Bosco: Yeah. That’s a great description. A robot…

Maya: Who has a soul.

Bosco: [laughs]

Maya: A robot with a soul.

[Bosco singing with ElectroSpit]: Oh yeah. Welcome, welcome, EeeElectroSpit…

[Bosco continues to improvise beneath intros]

Maya: My name is Maya Kante. I am in charge of business strategy, marketing, and cracking the daily whip.

Bosco: [Laughs] My name is Bosco Kante.

[SFX: Singing: My name is Bosco…]

Bosco: I am in charge of engineering, the vision for the company… which is a shared vision.

Maya: Yeah, I was about to say I don’t know about that. [Laughs]

[Bosco continues on electrospit: "We’re going to give you the backstory — oh."]

Michael: I got to see the ElectroSpit when we sat down for this interview, and it looks a little like a pair of headphones that you wear around your neck… with the parts that you’d normally put over your ears — Bosco calls them soundcups — resting right on your throat.

Bosco: So, the way the ElectroSpit works, the sound comes into the soundcups [SFX]. If I put it on my neck, it goes through my neck and out of my mouth. It replaces your vocal cords [SFX]. So if I talk at the same time, you can kind of hear it in the background [SFX], but if I open the back of my throat, now you can hear it… now you can hear it… oh… That’s what it sounds like.

[Music: Zapp “More Bounce To The Ounce”]

Michael: The ElectroSpit is actually based on an older instrument, called the talkbox… that was used a lot in the 1970s and early 80s… and that’s when Bosco got hooked…

Bosco: I was in middle school at the time, and I would ride in my neighbor’s ’65 Impala, and he would play Zapp, “More Bounce to the Ounce”, and then we would go to the skating rink and they would have breakdancing and popping competitions, and that was the main song for those competitions. Ever since that time, I wanted to know how to make that sound, how do they do that.

[Music: Zapp “More Bounce To The Ounce”]

Michael: Bosco spent years mastering his talkbox technique. And he is a master. Bosco is one of the few go-to guys in the music business, and his credits prove it. He’s played talkbox on tracks by Bruno Mars and Big Boi.

Michael: So, why is he trying to reinvent it?

Michael: Well, first of all, the talkbox is notoriously difficult to play… there are some… let’s say, basic design flaws… for example… you have to try to sing while holding a plastic tube in your mouth.

Bosco: And if you hold it in the wrong place, it doesn’t sound right. And even if you hold it in the right place, it still sounds like you have a tube in your mouth.

Michael: And then, there’s Kanye.

Bosco: Kanye, okay. So, I had the opportunity to play live on the American Music Awards with Kanye West because I did this song called, “Kanye’s Workout Plan,” that I wrote, and there’s a big talkbox solo. But before the show, they’re talking about what the performance is gonna be like and it’s gonna have all these dancers and you’re gonna be moving around.

Maya: ‘Cause they were doing a workout routine, dance routine.

Bosco: Right. And the talkbox is not mobile. So I’m gonna have to lip sync. Which sucks because this is my big moment to like show everybody in the world how great of a talkboxer I am and no, I’m out there doing a Milli Vanilli. That was the inspiration for ElectroSpit.

[SFX: Bosco improvising with ElectroSpit]

Maya: Some of our early prototypes we had like a person with a keyboard tie, and you know how they have those snorkeling things where they have the thing in their nose, we thought maybe we could do that.

[SFX: Bosco improvising with ElectroSpit]

Bosco: I had like an attachment to the tube, like I thought of the talkbox as the tube.

Maya: And the more you thought about it, it was like, that makes it so you can’t share it, because it makes it unsanitary. And that means that less people can use it. When you go to a studio, anybody can pick up a guitar, right? But if somebody has a spare talkbox laying around, unless you have a clean tube, nobody wants to touch that thing.

[Bosco improvising with ElectroSpit]

Michael: There was maybe no one more qualified to bring the talkbox into the 21st century than Bosco. He’s not only a musician — he’s also a mechanical engineer. He got his first big break in the music industry while he was still in college when he was commissioned to do the theme song for the TV show In Living Color.

[SFX: In Living Color Theme Song]

Michael: And it seems Bosco’s particular brand of genius, the kind that combines music and technology, runs in his family.

Bosco: My mom plays French horn, my grandmother plays trumpet. My aunt plays trumpet. My other aunt plays guitar and sings. So, you know, Christmas carols are very lively.

Maya: I sit silently. [laughs]

Bosco: So music was a huge part of our family. And then, in addition, everybody in my family did math. My mom is a math … she was a math professor and now she’s a civil engineer. My grandmother was a math professor, but before that she was working as an electrical engineer and she was actually part of the team that invented the microwave. My mom’s first cousin invented the laser.

[Music: ElectroSpit “Now Is So Last Year”]

Michael: Like I said… Bosco seemed destined to build this instrument.

Michael: And with a backstory like this, it makes sense that Bosco and Maya really do consider ElectroSpit a family business… even if what they are doing doesn’t exactly look like a mom and pop type of thing.

Bosco: Everything for us is family, you know?

Maya: Yea, everything.

Bosco: Yea, it’s just everything.

Maya: Some people were like, “How do you work together and live together and you’re married?” And I was like, “Well, we actually really do like each other.”

Bosco: That’s right.

Bosco: But when we first got together, Maya had come from the corporate world.

Maya: There was some learning to be done about what looks like work. Entertainment looked like kick it time to me. He’s like, “No, this is a business meeting.” I was like, “No, you’re having drinks.”

Bosco: And I had never had a quote unquote job, I mean…

Maya: You’ve always been an entrepreneur.

Bosco: I’ve always been-

Maya: And people don’t think of that as a job, but it’s so much more grueling than a job because nobody tells you what to do, there’s no set hours. Like, he had way more of a job than anybody that I’ve ever known.

Bosco: Well, yeah, if I didn’t sell this particular song then I wasn’t gonna be able to pay my mortgage. So initially anytime we would face some adversity in our entrepreneurial ventures, Maya would, she started looking at the job-

Maya: Job boards.

Bosco: Job boards.

Maya: And I’d be applying for jobs and stuff. And he was like, “You’re just fooling yourself.”

Bosco: You’re just wasting time. Now, when we face some type of adversity or challenge, it’s “we can do it, we can figure this out, we’re gonna get creative.”

Maya: We’re doing it. It’s always we’re doing it.

Bosco: See? We’re doing it. It’s done. Consider it done.

Maya: Yeah.

[Music: ElectroSpit “Now Is So Last Year”]

Bosco: Initially, she looked at ElectroSpit as “this is Bosco’s thing. He’s the producer, he plays talkbox.”

Maya: There was this crucial turning point where our son was trying to give me a compliment, and he goes, “Mommy, maybe when I grow up I wanna be a music helper like you.” And I was like, “What?” I was like, “I’m a boss.”

[Music: ElectroSpit “Now Is So Last Year”]

Michael: And how about their son? Even though he’s still in elementary school, he’s already angling to take over the family business.

Maya: At his school, they had a project called The Living History of Hip Hop. His dad came in as a part of that whole project and did a demonstration of the ElectroSpit. And all the kids got up and tried it. And then after school that day, our son said, “Okay, so I need to be the salesman.” Because he said, “Everybody in class says that they each have $100, so I think that’s a good price point, around $100.” I was like, okay, you’re in the fourth grade and you’re nine years old and you’re trying to basically pimp out your classmates to buy the ElectroSpit [laughing].

[Music: ElectroSpit “No Chute”]

Michael: When I spoke to Bosco and Maya, the ElectroSpit was just about to go into production. And I couldn’t help but notice that as they talked about the upcoming release, they sounded a bit like parents watching their kid grow up.

Bosco: You know, the talkbox is gonna be out there and people are gonna do all kinds of stuff. And I know that there’s gonna be some kid that’s gonna pick it up and be 10 times better than me and play it upside down or behind his back and that’s the exciting part.

Maya: We don’t wanna put any limitations on it. We’re just excited to see what other people do.

[Music: ElectroSpit “No Chute”]

That story came from our friends at the Just the Beginning podcast. The hosts are Zakiya Gibbons and Nick Yulman, and they present some fantastic stories about creatives making their dreams become a reality. So take a moment to go find it and hit that subscribe button... You can also find out more about the ElectroSpit at electrospit dot com.

[music out]

[music in]

Twenty Thousand Hertz is produced out of the studios of Defacto Sound, a sound design team dedicated to making television, film and games sound incredible. Find out more at defactosound.com

This episode was produced by Colin DeVarney and me, Dallas Taylor, with help from Sam Schneble. It was sound edited by Soren Begin, and mixed by Jai Berger.

Thanks to reporter Jack Dearlove and the Just the Beginning podcast for letting us share their stories. And if you happen to live in London, Jack actually made an awesome app that tells you the status of the London Tube through emojis. So check that out at tubemoji dot com.

Also, if you’ve heard any other great stories about sound, or read a great article about sound, be sure to send it to us. You can do that by writing us on Twitter, Facebook, or by email at hi@20k dot org. Seriously, my favorite part of doing this show is hearing from our amazing listeners, so don’t be shy.

Thanks for listening.

[music out]

Recent Episodes

From Spinal Tap to The Simpsons: Voice acting w/ Harry Shearer

Artwork provided by Mike Andrews.

This episode was written and produced by Fil Corbitt.

We rarely think about the way we speak. For most of us, it just happens. In this episode, we catch up with two professional voice artists and chat about their rituals and techniques that help them communicate. Featuring voice actor Harry Shearer and NPR vocal coach Jessica Hansen.

MUSIC FEATURED IN THIS EPISODE

Fantasy (Instrumental) by De Joie
Trembling Care (Instrumental) by How Great Were the Robins
Wishing Well Wheel by Sound of Picture
Do Better by Sound of Picture
Por Supuesto by Sound of Picture
Peas Corps by Sound of Picture
Bright White by Sound of Picture
Platformer by Sound of Picture

Twenty Thousand Hertz is produced out of the studios of Defacto Sound, and hosted by Dallas Taylor.

Follow Dallas on Instagram, TikTok, YouTube and LinkedIn.

Join our community on Reddit and follow us on Facebook.

Become a monthly contributor at 20k.org/donate.

If you know what this week's mystery sound is, tell us at mystery.20k.org.

To get your 20K referral link and earn rewards, visit 20k.org/refer.

Subscribe to Harry's podcast Le Show.

Consolidate your credit card debt today and get an additional interest rate discount at lightstream.com/20k.

Try ZipRecruiter for free at ziprecruiter.com/20k.

Check out SONOS at sonos.com.

TRANSCRIPT

[SFX: voice clearing/Dallas’ Vocal prep]

You’re listening to Twenty Thousand Hertz. I’m Dallas Taylor… uh, hold on. Let’s do that one more time. You’re listening to Twenty Thousand Hertz. I’m Dallas Taylor. There we go.

[music in]

Since I’ve been making this very podcast, I’ve had to start thinking about something I never thought about in my entire life: my OWN voice.

Every episode tells stories of people who study and design the way our world sounds. But, getting meta here, it’s MY voice communicating these stories, and it’s super weird, and I hate my voice just as much as you do. And it’s not like I ever considered myself a vocalist of any sort.

But there was this one time last winter when I completely lost my voice, and I had to record an episode, and I had no idea what to do. So, as you do, I posed the question to Twitter asking for advice, and I was completely blown away when one of my childhood heroes responded.

[music out]

Harry Shearer: And we’re recording.

This is Harry Shearer. You might not know his name, but you’ve definitely heard him.

He was a cast member on Saturday Night Live, he was Derek Smalls in the movie Spinal Tap....And of course, he’s the voice of many characters on The Simpsons.

[SFX: montage of Ned Flanders Clip, Seymour Skinner Clip and Burns & Smithers].

[music in]

In addition to acting and voice acting, he also has been on the radio for decades.

Harry: Well, that's where I started. That's where I've had a foothold for coming on 35 years now as a so called grown up. So I probably have been more of a regular presence or more of a presence on radio than in any other medium.

So this is someone who knows how to use his voice. And after he responded to my tweet, with a recipe for a throat-soothing drink, I figured why not take this moment and set up an interview and see what else I could learn from a voice master.

Harry: There's a world of effects you can create with the voice and with these tools that we have, and that can both spellbind a listener and take a listener into a world of imagination that visual kind of overwhelms and wipes out, and you can spend literally millions of dollars of CGI work trying to create an effect that the listener's imagination can create very easily just with a sound and a few words.

All we need is our voice to tell a story, and sound can elevate that to another level. But there are so many nuances that make a voice engaging, and they can take years to master. And our voice is very fragile, so it’s incredibly important to find ways to protect it.

[music out]

Harry: My wife is a singer and her dad was an opera singer, and she taught me his warm ups. The most tired my voice gets is doing what we're doing right now, talking in some version of my actual voice. So, I always warm up before that and certainly if I'm going to do Simpsons or stuff for my radio show, she just taught me that's the essential thing is to warm up and it's about a 10 minute routine and then she also taught me, I think what I suggested to you, which is apple cider vinegar, honey and hot water and then someone else added and I sometimes will do this as well, some garlic and lemon juice to the preceding ingredients.

Are there certain voices that are more difficult or strenuous?

Harry: Oh yeah. Oh yeah. The decision as to how a character sounded... to call it a decision is to dignify it unsuitably, because it was basically just a sort of a stab, an intuitive leap I'll call it if I want to dignify it at all. In the beginning of the show, I don't know about anybody else in the cast, but I know I didn't see any drawings.

Oh really?

Harry: Yeah, yeah, yeah. All I saw was a script and like a one sentence description of the character, so it's really just, I think it sounds like this. And if you had told me then, you're going to be doing this voice for 30 years, there are several voices I would have changed how they were done. Otto [SFX: Otto], and even more particularly a character that they mercifully finally killed off, Dr. Marvin Monroe, who sometimes reappears magically from the dead on a Halloween show.

[SFX: Dr. Marvin Monroe]

Harry: Marvin Monroe was designed to be as grating as humanly possible. He was a family counselor who was supposed to, you know, have a benevolent kind of reassuring bedside manner, but I think it was written into the description that he had just this horrible voice that was grating and totally went against the grain of the effect he was supposed to have on people. So that's what I did, but I mean, it was not good on the cords. And Otto, I will say this, we do that at the end of every session.

[SFX: OTTO]

[music in]

Establishing a warm-up routine and having a concoction to clean out your pipes are great first steps to vocal health. But wading into this world of using my voice professionally, I’ve realized there is so much more than just voice health. There’s breathing, there’s phrasing, and of course there’s the pronunciation of words, or what’s better known as diction.

Harry: I haven't heard the word diction used in public in so many years, ‘cause people seem to have forgotten about it. Yeah, I mean, listen to the way people talk.

Harry: I never thought of myself as a dialectician, and if you listen to some of my accents on The Simpsons, you'd agree with me… but it's just my observation that people, from what I've seen, tend to emphasize pronunciation as a key to an accent or a dialect.

When doing an accent, Harry says it’s actually inflection that can make it believable, instead of the diction.

Harry: You've learned the inflection of the way your parents talk before you knew what they meant. You don't make a mistake with that, and so a musical ear will clue you into the music that each accent encodes and you can make dozens of mistakes with pronunciation and still sound like you're doing the accent.

[music out]

I’m going to be totally honest: it’s hard to use your voice to its full potential. It’s something we’re all born with, but it’s also something we rarely think about. And zooming out a little, that’s true about sound in general.

[music in]

Sound often takes a backseat to the other senses, even though it can really shape our experiences.

Harry: If you're doing a film, sound is the guy at the bottom of the food chain. The actors have been called to the set, lights have been set and you hear this all the time, oh waiting on sound. It's the last guy who has to sort of finally get his two cents in and it’s “oh this isn't right, I got to fix something”, sigh, waiting on sound…

Harry says he made a low-budget film about 20 years ago, and his understanding of sound is what made it possible.

Harry: When you're working low budget, you really have to be inventive with everything but I learned you can almost trick people into thinking they saw something if you use sound correctly and combine it with a couple other things, so effects that you just can't afford to do, you can almost be sure that people will think they've seen that effect in your film if you use sound properly with as I say a couple other treatments.

[music out]

Through sound, you can trick the mind into thinking it saw something, and Harry says that makes sound a subversive effect.

Harry: It's so powerful in all sorts of ways. In mood, coloring how you perceive something and this is a golden age as far as I'm concerned in terms of what is now being made available in terms of tools to play with sound.

[music in]

Sound is powerful, and we’re all born with this little built-in sound box. This whole podcasting experience got me thinking that I need to learn how to use this tool better. So I went searching for somebody who could help.

That -- and some pretty embarrassing sounds coming from my voice -- after the break.

[music out]

MIDROLL

[music in]

One really fast way to learn how to use your voice professionally is to start a podcast and figure it out as you go. That’s what I did, but last year I started thinking, maybe I should ask a professional to teach me some tricks. So I emailed Jessica Hansen.

[music out]

Jessica: I am the in house voice coach at NPR. I'm also the voice of NPR funding credits.

In case you’re not familiar with NPR funding credits, here’s some of Jessica’s work.

[SFX: Jessica Hansen reading funding credits]

Jessica works specifically with NPR Journalists to help them find a voice for radio.

Jessica: The primary reason for NPR needing a voice coach is because we are an audio product and most people don't have training in using their voices as storytellers. They have training in how to write, how to find sources, how to cultivate the sources, how to put together the story, how to ask the right questions, how to be in the right place at the right time, but they just don't get voice training.

And all of that hard work to write a story, can fall flat if your voice can’t engage the audience.

If you don’t sound excited, people will pick up on that. And if you sound too authoritative, people might not identify with you.

Jessica: Most people say, you know, "Oh, well sound more conversational", but then the person doesn't know how to sound more conversational, because you are reading and it is hard to lift words up off a page. It is the trick in this business.

[music in]

So how do you start?

Jessica: Breathe. Uh, gosh, breathing solves almost every problem. Breathing solves nerves, breathing solves phrasing, breathing solves decisiveness, and breathing helps you to open your voice.

It’s so easy to run out of breath without even realizing it’s happening. Just learning to think about your breathing is huge.

Jessica: I'm also often being asked to solve the problem of a voice being placed wrong. You know? She sounds too nasal, he's talking out of his throat, he has vocal fry, she sounds like she's whispering. And so I solve a lot of resonance problems. Helping people to put their voices forward in their faces so that they're resonating and they're not speaking out of the backs of their throats, and that they feel like they're using their whole voices and sounding like a whole person that's present and not just part of a voice.

[music out]

We often think of our voice as a natural part of ourselves, but like any muscle, it has to be trained to unlock its full potential. Without thinking about it, we limit our ability to communicate.

For instance, you can work to expand vocal range. That’s the variation between high notes and low notes.

Jessica: I think increasing vocal range is one of my favorite things to work on. A lot of people use only a few notes in their range. We speak on maybe two or three or four notes because, you know, we're grownups and we're trying to sound like we're adults.

This sort of adult tone can get really monotonous.

Jessica: I love to work with people on increasing the range of their voices, and helping people to find that higher notes don't necessarily sound shrill, and lower notes aren't the only thing that you can do to sound authoritative. And so really playing with vocal range, and giving people a broader spectrum to choose from is not only fun, but I think it's really important.

Remember, vocal training isn’t about changing the voice, but expanding it.

Jessica: People are scared they're going to be talking way up here like Minnie Mouse, but that's not the result either. If you work on talking like Minnie Mouse, and like the Wicked Witch of the West, and like some Dark Lord villain character, and then you marry all three of those together we get various places in the voice that blend and merge, and all three of those qualities together create the whole voice.

I actually took vocal lessons with Jessica for about three months. And they were totally different from what I had expected. Instead of singing scales or trying to hit certain notes, she had me do all kinds of weird stuff.

Like lay totally flat on my back at NPR making cat noises and weird grunts. I would also do things like singing twists, where I spin my whole body and just sing… just go (uuhuuuhuuuh). Things like lip trills (brrrrrr). Barrel shimmies, where I’m shaking my whole body and just going (ugh ugh ugh ugh). Lazy tongue, where I just let my tongue sit in my mouth and not use it. Toddler (ME! ME! ME!)… I can’t do it, it’s just so ridiculous. Anyway, there’s a ton of laughing and just ridiculousness. But it’s all to stretch your entire comfort level, to find out where your voice can go, really.

Anyway, we tend to think of our voices as pretty fixed. But they really aren’t -- even without training, they can change quite a bit over time. If you go back and listen to the earliest episodes of your favorite podcasts, you’ll probably be surprised at how different the host sounds. I’m not gonna play anyone else’s show, but I can play mine.

The first episodes of this show really weren’t that long ago -- it was late 2016 -- and still, I can hear a clear difference in my voice. It is horribly cringey for me.

Anyway, very reluctantly, here’s me from the first episode of Twenty Thousand Hertz:

[SFX: #1 Siri]

It’s always weird to hear your own voice recorded, but hearing an old version is even weirder. I sound weird and unhappy. And it really sounds like I got pulled out of bed at four o’clock in the morning and someone put a microphone in front of my face.

[music in]

It’s weird that you have to work so hard just to sound natural. And this goes beyond podcasting and voice acting. Whether you’re giving a speech or just talking to your boss, a lot of the time the feeling in your head just doesn’t really translate to your voice. I think everyone could benefit in some way from practicing their voice.

Jessica: I think that the voice is a really good expression of who we are. You know the expression, 'the eyes are the window to the soul'? I think it's true of the voice as well. Every voice is unique. Every person has his or her own unique sound. And no matter how much training you give it, it's still an expression of that person's inner self.

Jessica says, when you train your voice, you gain a wider range of expression.

Jessica: So people who work on expanding their vocal range, they have more options for expressing themselves or what they're trying to communicate, whether it's storytelling or a presentation in a boardroom, or giving an inspirational speech. Whatever it is, even if it's just your Thanksgiving toast around the family dinner table. Just having more options for color, and tone, and lyric and being able to express yourself more fully.

And being able to express yourself more fully, and more accurately, is a pretty cool skill to learn.

Jessica: I think it's important for professional voice users to remember that the most important thing is to make a connection with your listener.

Jessica: The more free and open, and the more possibilities for expression, the better we feel. The better we individually feel physically, emotionally and mentally. And just know that everything that you have to offer is exactly enough, and just to open that up and give yourself the range and the freedom to express what you have to say, because everyone has a different perspective, everyone has a different story, everyone has a different point of view, and everyone has a different voice, so we want to hear them.

[music out]

[music in]

Twenty Thousand Hertz is produced out of the studios of Defacto Sound, a sound design team dedicated to making television, film and games sound incredible. Find out more at defactosound.com.

This episode was produced by Fil Corbitt and me, Dallas Taylor, with help from Sam Schneble. It was sound edited by Soren Begin and mixed by Jai Berger. The writer of this episode, Fil Corbitt, is the host of Van Sounds, a podcast about movement. It’s a unique blend of music journalism, travel writing and experimental radio. You can find Van Sounds on Apple Podcasts or wherever you listen.

A huge thanks to Harry Shearer and Jessica Hansen. You can find more of Harry’s work, links and news at harryshearer.com, and Jessica’s work at jessicahansen.net.

Thanks to Stephen Indrisano for naming this episode.

Finally, if you have a friend or loved one that’s an actor, somebody who has a podcast, or anyone who uses their voice professionally, whether it be in a meeting or just at work, be sure to take a moment to share this episode. We are 100% independent, so the only way people will know about us is if you tell them. So whether it’s this episode or any of your other favorite episodes, be sure to tell your friends. And remember, this is a totally clean podcast, it’s politics-free, and it will always be those two things.

You can find us in any podcast player and you can connect with us on Twitter, Facebook, or by writing hi at 20k dot org.

Thanks for listening. One more time. Thanks for listening, thanks for listening…

[music out]

No, let’s do this again, thanks for listening. No, thanks for listening, thanks for listening, thanks for listening.

Recent Episodes

Hear Here: The messy history of architectural acoustics

Artwork provided by Jon McCormack.


This episode was written and produced by Fran Board.

Humans have been fascinated with acoustics since our earliest ancestors. From Roman amphitheaters to modern symphony halls, we’ve designed our spaces with sound in mind. But the relationship between acousticians and architects isn’t always smooth sailing. In this episode, we explore the way acoustics has shaped our history and what we might do to make our spaces sound better today. Featuring Emily Thompson, author of The Soundscape of Modernity and Professor of History at Princeton University, and Trevor Cox, author of Sonic Wonderland and Professor of Acoustic Engineering at the University of Salford.

MUSIC FEATURED IN THIS EPISODE

Oh My My (Instrumental) by Summer Kennedy
Going Forward Looking Back by Sound of Picture
Bambi by Sound of Picture
Gears Spinning by Sound of Picture
Tweedlebugs by Sound of Picture
Algorithms by Sound of Picture
Trundle by Sound of Picture
Delta by Sound of Picture
Massive Attack by Sound of Picture
Lone Road by Sound of Picture
Flutterbee by Sound of Picture

Twenty Thousand Hertz is produced out of the studios of Defacto Sound, and hosted by Dallas Taylor.

Follow Dallas on Instagram, TikTok, YouTube and LinkedIn.

Join our community on Reddit and follow us on Facebook.

Become a monthly contributor at 20k.org/donate.

If you know what this week's mystery sound is, tell us at mystery.20k.org.

To get your 20K referral link and earn rewards, visit 20k.org/refer.

Consolidate your credit card debt today and get an additional interest rate discount at lightstream.com/20k.

Go to forhims.com/20k for your $5 complete hair kit.

View Transcript ▶︎

[SFX: Inchindown sax]

You're listening to Twenty Thousand Hertz. I'm Dallas Taylor.

[SFX: Inchindown sax up]

Believe it or not, this is the sound of one, single, saxophone. The angelic sound is created by the space around the saxophone. This recording was done in an old oil depot called Inchindown. It’s an underground complex of huge oil tanks in Scotland. Some of these tanks are forty feet high and double the length of a football field. But the coolest thing about them is they hold the record for the longest reverberation time of any man-made structure.

[SFX: Inchindown sax crossfades with the music track]

[music in]

Acoustics is the study of how sound behaves in a space. It’s something we don’t usually think about, but it actually plays a huge role in our lives.

Here’s the good news... You’re already an acoustics pro! Humans are great at listening for clues about our surroundings. That’s how you already know that I’m speaking to you from a recording studio. You’d notice right away if I were somewhere else, like in a bathroom [SFX reverb] or a cathedral [SFX reverb]. See? You already inherently know all about acoustics.

And while we don’t usually come across acoustics quite as spectacular as this oil depot, they play a big part in our lives wherever we are.

[music out]

We’ve only just begun to really understand acoustics in the last hundred years. But our fascination with it goes back thousands of years.

[music in]

Trevor: If you go into a cave or you go into a stone circle, the acoustics would have been unusual to our prehistoric ancestors. It would be really surprising if you didn't go in there and enjoy the acoustics. After all, if a toddler goes into a railway tunnel, they all start yelping because it sounds exciting.

That’s Trevor Cox. Trevor’s a Professor of Acoustic Engineering at the University of Salford. He’s also the saxophonist you heard at the start of the episode.

Trevor: There's this theory around that where cave paintings are found is where the acoustics are good.

Researchers found that cave paintings of animals like horses and bison are usually found in more reverberant spaces. If the most interesting acoustics were in a narrow tunnel that was difficult to paint in, our ancestors would sometimes just draw red marks on the wall instead. It’s as if they were highlighting the interesting sounds.

[music out]

The theory is that these places were used for ceremonies and storytelling. We all know how much more interesting voices sound when they’re echoing off the walls. These reverberations could even turn the sound of hoofed feet [SFX] into a herd of galloping horses [SFX]. Our ancestors might not have understood the science of acoustics, but it sure seems like they were fascinated by them.

[music in]

Man-made structures have been built with acoustics in mind since the earliest human civilizations.

Emily: People have been considering how sound behaves in space really for as long as we have records, at least within Western civilization. You can go back to Ancient Greece and Rome, and writings indicate that people were considering these problems.

That’s Emily Thompson. She’s a professor of history at Princeton University. Her studies focus on sound technologies in American culture.

Emily: It's important to understand that, back in the time of Ancient Greece and Rome, architecture, science, music, were all considered a kind of part of the same holistic intellectual entity. They weren't considered distinct or separate in the way we perceive them today.

Many of these ancient civilizations believed that everything was tied together by harmonic ratios.

Emily: This connected the movement of the planets to principles of design for architecture, as well as the harmonies of music, and all of nature was really understood to be tied together by perfect ratios.

Emily: And so, that was one way to connect sound and space: To design spaces that embodied the kinds of harmonic ratios that were seen as the foundation of music.

[music out]

One of the best examples of man-made acoustics from this era is the amphitheater. These spaces hosted gruesome gladiator battles and chariot races [SFX], as well as theater and music. The largest amphitheaters could hold about fifteen thousand spectators. Architects designed these spaces to filter out background noise so everyone could hear what was going on. And considering there was no electricity for amplification, it’s a pretty remarkable feat for an ancient civilization if you think about it.

And just like today, a lot of old spaces were designed with music in mind. But other times, composers crafted their music around the space instead. This has had a significant impact on music history.

Trevor: If you look at Western music, going back to, sort of, 16th, 17th century, it's all about what was happening in churches really, in terms of Western classical music.

Trevor: There's no point, for example, going to a grand cathedral and writing something with lots of very fast moving music and words that are rapidly delivered, ‘cause everything would have been a mush and unintelligible.

Trevor: That's the reason you have things like plainsong. It's a kind of way of getting words across which is more intelligible in a very reverberant environment.

[SFX: Plainsong]

In the 16th Century, churches started being built with balconies inside of them. That sounds like a small detail… but even small changes can alter acoustics drastically.

Trevor: The acoustics tend to get a bit drier, less reverberant. That then influences music. Because you can write more intricate music. There's people who argue that Bach's music, some of his very fast moving pieces would never have been written if church acoustics had never changed.

[SFX: Bach’s B Minor Mass]

Some people think that this seemingly tiny detail is actually one of the most important factors in the history of music. And it was all thanks to acoustics.

[music in]

Our understanding of acoustics evolved dramatically in the late 1800s. Harvard University had just constructed a new museum, but they soon discovered that one of its lecture halls was completely unusable due to the acoustics. The room was huge, with semi-circular walls and a domed ceiling. Because of this, students couldn’t tell what the professor was saying... So, the university’s president turned to a young physics lecturer, Wallace Sabine, to try and fix the room’s sound problem.

Emily: The president probably thought that he would just do a little bit of research, figure out why the music hall at Harvard sounded pretty good, and then apply that knowledge to this new room which didn't sound good.

But it wasn’t quite that easy.

Emily: Sabine was a kind of consummate, perhaps even obsessive, experimenter, and he took this small query and actually spent three years working late at night, when the campus was quiet, painstakingly taking measurements of the sound of spaces all over campus.

One time, Sabine threw out thousands of measurements after he realized that his clothes were having a tiny effect on his results. To most of us it might not have mattered. But for Sabine, this was a big deal. He started all over again, and from that point on he always wore the same outfit.

[music out]

Sabine would move huge amounts of soft surfaces into a room, like cushions and rugs. Then, he’d measure how it changed the sound of the room. He didn’t have any fancy technology to do this -- just an organ pipe and a stopwatch.

[music in]

Emily: Sabine pored over his data, the data that he had been collecting painstakingly in notebooks for years and years, and he finally discovered a mathematical relationship between all these data points, that would ultimately provide a kind of a key to connecting the different materials that make up an architectural space with the reverberant or echoey quality of that space.

He figured out that the time it takes for sound to fade away is based on the size of the room and the amount of absorbent material in it. It may sound obvious to us now, but this breakthrough is the cornerstone of all of modern-day acoustics.
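For anyone who wants to see that relationship written down, Sabine’s formula says the reverberation time (roughly, how long it takes sound to decay by 60 decibels) is about 0.161 times the room’s volume divided by its total absorption. Here’s a minimal sketch of that calculation in Python; the room size and absorption coefficients below are made-up illustration values, not measurements from the episode.

# Sabine's formula: RT60 = 0.161 * V / A
# V = room volume in cubic meters, A = total absorption
# (each surface's area multiplied by its absorption coefficient, then summed).
# All numbers below are hypothetical illustration values.

def sabine_rt60(volume_m3, surfaces):
    """Estimate how long (in seconds) sound takes to decay by 60 dB."""
    total_absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A hypothetical 10 m x 8 m x 4 m lecture hall: (area in m^2, absorption coefficient)
hard_room = [(80, 0.02), (80, 0.02), (144, 0.03)]  # bare floor, ceiling, plaster walls
soft_room = [(80, 0.30), (80, 0.60), (144, 0.03)]  # carpeted floor, acoustic ceiling tiles

print(f"Hard surfaces: {sabine_rt60(320, hard_room):.1f} seconds")  # long, echoey decay
print(f"Soft surfaces: {sabine_rt60(320, soft_room):.1f} seconds")  # much shorter decay

The exact coefficients vary by material and by frequency, but the takeaway is the same one Sabine found: a bigger room means a longer reverberation, and more absorbent material means a shorter one.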

[music out]

Right away, Sabine’s formula was changing the way buildings sounded.

Emily: This became a very powerful design tool that offered the authority of scientific understanding, but at the same time it didn't force the architect's hand. It allowed you to choose what kind of materials you wanted to use, and by doing so proportionally, you could create any kind of reverberant quality you wanted.

Around that time, the Boston Symphony Hall was being built and an acoustics expert was needed.

Emily: The idea was to create a temple for this musical sound.

So they hired Sabine to advise them on how to make the hall sound just the way they wanted.

Trevor: It actually made a great concert hall, which is still revered as one of the great concert halls in the world today.

There’s even a plaque dedicated to Sabine in the lobby of Symphony Hall. It commemorates the building as the first auditorium in the world to be built according to his specifications and formula.

Here’s what Symphony Hall sounds like. This is the Boston Symphony Orchestra performing Shostakovich’s Presto from Symphony number 6.

[SFX: Symphony Hall, Boston]

[music in]

Trevor: If you look at a modern concert hall and look at what Sabine was working with, it's like comparing a Model T Ford with a modern car. A lot of the basics are very similar but there's a lot and lot of development.

Modern-day materials can help spread the sound more evenly across an entire symphony hall. This gives the audience an equally enjoyable listening experience no matter where they’re seated.

Emily: They developed a way to create a tile that had a porous surface, and those pores would absorb sound energy.

These tiles let architects design spaces that sounded completely different from how they looked. You could make a big Gothic cathedral sound more like a small, intimate space.

Emily: It was clear that the way a room looked was no longer inherently connected to the way it would sound, in the sense that had always characterized the sound of architecture, for centuries really.

Acoustic materials, like special plaster and flooring, are used in all types of modern buildings to control acoustics.

[music out]

Nowadays, modern concert halls can even change their sound on demand. This is great for music fans, since it means one space can be used for all sorts of different performances.

Trevor: Often, if it's a venue where there's a very famous orchestra, it will be designed primarily to work for the classical orchestra. [SFX: Classical music] But then, if you go and bring along a rock band [SFX: Rock music], you'll find it sounds awful, a soupy sound. It doesn't work with electronic reinforcement with loudspeakers. So what you typically do is you bring in absorbents, you bring in material, fluffy stuff that deadens the acoustic.

While we understand acoustics pretty well, there still isn’t one mathematical formula for creating the perfect concert hall. Sometimes, it’s just down to personal preference.

Trevor: There isn't a definitive ideal design for a concert hall.

Trevor: There are people who like to listen to lots of reverberation, so they like to have a swimming sound, a little bit like being in a cathedral. But, there's the other people who prefer a clear sound, a bit more like listening to a CD.

[music in]

Concert halls today look and sound amazing. Thanks to Wallace Sabine, we can enjoy Beethoven symphonies, Chopin nocturnes, or even modern rock music in a space tailored perfectly for it. But even though we’ve come a long way, good acoustic design is still slipping through the cracks. And this oversight might just be jeopardizing our future. We’ll find out how, after the break.

[music out]

[MIDROLL]

[music in]

A lot of thought goes into the acoustics of modern concert halls and theaters. So you’d think that other important buildings would sound good too, right? Well… not exactly.

Let’s think about some buildings where sound really matters. For me, schools and hospitals are near the top of the list… Maybe offices and transit stations too.

[SFX: train station]

Unfortunately, these places are well known for having lousy acoustics.

[music out]

Trevor: The design of everyday spaces tends to get overlooked, but it's incredibly important.

If a concert hall sounds terrible, people will notice. Designers know that it’s important that they sound just right. But acoustics in schools and offices have been a massive problem for decades and few people have spoken up.

Trevor: I think the problem with architecture is it's taught very much as a visual art. So, if you go to an architecture school, you'll see lots of pictures up, you submit your folder of visual images about the building you're making, or you might get a walkthrough, nowadays, in a VR suite, but it probably won't have any sound on it.

Trevor: So, they're taught to think about circulation, light and visual, but they're not really taught so much to deal with the acoustic. It's obviously a bit harder to get your head around, because it's not something you can print on a page.

As a result, the architecture-acoustics relationship is pretty murky.

Trevor: Bexley Business Academy is a really good example of what happens if architects and acousticians don't work together to make it a success.

[music in]

Bexley Business Academy was built in London in the early 2000s. It was designed by award-winning architects. The British Prime Minister opened the building, and it was even shortlisted for a prestigious architectural award. But amazingly, the architects had designed the classrooms with no back walls.

Trevor: So you can imagine a sort of a big office block, where you have a central atrium, and off to the sides, you have what would normally be the offices, but in this case were the classrooms.

This was an open-plan school?

Trevor: And the added stupidity was they put design and technology at the ground floor. So there were people using machines down on the ground floor, the noise would come up through the atrium and leak into the classrooms. [SFX: machinery noise] You can imagine how amazingly distracting and how difficult it is to teach in such an environment.

They had to spend tons of money sorting out the acoustics.

Trevor: To give you a sense, I think it was nearly a million dollars worth of remedial work to put walls back in.

Trevor: It shows you how much money you can waste if you don't get the acoustics right the first time.

[music Out]

Thankfully, there aren’t too many classrooms without back walls. But bad acoustics are a big problem in traditional classrooms as well. Modern design trends are a big part of the problem. Hard shiny surfaces like glass and polished wood may look nice, but they bounce sound around the room like crazy. Even older school buildings can be a problem, with high ceilings and hard floors.

The most obvious problem with this sort of design is that it’s hard to hear the teacher. But that’s just the tip of the iceberg.

[SFX: classroom babble]

Trevor played the sound of this classroom chatter to a group of teenagers while they were taking a test. He wanted to find out how the noise affected their performance. It lowered their cognitive ability by the equivalent of three years. Study after study shows that noise is terrible for learning.

It can cause stress, hearing loss, bad behavior, high blood pressure, and more… and these aren’t just abstract theories. They’re happening right now.

The good news is, you don’t have to be an engineer or a physicist to improve the acoustics around you. Things like carpets and cushions can make a real difference, and it’s certainly worth a try. Scientists have tested whether better acoustics would improve classrooms. In one case, kids’ grades improved, and in another, teacher illness dropped by thirteen percent.

The key is creating a more thoughtful relationship between architecture and acoustics.

Trevor: One of the problems we have in architectural acoustics is the people like me, the acousticians, the experts, are engineers. We work with charts and graphs and we really understand it. The architect comes from a completely different background and probably has very little or no acoustic training.

[music in]

Fortunately, there’s a modern breakthrough that could solve this problem. It’s called Auralization. It lets architects actually hear what a building will sound like before it’s built. Imagine how that might have helped that open-plan school...

Trevor: We're all listeners. That can be the start of a conversation to say okay, if you design it this way, it's not going to sound good and rather than say this number is wrong, we can say, listen to it. Can you hear that effect?

Listening gives architects and acousticians a common language, which is something we clearly need.

Emily: Architectural acoustics matters because the ways we experience and engage with our sonic environment really tie us very physically and materially to that place where we are, as human bodies.

We’ve been fascinated with acoustics since our earliest ancestors made paintings in caves. Today, we have the knowledge to design beautiful sounding spaces that make our lives better. It’s a testament to the amazing things human ingenuity is capable of. And we can use that ingenuity everywhere, not just in concert halls and theaters. We’re all experts in acoustics, so it’s important we get them right.

[music out]

[music in]

Twenty-Thousand Hertz is produced out of the studios of Defacto Sound, a sound design team dedicated to making television, film, and games sound incredible. Find out more at defactosound.com.

This episode was written and produced by Fran Board, and me, Dallas Taylor, with help from Sam Schneble. It was sound edited by Soren Begin. It was sound designed and mixed by Jai Berger.

Thanks to Trevor Cox of Salford University and Emily Thompson for speaking with us. If you’d like to find out more about acoustic technology and its effect on culture, check out Emily’s book, “The Soundscape of Modernity.”

Thanks also to Danielle Marcum York for naming this episode. If you’d like to help name future episodes, or want to tell us what you think is the best sounding concert hall, write to us on Facebook or Twitter, or email us at hi@20k.org.

Thanks for listening.

[music out]

Recent Episodes

What makes Stradivarius violins so special?

Artwork provided by George Butler.


This episode was written and produced by Elizabeth Nakano.

Stradivarius violins are reputed to have an exquisite sound that cannot be replicated or explained. Why is that? And what, exactly, is a Stradivarius violin anyway? This episode features interviews with The Strad magazine’s managing editor, Christian Lloyd, and violin maker Joseph Curtin.

MUSIC FEATURED IN THIS EPISODE

African by Kingpinguin 
Whiskey Boomed by Aj Hochhalter 
Champion by Dexter Britain 
Spring by Cathedral 
The Races by David A Molina 
Horizon Rainfall (Piano and Strings) - Instrumental by Future of Forestry 
Journey Towards Home by Shawn Williams

CLASSICAL MUSIC FEATURED IN THIS EPISODE

Violin Concerto in D Major, OP. 61 - III. Rondo: Allegro by US Marine Chamber Orchestra
String Quartet no. 2, Op. 68 - I. Andantino; allegretto by Steve's Bedroom Band
3 Fantasy Pieces for String Quartet - No.1 by Steve's Bedroom Band
I. Allemanda by Steve's Bedroom Band
Phantasie by Steve's Bedroom Band

(*all tracks have been edited for this episode)

Twenty Thousand Hertz is produced out of the studios of Defacto Sound, and hosted by Dallas Taylor.

Follow Dallas on Instagram, TikTok, YouTube and LinkedIn.

Join our community on Reddit and follow us on Facebook.

Become a monthly contributor at 20k.org/donate.

If you know what this week's mystery sound is, tell us at mystery.20k.org.

To get your 20K referral link and earn rewards, visit 20k.org/refer.

Can you tell the difference between a Stradivarius violin and a modern violin? Take the informal test here!

Our classical tracks came from Musopen. Check them out at musopen.org.

Try ZipRecruiter for free at ziprecruiter.com/20k.

Check out SONOS at sonos.com.

View Transcript ▶︎

You're listening to Twenty Thousand Hertz...I'm Dallas Taylor.

[music clip: Antonius from the MET]

The music you are hearing right now isn’t coming from just any violin. This is a Stradivarius violin, a family of instruments so distinguished and mysterious that it has become legendary. This one in particular is named Antonius, and it’s being played at The Met. However, these instruments are spread all over the world. Stradivarius violins are renowned for their supposedly unique sound. They’re also among the most expensive, most respected, and most studied instruments in the world.

A single Stradivarius violin is valued in the millions of dollars. This is because only a few hundred of these instruments still exist, and it is impossible to make more. Eventually, one by one, they will become too fragile to be played. With enough time, all of them will fall silent.

[music out]

The sounds of Stradivarius violins are considered so precious that they are preserved in a digital archive. To do this, a group of musicians and sound engineers took over a concert hall. There, they recorded every possible note and note transition a Stradivarius violin can make (or at least every possible sound they could think of). The entire process took 5 weeks.

During that time, the surrounding city of Cremona, Italy had to keep noise to a minimum, so that other sounds wouldn’t leak into the recordings. It was so important that the city’s mayor diverted traffic [SFX] around the concert hall, women were asked not to wear stilettos on the cobblestone streets [SFX], and even kissing teenagers were shooed away from the vicinity.

But… why such fuss over this kind of violin?

[music in]

Christian: I think the Stradivarius violins matter hugely in the grand scheme of things. The whole industry of violin making today is built on the legacy of Antonio Stradivari.

That’s Christian Lloyd. He’s the managing editor of The Strad. It’s a magazine that covers news and research about stringed instruments.

Christian: I also take care of the violin making sections of the magazine, which involves the historical, technical and anything to do with the sound of the violin.

[music Out]

Let’s start with the basics.

[music in]

Stradivarius violins are the work of Italian craftsman Antonio Stradivari.

Christian: Antonio Stradivari is generally considered to be the greatest violin maker ever. He was born in the 1630s and he died in 1737, which means that he had a very, very long life and he was working all the way through that life as a violin maker and he finished, probably about 1,100 instruments in his lifetime. That's not only violins but also violas, cellos, harps, mandolins and guitars.

Christian: Of those instruments, probably about 650 have survived until the present day. We have fragments of many others. About 550 of those are violins.

All of Stradivari’s instruments are called “Stradivarius.” So, there are Stradivarius violas, Stradivarius guitars, and so on. Stradivari himself came up with that word. It’s how he labeled his finished instruments.

Christian: People say Stradivarius because if they actually look at a label, then it says Antonio Stradivarius inside. But that's because Stradivari was very respectful of the Roman civilization being Italian himself. And he liked to sign his name in the Roman style, putting a U-S on the end, but his name was actually Stradivari and that's how he was known in his day.

[music out]

Picture the body of a modern-day violin. You’re probably imagining a hollow, kind of pear-shaped piece of wood with a crescent cut into either side. Maybe you’re also seeing those thin, squiggly holes on the front. Those are called f-holes.

Well, that shape was pretty much defined by the time Stradivari was born, but he was confident he could make it better.

Christian: He was changing the sizes, the proportions, the width of the top plates and back plates and the thicknesses, just to see whether they would make a difference in the sound quality and in the ability of the musician to create a large range or palette of tone colors with the instruments.

Christian: Most instrument makers even today, will only use one mold to make their instruments on. Stradivari used at least 12 molds. And probably even more than that.

Stradivari’s interest in acoustics wasn’t unusual for that period. He was living in a time and place of musical innovation.

Christian: There's a romantic myth about Stradivari, there were a few portraits in the Victorian era just based on what they thought Stradivari might look like in his workshop by himself, studying an instrument, deep in thought.

[music in]

Christian: He lived in Cremona, which is a small town now, on the banks of the River Po in northern Italy. It's between Milan and Mantua.

Christian: Cremona had a reputation as a musical hub. In fact, Cremonese musicians, have been known to be performing at the court of Henry VIII in the 1500s, and also in the French court at that time. In fact, Cremona was the birthplace of Claudio Monteverdi, who was known as the father of the opera. And for that reason, we can assume that Cremona had the ability to attract very, very ambitious people who wanted to extend the borders of what music can be and what music can do.

[music out]

[music in]

Stradivari’s experimentation yielded mixed results. His early violins are generally considered to be of lesser quality than the instruments he made later in life. But his craftsmanship was recognized and appreciated.

Christian: The phrase in Cremonese society was, as rich as Stradivari, because he was getting commissions from the courts of James II in England. He was getting commissions from the Pope, which meant that he could not only bring his expertise to bear, but also some of the finest materials and equipment that 18th century Cremona had to offer as well.

Christian: He was a very rich man. What people don't realize is that Stradivari was not just a lone craftsman. He had the biggest workshop in Cremona, and we think that not only was he working, but he was also employing his sons and apprentices in his workshop at the same time.

The violins Stradivari produced later in his career were incredibly influential in the violin world. His design was widely copied. In fact, it’s basically the one we use today.

But this legacy isn’t what Stradivarius violins are best known for.

Christian: So many people have tried to find the secrets of the Stradivari sound.

Christian: You talk about a palette of tone colors, and a Stradivari violin can give you a bright sound, a dark sound, a noble sound, and a mellifluous sound. Anything that you want to express in your playing, you can get out of a Stradivari, which is an ability that you can't get from all violins.

[music out]

Over the years, scientists and academics have put forth a lot of theories as to why Stradivarius violins sound the way that they do. Thousands of dollars and hours have been spent in a quest for answers.

[music in]

Two popular theories center around the instruments’ wood.

Christian: It's believed that he got all his wood from the Val di Fiemme, which is a large forest in the Dolomite mountains of Italy. Recently, it suffered a terrible storm and almost a million trees were felled. And so the violin makers are desperately trying to salvage some of the wood from that, because obviously people are still searching for Stradivari's wood.

Researchers have speculated that the wood was also treated with minerals from local alchemists that somehow led to a superior sound.

Christian: But there's also a theory that Stradivari's wood from the 17th century was particularly dense, and the reason for that was because of what they call the Little Ice Age.

Christian: There were long hot summers and very cold winters, during that certain point of history. And because of that, they say the wood grew to be much more dense because there was so little growth per year, and that was particularly useful for making resonant wood that Stradivari would be able to employ.

Another popular theory points to the varnish Stradivari used.

Christian: He gave it a kind of rich, red golden luster, especially in the later part of his career when he was very successful. So for that reason, his instruments have always stood out among the others. In fact, one of them has the nickname The Red Diamond.

Some researchers have gone as far as to say that it was Stradivari’s chemistry, more than his woodworking, that defined the sound and longevity of his violins.

Christian: He was able to use the best materials for his varnish. For instance, the best red dye is from the cochineal beetle of Mexico. And this was so expensive that people would put thousands upon thousands in order to get a shipload of cochineal back to Europe from South America. Stradivari was one of those people, and he was able to push the boat out and make the instruments as red as he could.

Christian: For that reason also they've had this mystique attached to them, there must be something in the varnish that makes them extra special.

[music out]

There are plenty of other hypotheses, too. Researchers have studied the glue Stradivari used.

Christian: The quality of the strings.

Solar activity around Stradivari’s lifetime.

Christian: The length of the neck and the fingerboard.

The design of the f-holes. Stradivari’s instruments are routinely studied all the way to the millimeter and beyond.

[music in]

These violins have undergone countless CT scans, X-rays, and chemical analyses. While some theories have become less popular or been disproven entirely, there is still no consensus as to why the sound of Stradivarius violins is so treasured.

Is there actually something special about the sound of Stradivarius violins? Can people even hear the difference between a Stradivarius and another kind of violin? To find out, researchers assembled a group of elite violinists, and they put Stradivari’s instruments to the test. We’ll find out how much truth there is to the lore… after the break.

[music out]

[MIDROLL]

[Music in]

Stradivarius violins are reputed to sound superior to other violins. But what happens to that reputation under scientific scrutiny? Researchers decided to find out.

[music out]

Joseph Curtin: It used to be thought, "Well, if it's an old Italian, it's good. If it's new, it's probably less good. If it's a factory violin, it's probably terrible." But those beliefs aren't scientifically based.

That’s Joseph Curtin. He’s a violin maker.

Joseph: Like most makers, I grew up with a set of beliefs about violins. That old violins were better than new violins, that violins got better with playing, that Stradivari was the greatest maker of all time, that a lot of old Italian violins sounded mellow under the ear and yet still projected in a hall, in comparison with new violins, which supposedly sound loud under the ear but fail to project. There were all these sorts of interesting things that were taken for granted.

Joseph also conducts acoustical research.

Joseph: Most of the research in the violin world has traditionally been historical research. Who made what instrument when. Who influenced who. I became interested in how the violin works and how that might be understood through scientific research.

Joseph: I remember my physicist friend, in response to some theory I was coming up with about why old Italians might be better than new ones. He said, "Before you start inventing theories to explain a phenomenon, you should probably make sure the phenomenon actually exists."

Joseph: That struck me as common sense, but then you think “how could we test that?"

[music in]

Every four years, the city of Indianapolis hosts an international violin competition. Some of the most gifted violinists in the world attend. During one competition year, Joseph teamed up with another researcher named Claudia Fritz. She was also interested in comparing Stradivarius violins to modern violins. Joseph and Claudia rented a hotel room in the city, and they got 21 highly talented violinists to participate.

Joseph: They would walk into a hotel room, they would be asked to wash their hands, they'd be asked to pick their bow that they're gonna use and stick with it.

Joseph: The protocol would be explained. We're gonna lay out six violins on a bed, and you are going to try each one for a minute or whatever the protocol was. Or you'll be handed violin A and violin B, and asked to compare them.

Three of the violins were new. The other three were made by Stradivari. But the violinists didn’t know which one they were playing… and neither did the researchers, for that matter. This type of test is called a double-blind.

[music out]

Joseph: Blind testing invites you to respect the primacy of your own perceptions, rather than your expectations.

Joseph: The idea of double-blind testing is that the subject is not at all in contact with the researcher or anyone who knows anything about the particular thing being passed back and forth.

Joseph: What blind testing allows us to do is tease out which part of the value has to do with, in this case, the violin's performance as a musical tool versus the part of the violin that's part of cultural history.

Joseph and Claudia were worried blindfolds would make people feel too disoriented. So, they turned to a particular piece of eyewear: welding goggles. Anyone who handled or saw a violin needed to wear a pair.

Joseph: I found at a welding store some relatively inexpensive goggles that kind of wrapped around your eyes like sunglasses, but were darker. And then we put some black tape along the bottom edges, because you could look straight down and as we tend to hold instruments under our chin, that was a little crack in the system.

Joseph: And we also keep the lights in the room low. Violins all look similar enough that even if you can see a darkened silhouette, you're not gonna be able to recognize the violin.

In addition to the violinists’ sight, there was another sense that Claudia and Joseph had to address.

Joseph: We also tried to neutralize the smells. A lot of new violins might smell of varnish solvents and polish, whereas an old violin might smell of eau de cologne of the last player, or stale cigarette smoke. You never know. There's just all these scents, and even unconsciously I think we can tell the difference between things by scent.

Joseph: So we put a dab of an essential oil underneath the chin rest of each violin in hopes that that would neutralize that.

[music in]

Here’s how the test worked: A researcher wearing welding goggles presented violins to the players. Meanwhile, Joseph and Claudia sat behind a partition.

Joseph: In that way, we could truly isolate the researchers from the player, or to the extent that was humanly possible there.

The violinist would be given time to play the instruments. And then Joseph and Claudia would ask him or her a series of questions.

Joseph: Which do you think is better? Which do you think is worse? Which do you think has more tone colors?

Joseph: Which do you think would project better in a hall? Which is easier to play?

It took 3 days to conduct the test, and the results were not what Joseph expected.

[music out]

Joseph: The results were pretty clear. The most favorite violin easily was a new violin. The least favorite was a Stradivari, and no one could tell old from new at better than coin toss statistics.

The results of the blind test immediately made waves -- and not just in the music community. Mainstream publications around the world wrote about the test.

Joseph: Stradivari is right up there with Coca Cola and Ferrari in terms of recognition by people who don't know anything about the violin. He's really crossed over into the culture in a way that other violin makers never have.

[music in]

Many people were understandably upset. Joseph and Claudia had called into question a long-standing and deeply-held belief.

Joseph: There was a lot of pushback. One of the main criticisms, and a fair one, was it was in a hotel room not a concert hall. As one famous violist said, "You can't test a Stradivari in a parking lot."

Joseph: We didn't feel this invalidated our results. It meant that we couldn't extend the results to concert halls. More cynically, people said, "Oh, you just got the three worst Strads you could find, and the three best new instruments." I remember reading out one of these criticisms to Claudia, and she laughed and said, "If we wanted to cheat, we don't need to touch the violins. We can just fiddle the numbers."

Joseph and Claudia didn’t stop after that first study. They ran two more double-blind tests in two different cities. But these tests were even more complicated. There were more violins to evaluate. Players were given more time to play them. And instead of being held in hotel rooms, they were conducted in concert halls. Joseph and Claudia also invited more people to listen and give their opinions.

Joseph: We had an audience of some 50 people. Violin makers, musicians, experienced listeners, and we had them judging.

Joseph: As in Indianapolis, the most preferred instrument by a good margin was new. The least preferred happened to be a Strad, but there was also a new instrument which was almost as badly judged.

[music out]

Joseph: Why would we assume that old violins could necessarily do better than new violins? I think what these studies have shown is that on a level playing field, new instruments can do very well.

Joseph: One can't assume because you have a very valuable old Italian instrument, that it will out-perform a new instrument that's valued at a fraction of that in terms of money, at least.

The sound of Stradivarius violins continues to spark debate and scientific questions. Many Stradivarius enthusiasts outright dismiss all of Joseph and Claudia’s studies.

Joseph makes it clear that their work was not a criticism of Antonio Stradivari the man. Instead, they were questioning the mystique attached to the instruments.

Joseph: I think the evolution of old Italian sound is ongoing. It's kind of one of the great constructions of the Western musical imagination.

Joseph: What one needs to remember is first of all, virtually all the Stradivaris used today have been re-engineered over the centuries in incredibly important ways acoustically. If you took a Stradivari straight from his workbench and a bow that was available at the time, most of the standard repertoire would be unplayable.

Joseph: It's as simple as that. It is not the same instrument.

You heard that right: that famous Stradivarius sound might be a more recent development.

[music in]

Of course, there are other reasons to value these instruments -- such as Stradivari’s place in violin-making history.

Joseph: Stradivari is, I believe, the greatest violin maker who ever lived. No one of comparable originality and influence has come along since then.

We value objects for all kinds of intangible reasons, and our knowledge of how expensive, or rare, or famous something is can color our perceptions of an item’s true qualities. A famous piece of art, an item owned by a historical figure, or indeed a Stradivarius violin may physically be just the sum of its parts, but these items are infused with something else…

Christian: When you buy a Stradivari, you're also buying into the history and heritage of that particular Stradivari. Every instrument has a provenance to it and you can get to see who's owned it and which famous players have played it in the past. And that's going back a hundred years or 200 years. And when you pick up an instrument, then many violinists tell you that you can feel the soul of Jascha Heifetz or Bronislaw Huberman or any of the great violinists of years gone by and you can feel that you're standing in their footsteps and you're also buying into their heritage and the heritage of the composers who composed great concertos for the great soloists of yesteryear, all inspired by the same colors and tones that they could hear in the instrument that you have in your hand.

Joseph: There's many many layers of narrative, there's a sense of richness, there's all the things about objects or works of art that we value that come into play and these are very important, and it's not as though it's a kind of snobism in that "I only like expensive wines or expensive violins."

Joseph: There's no shame in valuing things because of their history at all.

Joseph: It's something human. I think it's inevitable, part of being human.

[music out]

[music in]

Twenty-Thousand Hertz is produced out of the studios of Defacto Sound, a sound design team dedicated to making television, film, and games sound incredible. Find out more at defactosound.com.

This episode was written and produced by Elizabeth Nakano, and me, Dallas Taylor, with help from Sam Schneble. It was sound designed and edited by Soren Begin. And mixed by Jai Berger.

Thanks to Christian Lloyd and Joseph Curtin for speaking with us.

The first piece of music in today’s episode is from a Stradivarius violin owned by The Metropolitan Museum of Art. Many of the classical music tracks in this episode were from Musopen. That’s m-u-s open dot org. Check out our website for the full track list. The rest of today’s music is from Musicbed, which you can find at musicbed.com.

If you want to test whether you can hear a difference between a Stradivarius violin and a modern violin go to our website, 20k.org. We have a link to an informal test.

And let us know how you did. You can tweet at us, find us on Facebook, or find us online at 20k.org.

Thanks for listening.

[music out]

Recent Episodes

808: The drum machine that changed music forever

Artwork provided by Roland.


This episode was written and produced by Fil Corbitt.


The 808 is arguably the most iconic drum machine ever made. Even if you’ve never heard of it, you’ve definitely heard it. It’s in dozens of hit songs -- from Usher to Marvin Gaye, Talking Heads to The Beastie Boys -- and its sounds have quietly cemented themselves in the cultural lexicon. In this episode, we try to understand how that happened and follow the unlikely path of the 808. Featuring DJ Jazzy Jeff and Paul McCabe from Roland.

MUSIC FEATURED IN THIS EPISODE

Bus Stop by Red Licorice
Your Own Company by Laxcity
Ventana by Slowblink
Lost Without You by Vesky
I Know (No Oohs and Aahs) By Red Licorice

MUSIC EXAMPLES FEATURED IN THIS EPISODE

He's The DJ, I'm The Rapper by DJ Jazzy Jeff and Fresh Prince
The Robots HQ Audio by Kraftwerk
Heart of Glass by Blondie
In the Air Tonight by Phil Collins
Planet Rock by Afrika Bambaataa & The Soul Sonic Force
Funk Box Party, Part 1 by The Masterdon Committee
Egypt, Egypt by The Egyptian Lover
Just Be Good To Me by S.O.S. Band
Sexual Healing by Marvin Gaye
Raga Bhairav by Charanjit Singh
Scorpio by Grandmaster Flash & The Furious Five
Play At Your Own Risk by Planet Patrol
Just One of Those Days by DJ Jazzy Jeff and Fresh Prince
Cars That Go Boom by L’trimm
Kickdrum by Felix da housecat

Twenty Thousand Hertz is produced out of the studios of Defacto Sound, and hosted by Dallas Taylor.

Follow Dallas on Instagram, TikTok, YouTube and LinkedIn.

Join our community on Reddit and follow us on Facebook.

Become a monthly contributor at 20k.org/donate.

If you know what this week's mystery sound is, tell us at mystery.20k.org.

To get your 20K referral link and earn rewards, visit 20k.org/refer.

Consolidate your credit card debt today and get an additional interest rate discount at lightstream.com/20k.

Go to forhims.com/20k for your $5 complete hair kit.

View Transcript ▶︎

You're listening to Twenty Thousand Hertz, I'm Dallas Taylor.

[SFX: Cybertron 808 Beat]

The 808 drum machine is everywhere. And even if you don’t know it by name, you have definitely heard it before.

[Music clip: Usher - Yeah!]

[Music clip: Whitney Houston - I Wanna Dance with Somebody]

[Music clip: New Kids on the Block - Please Don't Go Girl]

[Music clip: Beastie Boys - Brass Monkey]

DJ Jazzy Jeff: I laugh because if I listen to the radio for an hour, there's not one record that you hear that's not an 808.

That’s DJ Jazzy Jeff. He’s a world-renowned DJ, musician, and one of the early innovators of Hip Hop.

[Music clip: He's The DJ, I'm The Rapper]

DJ Jazzy Jeff: There was nothing that was more distinctive and more sought after than an 808.

[music out]

[music in]

Paul McCabe: The Roland TR-808 is a drum machine...

This is Paul McCabe from Roland. Roland is a company that makes electronic instruments. When they released the 808 in the early 80s, drum machines weren’t exactly sought after. For 20 or 30 years, they had been used mostly in the home.

Paul McCabe: We have to remember in the '70s, the '60s, the '50s music being played in the home was still a very popular thing. And television hadn't taken over the living room quite yet. So families would often gather around and they would play music, people would play music as a pastime. A high percentage of the population was playing music.

And though families were hanging out in the living room playing music, they typically didn’t have a drum kit laying around.

[music out]

They’d possibly have a guitar [SFX: Guitar strums], maybe a piano [SFX: Quick Piano riff] or a home organ [SFX: Organ riff]. As you can imagine, people wanted a rhythmic instrument that wasn’t as big or loud as a live drum kit.

Paul McCabe: If you see photos of some of the earliest drum machines, in fact you'll even see drum machines that are designed to sit on top of an organ where the music rest would normally be.

[SFX: Roland TR-66 Rhythm Arranger]

Paul McCabe: So they have typically, particularly the earliest drum machines were really working to try to recreate the sound of a small acoustic drum kit. And so there would be a kick drum and a snare drum and cymbals and tom toms.

Drum machines were used for casual purposes and weren’t that useful to professional musicians.

[music out]

But in time, musicians did start to find uses for drum machines. By the 1970s, many songwriters would program a drum beat and write to it - a practice Phil Collins used often…

[Music Clip: Phil Collins - One More Night]

Meanwhile, early versions of electronic music were starting to go mainstream.

[Music Clip: Kraftwerk - The Robots HQ Audio]

This is “The Robots” by Kraftwerk, a four-piece band from Germany...

Paul McCabe: Kraftwerk is one of the founding fathers of techno.

They helped introduce new, weird technology to popular music.

Paul McCabe: They built their own instruments so they were playing some of the earliest electronic rhythm instruments that you could play and strike...

[music out]

It was here in the '70s that electronic rhythm machines started to catch on. These drum machines slowly morphed from family novelty instruments into something professionals were using.

Paul McCabe: They started to become used more in live performance in a situation where either an acoustic drummer wasn't available or to enhance a rhythm section, and then they started to appear in recordings.

One of the machines that started appearing in recordings was a predecessor to the 808 -- a drum machine called the CR-78.

Here it is in Blondie's Heart of Glass.

[Music Clip: Blondie - Heart of Glass]

And here’s the CR-78 in Phil Collins’ In the Air Tonight.

[Music Clip: Phil Collins - In the Air Tonight]

These songs created an early demand for a stage-ready drum machine, and that demand ultimately inspired Roland to create the 808. [SFX: 808 clip keeps playing] They wanted to build a machine that was relatively durable, movable, and affordable to the average musician.

Paul McCabe: When one sees a TR-808 it almost looks military in its design. It's kind of a drab olive color and there's a reason why TR 808s are still being used today 'cause you could drive a truck over them and probably many of them would still work. That was what was in our mind at the time.

[music out]

There have been a few instruments in history that changed music forever. The piano revolutionized classical music... electric guitars defined rock and roll… and the 808 transformed hip hop and electronic music.

Paul McCabe: When we think about the sound of the 808, and again, we think of it in terms of its influence on hip hop and R&B and when we think of hip hop of course we start with Afrika Bambaataa and Planet Rock.

[Music Clip: Afrika Bambaataa & The Soul Sonic Force - Planet Rock]

It's this otherworldly mashup of this kind of East Coast New York with Kraftwerk.

You can also hear some funk influence too. This all combined into a sound that felt new... and it blew up.

DJ Jazzy Jeff: In the early '80s, it was so new that you were trying to get your hands on whatever drum machine you could to basically make your beats.

And like a lot of musicians at the time, DJ Jazzy Jeff heard Planet Rock and was captivated by the drum sounds.

DJ Jazzy Jeff: There was no drum machine that had a kick drum that sounded like that. That had a snare that sounded like that. That had a crispness to the hi-hats like an 808. So it was definitely sought after so that you could kind of make these records. We emulated whatever we heard, so you know, when Planet Rock came out, it was kind of like, "I need that machine."

[music out]

Once these DJs got their hands on an 808, they found themselves expanding on its possibilities.

[music clip: The Masterdon Committee - 1982 - Funk Box Party, Part 1]

DJ Jazzy Jeff: There was a record, Funk Box Party by Masterdon Committee, and he was a DJ that was very, very good on an 808.

Musicians were experimenting. Here’s Egyptian Lover, over on the west coast.

[Music clip: The Egyptian Lover - Egypt, Egypt]

And here’s S.O.S. Band. They’re kind of like a pre-hiphop funk thing.

[Music clip: S.O.S. Band- Just Be Good To Me]

Here’s Marvin Gaye’s more minimalist use of the 808.

[Music clip: Marvin Gaye - Sexual Healing]

[music out]

[music in]

As musicians began experimenting with the 808, it wasn’t clear if this sound had staying power. It could just be a flash in the pan that would be replaced by the next version. But it didn’t quite go like that.

Paul McCabe: There was all these moments that were happening, these musical moments that were very serendipitous in New York, in the early '80s. That, ya know, if they'd gone left instead of right, if this guy did this on a Tuesday instead of a Wednesday, we probably wouldn't be talking about the 808 in this context today. It was literally that kind of magical.

And believe it or not, a huge factor in that magic was that when the 808 came out in 1981, it wasn’t the big hit Roland had hoped for. We’ll explain why, and how that ultimately was a good thing, after the break.

[music out]

[MIDROLL]

[music in - 808 beat]

What’s amazing about the 808, is that it seemed so unlikely to succeed. Imagine a Japanese engineer in the late 1970s creating these synthesized drum sounds -- and those drum sounds crossing the ocean and revolutionizing hip hop forever. But before it did all that, it was off to a shaky start.

[music out]

Drum machines at the time were largely meant to replace a live drummer, so it was all about getting it to sound like a real drum set.

Paul McCabe: Right about that same time, 1981, the first drum machine that used recorded sound clips or samples came into being.

At the time, companies were putting out drum machines that were sample-based, which is another way of saying they played back real recorded drum sounds. [SFX: Sample based drums in] The 808, on the other hand, was fully synthesized. [SFX: 808 drums in] Meaning, it did not sound like a real drum set.

DJ Jazzy Jeff: To me, this is very Nintendo and Atari-ish. Here's my computer version of what I think a drum kit is supposed to sound, and it doesn't sound anything like a drummer or a drum set at all. It was their interpretation, but their interpretation became the backbone of electronic music.

An Atari/video gamey-sounding drum kit was not at all what people wanted. Well, initially.

[Music clip: Raga Bhairav - 1982 - SYNTHESIZING: TEN RAGAS TO A DISCO BEAT - Charanjit Singh]

Here is Charanjit Singh, an Indian musician making 808 music in 1982.

[music out]

Bizarrely enough, since the 808 wasn’t that successful in the beginning, the machines began to show up at pawn shops for super cheap.

DJ Jazzy Jeff: I ended up getting mine from a pawn shop. Because you couldn't really walk into a store and see an 808.

People started picking them up because it was a piece of equipment they could actually afford. Recording studios often had one on a shelf collecting dust, or somebody’s friend might lend them one for a live show. But the jury was still out on whether the 808 was anything more than just a cheap drum machine.

Paul McCabe: The 808 was really facing quite an uphill battle to gain any kind of acceptance. But in a kind of, one of these classic your strength is your weakness paradoxes where the strength of the drum machines that were based on recordings of actual drum sounds was that at first glance they sounded more natural. On the other hand, certainly with the technology available at that time, you couldn't really adjust the sound that much.

DJ Jazzy Jeff: We were used to having a drum machine that you were stuck with basically the sound that came out of it. There wasn't too much manipulation that you can do, so to have this machine that you can take the snappiness out of the snare [SFX: Snare samples with snappiness being removed], and you can add more boom into the kick [SFX: Kick samples with boom increasing]. This one machine could sound a hundred different ways.

Adjustability was the key.

While other machines sampled recordings of real drums, Roland was doing the exact opposite. Using synthesizers, Roland's engineers tried to recreate the essential elements of each drum sound. Instead of recording a kick drum, an engineer reasoned that a kick drum is supposed to be bassy and bottom-heavy, and so built a bassy, bottom-heavy tone from scratch.
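For the technically curious, here is roughly what “building a kick drum out of synthesis” can look like in code. This is only a minimal sketch of the idea described above, not Roland’s actual analog circuitry or any official algorithm: a sine tone whose pitch and loudness both die away, with the decay length exposed as a parameter, in the spirit of the decay control discussed later in this story. All of the numbers and filenames are illustrative assumptions.

```python
# A minimal, hypothetical sketch of an "808-style" kick: a sine tone whose
# pitch sweeps down and whose loudness decays, with the decay length exposed
# as a parameter. Not Roland's circuit -- just the idea described above.
import numpy as np
import wave

SAMPLE_RATE = 44100  # samples per second

def synth_kick(decay_seconds=0.6, start_freq=150.0, end_freq=45.0):
    """Return one synthesized kick hit as a mono float array in [-1, 1]."""
    n = int(SAMPLE_RATE * decay_seconds)
    t = np.arange(n) / SAMPLE_RATE
    # The pitch drops quickly from start_freq to end_freq (the "boom").
    freq = end_freq + (start_freq - end_freq) * np.exp(-t * 30.0)
    phase = 2 * np.pi * np.cumsum(freq) / SAMPLE_RATE
    # The loudness dies away exponentially; a longer decay means a longer ring.
    amp = np.exp(-t * (5.0 / decay_seconds))
    return np.sin(phase) * amp

def save_wav(samples, path):
    """Write float samples out as a 16-bit mono WAV file."""
    pcm = (np.clip(samples, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SAMPLE_RATE)
        f.writeframes(pcm.tobytes())

if __name__ == "__main__":
    # A short, tight kick versus a long, ringing one (decay opened up).
    save_wav(synth_kick(decay_seconds=0.3), "kick_tight.wav")
    save_wav(synth_kick(decay_seconds=1.5), "kick_ring.wav")
```

Opening up decay_seconds is, loosely speaking, what opening up the decay knob does: the longer the envelope, the longer that low tone rings out.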

Paul McCabe: And so with that in mind, you look and you've got these 11 sounds...

Here’s the Kick [SFX]

Snare [SFX]

Closed Hi Hat [SFX]

Open Hi Hat [SFX]

Paul McCabe: crash cymbal [SFX]

Paul McCabe: There's toms [SFX]

Paul McCabe: hand clap [SFX]

Paul McCabe: Rimshot [SFX]

Paul McCabe: cowbell [SFX], you always got to have more cowbell. [SFX]

And finally Clave [SFX]

DJ Jazzy Jeff: When you start getting into the clave and the cowbell, those were two very distinctive sounds that if you put them on anything, you knew they came from an 808. Because it was kind of like an artificial sound, but it had its own texture and it was very distinctive.

The clave, the cowbell, the hand clap -- so many of the 808 sounds were super distinctive. But one of these distinctive sounds seemed to change music forever. That’s the low, bottom-heavy kick drum. [SFX: Kick drum]

DJ Jazzy Jeff: There was a point in time that I felt like people were afraid of kick drums. You couldn't have the kick drum too loud, you couldn't have it too boomy.

[Music clip: Scorpio - Grandmaster Flash & The Furious Five]

Here’s Scorpio by Grandmaster Flash and the Furious Five. You can hear that the kick drum is relatively low in the mix.

DJ Jazzy Jeff: Someone had the heart to put an 808 kick drum that it was round, and it was boomy, and it felt really good.

Here’s Planet Patrol, with a rounder, louder kick drum.

[Music clip: Planet Patrol - Play At Your Own Risk]

DJ Jazzy Jeff: Then somebody on a record opened up the decay, and when that kick drum rang out, it was nothing like that that you've ever heard.

Here’s DJ Jazzy Jeff himself opening up that decay, and letting the kick drum drive the song.

[Music clip: DJ Jazzy Jeff and Fresh Prince - Just One of Those Days]

The sound of the 808 kick drum became synonymous with hip hop. The image of young people driving down the street with big, boomy subwoofers exists largely because of the 808 tone. And that connection stuck.

Here’s L’trimm -- a Miami Bass hip hop duo -- singing about boomy car stereos in 1988.

[Music clip: Cars That Go Boom]

Twenty years later, Felix da Housecat released the song “Kick Drum,” which does the same thing and pushes the 808 kick drum decay to its absolute limit.

[Music clip: Felix da housecat - Kickdrum]

[music in: 808 beat]

DJ Jazzy Jeff: You're not supposed to have your bass drum driving that much, and it's kind of like, "Why not?" Everybody's riding around in their car playing this music, and it's vibrating their car and they enjoy that. There's no right and wrong in it. I really feel like the 808 kick drum was one of the first things that started shattering the rules of what you could, what you couldn't, or what you should or shouldn't do when it came to recording music.

People didn’t know they wanted a boomy kick drum or a funny cowbell. But once they heard those sounds, it seemed so obvious. It was like a ringing kick drum should have existed all along.

DJ Jazzy Jeff: What made you put a decay on the kick drum? Like, no one ever thought to make a kick drum ring, and what made you think of putting this on there? And did you ever think that it would become this iconic?

[808 beat out]

Paul McCabe: If you've ever been in a recording studio or seen photos of a recording studio where there's an acoustic drum kit, set up, if you're able to have a close look at the kick drum, more often than not you're going to see all kinds of materials, either stuffed into the shell of the kick drum, often it's blankets or towels or things like that. You'll sometimes see things that are taped to the head of the drum as well, and these are all to dampen or muffle the ring of the kick drum because left unmuffled, you strike a kick drum, it's gonna sustain for quite awhile.

What they were trying to achieve was the sound of an acoustic drum set. But since it was a synthesized sound, this rebuilding of a kick drum took on a life of its own.

Paul McCabe: So recognizing that, Roland thought well okay, that's clearly what we have to do to make this thing sound like an acoustic kick drum, so we put a decay control on it.

This essentially turned into a whole new instrument, with new sonic parameters. It was so different that the studios making early hip hop records didn’t even know what to do with it.

[Music clip: He's The DJ, I'm The Rapper]

DJ Jazzy Jeff: When we did He's the DJ, I'm the Rapper, that was the first record that I used 808s and 808 samples on, that I wanted the kick drum to really resonate. I remember fighting with the engineer, because I wanted to push the envelope on how loud and how deep I wanted the 808. Because I knew there were some hip hop records that you would get in a car and you would play it, and the entire car would vibrate. And I was like, "I want that."

But since that was unheard of at the time, the engineer refused.

DJ Jazzy Jeff: I had to fight with the engineer to turn it up, and he would turn it down and turn it up, and I had to kind of explain to him like, "I understand that there is a technical way that you think you're supposed to do something. I want to push that envelope. I need this to be this loud. I need it to be almost at the brink that it's not distorting and it's not overpowering everything, but I need this to be the focal point of the record."

DJ Jazzy Jeff: Hip hop is something that the drums have to drive the record. I got him to allow me to do it to the point that I loved it, and what I never realized was I never told the mastering engineer that I wanted that. And he thought it was a mistake, and he took all of the 808 out of the album, and I don't think I've ever said this in public. I can't listen to He's the DJ, I'm the Rapper now. That is the biggest record we've ever done, and I absolutely hate the way that it sounds because they sucked all of the bottom end from the 808 out in mastering.

Here’s a clip from He’s the DJ, I’m the Rapper as it is on the record [music clip] and here’s probably what DJ Jazzy Jeff was going for [music clip].

[music in]

With the birth of any genre, there are growing pains. And in a completely unexpected turn, the Roland TR-808 and its boomy kick drum became the voice of hip hop and electronic music. The rattling car stereos, the big subwoofers at clubs -- they became a new culture. And once it established itself, it spread like wildfire.

Paul McCabe: The 808 is everywhere. Now you'll hear 808s in, I don't want to say every genre of music, there's some styles of music that are so rooted in acoustic, but it's in pop everywhere. And we know just by saying pop, that's such a wide term now, it encompasses world music, it encompasses electronic music and EDM and techno and house and what have you. It's not an understatement to say that the 808 is just everywhere through pop music.

It was a perfect storm of accessibility, adjustable tones, and brand new alien sounds that made people love the 808. The engineers in Japan could never have imagined the way this machine would change the sound of pop music, and hip hop, forever.

DJ Jazzy Jeff: Hip hop is really based off of taking what you have and making it do something that it's not supposed to. We are not supposed to scratch on a turntable. We're not supposed to scratch on records. We're not supposed to drive the kick drum and push things to that level. None of these things make any sense. So as much as it doesn't make sense, it completely makes sense that this Japanese engineer made a drum machine and people started using it in a way that he didn't intend to use. And it works.

Paul McCabe: When we talk about the 808, we talk about a sound and an instrument that has actually defined culture, and so culture is the bigger context within which music fits. So a world without 808, I think it's very reasonable to speculate that fashion would be different, entertainment would be different. I think we wouldn't just be talking about a sonic notch. I think we would be talking about a cultural notch that would be profound.

[music out]

[music in]

The 808, sort of by accident, became the instrument that shaped hip-hop, just like the electric guitar shaped rock and roll. But at the end of the day, no matter how useful and no matter how distinctive, these are tools. Cultural moments have a way of clinging to new tools, which help communicate new ideas… or help say something that hasn’t been said before, or at least... say it in a new voice.

DJ Jazzy Jeff: This is why I love music so much, because there's a thousand different combinations and ways to get to a result.

DJ Jazzy Jeff: At the end of the day, you realize that someone who had a crappy week at work, depending on how you present this music, you can change their day. You can introduce two people together that end up spending the rest of their lives together just by playing music in a certain way to bring people together. I've been blessed to have a thumbprint in music, in making it or playing it, that affects people's moods. That's the coolest job in the world.

[music out]

[music in]

Twenty Thousand Hertz is produced out of the studios of Defacto Sound, a sound design team dedicated to making television, film and games sound insanely cool. Find out more at defactosound.com.

This episode was produced by Fil Corbitt and me, Dallas Taylor, with help from Sam Schneble. It was sound designed and edited by Soren Begin, and mixed by Jai Berger. Fil Corbitt is the host of Van Sounds, a podcast about movement. It’s a unique blend of music journalism, travel writing and experimental radio. You can find Van Sounds on Apple Podcasts or wherever you listen.

Thanks to DJ Jazzy Jeff for speaking with us. You can find his work, merch and updates at DJJazzyJeff.com. And thanks so much to Paul McCabe from Roland. If you’d like to play with an 808, Roland has recently reissued it as a smaller machine with a USB connection.

All additional music in this episode was from our friends at musicbed. Check them out at musicbed.com.

Finally, if you have a comment, episode suggestion, or just want to tell us your favorite track featuring the 808… reach out on Twitter, Facebook, or by writing hi at 20k dot org.

Thanks for listening.

[music out]

Recent Episodes

Soundmarks: AT&T, United Airlines, and inventive sonic branding


This episode originally aired on Household Name.

Companies spend a lot of time and effort perfecting the look of their brands. But now what a brand sounds like matters just as much. We trace the history from songs to jingles to what's called sonic branding, following the creative process that led to AT&T’s iconic four-note sound logo. And we'll explore what comes next: multi-sensory marketing. Can sound change how beer tastes?

MUSIC FEATURED IN THIS EPISODE

Prepared by Luke Atencio
Safari by Uncle Skeleton

Twenty Thousand Hertz is produced out of the studios of Defacto Sound, and hosted by Dallas Taylor.

Follow Dallas on Instagram, TikTok, YouTube and LinkedIn.

Join our community on Reddit and follow us on Facebook.

Become a monthly contributor at 20k.org/donate.

If you know what this week's mystery sound is, tell us at mystery.20k.org.

To get your 20K referral link and earn rewards, visit 20k.org/refer.

Try ZipRecruiter for free at ziprecruiter.com/20k.

Check out SONOS at sonos.com.

View Transcript ▶︎

[music in]

You’re listening to Twenty Thousand Hertz, I’m Dallas Taylor.

Sonic branding is the process of creating a short, iconic sound that’s designed to be an audio representation of a company. When they’re done well, these sounds can represent a brand in a way that visuals just can’t. [SFX: sonic brand montage]

Now, it might seem like making a sound so short would be easy, right? ...but that couldn’t be further from the truth. The process can take months, and sound design and music companies may go through hundreds of ideas to finally land on one short sound. ...and keep in mind that every approval has to pass through countless layers of corporate red tape, boardrooms, and the personal tastes of business people. It’s an intense process where millions and millions of dollars could hang in the balance.

The fabulous podcast Household Name takes us through the process of creating one of these iconic audio logos… and if you live in the US, I’m sure you’ll recognize it. I won’t give it away, but you’ll want to stick around to hear it. Here’s host Dan Bobkoff.

Can you identify a brand from a sound?

[SFX: McDonald’s sonic logo]

McDonald’s

Mickey D’s

McDonald’s

I gathered some colleagues to test something called sonic branding. It’s like logos you can hear.

[SFX: NBC sonic logo]

That’s NBC

Some are easier to identify than others.

[SFX: T-Mobile sonic logo]

Cingular? AT&T? Phones?

It’s definitely a cell phone company. I want to say Sprint, but I’m not convinced that’s right?

I was gonna say Staples.

That’s T- Mobile.

And the really good ones make you feel something…

[SFX: 20th Century Fox sonic logo]

That is 20th Century Fox.

I felt triggered as soon as the first bit of drumming happened…

I saw the logo.

I started craving popcorn.

I did know that one.

Companies have long spent a lot of money and effort perfecting their logos… like the Nike swoosh or Apple’s… apple. But now more of them are starting to do the same thing with sound.

[SFX: Netflix sonic logo]

Netflix.

I was gonna guess Netflix!

Netflix!

These are not jingles. They’re highly designed collections of sounds created to make you... buy things. So I wanted to know, how do you make one that works?

[SFX: Texaco commercial]

In the beginning, companies wrote whole songs.

Colleen: In the 40s or 50s when they had long commercials 60 second commercials and you could actually create a whole song for that commercial you could have choruses and you could have verses.

[SFX: Chevy commercial (“Performance is sweeter…”)]

Colleen Fahey is with the French sonic branding company Sixieme Son and wrote a book called Audio Branding. And Colleen says when television was new, ads were long.

Colleen: So you had enough time to say “you wonder where the yellow went, when you brush your teeth with pepsodent”.

[SFX: Pepsodent commercial]

[SFX: Rice Krispies, snap, crackle, pop commercial]

Colleen: One of the great ones was Snap Crackle Pop, Rice Krispies, where each of the characters got to sing something about his own sound. His snap, his crackle, and then they did a chorus together. They had plenty of time for that. The chorus went snap crackle pop Rice Krispies

[SFX: Rice Krispies, snap, crackle, pop commercial]

Colleen: But it was a really long song. I couldn’t sing the whole thing for you...

[SFX: Rice Krispies, snap, crackle, pop commercial]

As the decades passed, TV ads got shorter… from whole songs, down to 60 seconds, to 30 seconds — sometimes just 15. And these songs turned into jingles — shorter snippets to help you remember the brand.

[SFX: Purina Cat Chow jingle]

The 80s — by the way — were an especially strong time for jingles… like a last gasp for the form.

[SFX: Stouffer’s pizza jingle]

But the 80s were also a period of transition into something new. And it’s partly because of what United Airlines did then. In the early part of the decade, it had its own conventional jingle...

[SFX: United Airlines jingle, “We get you to all the United States. You’re flying the friendly skies...”]

But by the end of the decade, United started using another piece of music.

Colleen: It's the one that goes doo-doo-doo-doo doo-doo-doo-doo doo-doo-doo-doo doo-doo-doo-doo

[SFX: United rhapsody in blue song]

Colleen: Most people would recognize that as United Airlines’ audio brand.

An audio brand. What United is doing with Gershwin’s Rhapsody in Blue goes beyond what advertisers did with songs and jingles.

Colleen: They use Rhapsody In Blue as a system.

A system. This is what makes this different than just a simple jingle.

Colleen: It's a very flexible piece of music. It was not written as a symphony. The symphony came many years after the first piece of music was written and had already been used by Jazz musicians and other improvisers. So it's a piece of music that had been treated flexibly since its inception.

Gershwin became United’s signature. Whatever the company was doing, you’d hear some version of this music. From ads, of course, to even TV weather forecasts…

[SFX: Early 90’s United weather forecast]

Colleen: It's also used in the corridors in Chicago Airport. There's a big corridor that links the terminal, their United terminal, to the main building and people on moving walkways hear this music when they're going into the terminal.

[SFX: O’Hare Gershwin clip]

And then you get on the plane and there it is again.

Colleen: They have a safety video that's around the world…

[SFX: United Safety Video, “If necessary, an oxygen mask will drop from above your seat”]

Colleen: ...and in France you hear it with a little accordion and then in... I think it's New Jersey you hear it with a jazz sound and they manipulate it so it stays fresh and it feels relevant to the destination.

For United, Rhapsody in Blue isn’t a song or a jingle, it’s a full sonic brand.

Colleen: A very unified audio brand and a very strong, memorable, distinctive brand that conveys something… anticipatory and exciting about travel.

A few companies have had sonic branding down for decades. Like MGM...

[SFX: MGM sonic logo]

...or NBC.

[SFX: NBC sonic logo]

But it’s only been since the 90s that this modern form of sonic branding started to take off.

Colleen: Probably the most famous one is Intel which the idea of Intel Inside was communicated by a piece of music. And it goes like, thun thun thun thun thun.

[SFX: Intel sonic logo]

Colleen: Most people would recognize that and they've been very loyal to that piece of music.

[music in]

The Intel Inside sound was brilliant… a chip is something you don’t see, but it’s crucial to a computer, so the sound gave life to something invisible and got consumers to think about a boring computer part.

And, it’s one of the first true sonic logos.

Let’s get some terms out of the way here. In the modern world of audio branding, there are sonic logos and sonic brands. You can think of the sonic brand as the whole package… just like a company has its own fonts and colors. The logo is the distillation of all that… the centerpiece. Visually, it’s a symbol. In audio, it’s a short, memorable sound that triggers recognition like Pavlov’s dog.

[SFX: Bell sound]

Brands want us to remember them and feel good about them.

More and more companies want sonic brands because we’re increasingly interacting with brands in non-visual ways. Like talking to a smart speaker. Or maybe using Apple Pay or Google Pay instead of a physical credit card. In fact, most of the big credit card companies are developing sounds that will play when you buy something.

So, how do you make a sonic brand that works? We’ll find out what the process was like for one of the world’s biggest brands. After this.

[music out]

[MIDROLL]

We’re back.

And I’ve come to the offices of Man Made Music in lower Manhattan because this is one place sonic brands are born.

[SFX: Ambience of Danni on keyboard]

Danni: That’s their logo [SFX: keyboard plays]

This is Danni Venne. She’s the head of creative at Man Made, so she works on a lot of the music that’s in the background of our lives.

Danni: I just like that one… [SFX: keyboard plays]

Man Made makes many of the themes you hear on TV. Like for CBS News...

[SFX: CBS News theme]

...or ESPN.

[SFX: ESPN theme]

Or, sometimes they’ll update iconic themes for new eras.

Danni: We’ve done the… HBO theme… have you heard that before? The [SFX: logo plays] So we’ve done so many versions of that. We didn’t write that one, but that’s kind of our bread and butter is that we take a melody and we know how to like, recontextualize it.

But now it’s not just TV networks calling. Brands want music. Lots of it. They want sonic logos for all sorts of reasons.

Like, take AT&T. It thought a sonic brand might help solve some problems.

AT&T came to Man Made Music in 2010. Back then, the company had been enjoying one big advantage… it was the only cell phone company in the US where you could get an iPhone. But at the time, its customers weren’t too happy with AT&T.

Danni: AT&T became even a bigger punching bag ‘cause it was dropping all the calls.

Customers who had switched to AT&T in order to get the iPhone were complaining about it online. Never mind that the problem was mostly fixed by this point. Reputations can lag reality. One customer had even posted a parody video to YouTube that looked like an Apple ad, with the white background and the product shots. But the text read things like, “It’s a revolutionary device crippled by poor service,” and this one: “with less bars in more places!”

So AT&T set out to overhaul its image… photos, slogans, fonts, ads and sounds. This was around the time other phone companies were about to sell the iPhone. And it had another problem. Danni said that when AT&T ran expensive ads on TV, few people could remember what the ad was for.

Danni: They'd see it and they say who was that for and then say I don't know Verizon? IBM? You know, MetLife? It wouldn't… They… It would rarely get attributed to AT&T.

Danni: One of the first things we asked AT&T when they were in the room was, why are you interested in a sonic identity?

Danni needed AT&T to articulate exactly how the company wanted to be perceived. Did it want to come across as more reliable? Higher tech? Less corporate? More… likeable?

Danni: If we don't understand that then we're just, you know throwing stuff at the wall. Hoping that it's going to stick. What's the problem you're trying to solve?

After a lot of back and forth, AT&T came back and said… it wanted to come across as… human.

Danni: At the top of the brief, a question: what is the sound of humanity? Which is… very lofty.

Yeah. Sounds… pretty big.

Danni: Very lofty. But, the sound of humanity and that as a question with the additional language that we had in the references at least focused it in a little bit more on what that could be.

If AT&T sounded human, maybe customers would trust it more. And new customers might hear the sound logo and get a better impression of AT&T. A company that sounded friendly, and likeable.

Danni: Of course that can be interpreted a million different ways. But just at the very top how did… where were we shooting? The sound of humanity.

So, to narrow it down, Danni asked AT&T executives some questions. Things like… “what do you hate about your competitors?” Once all that was settled, Danni looked to culture for inspiration. And back in 2010, artisanal products were all the rage. Handmade things that looked authentic, and not mass produced perfection.

Danni: Things like, I think, Mast chocolate bars had hand-wrapped chocolate, right? So, you know, craftsmen in some warehouse in Brooklyn, you know, making…

Just like AT&T

Danni: Just like AT&T, exactly. But you know someone's in Brooklyn doing their small batch pickles or something, right, with the handcrafted label. And like… but that that sense of like personal touch and humanity was like kind of infusing a lot of culture at the time.

But even that concept was broad. Like… AT&T is artisanal chocolate? That doesn’t make sense!

So, before her team started composing their own tracks, Danni played some music she had on hand—stuff they didn’t compose—but they just wanted to get the client’s reactions. In this case, they wanted a sense of what kind of raw, authentic humanity AT&T wanted. Like, did it want it to feel high-stakes and dramatic? Like, fireman rescues baby from a burning building humanity?

[SFX: Scene tape [DRAMATIC MUSIC PLAYS]]

Danni: This is too “heart on your sleeve…” you know, like..

[Danni laughs]

Or… math genius performs complicated calculus on a chalkboard humanity?

Danni: I do like this one because it feels smart.

So, they’re sitting around, listening and giving their feedback. The first track sounded too lofty and dramatic, with its sweeping crescendos and emotional strings. And the second one, the “math genius” music, was too structured and clean.

Danni: And as the exploration developed, we became more focused on expressing this humanity through imperfection. So instruments and sounds that you could hear real people playing real instruments. Right? And that became the way humanity was manifested, you know? First it sounds lofty, like we're about to have something giant, you know. But it actually became a little more raw.

So, with that in mind, Danni and the team finally started writing their own music for AT&T. A lot of music. And what they were trying to create is something they call an “anthem.”

Danni: All the anthem demos need to be thematic. They need to have a melody or something that you can sing back, or something that you can remember, some sort of hook. Right? And that hook, that melody, that theme, that becomes what eventually gets boiled down to a sonic logo.

The sonic logo might be just a few notes embedded in the larger anthem, which could be anything from 20 seconds to two minutes long.

Danni: But any of these demos that we start writing... and a big brand like AT&T… it’s very conceivable that we might write up to 20 or 30 anthem demos. Not all of them see the light of day, in fact most of them don't get to the client.

Danni played us some of those early tracks and explained why they didn’t make the cut. Like, her first try was almost too human. It sounded too much like the theme song of a kid’s TV show, or the joyful, hoppy ending of a rom com.

[SFX: Danni scene tape [“Hey! Dah dah dah dah dah!”]]

Danni: It’s a really nice sound, song. It’s got vocals in it. What it might not do, is it might not speak to this idea of serious business, right?

The team’s next try went too far in the other direction. The music wasn’t grounded enough. The chord progressions were a little too exciting for AT&T’s taste.

Danni: Um, let me go to another one that did not make it.

[SFX: Scene tape [LORD OF THE RINGS TYPE MUSIC PLAYS]]

Danni: Artful fade! Yeah, like it’s… it’s more dramatic, right?

Sounds a lot like a film score

Danni: Yeah, exactly. So trying to take this humanity thing very differently there. And I mean, in hindsight, I can remember why that doesn’t work. It’s kind of… Maybe it’s kind of obvious, right? It’s… it wears its heart on its sleeve. It’s very Lord of the Rings.

A little ominous too. My call might drop...

At some point, the team hit a creative block. Danni just wasn’t hearing any sonic logos in these anthems.

Then one day, Danni was playing some of the drafts for her boss, Joel. And four notes caught his attention.

Danni: What Joel heard, was this...

[SFX: Scene tape [BAG-PIPES]]

Danni: You hear the melody, and it’s just repeat repeat repeat repeat. And that was like an interesting, iconic sort of melody.

[SFX: Scene tape [MUSIC PLAYS]]

Danni: That became eventually the sonic logo. That kind of idea. Just those four notes.

And those four notes… might sound familiar.

[SFX: Scene tape [REVEAL AT&T SONIC LOGO]]

Danni: Not a very linear process to get there, you know. We heard a theme that we thought was cool, we heard something that had the momentum and the optimism that felt like big business and a melody that we liked and we said, how do we make something that gets a lot of people on board with it being both approachable and friendly and consumer and kind of ragtag, but still feels kind of interesting and big. But at the end of the day, the most important thing is the theme. The melody the melody the melody.

Now that Danni and her team had their melody — their sonic logo — they could start thinking about other things. Like what instruments would make the track sound most “human.” She went to a store in Midtown Manhattan that sold a bunch of vintage instruments. Quirky-sounding things, like a Clavinet... a Wurlitzer. And some others I didn’t expect to hear in an AT&T logo…

Danni: And I, I swear to God we recorded a bagpipe player. I'll show you that…

For AT&T?

Danni: Yes, there’s a bagpipe on there.

Is that an easter egg? It’s like, hidden in there somewhere?

Danni: Yeah [laughs]

Danni wanted the anthem to sound real. Real people on real instruments. This is not programmed perfection in a computer.

[SFX: Scene tape [MUSIC PLAYING]]

Danni: And it’s interesting, when I listen to this again, you can hear… every so often I can hear a piano chord that’s just a fraction late.

[SFX: Scene tape [MUSIC PLAYING]]

Is that on purpose?

Danni: Just because it’s played… Aaron is playing there…

Man Made

Danni: Yeah, exactly it was very man-made

How the anthem was recorded mattered too.

Danni: You can even hear like we must have recorded these instruments together. Can you hear kind of the drums in the background? Kind of the way records used to be made… you’re all in a room, playing together.

Finally, after weeks of writing, recording, and mixing, Danni and her team had AT&T’s anthem.

[SFX: AT&T anthem -- make sure it’s the original one]

And tying the whole thing together were four notes. The sonic logo.

[SFX: Archival from end of AT&T ad with the sonic logo]

It took 18 months for Man Made to finish the whole AT&T sonic brand. It’s become a case study for the company. Because in the end, variations on those four notes were used as ringtones, hold music, ad themes, even before the CEO got on stage at events. It was a whole system.

A big reason sonic branding works is because of repetition. The more you hear something, the more familiar it becomes, and the more you tend to like it.

And these sounds don’t take long to worm into our minds. One study played a jingle alongside a product just a couple of times. And the next time participants heard that sound, they instinctively started looking for that product.

So on our journey from songs to jingles to sonic brands, that’s the current science. But I called up Charles Spence because he’s working on what comes next.

Charles: I'm an experimental psychologist and a gastrophysicist working out of Oxford University. A psychologist interested in the senses and the application of brain science to the real world.

For a while now, he’s worked on the subtle sounds products make that you might not even realize are engineered to create emotion. Like with Axe deodorant.

Charles: We worked on the design of a new spraying sound so that it would be perceived as more efficacious.

So the design of the packaging is actually a sonic experience.

Charles: That's right. Whenever we interact with or use, open, or close anything, really, it makes a sound. It's always there in the background. Our brain picks it up and uses that to infer what's going on. What are we feeling, what's happening.

Like a car door… our brains interpret its sound as signaling something solid and high quality.

[SFX: High quality car door closing sound]

Or maybe tinny, and cheap.

[SFX: tinny, cheap car door closing sound]

But Charles is at the forefront of something even more complex. He’s studying how one sense can affect another. And how that might change how we experience a brand and its products. Like can a sound change the way something tastes?

Charles: To be able to bring out the sweetness or bitterness on the palate simply through the look of the video -- the shapes, the colors on the video -- and also the instrumentation of that specially designed track.

And so what you're saying is that as I drink this beer or drink this coffee, if I hear this specially designed sound, it actually, literally changes my sense of the taste, right?

Charles: Yes. Not always, not for everyone but for many people it just changes the taste and so I've just been back from two weeks getting around Europe. Sort of demonstrating this what we call sort of sonic seasoning. Giving people… my favorite one is giving people kind of sour, sour kid sweets.

Charles: And then we have some very sweet music, which is very tinkling and high-pitched, specially designed by a London design student…

[SFX: Sweet music]

Charles: And then we have the world's sourest music.

[SFX: sour music]

Charles: It's kind of mathematically transformed Argentinian tango… And while people are eating one and the same sweet-and-sour sweet, then as I change the music, you can sort of see their faces pucker up as I play the sour music.

Charles has collected lots of music that pairs with certain tastes. Like this one, he says, is spicy.

[SFX: Spicy music]

Charles worked with Starbucks on a piece of music that’d pair with instant coffee in the UK. He worked with Stella Artois and The Roots on this track that was supposed to go with the taste of the beer. It’s called “Sweet ‘til the Bitter End.”

[SFX: Stella Artois Roots music]

Charles: We've been working with a… in a chain of Belgian chocolate shops with a kind of completely mad, but brilliant chocolatier from Belgium in his chocolate shop with his amazing Belgian chocolates making his chocolates taste creamier with a kind of creamy track that's been specially created.

Or maybe, he says, sweet music could allow food companies to use less sugar. Charles says he can’t yet use music to turn water into wine, but he’s working on it.

A few years ago, I was in a hotel that had a signature smell. The shampoo smelled just like the lobby. And after talking with Charles, I can imagine a time soon when a brand has coordinated everything… the flavors, the scents, the sounds and music and colors… all to make you buy things and feel better about it.

Or maybe it’ll all just be ASMR.

Charles: These are autonomous sensory meridian responses -- the kind of tingle you get down the back of your neck -- and this kind of relaxing, pleasurable experience. Almost a feeling triggered by sound. And we can study the particular kinds of sounds, and it does seem to be certain sounds that work really well.

Charles: The sounds of whispering gently or rattling of paper. There are particular sort of sounds that trigger these ASMR responses and can we incorporate things like that into sonic logos and jingles in order to kind of broaden the array of what that sonic logo can do.

I don’t know if I’m ready for a world with whispered ASMR sonic logos that have been designed to make my drink taste sweeter in a bottle that has been engineered to sound like refreshment. Where everyone behind me knows what credit card I have because of the sound it made at the register. But I guess we’re pretty much there already aren’t we?

So in the meantime, let’s see if this ASMR thing works…

[whispers] Subscribe to Household Name wherever you get your podcasts.

That was weird.

[music in]

Twenty Thousand Hertz is produced out of the studios of Defacto Sound, a sound design team dedicated to making television, film, and games sound incredible. Find out more at Defacto Sound dot com.

This episode originally aired on Household Name, a podcast that tells the surprising stories behind the biggest household name brands. Go subscribe.

The episode was produced by Dan Bobkoff, with Sarah Wyman, Amy Pedulla, Jennifer Sigl, Gianna Palmer, John DeLore, Casey Holford, and Chris Bannon. Household Name is a production of Insider Audio.

Thanks to Curtis Perry and Marcus Mendes from Twitter for helping us name this episode. If you’d like to help us name future episodes, or want to tell us your favorite sonic logo, tell us on Facebook, Twitter, or by writing hi at 20 kay dot org.

Thanks for listening.

[music out]

Recent Episodes

Deaf Gain: The promise and controversy of cochlear implants

Original Art by Michael Zhang.

This episode was written and produced by Leila Battison.

The last few decades have seen amazing improvements in cochlear implant technology. Professor Michael Dorman reveals what they really sound like, and how they can help out with more than just our hearing. But should we be advocating cochlear implants at all? We chat with deaf graphic designer Brandon Edquist about why he chooses not to use his implant, and why the Deaf community is up in arms against them.

MUSIC FEATURED IN THIS EPISODE

A Better World Instrumental by CHPTRS
Maggie and Bernard by Steven Gutheinz
Lovers or Bruises Instrumental by Cubby
Drops by Sunshine Recorder
Greylock (with Kyle McEvoy) by Sunshine Recorder
Petite Suite: I. En Bateau by Sunshine Recorder
Tigran by Live Footage
Rubrik (with Blurstem) by Brique a Braq
Bokeh by Luke Atencio
Gaze by Chad Lawson
Reflects Dans l'Eau by SVVN
Lotus by Longlake

Twenty Thousand Hertz is produced out of the studios of Defacto Sound, and hosted by Dallas Taylor.

Follow Dallas on Instagram, TikTok, YouTube and LinkedIn.

Join our community on Reddit and follow us on Facebook.

Become a monthly contributor at 20k.org/donate.

If you know what this week's mystery sound is, tell us at mystery.20k.org.

To get your 20K referral link and earn rewards, visit 20k.org/refer.

Consolidate your credit card debt today and get an additional interest rate discount at lightstream.com/20k.

Get a 50% discount off your first purchase with the code “20K” at graphicaudio.net.

View Transcript ▶︎

[music in]

You're listening to Twenty Thousand Hertz... I'm Dallas Taylor.

Our hearing is one of our core senses, and it’s something most of us take for granted.

The laughter of a child [SFX], birdsong at dawn [SFX], or even a well-designed sonic icon can be a feast for our ears. But as much as it’s a joy to listen to the world around us, our hearing is also a protective mechanism. It works hard alongside our other senses to add context, and protect us from danger. [SFX: Tiger roar]

But a lot of people live without their hearing. Worldwide, about one in every thousand babies is born deaf. ...and right now, there are about 1 million people in the US who live with complete hearing loss.

Brandon Edquist is one of those people. He’s deaf now, but wasn’t born that way.

[music out]

Here he is through an interpreter.

Brandon: When I was two, I contracted meningitis. The illness infects the brain lining, and from that I lost my hearing.

[music in]

There are lots of ways that a person can lose their hearing. The biological machine for making sense out of sound waves is incredibly complex. And like all machines, the more complex something is, the more there is to go wrong.

Issues can range from a simple buildup of earwax all the way to a punctured eardrum.

Hearing loss can also be caused by problems in the hearing organ or the nerve that carries sound signals to the brain. They can be damaged by accidents and disease, but problems here can also be genetic, or a result of the natural aging process.

Of course, not all hearing loss is immediate, or total. But when it is, people like Brandon tend to rely more heavily on their other senses.

[music out]

Brandon: I have become more sensitive to what I see. I notice things a little more. Body language, I notice that and catch that a lot more.

But when it comes to interacting with others, it’s not always that straightforward.

Brandon: Some people I talk with, they accept and they understand, and some just walk away and get angry. I've gotten so used to it, so it doesn't bother me that much.

Brandon’s a graphic designer, which means that he can make a living while avoiding many of these awkward face-to-face interactions.

Brandon: I have to communicate with a person through email or texting. We all write so it makes it easy to communicate.

[music in]

Brandon’s experiences might seem extreme, but statistically they aren’t all that rare. And far more people live with moderate hearing loss, for whom even simple interactions can pose a daily challenge.

Michael: The most common complaint is the inability to function in group settings, in cafeterias when there's noise, in a party... any place where there's competing noise.

That’s Professor Michael Dorman from Arizona State University. Michael has worked with people affected by hearing impairments for the last 40 years.

Michael: If you go back far enough, you have acoustic horns, it was realized that if you created something that looks like a megaphone, and you yell into one end of it, and you put the other end up to your ear, it sounds louder.

These paved the way for the very first hearing aids.

Michael: Electronic devices have been around for a long time, ever since Edison, who himself was very deaf. They actually worked decently given the electronics, but they were very bulky and unwieldy.

[music out]

These days, hearing aids are commonly offered to people with moderate hearing loss. Most consist of a microphone to pick up signals from outside the ear. An amplifier then increases the volume of those signals, and a speaker plays that louder sound into the ear.

Over time, hearing aids have become smaller and more effective. Nowadays, they can even be nearly invisible, with some being placed entirely within the ear canal. You may never know if the person you’re talking to has a hearing impairment.

Michael: I had been working for about a decade with the standard hearing impaired listener. Frankly, I wasn't getting anywhere, and I thought that there had to be something better than this.

So Michael began working with a new, emerging technology. A mysterious innovation called a cochlear implant.

Michael: I remember the director of my laboratory told me “take on a good problem Michael, in life. That's what you want, a good problem.”

Michael: A good problem was a hard problem. I remember him telling me "Michael, cochlear implants are a good problem. Stay with it."

[music in]

Sometimes, hearing aids just can’t cut it. That’s where the cochlear implant comes in. These implants can handle extreme cases of hearing loss, and can even reverse total deafness.

The technology is a bit more involved, but like a hearing aid, it starts with a microphone outside the ear.

Michael: That microphone signal goes to a signal processing device about the size of the hearing aid case and then it is transmitted across the skin to a receiver that is surgically placed under the skin. The receiver then transforms the signal into a series of pulses. The pulses are directed to a set of electrodes, which the surgeon has slipped into the cochlea.

The cochlea is a hollow spiral tube in the inner ear. Normally, sound waves move through the fluid inside the cochlea, waving little hairs back and forth. It’s this movement that’s detected and sent as a signal to the brain.

But with a cochlear implant, the electrodes deliver the sound signal directly to the auditory nerve.

[music out]

Michael: The cochlea is very handy. It's laid out distance by frequency. We can think of the beginning of the cochlea, the high frequencies live there [SFX: High frequency sine wave], and towards the top of the spiral, the low frequencies live there [SFX: Low frequency sine wave]. If we can slip an electrode most of the way to the top of the cochlea, then we can reproduce sounds from high-frequencies, to mid-frequencies, to low-frequencies.

Now, the signal that the cochlear implant sends to the brain isn’t very high-resolution. It’s filtered into a small number of bands. But it turns out that’s all we really need. The brain manages to fill in the gaps.
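For readers who want to hear this for themselves, researchers often approximate that kind of few-band processing with what’s called a noise vocoder: split the audio into a handful of frequency bands, keep only each band’s loudness envelope, and use those envelopes to drive noise in the same bands. The sketch below is only that general idea; it is not the processing inside any real implant, and it is not necessarily how the demos in this episode were made. The band count, frequency edges, and filenames are assumptions for illustration.

```python
# A rough "noise vocoder" sketch: a common research trick for simulating
# what a few-band cochlear implant might sound like. Purely illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.io import wavfile

def bandpass(signal, low_hz, high_hz, rate):
    """Keep only the part of the signal between low_hz and high_hz."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=rate, output="sos")
    return sosfiltfilt(sos, signal)

def envelope(signal, rate, cutoff_hz=160.0):
    """The slow-moving loudness contour of a band (rectify, then low-pass)."""
    sos = butter(2, cutoff_hz, btype="lowpass", fs=rate, output="sos")
    return np.maximum(sosfiltfilt(sos, np.abs(signal)), 0.0)

def vocode(speech, rate, n_bands=6, lo=100.0, hi=7000.0):
    """Simulate n_bands-channel processing on a mono float signal."""
    edges = np.geomspace(lo, hi, n_bands + 1)      # log-spaced band edges
    noise = np.random.randn(len(speech))
    out = np.zeros(len(speech))
    for low, high in zip(edges[:-1], edges[1:]):
        env = envelope(bandpass(speech, low, high, rate), rate)
        out += env * bandpass(noise, low, high, rate)
    return out / (np.max(np.abs(out)) + 1e-9)      # normalize to avoid clipping

# Usage (hypothetical file): load a 16 kHz mono recording and listen back.
# rate, speech = wavfile.read("sentence.wav")
# simulated = vocode(speech.astype(float) / 32768.0, rate, n_bands=6)
# wavfile.write("sentence_6band.wav", rate, (simulated * 32767).astype(np.int16))
```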

Michael: When I tested my first implant patient with a very primitive cochlear implant, I asked him what it sounded like, and he said, "Meh, it sounds all right." I thought, "Well, that's interesting. It should sound awful."

In fact, it probably sounded something like this: [SFX: Early implant sound sample]

And here’s the natural version of that sound: [SFX: Early implant input sample: “The remarkable versatility of the human voice”]

And here’s the cochlear version again. [SFX: Early implant sound sample]

Luckily, since then, cochlear technology has gotten considerably better. ...and every year, more and more people benefit from the implants. Many of us will be familiar with them thanks to countless viral videos that document the moment they’re switched on.

[SFX: Switch on clip 1 start]

You hear my voice?

[Crying]

Aww

[Crying]

Hooray!

It’s hard to comprehend what it would be like to suddenly gain or regain a sense that simply wasn’t there before. But the sounds implant patients hear might not always be what they’re expecting.

[music in]

Michael: Even a very mild hearing loss, very early will over time lead to a reorganization of the brain. If you've had that mild to moderate hearing loss for years and years and years, by the time you'd get to qualify for a cochlear implant, we're putting that implant in a brain that is very differently wired than the wiring of a normal brain.

Our brains are remarkably changeable. If one part stops working, another will adapt to fulfill that function to the best of its ability.

Michael: The auditory cortex becomes reorganized. It responds to tactile stimulation and visual stimulation.

So after enough time without input, the part of the brain that normally deals with sound is repurposed to help out with touch and vision. In the brain at least, there might be some truth in the old saying that losing one sense will heighten the others!

But this rewiring isn’t good news for cochlear implant patients.

[music out]

Michael: By the time you put an implant in a congenitally deaf adult, you're implanting into a brain that is massively reorganized, and so it's not at all surprising that the results in terms of speech understanding are very, very, very poor. On the other hand, there are some adults who tell me that they've always wanted to hear. They just want to hear something, and they do hear with the cochlear implant.

The thing about our remarkably plastic brains is that they bend both ways. Once the auditory cortex starts receiving signals through the implant, it can begin to remember how to process them again. Which is good news for Michael’s patients.

Michael: You go from the complaint that I can't function in society because I can't hear to being able to hear and function in society, and go back to work.

[music in]

We know that cochlear implant technology has improved, and we know that the brain can adapt to make sense out of the signal the implant provides. But until recently we’ve not known what it actually sounds like.

Michael: There was no way to check of course. There was no objective measure.

About ten years ago, people with deafness in only one ear started to receive implants, and Michael saw an opportunity to try and match the sound in the implanted ear.

Michael: We could inject the signal into the implant, and then I could make up things for the normal hearing ear, and ask if any of them sounded like the implant. It could be like fitting glasses. And so we play a sound to the implant, we play a sound to the normal ear.

In this way, Michael was able to figure out what an implant really sounded like for many of his patients.

[music out]

Michael: The most common difference is that the implant sounds muffled to one degree or another. A very common report from patients is it sounds like you're talking from behind a door, or you have your hand in front of your mouth.

It might sound something like this: [SFX: Muffled sound sample: “The sun is finally shining”]

But it’s also common for the entire pitch of a sound to be shifted up.

Michael: If you remember the movie The Wizard of Oz, there are little characters called "Munchkins."

[SFX: Munchkin clip audio]

Michael: They used a professional voice actor to produce their lines and they recorded that actor at one speed. Then they played back the recording slightly faster. And what that does is increase the pitch, and moves the whole spectrum up a little [SFX]. That's the munchkin voice.

The same thing can happen with cochlear implants.
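To see why faster playback raises pitch: playing a recording at, say, 1.25 times its original speed compresses it in time, and every frequency in it comes out 1.25 times higher. Here’s a minimal sketch of that resampling trick (our own illustration, assuming Python with NumPy and SciPy; the function name is hypothetical).

from scipy.signal import resample

def speed_up(signal, factor=1.25):
    # Squeeze the recording onto fewer samples; played back at the original
    # sample rate it runs faster, and the whole spectrum shifts up by `factor`.
    n_out = int(round(len(signal) / factor))
    return resample(signal, n_out)

Applied to an ordinary voice recording, this produces roughly the Munchkin effect; a larger factor pushes the pitch up even further.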

[music in]

The last 30 years have seen incredible improvements in cochlear implant tech. Meanwhile, some studies suggest that living with untreated hearing loss can carry real dangers.

Michael: In quiet, individuals with hearing loss may be perfectly fine. Then as soon as you go to any noisy environment… [SFX: Noisy city] performance falls apart remarkably quickly. Functionally, they just stop going out. They don't interact with others and this brings us to the most recent findings of researchers that if you have a hearing loss, then the odds of developing something awful like Alzheimer's goes up.

Faced with the alternative, Michael hopes that more people will seek out cochlear implants in the future. As the tech improves, so too will the benefits to both quality of life and long term health.

But there are many in the Deaf community who take an entirely different view. These are people with hearing loss who choose to reject cochlear implants, regardless of how good they are.

All this time, we’ve been trying to cure deafness. But in asking whether we could, did anyone stop to think whether we should? Have we got it all backwards? We’ll discuss that after the break...

[music out]

[MIDROLL]

[music in]

In the last 30 years, improving cochlear implant technology has provided an almost miraculous cure for deafness. But there are some people who don’t see deafness as something that needs a cure. They say that it’s not a disability, and it doesn’t need fixing.

Here’s Brandon Edquist through his interpreter again.

[music out]

Brandon: At about three years old, I was given a cochlear implant.

Brandon: I remember going into the surgery room, I remember the mask and being put to sleep. After the surgery, there was some pain in my head. That's about all I remember.

The implant worked, allowing Brandon to hear sound once again. His parents hoped it would help him to live what they considered a normal life.

[music in]

Brandon: I used the cochlear implant as I went through my education. Many people explained that it would be like a mechanical sound, and it was.

Brandon: My parents really hoped that I would use it a lot, thought I would need to use it to become successful.

But the road to understanding speech was a rocky one, and Brandon worked closely with an audiologist throughout his schooling.

Brandon: The audiologist would sit behind me in a room and that person would talk and I'd try to hear the sound, what they were saying, through my cochlear.

Brandon: I'd go several times a week, but nothing of it really stuck.

Brandon didn’t enjoy using his cochlear implant, and when he was a kid, he made every excuse not to use it.

Rather than rely on the noisy, electronic signal through his implant, Brandon found easier ways to communicate with his friends.

Brandon: When I was in Gen Ed school, my classmates were hearing, but they seemed to understand about my deafness. We would communicate through gestures. They really didn't know any sign, so we used gestures.

[music out]

In 7th grade, he moved to a specialist school for the deaf.

Brandon: When I got to the school for the deaf everything changed. I very rarely had used the cochlear, I had it on, but I used sign. It was my choice to stop using it.

But Brandon wasn’t alone in rejecting his implant. His was just one voice in the growing dissent within the wider Deaf community.

Brandon: That was during a time when the idea of a cochlear implant in the deaf community was not popular. Most of the deaf were rebellious about it.

[music in]

Michael: Early in my career, the radical deaf culture individuals were very active. I remember a meeting in England where they actually chained the doors of our conference hall together, so we couldn't go in to have a conference about cochlear implants.

The message these activists were trying to get across was that cochlear implants are trying to fix something that doesn’t NEED to be fixed. To them, trying to cure deafness was an affront to the deaf identity itself.

It might seem like a bit of an overreaction, but it’s born of real oppression.

Back in the 1880s, the inventor of the telephone, Alexander Graham Bell, had some seriously controversial views on people who were deaf and chose to remain mute. He claimed that they would choose to intermarry, leading to what he called a defective race. He went as far as to say that deaf-mute intermarriages should be forbidden. They never were, of course, but it’s not hard to see why the deaf community felt so threatened, then and now.

[music out]

In reality, many individuals with hearing loss don’t consider themselves in need of fixing. They just belong to a different culture, and like any other culture, Deaf culture has its roots in shared experiences, a common language, and a mutual understanding of what it’s like to live in a soundless world.

It can be a powerful thing to belong to such a community. But in Brandon’s experience, it can sometimes be too closed off.

[music in]

Brandon: The deaf people really are protective of their community. It's a small community. They are very careful about who joins them and who does not join them.

Brandon: There are some who are more open who are willing to accept others and are willing to teach the language and teach the culture. It varies.

Even though he chose not to use the cochlear implant he got when he was very young, Brandon can still find it hard to navigate the deaf community.

Brandon: The deaf community is part of self-identity. I feel part of that, and sometimes it is hard to fit into that. I do identify deaf, but because of my experience in mainstream, sometimes I don't feel I fit in.

The deaf identity is such an important part of the deaf community. So it makes sense that the growing popularity of cochlear implants seemed to threaten that close-knit group by threatening the deaf identity itself.

[music out]

[music in]

Today’s cochlear implants can restore a sense of hearing to the deaf, helping many of them integrate into the hearing community. But in choosing to live with his deafness, Brandon finds that relatively few adjustments are needed for him to live the life he wants.

Brandon: My parents use sign. My friends are deaf and we use sign. My hearing friends, we are still able to communicate very well through texting, through our phones, so it's been no problem.

But in terms of general accessibility, there’s still some way to go.

Brandon: I know many deaf who wish that sign language was used more and taught more in the school systems so that hearing people can learn more and that way the deaf community can be more part of the community.

Brandon: There's a big inequality of jobs and lack of jobs, lack of employment for the deaf community. Yes. They have the skills. They have the knowledge, but sometimes, the disability may not be clear enough, so it can be hard for deaf people, and hearing people usually try to come up with an excuse to not hire a deaf or find some other way to communicate. There's always an excuse.

Even today, cochlear implants are a touchy subject. There’s been a resurgence of people speaking out in support of deaf culture and the deaf identity. But ultimately, the choice will always come down to the individual.

On one hand, Michael believes that implants can improve quality of life, and urges people to seek out the surgery.

Michael: You don't do it for yourself so much as the ones around you. It will help your family just as much as it'll help yourself.

But on the other hand, if given the opportunity to wave a magic wand and gain the ability to hear, would Brandon choose not to be deaf anymore?

Brandon: No. No. No. Laughing, no.

[music out]

[music in]

Twenty Thousand Hertz is produced out of the studios of Defacto Sound, a sound design team dedicated to making the world sound amazing. Find out more at defactosound.com.

This episode was written and produced by Leila Battison and me, Dallas Taylor, with help from Sam Schneble. It was sound edited by Soren Begin, and sound designed and mixed by Colin DeVarney.

Thanks to our guests, Professor Michael Dorman, and Brandon Edquist.

Michael is a Professor Emeritus at Arizona State University. He continues to research speech understanding with cochlear implants, and hopes someday to see implants that can reproduce sound perfectly.

Brandon is a graphic designer. You can check out his work on his website, at brandonedquist.com.

You’ll have noticed by now that this season, we’re sometimes asking people what their favorite sound is. This episode is a little different, so I asked Brandon what his favorite sensation is.

Brandon: Visual. I like watching TVs and movies, and feeling the vibration with all the action, that's my favorite feeling, sensation, of all.

All of the music in this episode was from our friends at Musicbed. Check them out at musicbed.com.

A special thanks to Esparanza Garibay for naming this episode. Esparanza chimed in on a request for show titles over on Facebook and suggested we use the title Deaf Gain. She’s deaf, and said that Deaf Gain is a pretty common phrase in the deaf community. An example of when they might use it would be in a super noisy environment, like a party. They’ll turn off their hearing aid or cochlear implant and sign “deaf gain” to each other. They might also sign the phrase when they’re able to talk to each other across rooms. Stuff that us hearing people just can’t do. It kinda gives them a super cool superpower.

Everyone on Facebook, including myself, fell in love with the phrase, because it’s also the opposite of the term Hearing Loss. Hearing Loss, Deaf Gain. It completely changes the framing of deafness. Anyway, that’s one of the many reasons you should follow us on social: to find incredible stories like that one, which pop up spontaneously. You can find us on Twitter or Facebook by simply searching for Twenty Thousand Hertz. And when you’re there, be sure to say hi.

Thanks for listening.

[music out]
