Scott Martin Gershin on Sound Design and Movies

Scott Martin Gershin

Sound Designer & Mixer

Nightcrawler, Book of Life, Pacific Rim, Riddick

Scott Martin Gershin has worked as a Sound Designer and Sound Supervisor on some of the most popular films of recent years: Star Trek, Hellboy II: The Golden Army, American Beauty, Shrek, The Chronicles of Riddick, Blade II, Underworld: Evolution, Team America, and many more. He has also worked extensively as a designer and mixer on interactive successes such as Gears of War 2, Fable II and III, Final Fantasy XIV, and Lost Planet: Colonies. In this interview, Scott shares some of his insights into the art of Sound Design for motion pictures and games.

How did you get started in Sound Design?

I guess through music. I went to Berklee College of Music, and then made my way to California, to get into the record business. I used to do a lot of live sound and lighting design over at Disneyland while going to college— I did that for 5 years, then I worked my way into the recording studio system; I did a stint at Cherokee Studios and other places around town. I actually did two things in those days—I worked on both sides of the glass as a recording engineer as well as a synth programmer for several session players. One day I realized that instead of making French Horn patches on a Jupiter 8 or DX7, it was more fun to create lasers and exotic sounds. I think I was bitten by the bug when I was a kid after seeing Star Wars and Apocalypse Now.

What would you say to someone who wants to become a Sound Designer?

One of the things you need to do is learn the technology; you’ve got to learn your “instrument” and the jargon of the industry you want to work in. Like a technically proficient musician, you’ve got to learn your tools. But just because you know how to use your tools doesn’t mean you have anything to say with them.

If you want to be in post production, Pro Tools is the best thing to learn, and Nuendo is not a bad thing to learn as well. If you want to get into gaming, I would recommend learning Wwise. You’ve got to learn to listen, to hear all the sounds around you. You need to build an audio vocabulary of the sounds of life and fantasy. You need to listen to all styles and genres of movies, like a musician understanding different types of musical compositions. Then what happens is, you’ll be playing with your Pro Tools one day, you’ll put a flanger or a vocoder on something, and all of a sudden you’ll go, “Hey, I know that sound!” Listen to the old “Chop Suey” movies, listen to the different tentpole movies of each year, listen to those films or games that have defined a specific era.

So, what’s your main instrument as a Sound Designer?

I’ve used so many DAWs over the course of my career, but right now it’s Pro Tools. I have an amazing room at Soundelux. It is a 30’ x 22’ room with a tall ceiling. It has a full screen projection system with three sets of 5.1 monitoring, for the different industries I work in. I am surrounded by computers, monitors, and outboard gear controlled by a D-Command and an assortment of keyboards and controllers. My main DAW is a Pro Tools HD-3 with the option of adding two more HD-3s as I need them when I mix. For location recording, I have numerous recorders and a list of mics that I have collected over my career that, combined with Soundelux’s collection, offers me a lot of options.

How do you go about manufacturing sounds for sci-fi movies?

I will go out and start recording sounds that I think I’ll need as elements. I’ll work with Foley artists, use a wide array of synths, and/or start recording my voice through processing to add to the collection. I’ll spend days searching through the existing library till I create a goody bag of sounds that I can draw upon, whether it’s using NASCAR car-bys as sweeteners for weapons, or manipulating sounds from the Foley stage to create something waaaay different, or adding detail to a designed element, such as body armor or the wings of a creature. On one movie, we wired the vehicle’s spark plugs to switches so we could turn different numbers of plugs on and off, changing the character of the engine and creating cool fire explosions out of the tail pipe. You can go to your local hardware store and find odd tubes and such to vocalize or blow air through. You’re only limited by your imagination.

Then it’s off to the studio to manipulate, combine, and generally mangle the elements you’ve collected… Because the CG you’re designing to usually comes at the very end of the project, you want to have your ducks in a row and know your library so you can move fast; that, plus some serious late nights.

So you take samples of sounds from the real world and manipulate them?

Absolutely. An example of something fun: when you go on an airplane, the airplane toilets have a great ferocity to them, very aggressive; it’s kind of a cool sound. With a little pitch and time manipulation you can use that sample to create a giant sinkhole or a giant wave; it could even be an element for a laser. Or you could grab an electric golf cart, record it at a high sample rate doing cart-bys, then take three or four versions of them, pitch them differently, manipulate them with pitch bending and Dopplers, and it can become a flying aircraft. You can grab a tiger roar or an elephant scream and use that as part of an explosion or part of a weapon. Sprinkle in a bit of Doppler or MondoMod and you’ve got a masterpiece. There’s almost an infinite number of ways to manipulate sound. That’s the fun.
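In code terms, the pitch and Doppler tricks Scott describes come down to resampling. The sketch below is a minimal illustration in Python with NumPy, not anything from his actual sessions: the “golf cart” here is just a synthetic tone, varispeed is the classic tape-speed trick (pitch and length change together), and the Doppler bend simply sweeps the playback ratio across the clip.

```python
import numpy as np

SR = 48000  # sample rate of the hypothetical recording

def varispeed(x, ratio):
    """Resample-style pitch shift: ratio < 1.0 pitches down (and lengthens),
    ratio > 1.0 pitches up (and shortens) -- the tape-speed trick."""
    n_out = int(len(x) / ratio)
    idx = np.arange(n_out) * ratio
    return np.interp(idx, np.arange(len(x)), x)

def doppler_bend(x, start_ratio, end_ratio):
    """Sweep the playback ratio over the clip to fake a Doppler-style pass-by."""
    # integrate the time-varying playback speed to get read positions
    ratios = np.linspace(start_ratio, end_ratio, len(x))
    pos = np.cumsum(ratios)
    pos = pos[pos < len(x) - 1]
    return np.interp(pos, np.arange(len(x)), x)

# one second of a toy "engine" tone standing in for a golf-cart recording
t = np.arange(SR) / SR
cart = np.sin(2 * np.pi * 110 * t)

aircraft = varispeed(cart, 0.5)        # an octave down, twice as long
passby = doppler_bend(cart, 1.2, 0.8)  # pitch falls as the "craft" passes
```

Real Doppler plugins also model level, panning, and air absorption; sweeping the read speed only captures the pitch glide.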

There are many schools of thought on what constitutes Sound Design. Some people just create cool sounds, and that’s an important part of it. But for me it’s about the story; I like to think of myself as an audio storyteller. While I try to approach each film with a fresh sonic landscape and a unique signature, I don’t mind choosing an older sound if that’s what’s called for, because I am more interested in how the scene is going to play. Are we hitting the beats of the scene or story? Are we using our audio talents to effectively push the emotional buttons of our audience? I’ve just got to make sure that we are effectively helping to tell the story to the audience; that’s really the main goal.

Outside of our sound community, it’s not about how big the sound is, how loud it is, or how cool it is, because the moment we draw attention to ourselves, we’re breaking the illusion. So when you see something like Hellboy 2, you’ve just got to believe that that world is real, that those people exist, that the creatures exist. And if you can do that, then all of a sudden the audience walks away going, “What a great bit of entertainment that was.” Then I know I’ve done my job correctly.

You often compare music and sound design techniques. Can you talk about the similarities?

The parallels between music and sound effects techniques are really close. For instance, compare a kick drum or a snare with an explosion or a gunshot. They have lots of similarities, so when recording them, you might grab some of your drum mics like a D112, or an RE20, or you’ll grab a 57 or a 421, as well as an array of mics used for overheads (distant gunfire). After you have your recordings in the studio, you might investigate bass guitar techniques and call up an 1176-style compressor plugin or a multiband compressor. You might want to do some form of peak transient enhancement or use multiple compression techniques. I get my inspiration from some of the many talented engineers and synthesists in music.
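As a rough illustration of the drum-style dynamics treatment described above, here is a bare-bones feed-forward compressor in Python with NumPy. It is a sketch of the textbook envelope-follower topology, not a model of an 1176 or any other unit, and all parameter values are invented for the example.

```python
import numpy as np

def compress(x, sr, threshold_db=-20.0, ratio=4.0, attack_ms=1.0, release_ms=80.0):
    """Feed-forward compressor: a fast attack grabs the hit, a slower
    release lets the tail breathe, much like squashing a snare or a gunshot."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    gain = np.ones(len(x))
    for i, s in enumerate(x):
        level = abs(s)
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level   # envelope follower
        level_db = 20.0 * np.log10(max(env, 1e-9))
        if level_db > threshold_db:
            over = level_db - threshold_db
            gain[i] = 10.0 ** (-over * (1.0 - 1.0 / ratio) / 20.0)
    return x * gain

# a synthetic "gunshot": a decaying burst of noise
sr = 48000
rng = np.random.default_rng(0)
shot = rng.standard_normal(sr // 4) * np.exp(-np.linspace(0.0, 8.0, sr // 4))
squashed = compress(shot, sr)
```

Even with a 1 ms attack, the first few samples of the transient slip through before the envelope catches up, which is one reason transient shaping and multiple compression stages get layered in practice.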

How heavy is the processing aspect when you’re Sound Designing?

Sometimes I use lots of plugins for pitch manipulation, ambient convolutions, delays, Dopplers, and different types of modulation. For some sounds, I use a bunch of different EQs. I’ve become so used to having access to many characters of EQ that I’ve gained a good vocabulary of the different colors each EQ can impose on a sound. I can hear significant differences between a Neve, an SSL, an API, a Ren, or a Q10 compared to a linear phase EQ. And everything has a different use. Sometimes I’ll use an EQ to fix a sound, taking out an offending frequency, or another to add the right amount of fat. Other times I use an EQ to create something new, to mangle a sound. Often I’ll use multiple types of EQs on a specific sound to make it sound like something different from what it originally was.
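The flavors differ, but under the hood most digital EQ bands share the same math. As a neutral reference point (not a model of any of the EQs named above), here is a single peaking band in Python, following the widely used Robert Bristow-Johnson “Audio EQ Cookbook” formulas; negative gain notches out an offending frequency, positive gain adds fat.

```python
import numpy as np

def peaking_eq(x, sr, f0, gain_db, q=1.0):
    """One parametric-EQ band (RBJ cookbook peaking biquad)."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2.0 * q)
    # biquad coefficients, normalized by a0
    a0 = 1.0 + alpha / A
    b = np.array([1.0 + alpha * A, -2.0 * np.cos(w0), 1.0 - alpha * A]) / a0
    a = np.array([1.0, -2.0 * np.cos(w0) / a0, (1.0 - alpha / A) / a0])
    y = np.zeros_like(x)
    x1 = x2 = y1 = y2 = 0.0
    for i, s in enumerate(x):
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        y[i] = out
        x2, x1 = x1, s
        y2, y1 = y1, out
    return y

# cut 12 dB at 1 kHz: a 1 kHz tone should come out roughly a quarter as loud
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)
notched = peaking_eq(tone, sr, f0=1000, gain_db=-12.0, q=2.0)
```

The “color” differences Scott hears come from everything around this core: analog-modeled saturation, phase behavior, filter topologies, and how bands interact.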

You mentioned you use your voice as a Sound Design tool. Tell us more about that.

Yes. I used my voice to create the sounds of many characters and inanimate objects, such as the animals in Tarzan, especially Kerchak, Tarzan’s gorilla father; Godzilla and the babies; the reapers in Blade II; as well as a bunch of the characters in Hellboy 2. I’ve voiced a lot of Disney characters: I’m the voice of Flubber and the voice of Herbie.

For Herbie: Fully Loaded, I needed to bring emotion to a Volkswagen Beetle, so I manipulated my voice with plugins to make it sound like the engine of the car. I built my session so that at different moments Herbie was moving back and forth between my voice and recordings of the actual car, for which Disney automotive retro-fitted the engines for us to achieve some very wild sounds. The best compliment I got for Herbie was, “Oh, I didn’t know Herbie had a voice.” And I’d go, “Let me ask you this: did you find him cute?” And they went, “Oh yeah, he was really cute.” Bingo! If the audience had heard or recognized “my voice” as human, then I would have failed: the man seen behind the curtain. My job is to make inanimate objects have a personality, to bring them to life; that’s kind of one of my trademarks.

I’ve heard you do some sound effects with your voice. I was blown away by how realistic it sounded.

That’s the goal. I’ve also used guitars to create different sounds such as a slo-mo motorcycle, eerie tones or spaceships. I used my guitars to create the aircraft in The Chronicles of Riddick. Most of the spaceships were me playing guitar. I used an Eventide, playing the wheel, and lots of plugins like Doppler and MondoMod to give the sound the feeling of movement.

What does it mean to do Sound Design for a film like American Beauty?

It’s like playing jazz. It’s about negative space—not always what you put in, but what you choose to leave out.

When a film is captured on set, the job of the location sound mixer is to get the cleanest dialog he can. What this means is, there’s not a lot of sound effects or backgrounds being recorded.

In American Beauty, take something as simple as the family eating at the table: there were no sounds other than their dialog in a very ambient room. At that time, IR-type reverbs didn’t exist; Lexicon 480s were pretty much it, and it is hard to get that early-reflection roomy sound out of the Lexicon, so I had to find ways of creating that ambient element in my Foley and FX to match the dialog. Foley and ADR stages are really dead, quiet, and almost a little sterile sounding, so I had to find bizarre places to put microphones while the Foley walkers or walla group were performing, to better match dialog and production. The goal was that you would never know that it was “Foley’d or ADR’d”, that it was manufactured. It’s important to make sure that the audience doesn’t get “taken out” of the movie, distracted from the illusion.

Any other stories on how you matched ambiences?

When I did The Doors, we needed to do a sing-along and make it fit into the crowd in the auditorium. Unfortunately, they didn’t record it when they were shooting that scene. So I got some fraternities and sororities together from some of the local universities, and I threw a beer bash at one of the scoring stages. Everyone got a little drunk, and we started having people cheer and sing along with the band, as well as do call-outs like “Jim, I love you” etc. They weren’t actors, they were just college kids; it was real, and I wanted to get that honesty. Then when you hear it in the movie, you never, ever think it’s a bunch of people on a scoring stage after the fact; it feels like it’s in the stadium. We recorded it in an environment with a Decca Tree and multiple mics, so you heard the depth of field. You can’t get that with reverb, or at least couldn’t at the time. So again, it’s just trying to create the best illusions you can.

So back then it was about recording creatively, whereas now it’s more about processing creatively?

We get a little less time on many projects now, so time is always a factor. I’m still a huge advocate of going out and recording; I don’t think anything can replace it. The problem is, when you go out and record, you’re not always guaranteed to get great stuff; it’s hit or miss. Did you get the right mics? Did you hit it right? Sometimes you only get one or two takes. Did you overload your preamps or not? And did you capture the right sound at the right time?

Then, when you take it back you need to enhance it. You need to sometimes get rid of the noise. I’m a huge WNS fan for that. Then you need to make it pop a little bit, maybe do a little EQing, maybe pitch it down a little bit because it just feels bigger and cooler that way, maybe add reverbs to it. Then, let’s say, if it’s not even “real” you can reverse it or you can vocode it. Or just add in a little something to make it sound a little unique or a little tonal.
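That enhance-and-mangle pass can be sketched as a simple chain. In this toy Python version, a crude hard gate stands in for a dedicated denoiser like WNS, the pitch drop is plain resampling, and the reverse is a single slice; the input is a synthetic noisy tone, not a real recording.

```python
import numpy as np

def gate(x, floor=0.02):
    """Crude noise gate: zero anything under the floor. A real broadband
    denoiser is far smarter; this just stands in for the cleanup step."""
    y = x.copy()
    y[np.abs(y) < floor] = 0.0
    return y

def pitch_down(x, semitones):
    """Tape-style pitch drop: lower and longer, i.e. 'bigger and cooler'."""
    ratio = 2.0 ** (-semitones / 12.0)
    idx = np.arange(int(len(x) / ratio)) * ratio
    return np.interp(idx, np.arange(len(x)), x)

def mangle(x):
    # clean it up, drop it five semitones, then reverse it
    # for the "not even real" version
    return gate(pitch_down(x, 5))[::-1]

# a noisy test tone standing in for a location recording
rng = np.random.default_rng(1)
raw = np.sin(2 * np.pi * np.arange(4000) * 0.05) + 0.01 * rng.standard_normal(4000)
out = mangle(raw)
```

The order matters: gating after the pitch drop means the gate works on the sound you will actually hear, and reversing last keeps the tail-becomes-attack effect intact.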

How does Sound Design relate to dialog and music during mixing?

Many Sound Designers are thinking in the realm of sound effects. They’re thinking about one sound or a group of sounds: how that sound will work, will cut through, will relate to the other sound effects in the scene. But that is only a third of the puzzle. The three parts of the puzzle are dialog, music, and effects. So a lot of times I’ll design something a very specific way, and then when I go to the mix I’ll finally get to hear the music. At that point, we have to make choices: Is it best to feature the music, or portions of the music? Is it better to feature the design? There are many times when, even though I spent a lot of time on the design, I will play it lower than I initially intended, and sometimes even get rid of it, because I feel that at that specific moment the music is carrying the emotion most effectively. And then there’s the dialog, which is always king and has to be easily understood; it is key to storytelling and connecting with the characters. There are times when dialog needs to help keep the energy of the scene alive but doesn’t need to be as articulate, such as a battle scene where you are experiencing the agony of the characters, but the emotion is coming from the music and the reality from the sound effects. A mix is a weaving of different sounds at any given moment.

How has the Sound Designer’s job changed over the last 15 years?

For one, we’ve got more horsepower. We’re getting access to equipment that either was too expensive or you just couldn’t get, or get enough of, whether it’s modular synthesizers or vintage outboard gear.

I feel very fortunate that I’ve got a very respectable collection of plugins. What I like is that it’s enabled me to build a vocabulary of what things sound like. Even if I try a plugin and I don’t like it, I now understand why I don’t like it, or why I do like it, or why I like things for different circumstances, and that’s important. There was one plugin that I bought because everyone was recommending it, and I tried it on many of my sound effects. I didn’t particularly like the sound of it, so even though I bought it, I stopped using it. Then one day I decided to try it again out of guilt—I hate buying gear that I later realize I won’t use—but instead of sound effects, I used it on music, and I went, “Wow!” I liked it a lot for music, for things that have tone, but for non-tonal Sound Design, not so much. It taught me to experiment with all my plugins and see how they, or combinations of them, will react to a given sound.

I’ve got Eventides and Lexicons and all this outboard gear that I’ve collected, but now I can do a lot inside the box. Plugins let me automate everything, which allows me to come back to designs and mixes that I haven’t touched in a long time and update them; it’s fast, it’s easy. So while I still enjoy a lot of my vintage outboard gear, I have to admit I do a lot of stuff inside the box, because I feel the technology is getting to the point where plugins can create the sounds that I’m used to and that I want to create.

As a driving force behind the creation of the Waves Sound Design Suite, together with Charles Deenen, what makes it powerful in your opinion?

These are the basic building blocks that all Sound Designers should have. A lot of times, when you bought previous Waves bundles, you also ended up with plugins that were kind of interesting but not very applicable, because they weren’t really focused on the needs of Sound Designers. What I’m excited about is that now I can just go to a place where I’m going to mix, or any other facility or rental company, and say, “Give me this Pro Tools system and make sure it has the Sound Design Suite on it.” And that’s all I need to say, because I know the Sound Design Suite is going to give me the essential building blocks.

Here’s what Scott has to say about some of the plugins in the Sound Design Suite:

C4 Multiband Compressor – I use C4 for so many things, to help shape sounds dynamically. I like to use it as a broad-stroke threshold-based EQ, to contain 3kHz to 5kHz transients while expanding low end frequencies, or to control the area around 400Hz to clear up a design when summing numerous elements.

Doppler – It adds a feeling of movement to a design. Whether it's an incoming scream or the dialogue of a passing ghost, I use it to create a sense of motion or speed.

GTR3 – It’s the secret sauce. Using guitar stomp boxes as part of your Sound Design adds another level to the possibilities.

L1 Ultramaximizer – I like the edge it creates and the way it cuts through a mix. It adds a bite to anything it’s used on; it’s a real workhorse.

L2 Ultramaximizer – It is fantastic at containing dynamics without adding any color to the sound. A little goes a long way.

L3 Ultramaximizer – Another plug-in to make it come alive, to add “immediacy” to a sound, to add that special “something.”

Linear Phase EQ – After recording on location, I like to use the LinEQ to master my recordings before I use them in the editing process. It allows me to clean up the sound surgically, without adding any color, just getting rid of those frequencies that I know I won't need later.

LoAir – When I need to rock the room & take the audience on an e-ticket ride, I turn to LoAir. It’s the power plug-in.

MondoMod / Enigma / MetaFlanger / SuperTap / Doubler – These plug-ins make up what I like to call my "Audio Mangler Gang of Toys." I use all of them because of their ability to modulate an element or sound. For me, they really are the heart of any Sound Design.

Morphoder – It is great for Vocoder-type effects. It's quick and easy to use: When I want to add an element that needs to combine tightly with another sound, adding just a hint of something, it works great.

Renaissance Bass – It adds “body & punch” to a sound. I use it to beef up sounds without depending on the LFE channel.

SoundShifter – A true essential part of the Sound Designer’s tool box. It sounds great, and I like that it has a feature that focuses on maintaining transients, or punch, or keeping it smooth, and it works in real-time, allowing automated pitch bending.

TransX – This plug-in is huge for giving that “pop” or punch to a sound, to make it stand out in a mix. Great when combined with RenBass.

UltraPitch – It’s all about the formants, giving new life, expression, and personality to creature design.
