Sep 11, 2012

Waiting On Sound

Photo: "microphone" by Brandon Buck (brandonthebuck) on 500px.com

Once again, Stu Maschwitz has written an excellent blog post about something I'm sure is far more common than the industry would like to admit: production sound has not evolved the way motion picture imaging has.

He argues that setting up his sound gear is still a mystery, and that fixing things in post isn't any easier. He points out that the iPhone is a perfectly capable tool for assisting in sound production: every actor and crew member he knows has one, and its interface makes it immediately clear whether you've set things up right. I agree with everything he says, and judging by the comments, a lot of others do too.

Digital sound was ahead of digital picture by more than a decade in major motion pictures (by my calculation, Dick Tracy (1990) was the first film to have "a completely digital sound track," whereas Star Wars: Episode II - Attack of the Clones (2002) was the first major motion picture shot entirely on digital video in an effort to emulate regular film). And digital sound was adopted into theatres very quickly because, compared to picture, sound files are far smaller, and past a certain threshold the difference in quality is hard to discern (i.e., how many people can tell the difference between a 2 MB compressed MP3 and a 20 MB uncompressed WAV file?).

Jump forward a decade, and fewer movies are being captured on film, traded in for the newest flavor of digital camera. DSLRs capable of capturing 1080p HD video have entered the hands of the public by the millions; yes, they are very flawed, but compare the quality coming out of them with the quality coming out of a typical $2,000 camera ten years ago, and there's no contest. Sound may have had its shit together, and was only as good as how you treated it, but I'm surprised it's essentially the same.

When I was in school, High Dynamic Range images were getting really popular and prevalent, even though I thought they looked like mud. But because microphones are traditionally mono, and a cable splitter is cheap to buy, I wondered if it would make sense to set one input to, say, -3 dB and the other to -12 dB and then combine them to make HDR Audio. That way a person's vocals wouldn't clip when they shouted, and we wouldn't lose them when they spoke softly. I still don't know if this is possible, nor have I heard of any system capable of working with files like this.
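
As a rough sketch of what combining the two takes could look like (assuming the splitter gives you two perfectly synchronized mono recordings as float samples in [-1, 1], that numpy is available, and that the function and parameter names here are made up for illustration):

    import numpy as np

    def merge_dual_gain(hot, safe, gain_offset_db=9.0, clip_threshold=0.98):
        """Merge two synchronized recordings of the same mic signal.

        hot  -- captured at the higher input gain (e.g. -3 dB); clips on shouts
        safe -- captured at the lower input gain (e.g. -12 dB); noisier but unclipped
        """
        # Scale the quiet take up so the two tracks sit at the same level.
        gain = 10 ** (gain_offset_db / 20.0)   # a 9 dB difference is roughly 2.8x
        safe_matched = safe * gain

        # Flag samples where the hot take is at or near full scale.
        clipped = np.abs(hot) >= clip_threshold

        # Prefer the hot take (better signal-to-noise), and fall back to the
        # scaled-up safe take wherever the hot take ran out of headroom.
        merged = np.where(clipped, safe_matched, hot)
        return np.clip(merged, -1.0, 1.0)

A real tool would crossfade around the clipped regions instead of hard-switching sample by sample, but the hard switch keeps the idea visible.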

Stu mentions a few things he'd like to see, like a visual display of whether the mic has been placed correctly. I've heard of "sweet spots" in rooms, where the shape and materials of the room and the placement of the speakers create a single spot of perfect sound reproduction; JBL's MS-2 seems to do something like this to figure out how to best set up its playback. And then there are noise-canceling headphones that play back sound waves to counter-balance what they hear outside. Can live mics do this? When we record "room tone," can the mics simply counter-balance it, so they only pick up our characters' dialog? Or could we do this in post? Has Bluetooth evolved enough to be a viable mic? Can lav transmitters record locally and overwrite the distortions when we're dealing with frequency static? When I have two interview subjects, mic'd separately, can I tell one mic to cancel out the other so I don't get a soft echo from each person? The list goes on and on.
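
For the room-tone question, at least the "do it in post" version, the usual trick is spectral subtraction: learn the room's steady spectrum from the room-tone recording and pull it back out of the dialog. Here's a rough sketch, assuming a mono dialog track and a separate room-tone recording as numpy arrays at the same sample rate, with scipy on hand; the function name and parameters are made up for illustration:

    import numpy as np
    from scipy.signal import stft, istft

    def subtract_room_tone(dialog, room_tone, fs, nperseg=1024, floor=0.05):
        """Crude spectral subtraction of a room's steady background noise."""
        # Average magnitude spectrum of the room tone: the room's "fingerprint".
        _, _, tone_spec = stft(room_tone, fs, nperseg=nperseg)
        noise_profile = np.abs(tone_spec).mean(axis=1, keepdims=True)

        # STFT of the dialog; keep its phase, work on its magnitude.
        _, _, dialog_spec = stft(dialog, fs, nperseg=nperseg)
        magnitude = np.abs(dialog_spec)
        phase = np.angle(dialog_spec)

        # Subtract the noise profile, never dipping below a small spectral
        # floor (a hard zero tends to create watery "musical noise").
        cleaned = np.maximum(magnitude - noise_profile, floor * magnitude)

        # Rebuild the waveform from the cleaned magnitude and original phase.
        _, out = istft(cleaned * np.exp(1j * phase), fs, nperseg=nperseg)
        return out

Dedicated noise-reduction tools smooth that estimate over time and across frequency bands and sound far better than this, but the underlying idea is similar.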

And maybe these tools do exist, but why aren't I seeing them at my pro video retailer? Stu's got a point. Sound doesn't seem to have been on the innovation highway imaging's been on for the past decade. Or at least it certainly isn't any easier to understand.
