
IATSE Local 695

Production Sound, Video Engineers & Studio Projectionists


Features

As Productions Go Online

by James Delhauer

The evolution of communication technology since the turn of the century has revolutionized the way that filmmakers approach their craft. A short twenty years ago, productions made nightly phone calls and distributed paper call sheets each day to ensure that cast and crew were aware of the correct location and call time for each day’s work. Widespread access to personal email accounts rendered this manual process obsolete and saw it replaced with mass mailing lists and digital attachments. This is just one example that scratches the surface of how sending files over the internet can make production workflows simpler and more efficient. As we move toward a more globalized world of film production, the ability to communicate via the web has become an integral part of day-to-day life. More and more assets can be shared instantaneously, saving countless hours and the cost of constantly transporting physical media back and forth. The most recent developments in file transfer protocol technology allow entire productions to be uploaded to the internet and sent to multiple destinations across the globe in real time.

A file transfer protocol, or FTP, is simply a network protocol used to transfer files between a computer client and a server. On a small scale, a similar client-to-server exchange takes place every time an email attachment moves from a device onto an email provider’s server and then on to the recipient’s device. These transfers have become a common, albeit nearly invisible, part of the daily routine in production. More and more offices are adopting browser-based FTP services like Google Drive, Dropbox, and Amazon Cloud Storage in order to make sharing and communication channels uniform across the team. In cases such as these, the user need only enter the address of the FTP server into their web browser in order to access data that has been stored there by another member of the team. Username, password, and sharing credentials are often added as a measure of security.
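
To make that client/server exchange concrete, here is a minimal sketch of a scripted upload using Python’s standard ftplib module. The server address, credentials, and file name are placeholders invented for illustration; a real production pipeline would more likely run through a managed client such as Media Shuttle or Aspera.

```python
from ftplib import FTP_TLS  # FTP over TLS, from the Python standard library

# Placeholder server and credentials -- purely illustrative values.
HOST = "ftp.example-post-house.com"
USER = "dailies_upload"
PASSWORD = "change-me"

def upload_file(local_path: str, remote_name: str) -> None:
    """Connect, secure the data channel, and push one file to the server."""
    with FTP_TLS(HOST) as ftp:
        ftp.login(user=USER, passwd=PASSWORD)
        ftp.prot_p()  # encrypt the data connection, not just the login
        with open(local_path, "rb") as fh:
            ftp.storbinary(f"STOR {remote_name}", fh)

if __name__ == "__main__":
    upload_file("A001_C003_0722.r3d", "A001_C003_0722.r3d")
```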

Unfortunately, commonly known platforms such as these have their drawbacks. Most web browsers, such as Google Chrome, Mozilla Firefox, Microsoft Edge, and Apple Safari, are not optimized for large or automated transfer tasks. Similarly, most consumer computers are not outfitted for transfer speeds beyond one gigabit per second. These services can also become quite costly, as cloud-based storage fees add up and slow upload speeds make the time commitment impractical. Moreover, FTP clients that utilize third-party servers present a security risk. If a production were to place all of its assets on a non-private server, those files would be vulnerable to theft should anyone obtain the correct login credentials. There is also the remote but still present threat that the server’s provider (Amazon, Google, etc.) may suffer some sort of catastrophe, resulting in data loss.

Recent developments are removing these limiting factors, and large-scale digital delivery is becoming more commonplace. Ten-gigabit internet connection pipelines have become more prevalent and cost-effective with time, which has in turn made ten-gig connectivity on consumer machines, such as Apple’s Mac Mini and the recently announced Mac Pro, far more common. This allows larger amounts of data to be uploaded to the cloud and then sent to network servers. There are also more FTP workflows built around dedicated client software, eliminating the inherent flaws of browser-based FTPs. Private servers and network-attached storage devices are more prevalent, meaning that production companies can build or purchase their own server solutions, which eliminates the vulnerabilities of third-party storage options.

Fortunately, these improvements are occurring at an ideal time within our industry.

As productions have left the traditional filmmaking hubs of Los Angeles and New York in favor of exotic locations around the globe, production activities have become decentralized. Rich tax incentives encourage productions to shoot in new hubs that may not be where the footage is ultimately processed and undergoes post production. A project may shoot in Atlanta, undergo general editing in New York, and farm out visual effects, color grading, and post-production sound to companies based anywhere else in the world. In complicated workflows such as these, every entity involved with the project needs access to the digital assets. Transporting physical drives can be costly and time-consuming, but simply uploading the entirety of the project’s assets to a server where any relevant party can download the information is an ideal solution. This allows post-production teams to begin processing footage almost immediately, regardless of whether the shoot is occurring a few blocks away or on another continent. The expediency of this process also allows for near real-time feedback from the editors and dailies, which can eliminate the need for costly reshoots later in the production.

“For Local 695 video engineers…
this emerging technology presents an opportunity.”

For Local 695 video engineers, whose responsibilities include media playback, off-camera recording, copying files from camera media to external storage devices, backup and redundancy creation, transcoding, quality control, syncing and recording copies for the purpose of dailies creation, this emerging technology presents an opportunity. Those who become well versed in its finer points will be ideally suited to jobs looking to take advantage of improved online distribution. While the need for traditional media managers who offload cards to hard drives is not going to disappear in the foreseeable future, a new breed of FTP media managers will become necessary going forward. Media managers who possess a basic understanding of network-attached storage (such as Avid’s Nexis or Pronology’s rNAS m.3) and common FTP client programs (such as Signiant’s Media Shuttle or Aspera’s Client) are ideally suited to take on these new roles as they become more prevalent.

In an ideal setting, a media manager is able to ingest/offload multiple streams of 4K media, perform transcodes if necessary, and then drop completed footage into what is called a “watch folder.” From there, the FTP client can parse the folder for any file containing a specific string of characters, such as the “.r3d” file extension of a Red camera or the identifying label of a particular camera. When it finds files that meet its criteria, the FTP client picks up each file and begins copying it to its web-based server. When the file has finished copying to the cloud, the client software moves the original to a “transferred” folder in order to avoid sending the file multiple times. Signiant’s Media Shuttle is even capable of replicating folder structures, meaning that all of the organization and offline-to-online directories created on set by the media manager are retained when they are sent to post production.
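
A rough sketch of that watch-folder logic in Python, standing in for what a commercial client such as Media Shuttle does internally (this is not Signiant’s actual API): scan the folder for files matching a pattern, hand each match to an upload routine, then move the original into a “transferred” folder so nothing is sent twice. The folder paths and the upload_file helper are assumptions for illustration.

```python
import shutil
import time
from pathlib import Path

WATCH_DIR = Path("/Volumes/RAID/watch")        # where finished clips are dropped
DONE_DIR = Path("/Volumes/RAID/transferred")   # originals are parked here after upload
PATTERN = "*.r3d"                              # the matching criterion -- RED files in this example

def scan_and_send(upload) -> None:
    """One pass over the watch folder: upload every matching file, then move it aside."""
    DONE_DIR.mkdir(parents=True, exist_ok=True)
    for clip in sorted(WATCH_DIR.glob(PATTERN)):
        upload(str(clip), clip.name)           # e.g., the FTP helper sketched earlier
        shutil.move(str(clip), str(DONE_DIR / clip.name))

if __name__ == "__main__":
    from ftp_upload import upload_file         # hypothetical module holding the earlier upload sketch
    while True:
        scan_and_send(upload_file)
        time.sleep(30)                         # re-scan every thirty seconds
```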

This sort of workflow has already been successfully carried out on shows such as MTV’s Wild ’N Out, MTV’s Video Music Awards, TED Conferences, and NBC red carpet shows. Wild ’N Out, in particular, is noteworthy for sending multiple episodes’ worth of content from Atlanta to New York on a daily basis during the production of its recent fourteenth season. In total, the show’s video engineers successfully transmitted fourteen cameras’ worth of video, amounting to more than twenty-five terabytes of data.

Nonetheless, this technology is not without its limits. In April of 2019, a team of astronomers successfully captured the first photograph of a black hole—a revolutionary feat that required more than five petabytes (or five thousand terabytes) of data to accomplish. Unfortunately, limitations in bandwidth and the cost of online storage still meant that it was faster and more cost-effective to mail the physical drives back and forth across the globe than it would have been to upload the data to any online server. So for the time being, if a production has a few million gigabytes of data that it needs to send out, it may be more efficient to physically transport the drives.
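
A quick back-of-the-envelope calculation shows why the drives went in the mail. Assuming an uninterrupted, fully saturated ten-gigabit connection (an optimistic figure), moving five petabytes takes on the order of a month and a half:

```python
data_bytes = 5 * 10**15                 # five petabytes, expressed in bytes
link_bits_per_second = 10 * 10**9       # an assumed sustained 10 Gb/s pipe

seconds = data_bytes * 8 / link_bits_per_second
print(f"{seconds / 86_400:.0f} days")   # roughly 46 days of continuous transfer
```

Storage fees aside, that kind of timeline makes a crate of drives on an airplane the faster option.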

Even limitations such as these are temporary at best. Just as the nightly phone call was replaced by a single mass email, today’s internet limitations will also disappear. Ten years ago, uploading even a terabyte of data to the cloud was a monumental task. As our industry continues to evolve and demand more efficient workflows for bulkier and higher resolution files, FTP clients will rise to the occasion. Ten years from now, it is likely that the threshold for transmitting a petabyte’s worth of data will be crossed. When that time comes, 695 video engineers should be prepared to jump at the opportunity for the new work created by this ever-evolving technology.


Using an Exoskeleton in Real-World Settings

by Ken Strain and Bryan Cahill

As Bryan Cahill wrote in the 2019 spring edition of Production Sound & Video, Brandon Frees, Ekso Bionics VP of Sales, provided him with a loaner EksoVest, thus allowing working Microphone Boom Operators to give it a test drive in real-world conditions for a week or two at a time. As part of that trial, Ken Strain and Corey Woods were able to use the vest on the Apple TV+ series Mythic Quest. Bryan then brought the vest to Mike Anderson on Goliath.

In this article, Ken, Corey and Mike share their thoughts about using the vest on actual shoots.

Ken Strain: I’m six feet, five inches tall, so I tend to hold my arms over my head less than shorter boom ops, as I favor keeping the boom at shoulder level and resting my elbow on my hip for long takes. On this new comedy, however, there are many resets and pickups of alt lines, and we get into very long takes. I often find myself having to hold the boom overhead for 15 to 30 minutes in order to get around practical lights and reflections. So I jumped at the chance to give this vest a try to see if it would help.

This vest was not made for our industry. It is designed for the automotive industry where the line workers raise and lower their arms with tools thousands of times per day, and wear the vest all day long. It’s built for that type of intense daily usage, with very strong joints and bearings. The autoworkers work in their own stations, with clear space all around them, so the elbows (link assemblies) that stick out in the back are not an issue like they are for us on cramped film sets.

The vest can be set up for various body types, and I’m sharing it with my Second Boom, Corey Woods, who is around five feet, eight inches tall. He tends to hold his arms overhead more often than I do, and he has shoulder issues, which makes it more difficult for him to boom. I do my best to keep him off the stick, but we are an ensemble comedy, so he has to work on many slots.

The vest has four spring options, and we are using the strongest spring, labeled “4,” which gives about fifteen pounds of upward assistance once your elbow reaches a horizontal height. When at rest with arms by your side, the system is not engaged. The buckles and straps that attach around the bicep part of the body are comfortable and easy to use. The angle of assistance can be tweaked as well, to provide the max assistance at a higher engagement point if you happen to reach higher normally.

Ours is set up neutral, so its “shelf” of support is right at shoulder level. I only engage the spring in the leading arm and keep the trailing arm turned off, so that my arm and the vest itself act as the counterweight, and it works great this way. It is very easy to lean against the back of the vest with your arms overhead, holding and guiding the boom pole. It looks like work, but instead of the forces going into the shoulders, they are routed down to the hip belt. So all you do is guide the pole, and the nasty heaviness of it is painlessly transferred to your hips. It’s a great feeling when the situation is right.

I tried it on several different types of shots. The first was a crane shot that was just wildly shooting around the room. That was just to get the feel of it, as it wasn’t a major dialog scene. This is where I figured out the max length of boom pole extension before it overwhelms the spring. For my particular setup, that was an 18-foot K-Tek boom pole with internal coiled cable and a CMIT mic on a PSC mount, extended one half section short of full extension. My setup is already well balanced; if you are using a plug-on transmitter on the mic end, you’ll have less extension for sure. If you were to use this outside with a zeppelin, then you will only have assistance, not complete support, unless you’re on a relatively short pole.
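
For readers who like numbers, a rough static estimate (my own illustration, not anything from Ekso Bionics) shows why extension length is the limiting factor. If w_p is the pole’s weight, w_m the weight of the mic, mount, and cable at the tip, and L the extended length, the moment that has to be held at the supporting end is approximately

\[
M \;\approx\; w_p\,\frac{L}{2} \;+\; w_m\,L
\]

Both terms grow in direct proportion to L, while the spring’s roughly fifteen pounds of assistance stays fixed, so past a certain extension the vest can only assist rather than fully support the pole.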

“Walking backward while booming is a very dynamic situation, and having the assistance change while I was sort of bouncing around behind camera made for an awkward feel.” –Ken Strain

The next shot was a fairly straightforward Steadicam walk and talk through our bullpen, and that did not work so well for me. Walking backward while booming is a very dynamic situation, and having the assistance change while I was sort of bouncing around behind camera made for an awkward feel, with the spring engaging and disengaging unnaturally. Also, the way the vest fits snug around the hips makes it feel as if it is not designed for backpedaling. I didn’t feel free or comfortable. When I mentioned this to Bryan Cahill, he suggested I might try less assistance or lighter springs, which I’ll try if I get another opportunity. In general, Steadicam walk and talks don’t tend to be ridiculously long with endless resets, as everyone seems to be aware of the Steadicam Op’s fatigue. And I think you run a higher risk of hitting something with this vest on a walk and talk, which would be a total party foul.

My next opportunity was exactly what I envisioned the vest to be for: a three-page dialog scene among six actors in a conference room. I had to keep the boom working over a long LED practical that hangs over the table. There was no other way to pull this off without having my arms overhead. I also had to stay mobile because the three cameras were doing moves on the dance floor around me and my actor had a back-and-forth move that went deep. Since I couldn’t use a ladder, this was the perfect scenario for the vest.

I had ample space behind me to allow for the link assemblies that stick out from the vest. It worked like a charm. I had total mobility like normal, yet my arm was completely supported at the elbow. I did notice, over the duration of the take, that the support would start to sag a little, meaning I was right at the limit of full support for this spring. I could have used less pole and experienced less sag over time. Basically, the spring needs to be stronger. Autoworkers don’t need a super strong spring the way we do to get a continuous, solid shelf of support, so this is one limitation that needs to be overcome. EksoVest is prototyping a stronger, number “5” spring, which could solve the problem.

The other limitation is the most obvious one, and that is the space it takes up behind the back. You need to place yourself carefully on the set to avoid hitting anything or anyone. It alters your decision making about where to place yourself, and even when walking around the set, you need to be conscious of your space. It definitely makes you feel like Robocop, and that’s exactly what people on set were calling the vest (it does draw an enormous amount of attention and curious questions). The articulating link assemblies limit you to working only in areas that have at least an extra eight inches free behind you. This would be a deal killer for someone who needed to use the vest all of the time.

One advantage of this design over the Shoulder X vest, which has a much trimmer back profile, is that this system of articulating link assemblies behind the back does not interfere with your arms and shoulders when you find yourself raising your arms straight up. There is nothing above your shoulders, unlike the Shoulder X, which has a frame that hinges a few inches above the shoulder; that is how they get around the problem of maintaining a tight profile. When I tried that vest on at our union meeting, it felt great until I raised my arms straight up and basically contacted the metal frame, and it felt like I was now fighting it. Other than THAT, it seems really well designed. As Boom Operators, we have some pretty specific and unique requirements, and exoskeleton manufacturers are going to have their hands full designing something that works with our range of motion and in the tight quarters in which we find ourselves. I’m sure it will be possible.

The EksoVest feels good and very high quality. The new stronger spring will be a welcome addition. The stronger the better, but not so strong that we can’t bring our arm back down!

Corey Woods: I concur with everything that Ken wrote. I would only add that the level of adjustment is quick, and putting the suit on when needed can take as little as thirty seconds. We don’t always need the vest, but when we do, it might be at the last minute. That is a real consideration for Boom Operators who find it hard to leave the set for fear of missing a rehearsal, lighting change, etc.

Mike Anderson: Recently, I had the opportunity to try one of the exoskeletons being tested for use by microphone Boom Operators. The EksoVest exoskeleton system was loaned to me by Bryan Cahill for a ten-day stretch during production of the third season of Goliath, the Amazon series. Once Bryan gave me a quick run-through of how the vest works and we got it fitted, I put it to the test.

I found it had a surprisingly comfortable fit coupled with a very good range of motion. The vest has interchangeable springs that increase or reduce its lifting power. I started with the heaviest load lift, thinking, “Hey, why not?” Once I really started to get used to it, I found myself almost fighting it.

When I switched the springs to a lower lift tension, I found the lower tension made all the difference and the system was extremely helpful. I suggest everyone booming give it a try. It can’t hurt, literally! I want to thank Bryan for all of the work he has done trying to find ways for us Fishpole Boom Operators to minimize the abuse we do to our bodies on a daily basis. If you think I’m wrong, you are kidding yourself. Maybe we can make these devices a standard on set. After all, when was the last set you were on where the camera operators didn’t have an Easy Rig on standby?


71st EMMY Award Nominations for Sound Production

Nominations for the 71st Primetime Emmy Awards were announced Tuesday, July 16, 2019. The awards show will be held on Sunday, September 22, at the Microsoft Theater in Los Angeles.

Local 695 congratulates all the nominees!

Outstanding Sound Mixing for a Comedy or Drama Series (Half-Hour) and Animation

Barry  
“ronny/lily”
Nominees:
Elmo Ponsdomenech CAS, Jason “Frenchie” Gaya, Aaron Hasson,
Benjamin Patrick CAS
Production Sound Team:
Jacques Pienaar, Corey Woods, Kraig Kishi, Scott Harber, Christopher Walmer, Erik Altstadt, Srdjan Popovic, Dan Lipe

Modern Family
“A Year of Birthdays”    
 
Nominees:
Dean Okrand CAS, Brian R. Harman CAS, Stephen Alan Tibbo CAS
Production Sound Team:
Srdjan Popovic, William Munroe, Daniel Lipe

Russian Doll
“The Way Out” 
  
Nominees:
Lewis Goldstein, Phil Rosati
Production Sound Team:
Chris Fondulas, Bret Scheinfeld,
Patricia Brolsma

The Kominsky Method
“An Actor Avoids”

Nominees: Yuri Reese, Bill Smith,
Michael Hoffman CAS
Production Sound Team:
Rob Cunningham, Tim Song Jones, Jennifer Winslow, Sara Glaser

Veep
“Veep”  

Nominees: John W. Cook II, William Freesh, William MacPherson CAS
Production Sound Team:
Doug Shamburger, Michael Nicastro, Rob Cunningham, Glenn Berkovitz, Matt Taylor

Outstanding Sound Mixing for a Comedy or Drama Series (One Hour)

Better Call Saul
“Talk”

Nominees:
Larry Benjamin, Kevin Valentine, Phillip W. Palmer
Production Sound Team:
Mitchell Gebhard, Steven Willer

Game of Thrones
“The Long Night”

Nominees: Onnalee Blank CAS, Mathew Waters CAS, Simon Kerr, Danny Crowley, Ronan Hill
Production Sound Team: Guillaume Beauron, Andrew McNeill, Jonathan Riddell, Sean O’Toole, Andrew McArthur, Joe Furness

Ozark  
“The Badger” 
 
Nominees:
Larry Benjamin, Kevin Valentine, Felipe “Flip” Borrero, Dave Torres
Production Sound Team:
Jared Watt, Akira Fukasawa

The Handmaid’s Tale  
“Holly”

Nominees: Lou Solakofski, Joe Morrow, Sylvain Arseneault
Production Sound Team:
Michael Kearns, Erik Southey,
Joseph Siracusa

The Marvelous Mrs. Maisel  
“Vote for Kennedy, Vote for Kennedy”

Nominees: Ron Bochar CAS, Mathew Price CAS, David Bolton, George A. Lara  
Production Sound Team:
Carmine Picarello, Spyros Poulos, Egor Pachenko, Tammy Douglas

Outstanding Sound Mixing for a Limited Series or Movie

Chernobyl
“1:23:45”

Nominees:
Stuart Hilliker, Vincent Piponnier
Production Sound Team:
Nicolas Fejoz, Margaux Peyre

Deadwood
Nominees:
John W. Cook II, William Freesh, Geoffrey Patterson CAS
Production Sound Team:
Jeffrey A. Humphreys, Chris Cooper

Fosse/Verdon
“All I Care About Is Love”  

Nominees: Joseph White Jr. CAS, Tony Volante, Robert Johanson, Derik Lee
Production Sound Team:
Jason Benjamin, Timothy R. Boyce Jr., Derek Pacuk

True Detective
“The Great War and Modern Memory”

Nominees:
Tateum Kohut, Greg Orloff, Geoffrey Patterson CAS, Biff Dawes
Production Sound Team:
Jeffrey A. Humphreys, Chris Cooper

When They See Us
“Part Four”

Nominees: Joe DeAngelis,
Chris Carpenter, Jan McLaughlin
Production Sound Team:
Brendan J. O’Brien, Joe Origlieri,
Matthew Manson

Outstanding Sound Mixing for Nonfiction Programming
(Single- or Multi-Camera)

Anthony Bourdain: Parts Unknown  
“Kenya”

Nominee:
Brian Bracken

Free Solo
Nominees:
Tom Fleischman CAS, Ric Schnupp, Tyson Lozensky, Jim Hurst

FYRE: The Greatest Party That Never Happened  
Nominee:
Tom Paul

Leaving Neverland
Nominees:
Matt Skilton, Marguerite Gaudin

Our Planet
“One Planet”

Nominee:
Graham Wild

Outstanding Sound Mixing for A Variety Series or Special

Aretha: A Grammy Celebration for the Queen of Soul
Nominees: Paul Wittman, Josh Morton, Paul Sandweiss, Kristian Pedregon, Christian Schrader, Michael Parker, Patrick Baltzell
Production Sound Team: Juan Pablo Velasco, Ric Teller, Tom Banghart, Michael Faustino, Ray Lindsey

Carpool Karaoke
“When Corden Met McCartney
Live From Liverpool”

Nominee: Conner Moore
Production Sound Team:
Renato Ferrari, Adam Fletcher, Mick Haydock

Last Week Tonight With John Oliver
“Authoritarianism”

Nominees: Steve Watson, Charlie Jones,
Max Perez, Steve Lettie

The 61st Grammy Awards
Nominees: Thomas Holmes, Mikael Stuart, John Harris, Erik Schilling, Ron Reaves, Thomas Pesa, Michael Parker, Eric Johnston, Pablo Mungia,
Juan Pablo Velasco, 
Josh Morton, Paul Sandweiss, Kristian Pedregon, Bob LaMasney
Production Sound Team: Michael Abbott, Rick Bramlette, Jeff Peterson, Andrew Fletcher, Barry Warwick, Andres Arango, Jason Sears, Billy McKarge, Peter Gary, Doug Mountain, Robert Wartenbee, Brian Vibberts, Brian T. Flanzbaum, Jimmy Goldsmith, Steven Anderson, Craig Rovello, Bill Kappelman, Kirk Donovan, Peter San Filipo, Ric Teller, Michael Faustino, Mike Cruz, Phil Valdivia, Damon Andres, Eddie McKarge, Paul Chapman, Alex Hoyo, Bruce Arledge, Hugh Healy, Steve Vaughn, Corey Dodd, Michael Hahn, Roderick Sigmon, Christopher Nakamura, John Arenas, Niles Buckner, Trevor Arenas, Bob Milligan

The Oscars
Nominees: Paul Sandweiss, Tommy Vicari, Pablo Mungia, Kristian Pedregon, Patrick Baltzell, Michael Parker,  Christian Schrader, Jonn Perez, Tom Pesa, Mark Repp, Biff Dawes


BAFTA Television 2019

Winners of the British Academy Television Craft Awards, May 12, 2019

Sound: Fiction

Winner: Killing Eve
Sound Mixer: Steve Phillips
First Assistant Sound: John Lewis Aschenbrenner
Second Assistant Sound: Jack Woods

A Very English Scandal
Sound Mixer: Alistair Crocker
Boom Operator: Jo Vale
Second Assistant Sound: Dave Thacker
Second Assistant Sound: Emma Chilton

The Little Drummer Girl
Sound Mixer: Martin Beresford
First Assistant Sound: Lee James
Second Assistant Sound: Julian Bale
Sound Trainee: Rob Scown

Bodyguard
Sound Mixer: Simon Farmer
Boom Operator: Andrew Jones
Assistant Sound: Craig Conybeare
Sound Trainee: Ross McGowan


Sound: Factual

Winner: Later … Live with Jools Holland
Sound Supervisor/Mixer: Mike Felton
Assistant Sound Supervisor/Dubbing Editor: Tudor Davies
Floor Crew: Ian Turner
Floor Crew: Chris Healey
Floor Crew: Carol Clay
FOH Mixer: Gafyn Owen

Michael Palin in North Korea
Sound Mixer: Doug Dreger
Dubbing Mixer: Rowan Jennings

Classic Albums: Amy Winehouse Back to Black
Sound Recordist: Steve Onopa
Dubbing Mixer: Kate Davis

Dynasties: Chimpanzee
V.O. Recordist: James O’Brien
Sound Editors: Tim Owens, Kate Hopkins
Dubbing Mixer: Graham Wild

Names in bold are Local 695 members

The Way We Were: Sound Mixing Equipment (Part 4)

by Scott D. Smith CAS

Overview

Just as past centuries have served to mark advances in technology, the years since 2000 have seen a decided shift in the approach taken toward film sound recording. Analog tape is gone, along with the traditional concept of mixers and recorders functioning as separate devices. Gone as well are mixers with a signal path and control section that actually carry audio, replaced instead by DSP technology. Similar to larger consoles used in music recording, broadcast, and live sound, portable film mixers now function primarily as control surfaces, with the actual audio section being part of a separate I/O rack, or in the recorder itself. And with the proliferation of AES, TDIF, and DANTE interfaces, in many cases there is very little analog audio to be found at all.

The advantages of this approach for film sound recording are significant. No longer do consoles require a dedicated channel strip, with numerous controls and associated components, for signal processing functions such as EQ, filtering, and limiting. All signal routing (buss assigns, solo, panning) is similarly handled by DSP. All that is required of each input is a chip set that allows these various functions to be controlled by an external signal tied to a primary data buss. And since it isn’t always necessary to have all of those controls individually accessible for every input channel, space requirements (as well as cost) can be reduced by using a common set of controls to address the individual channels via a selector switch. While some sound mixers prefer the dedicated controls that are the hallmark of analog mixers, it can’t be denied that the DSP approach provides a range of features that would be difficult to implement in a compact footprint. Additionally, the ability to save primary settings is a huge plus when changing setups.
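
As a loose, manufacturer-agnostic illustration of that architecture (not any actual console’s control protocol), the Python sketch below models each input as a set of DSP-held parameters and a single shared bank of controls that is pointed at whichever channel the selector has called up.

```python
from dataclasses import dataclass, field

@dataclass
class ChannelDSP:
    """Per-input parameters that live in the DSP engine, not in an analog channel strip."""
    gain_db: float = 0.0
    eq_low_db: float = 0.0
    hpf_hz: float = 0.0
    pan: float = 0.0
    bus_assign: set = field(default_factory=set)

class ControlSurface:
    """One shared set of controls addressing many channels via a selector switch."""

    def __init__(self, num_inputs: int):
        self.channels = [ChannelDSP() for _ in range(num_inputs)]
        self.selected = 0  # the channel the shared controls currently address

    def select(self, channel: int) -> None:
        self.selected = channel

    def set_parameter(self, name: str, value) -> None:
        # In hardware, this would be a message on the primary data buss to the DSP engine.
        setattr(self.channels[self.selected], name, value)

# Example: adjust input 3's EQ and bus assignment without a dedicated channel strip.
surface = ControlSurface(num_inputs=16)
surface.select(3)
surface.set_parameter("eq_low_db", -2.0)
surface.set_parameter("bus_assign", {"mix_L", "mix_R"})
```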

As noted in our previous installment, one of the first portable mixers for film use that adopted this approach was the Zaxcom Cameo, introduced in late 1999. Since that time, there has been a steady stream of developments by Zaxcom, Sound Devices, Sonosax, Allen & Heath, Zoom, and other manufacturers, all of which take a similar approach when it comes to treating the mixer as an adjunct control device to the recorder.

Here is a look at what is currently on the market, and some thoughts about where it might be headed:

Aaton Cantaress

Aaton Cantaress mixing surface. Note inline meters located above channel strips. (Courtesy Aaton)


Conceived by Jean Pierre Beauviala, the Aaton Cantaress is a 12-input mixer designed to work in conjunction with the Aaton Cantar X3 and Cantar Mini recorders. In a departure from the approach used on many other mixer surfaces, the Cantaress employs magnetic faders, which help to keep dirt and debris from the fader mechanism and resistive element. Additionally, the connection to the recorders is handled over an Ethernet connection as opposed to a USB port. Similar to the Zaxcom Mix-16, it also sports dedicated LED metering for each input channel, which provides the user with a ready display of levels. Power (12-17 VDC) is provided separately via an XLR-4 connector.

Sound Devices CL-9

Sound Devices CL-9. Now discontinued, this was the first mixing surface introduced by Sound Devices.


After the Zaxcom Cameo, one of the next entries in the realm of mixing surfaces was the (now discontinued) CL-9, introduced by Sound Devices and designed as a dedicated mixing surface for the 788T series recorders. Connected to the 788 via a USB cable, the CL-9 included many features found on traditional analog mixers, including 100mm linear faders, dedicated gain trim controls, two aux sends, parametric EQ, and two bi-directional boom comm channels. Power was provided via the USB port on the recorder, and the EQ functions mimicked the EQ features already included on the 788.

Zaxcom Mix-16

Zaxcom Mix-16, a 16-channel mixing surface intended to interface with the Deva 16 recorder.


Introduced in 2018, the Zaxcom Mix-16 is a 16-channel mixing surface intended to interface with the Deva 16 recorder. With linear faders, dedicated input channel metering, buss assignment, PFL and gain trim controls, the Mix-16 provides many features found on traditional analog mixers, but also has some capabilities that can’t be found on analog consoles, such as individual input channel delay, grouping functions, plus the ability to control the primary functions of the Deva 16 recorder. Additionally, provision is made to control the gain of Zaxcom radio mic systems via the Zaxnet control interface, enabling the mixer to instantaneously control transmitter gains for the corresponding inputs. Power for the mixer is supplied by a separate power source of 8-18 VDC.

Sound Devices CL-6

Sound Devices CL-6. While technically not a mixing surface, this add-on unit provided expansion capabilities for the 664 and 668 series recorders.


While not really intended as a fully featured mixing surface, the CL-6 functions as an expansion device for the 664 and 668 series recorders, adding input and control capabilities for inputs seven through twelve. Equipped with rotary faders, it allows the user to control input gain with dedicated controls, as opposed to the small trim knobs on the recorder.

Sound Devices CL-12

Sound Devices CL-12. This full-function mixing surface provides additional capabilities for the 6 series recorders.


Conceptually similar to the Sound Devices CL-9, the CL-12 mixer is intended as an adjunct to the Sound Devices 6 series recorders, and likewise functions primarily as a control surface. The mixer provides independent fader control for twelve inputs, giving the user control of level, input gain, phase, HP filters, and (when used with the 688 recorder) 3-band equalization. It also provides control of recorder functions, along with monitoring and talkback communication functions for two boom operators. When used with a 688 recorder equipped with the SL-6 SuperSlot wireless receiver option, it is also possible to control the features of the wireless receivers. While the mixer interfaces with all the 6 series recorders via the USB port, power needs to be supplied externally for 633 and 664 series recorders.

Sonosax SX-ST

Sonosax SX-ST mixer/recorder. A unique combination of an analog mixer with digital recorder.


Although the Sonosax SX-ST is technically not a mixing surface, the inclusion of an onboard recorder as part of an analog mixer is a unique approach to integrated mixer/recorder products. The recording system can be provided as part of the standard SX-ST mixer line, and can be ordered with eight to twelve inputs. The SX-ST mixer utilizes eight program mix busses, which can be assigned to both the recorder and the analog or digital outputs. The input channels are equipped with separate limiters, 3-band equalization, and direct outs (channel inserts are optional). It can be powered via internal D batteries, or an external 10.5 VDC to 20 VDC power source. Prized by many production mixers for its sonic quality, the Sonosax is the only mixer/recorder combo to provide an analog mixer with a digital recorder.

Zoom LiveTrack L-12 Mixer/Recorder

Zoom LiveTrack L-12 mixer/recorder. Aimed at the music recording market, the L-12 is a 12-input mixer with 24-bit 96 kHz recording capability.


While the Zoom LiveTrack L-12 would not typically be found on a film set (due to lack of timecode, Scene/Take/Note metadata functions, and other limitations), I am including it here as an example of how far this technology has come. Sporting twelve tracks, twelve inputs, and 24-bit 96 kHz recording capability, along with basic equalization, at $600 street price, it certainly is something to be reckoned with. Thirty years ago, one would have needed at least a one-inch 8-track recorder and separate mixer to duplicate the basic functions that this unit provides (of course, mic preamps and other features may likely not be up to the same standards one would expect). Still, it’s hard to ignore what can be accomplished at this price point.  

The Future

So, in the span of about six decades, we have gone from analog mixing consoles equipped with large rotary faders, requiring significant amounts of power (and with very few features), down to small digital control surfaces that have virtually no analog signal path, whose primary function is to serve as an outboard control for the recorder. While the concept of a standalone mixer has not completely disappeared from the landscape (especially for larger channel counts), it is clear that when it comes to portable systems destined for location shooting of film and TV productions, integrated mixer/recorder systems will continue to be the primary path of development.

With the ability now to integrate wireless mic systems into the control path, we will undoubtedly see an expansion of the features related to RF systems (for example, the ability to load a full set of channel presets for both transmitters and receivers, and have real-time monitoring of the system as part of the control surface). As the trend toward miniaturization continues, the primary limitation will likely be that of the human interface, not technology itself. A sound mixer still has only ten fingers and two feet, so until the “direct brain interface” comes along, where one can control a mixer by thought, we are probably close to the limit as to how many channels can be packed into a given footprint. But I could be wrong…

©2019 by Scott D. Smith CAS, all rights reserved.


HOW I GOT MY GODLIKE REPUTATION PART 1

(A tutorial for those without half a century in the business, and a few with)

by Jim Tanenbaum CAS

There is more than one way to record good production sound, but there are millions of ways not to. Over the years, many fine production mixers have written articles about their guiding philosophies and recording methods.

After rereading mixer Bruce Bisenz’s story in the 695 Quarterly Winter 2015 Issue, now Production Sound & Video, I finally decided to add my 2 dB’s worth. Many good production mixers have elements of their modus operandi in common, and others that are unique to the particular individual. So do I.

Whatever I’m recording: dialog, effects, music, ambience, wild lines—I consider them all just noise. Different kinds of noise to be sure, but when all is said and done, just noise. When I started out back in the late ’60s, I thought my job was to record these noises as accurately as I could so that their playback would sound exactly like the original. Do you remember: “Is it real or is it Memorex?”

I soon learned that that wasn’t such a good idea.

WHAT I HAVE IN COMMON WITH OTHER GOOD MIXERS is (or should be) obvious:

EQUIPMENT

Think about what kind of equipment you’re going to buy when setting up your cart. First, rent one of each possibility and play with them for a week or so. Which unit feels “right”?
In the good old (analog) days, there was basically only one recorder (Nagra), just a few mix panels (Cooper, SELA, Sonosax, Stellavox), and a few radio mikes (Audio Limited, Micron, Swintek, Vega). They were simple, and similar enough that if I got a last-minute call to replace someone, I didn’t have to think twice about using their gear. (Though I did make up and carry adapter cables so I could use my favorite lavs with their transmitters for actors or plant-mike situations.)  Now, I have to ask what’s in their package—I would be hopelessly lost with a Cantar.

Sound Devices and Zaxcom both make top-notch recorders. I prefer Zaxcom’s touch-screen Fusion-12 and Deva-24 to any of the scroll-menu CF/SD-card flash-memory recorders by Sound Devices or even Zaxcom’s Nomad and Maxx, but other first-rate mixers feel just the opposite. My brain’s wiring finds the touch screen’s layout more intuitive, and helpful if I suddenly need to do something I don’t do often (or ever).

After you’ve acquired all your gear, you need to spend a great deal of time familiarizing yourself with it. Your hands need to learn how to operate everything without your head having to think about it. Likewise, the connection between your ears and your fingers needs to work without conscious intervention (most of the time). You need to calibrate your ears so you don’t have to watch the level meters constantly because with the new digital or digital-hybrid radio mikes, you can’t tell just by listening when the transmitter battery is getting low, or an actor is getting almost out of range. You have to scan all the receivers’ displays instead, to see the transmitter-battery-life remaining or the RF signal strength. You also need to keep an eye on your video monitors to warn your boom op when he or she is getting too close to the frame line.

Speaking (or writing, in this case) of your ears, you need to protect them—you can’t do good work without them. For most of my career, I used the old Koss PRO-4A and then PRO-4AA headphones because of their superior isolation of outside noises, so I know if something is on the track or just bleeding through the cups, without having to raise the headphone volume. You should lock off that control at a fixed level, and only change it under very unusual circumstances. If you find yourself straining to hear toward the end of the day and turn up the level, that is a sign of aural fatigue, and an indication that your regular listening level is too high. “Ringing” in your ears is a far more serious warning sign—it may go away, but the damage has already been done. The insidious nature of the damage is that it won’t manifest itself for decades. The PRO-4AA’s air-filled pads fail after a time, and need to be replaced periodically—they either leak or become too stiff. I’ve gone through almost a gross of them and don’t know if I can get any more, but fortunately, there are new headphones available from Remote Audio, which are even more isolating, and I have recently started using them.

Carry a spare for everything, and for “mission critical” items, carry a spare spare. If the original unit fails from an external cause, you may not discover the problem until it happens again while you are investigating. While it is nice to have an exact duplicate for the spare, some very expensive items can be backed up with a lesser device that will do until a proper replacement can be obtained. (The Zoom F8n recorder is a prime example: ten tracks, eight  mike/line inputs, timecode, metadata entry … all for less than $1,000. The TASCAM HS-P82 at twice the price is even better if you can afford it.) IMPORTANT: You may need adapter cables to patch the different backup unit into your cart—always pack them with the unit! (And have a spare set of cables.)

Check out all your gear before shooting begins. Were batteries left in a seldom-used unit and have corroded the contacts? Or worse, the circuitry? Is a hard-to-get cable that is “always” stored in the case with a particular piece of equipment missing? This is even more important with rental items.

Have manuals for every unit available on the set at all times. Not only for problems that arise, but also if you need some arcane function you have never used before. PDFs on your laptop are extremely convenient, but a hard copy under the foam lining in the carrying case can be a lifesaver if a problem arises when you can’t get to your computer. (If not the original printed version, be sure that any copy is Xerox or laser-printed, not a water-soluble inkjet copy.)

Don’t forget to research sound carts as well, and at least look at all the different styles at the various dealers’ showrooms. There are vertical and horizontal layouts, enclosed and open construction, different wheel options, etc. Over the years, my preferences have changed several times, first, because of the larger productions I recorded, and later, because of shortcomings I discovered in new situations.

My first cart, a Sears & Roebuck tea cart, and no more room


I started with a folding Sears & Roebuck tea cart. It was light and folded flat, which made it easy to store and transport, and set up and wrap quickly. It also let me work in small spaces. Unfortunately, it wasn’t designed for the rigors of production, and the plastic casters broke early on, followed by failure of the spot-welded joints. I replaced the casters with industrial ones, and brazed all the joints. I still use it for some one-man-band shoots. You can see its major deficiency: lack of real estate.

Giant tea cart on The Stunt Man with director Richard Rush


But I liked the concept, so I had a custom cart of the same design manufactured by a company that made airline food-service carts. This solved the lack of space, but created the problem of needing a lot of room to set up shop. It was so big, it had to lay flat on top of all my other gear in the back of my 1976 International Harvester Scout II (an SUV before they were called that). There were also the same problems that I had with my first folding cart: stuff bounced off when moving over rough surfaces; rain, especially coming on suddenly; and having to unpack all the gear in the morning and interconnect it, then having to put everything away at wrap.

My current shipping-case cart, built in 1979, and no more cables!


My final cart solved all these issues. In 1979, I designed a tall shipping case that had all the equipment built in and permanently connected. All I had to do to set up was remove the front cover and attach the antenna mast assembly. It was narrow enough to fit easily through a 24-inch-wide doorway. The height of the pull-out mix panel allows me to mix while standing (a good idea when car stunts are involved) or seated in a custom-made chair. And I can stand up on the chair’s specially-reinforced foot rest to see over anyone standing in front of me. The cart is completely self-contained, with 105 A-H of SLA batteries. The extra set of tires lets me travel horizontally over rough ground, and the dolly can be unbolted to use separately if needed. The only problem remaining is the 325-pound weight.

Here is a lightweight alternative vertical design belonging to Chinese mixer Cloud Wang. Whatever cart style you choose initially, be prepared to replace it after you have gained some experience, and perhaps again … and again … and…

Chinese mixer Cloud Wang’s vertical lightweight cart


If you’re going to work out of a bag, rent various rigs, fill the pouch with thirty pounds of exercise weights, and wear it for many hours. Experiment with changing the strap tension, bag mounting height, and all the other variables, including different size carabiners to attach the loads. A hip belt is a necessity to distribute the weight and reduce the pressure on your shoulders. Nothing available worked perfectly for me, so I wound up buying both Porta-Brace and Versa-Flex rigs and creating a Frankenstein from the top of one and the bottom of the other.

Wireless transmitters both numbered and color-coded


The organization of your stuff is Paramount (or Warner or Disney or…) for efficient operation and avoiding errors, especially in stressful situations. I use both colors and numbers, according to the RETMA Standard (which is used for the colored bands on electronic components): 0=Black, 1=Brown, 2=Red, 3=Orange, 4=Yellow, 5=Green, 6=Blue, 7=Violet, 8=Grey, 9=White. My radio mikes and mixer pots are all color-coded, as are all my same-sized equipment cases, which allow my brain’s right side to take some of the load off its left side. Note that I have permanently taped the unused faders on the mixer’s right side, and temporarily taped off the Channel-4 fader of an actor who’s not in the shot, using his label from his radio mike receiver. I also taped over his radio mike receiver’s screen for good measure. I can tape off the faders at the top, full-open position, to keep them out of the way because the Cooper mixer can power off individual channels. I also label the faders with the character names on the front edge of the mixer. I usually don’t have to highlight individual character’s dialog on my script sides, but if I do, I use the same color as their channel. “Constant Consistency Continually” is my motto.

In addition to numbering cases, you need to label their contents somehow, either by category (e.g., “CABLES”) or more detailed contents. Whatever system you decide on, it needs to be intuitive and quickly learned because you may be using different crews from job to job.

When brand-new gear doesn’t work out of the box


I do a lot of my business with one major equipment dealer and rental house, but I make it a point to buy a fair amount of stuff from another company as well. Not only does this keep both of them competitive on prices, but if I’m in the middle of the Sahara Desert and the camel kicks over my sound cart, I’m not limited to what one of them has in stock for immediate replacements. (Not an unrealistic example—in Morocco, my local third person accidentally dumped over the sound cart in the street. Fortunately, it was rental gear.) Have you ever tried to set up a high-limit credit account on the spot, over the phone, with a company you’ve never done business with before? (Besides, I get twice as many free T-shirts, hoodies, and baseball caps.)

Don’t neglect to visit smaller dealer/rental houses as well. They may be willing to take more time to help you learn the equipment, or open the store at 3 a.m. Sunday to handle a last-minute emergency.

If you’re just starting out and can’t afford to buy everything at once, rent the recorder and radio mikes. They rent for a smaller fraction of the purchase price and evolve the most rapidly. WARNING: Don’t buy a new model as soon as it comes out—the first few production runs sometimes have problems that require a hardware fix that is expensive or impossible! This happened to me twice—I didn’t learn the first time. My first-run Vega diversity radios were in the shop more than on the set for several years. My first-run StellaDAT was so unreliable, I was happy when it got stolen. Also, if a well-established product suddenly is offered by the manufacturer at a discount, it may be about to be replaced by a new model—this happened to me recently after I bought $8,000-plus of name-brand lavs.

Equipment insurance is as important as the gear itself, but takes an entire article to do the subject justice.

PRE-PRODUCTION

If you don’t know the director, research their shows on IMDb, and watch some of them to get a feel for the director’s style and technique. Talk to people who have worked with her or him.
Read the script as soon as possible, looking for scenes that might have challenges for the Sound Department, or an opportunity for you to make an esthetic contribution to the project.

Speak with the director at the earliest opportunity to discover what his or her feelings are regarding sound and its relation to the project. The director may have sound ideas that are impractical, if not outright impossible, but saying: “I can’t do this,” is always a bad idea. I prefer saying: “It would be even better if you did this instead.” If I can convince the director it was her or his own idea, so much the better, because they will be less likely to fight me later on.

The most important question to ask the director is: “What do you want me to do if there is a sound problem during a take?” (It probably won’t be “Jump up and yell ‘Cut!’”)

The second most important question is: “What do you want me to do if I need an actor to speak up?”

Believe it or not, the “Don’t bother me with sound problems—I’ll loop it” attitude is by far NOT the worst. If the director doesn’t care about the production sound, that leaves me free to do whatever I want, so long as I stay out of their way.

The worst type is the director who looks over my shoulder and tells me what to do. Or the producer—fortunately, I found out about him before I accepted the job (to replace their “bad” mixer) and turned it down. The show lasted five more miserable (for sound) episodes before it was cancelled—I talked to the replacement mixer afterward. If I find myself stuck with one of these shoots, I ask the director either: “Why did you hire me if you wanted to mix the show yourself?” or as many questions as I can think of about every shot, even when the director doesn’t come over to my cart first. This results in either: A, my being fired; or B, being left alone for the rest of the shoot. So far, I haven’t been fired.

If you are not familiar with the DP, gaffer, and/or key cast members, research their attitudes toward sound by interviewing other mixers who have worked with them. (Search IMDb for the info.)

If you can’t get any of your regular crew people, be careful about accepting recommendations from other mixers. Be particularly skeptical if they won’t or can’t tell you what the person’s weaknesses or shortcomings are—everyone has some. And personalities are important—a detail-oriented utility person may be perfectly compatible with one mixer but annoying to another. (Again, this isn’t a made-up situation. I did a TV pilot with a “highly-recommended” 3rd person who turned out to not know how to do anything “my way,” and took a very long learning curve to get up to speed.)

Even crew you have used before need to be vetted if they haven’t worked for you recently. They may have changed their styles from working for other mixers, or just been away from the business for several years.

Go on all the location scouts. (Of course, you should make every effort to convince the production company that your presence there will be worth far more than what they pay you.)
I know that when I see a practical location next to an automotive tire and brake shop and under the LAX flight pattern, the UPM will respond to my request for an alternate venue with: “The director likes the look, it’s easy to get the trucks in and out, and the rent is cheap.”


The reason I do go is to get a head start on solving the sound problems I find, so that on the day, I will have what I need. For example, a courtyard with a dozen splashing fountains may need two hundred square feet of “hog’s hair” and a hundred bricks to support it just above the water’s surface, and this is not likely to be available at a moment’s notice from the Special Effects Department.

PRODUCTION

If they are being used on the show, go to the dailies (“rushes” for those of you on the East Coast) whenever possible. Besides the possibility of getting you a free meal, you will have a chance to judge your work without the distractions of recording it. I find it usually sounds better than I remember it—if it sounds worse (though still “good enough”), I need to find out why. Also, if someone questions some aspect of the production track, I am there to explain it before the Sound Department gets blamed for something that wasn’t its fault—a bad transfer, for example.


Sadly, the pace of modern production often eliminates screening the footage in a proper theater for the director and keys on a regular basis—the director and DP have to look at a DVD or flash drive of the shots on a video monitor between setups—when they can spare the time. Still, attend this if you can.

Make friends with the Teamsters early on. When they come around to get used batteries for their kids’ toys, give them a box of new ones instead. Then, if you need the genny moved farther away, they will be more cooperative, especially if you tell them as soon as you get to the set.

Make friends with the electricians early on. When they come around to get used batteries for their kids’ toys, give them a box of new ones instead. Then, if you need the genny moved farther away, they will be more cooperative in stringing the additional cable, especially if you tell them as soon as you get to the set.

Make friends with the grips early on. When they come around to get used batteries for their kids’ toys, give them a box of new ones instead. Then, when you need a ladder for your boom op, or a courtesy flag to shade your sound cart… Ditto for props, wardrobe, and all the other departments.

WHAT I DON’T HAVE IN COMMON WITH OTHER MIXERS isn’t obvious:

EQUIPMENT

Many mixers require their equipment to “earn its keep.” They won’t buy a piece of gear that they may never (or seldom) use. I have a different philosophy: if there is a gadget that will do something that nothing else I have will do, that is reason enough to acquire it. (And one element of my godlike reputation.) Some examples:

1. I have several bi-directional (Figure-8) boom mikes and lavaliers, even though they are not commonly used in production dialog recording (except for M-S, which itself is rarely needed). But their direction of minimum sensitivity (at 90° off-axis) has a much deeper notch than cardioids, super-/hyper-cardioids, or shotguns (see the brief polar-pattern comparison after these examples). On just two occasions in over half a century, they have allowed me to get “good enough” sound under seemingly impossible conditions. The US Postmaster General was standing on the loading dock of the Los Angeles Main Post Office while surrounded by swarming trucks and forklifts and shouting employees, which completely drowned out his voice on the omni lav. I replaced it with a Countryman Isomax bi-directional lavalier, oriented with the lobes pointing up and down. This aimed the null between them horizontally, 360° around, and reduced the vehicle noise to the correct proportion to match the visuals. Since we didn’t see his feet, I was able to have him stand on a pile of sound blankets to help deaden the pickup from the rear, downward-facing lobe. (Of course, afterward, the director asked, “Why don’t you use that mike all the time?” Then I had to explain about all types of directional mikes’ sensitivity to clothing and handling noise and wind.)

2. I also have a small, battery-operated noise gate. While not suitable for use during production recording because the adjustment of the multiple parameters requires repeated trials, it has enabled me to make “field-expedient” modifications to an already-recorded track. I cleaned up a Q-track so a foreign actor wouldn’t be distracted by boom-box music and birds in the background while he looped it on location before flying back to his home country. I removed some low-level traffic that was disturbing a “know-nothing, worrywart” client on a commercial shoot and earned the undying gratitude of the director, who knew that it wasn’t a problem but couldn’t convince the client. (I also was able to close-mike some birds in the back yard and add them to cover the “dead air” between the words.)

3. Every time I find bulk mike cable in a color I don’t already have, I buy 50 feet and make up a 3-pin XLR cable. This allows me to hide them “in plain sight” by snaking them through grass (various shades of green), or running them along the side of a house where the wall meets the ground (various shades of brown for dirt and fifty shades of gray for concrete or asphalt). Wireless links have all but eliminated the need for cables, but in the rare case where they are needed…
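
To put a number on the figure-8 null mentioned in the first example above (this is standard polar-pattern theory, added here for illustration): an ideal cardioid and an ideal bi-directional capsule have sensitivities of roughly

\[
S_{\text{cardioid}}(\theta) = \tfrac{1}{2}\bigl(1 + \cos\theta\bigr), \qquad S_{\text{figure-8}}(\theta) = \cos\theta .
\]

At 90° off-axis the figure-8 term goes to zero, a theoretically bottomless null, while the cardioid is still at 0.5, only about 6 dB down. That difference is the “much deeper notch” that makes the horizontal-null trick on the loading dock possible.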

Cindy Gess booming with the Cuemaster on Babylon 5


I read many trade magazines, and investigate any new piece of gear to see what it will do. I offer to beta-test equipment, like the Zaxcom Deva I or the Lightwave Cuemaster. I soon bought production models of both of them, and still use the Deva I (upgraded to a Deva II) for playback of music and the prerecorded side of telephone conversations. I had Rabbit Audio upgrade my Cuemaster, too. Boom op Cindy Gess used the beta on Babylon 5: The Gathering, where a walk-and-talk in a narrow aisle reversed direction while she remained behind the camera for the entire shot. The mike had to be almost horizontal to avoid footsteps on the plywood-supposed-to-be-metal floor, and swivel 180° to track the actors. I don’t need it often, but when I do…

I know that the mike doesn’t always have to be pointed directly at the actor’s mouth. Good cardioid (and super-/hyper-cardioid) mikes have an acceptance angle of about ±45° from the front where the sound of dialog won’t be audibly affected when shown in the theater. This allows my boom op to orient the mike so that its minimum sensitivity direction is aimed at a noise source and still get the actor’s voice acceptably. Have the boom op adjust the mike to minimize the noise and then see if they can get the actor within its 90° acceptance cone.

Text and pictures ©2019 by James Tanenbaum, all rights reserved.
Editors’ note: Article continues with Part 2 of 3 in our Summer edition.

Exoskeletons

The Boominator:
ARE EXOSKELETONS PART OF OUR FUTURE?

by Bryan Cahill

Ken Strain on Mythic Quest


Boominator, Roboboom, Boomborg … these are just a few of the names crew members are using to try to make sense of a microphone boom operator wearing an exoskeleton. The boom operators who have had the chance to use one on a set are using terms like “game changer.” If you keep up with the sound-related pages on Facebook, you have probably read a post or two about my work with exoskeleton manufacturers and their products. If Facebook isn’t your thing, this article should bring you up to speed on the conversation.

What is an exoskeleton? According to Levitate Technologies, one of many exoskeleton manufacturers, “Exoskeletons are wearable machines that enhance the abilities of the people who use them… An exoskeleton contains a frame that goes around a user’s body or part of the user’s body. They can provide support and reduce fatigue. They even enable people in wheelchairs to stand up and walk again.” Exoskeletons can be powered in a number of ways, but all of the units I have been working with are powered by springs, similar to a Steadicam rig. Until now, exoskeleton manufacturers have considered their markets to be industrial, medical, and military. For our purposes, an exoskeleton can provide support and reduce fatigue when holding a boom overhead for extended takes.

Are they needed in our industry? Those of us who started booming when most productions were shooting on film remember when a “long” take ran three minutes and the longest takes were around ten minutes, as that was how long it took to roll out on a thousand-foot mag of 35mm. On top of that, any TV director who was constantly rolling ten-minute takes would have been tossed out on his ear by the line producer, as the price of film and development made it cost-prohibitive. Back then, an exoskeleton would have seemed like overkill.

Now, in the Digital Era, cutting is seen as a great inconvenience. Takes often run twenty minutes and longer. Quite frankly, the human body was not built to perform the task of holding a fishpole overhead for these extended takes. Whether it is an exoskeleton or a Fisher boom or a four-person sound crew, something is needed to provide relief for microphone boom operators.

In January, after almost thirty years of booming, I had a full tear of my left rotator cuff tendon and a partial tear of the biceps tendon. While I finished off the rotator cuff in the gym, all the medical professionals I’ve seen since the injury tell me that it was caused by many years of wear and tear. Dr. Thomas Knapp, the surgeon who performed my surgery, told me that he operates on a lot of microphone boom operators. When asked by them what they can do to prevent similar injuries in the future, he replies, “become an accountant.” As Chair of the Injury Prevention Committee at Local 695, this is one type of injury I’m trying to prevent. Exoskeletons can support the weight of the boom on the hips with only a little restriction in mobility, and in most cases they are possibly the least expensive and most practical solution for excessive takes.

I have met with some skepticism regarding any need for change. There are those who believe that the correct diet and exercise can prevent injuries. It is true that a better health regimen could benefit many of us, but we are not highly paid athletes with personal trainers. Most of us have families. Making time to work out after a twelve-hour day and a commute could only be accomplished by reducing needed sleep. I try to find time to work out on set. A large part of a boom operator’s job involves being on set during setups to make sure they’re not lit out, etc., leaving little time to use a restroom, let alone work out. Even so, I try. I keep a set of elastic bands and dumbbells with me as part of my kit. During setups and at lunch, I try to slip away briefly to run flights of stairs, do core work, pushups, jump rope… In the end, no amount of exercise or specialized diet will prevent injuries when we are tasked with doing repetitive, excessive takes.

Booming in the Digital Era is like being the only pitcher on the Dodgers. There are no other starters and no relievers. Every day you go out and pitch a complete game regardless of how many innings or hours. How long do you think Clayton Kershaw could perform before his body broke down? This is what is happening to boom operators. Our bodies are breaking down.  

I know for many in the production sound community, this seems like new information. Why is that? Many microphone boom operators, when speaking candidly, will tell you that they have hidden injuries received on the job and continued to work through them. Boom operators are afraid (and I believe rightly so) that mixers are less likely to call if they are aware that someone has been injured, even if the injury is completely healed. So, boom operators silently work through pain and injuries up to the time when an injury becomes so severe that working becomes impossible without corrective surgery. If you’ve been booming as long as I have, you know many boom operators who have been put in this position.

I’ve hidden injuries too, until last year when I was hired by Loyola Marymount University to run its Production Sound Department. Just like that, I didn’t have to rely on holding a microphone over my head to pay the mortgage and felt I could be more open about the situation. So I posted this question on Facebook: “Have you been injured while booming or know someone who has?” It got a lot of people talking and eventually led to the formation of the Injury Prevention Committee with me as Chairperson at the July 2018 Local 695 membership meeting.

One area of interest for the committee is technological advances, including exoskeletons. I began contacting manufacturers in the fall of last year, and I was able to get demos from two: SuitX and Ekso Bionics. Both companies are located in Emeryville, CA, and were founded by UC Berkeley mechanical engineering professor Homayoon Kazerooni, although I believe he is working only with SuitX at this time.

Bryan Cahill EksoVest Demo


First up was the EksoVest with Brandon Frees, Ekso Bionics VP of Sales. He and I had a number of sound professionals meet us in a park just north of LAX. The EksoVest sells for $4,000 and uses interchangeable spring cartridges with four tension levels. The $4,000 price tag comes with one set of springs. Each extra set of springs is another $400. The heaviest set of springs creates a “shelf effect,” providing fifteen pounds of lift assist to each arm. Once you raise your arms, the vest is doing most of the work in keeping the boom up. The nine-and-a-half-pound vest is well balanced and the weight is distributed evenly, so wearing it does not seem burdensome in any way. One drawback of this unit is the link assemblies that stick out from the back, adding width that could make the vest hard to use in narrow spaces. The manufacturer claims that it is the link assemblies that allow almost unrestricted movement. Even with the link assemblies, the reactions of the people who came out to the demo were mostly positive and they provided Brandon with some valuable feedback.

Then, with the help of Thomas Popp and Video Mantis, we held a live stream demo of SuitX’s ShoulderX Version 2 (V2) with sales director Mark Criscuolo; the stream has since received more than four thousand views. At seven pounds, the ShoulderX is a little lighter, with a narrower profile than the EksoVest. Rather than spring cartridges, the ShoulderX has a dial that can be set from three to twelve pounds of lift assist, which adds some flexibility and convenience. Again, the reaction in general was positive, but I couldn’t get it dialed in to a place where I felt comfortable and confident that the vest was actually helping. One reason for this was that with arms fully extended, Version 2 was providing no support at all.

SuitX asked me to visit their factory in Emeryville to help set up the ShoulderX Version 3 (V3) for our needs as they had already agreed to demo the new unit after the Local 695 membership meeting (scheduled for January 26, 2019). The V3 sells for $5,000 but is a one-size-fits-all unit. Also, there is no need to buy extra spring cartridges. Lighter and allowing support at full arm extension, the V3 is a vast improvement over the V2.

Max Osadchenko ShoulderX Demo


The demo in January went really well. Mark and I stayed as long as there were Local 695 members wanting to try the vest and in return, Mark got some terrific feedback. If there is one thing that I have learned from these demos, it is that the vests can take quite a bit of tweaking to find each individual’s sweet spot, but once found, they can really assist in supporting a boom.

Subsequent to that meeting, Ekso Bionics provided me with a loaner EksoVest which has allowed working microphone boom operators to give it a test drive in real-world conditions for a week or two at a time. Ken Strain, Mike Anderson, and Brandon Loulias have all had the chance to use it on their shows and all have had positive reactions. I have to say, I am somewhat surprised by the amount of interest the vest has received from both below-the-line and above-the-line folks working on these shows. It really creates a buzz wherever it goes.  
On April 18, I traveled to San Diego with Chris Walmer, Ken Strain, and Paul Buscemi to check out the Airframe exoskeleton built by Levitate Technologies. Due to my recent surgery, I was unable to try on the unit, but Levitate CEO Mark Doyle went through many possible configurations and everyone else was able to give it a go. We were quite impressed by the unit’s versatility and ease of use. The basic unit currently retails for $4,300, but some microphone boom operators have found that they only want assist on their front arm, using their back arm as a counterbalance. Under that scenario, the price can be reduced by more than a thousand dollars. And yes, the assist can be easily switched from one arm to the other. Currently, Levitate Technologies is not set up to sell to individuals, although with enough interest, that may change soon. It is really interesting to see how the three companies have gone about solving the same problem in different ways.

Brandon Loulis Station 19


Now, with the EksoVest, the ShoulderX V3, and the Airframe, I feel we have three viable exoskeletons ready for use in production. But, don’t take my word for it, ask one of the boom operators who have had a chance to get their hands on one or more. Ken Strain will be writing an article about his experiences using the EksoVest for the summer edition of Production Sound & Video. Exoskeletons are not a panacea. They are one of many tools we will need to keep people from being injured. In my case, exoskeletons have also become a tool I use to keep people informed about the injuries being suffered by microphone boom operators in the Digital Era.

I think we’ll be seeing more and more of these devices on sets in the coming years.

Modern Family

My Decade with Modern Family

by Stephen A. Tibbo CAS

Cast of Modern Family on location


It’s hard to believe that I have been so fortunate to have mixed Modern Family for ten years. It’s been a great ride and I’m very humbled by the three Emmys from nine nominations and five CAS Awards from eight nominations. Of course, I couldn’t do it without my crew, and it’s a long list: Preston Connor was the main Boom Op for the first three seasons. “Serge” Popovic took over and is still with me. Second Boom/Utility was Dan Lipe for the first five seasons, and then William Munroe took over. We regularly have an additional Boom, so thanks to Dan Lipe (who came out of retirement the last two years), Peter Hansen, Jacques Pienaar, Richard Geerts, Corey Woods, Ken Strain, Noel Espinosa, Craig Dollinger, Brian Wittle, John Hays, Danny Greenwald, John Sheridan, and Mark van Kool. My regular Pro Tools Playback Operator is Mark Agostino.

Modern Family is a two- to three-camera show where we approach every scene as a oner. It certainly presents challenges when you’re trying to capture good dialog because one camera is going to do a wide, another a single, and the third, maybe a two-shot. We developed communication with all of the operators to understand how each scene is going to be shot. “We’re wide for the first two lines, then we’re in tighter, and then on a specific line, we’re going to pop wide again. But here are the wides and you guys should be able to boom everything.”

sound crew for “Connection Lost,” day one
a scene from “Australia” with Eric Stonestreet and Jesse Tyler Ferguson


When I began the series, I realized that there was at least one scene in an episode with as many as twelve to fourteen actors! I started with a Deva, an SD 788, and a Yamaha 01V96. Initially, I broke the ISO’s out on two recorders. The first nine ISO’s were on the Deva, and the remainder of the ISO’s on the 788. I recently upgraded to the 01V96i and record to two Sound Devices 970’s with Dante, and it’s just made my life easier.

I have the two 970’s next to each other and my monitor out of the recorders is fed into an A/B box. On Recorder A, I have it set to listen to a solo of the mix. Recorder B is also in solo mode and I can quickly switch over to the B machine and hear any ISO. I can track down problems. I got used to doing it on the Deva where I just put my finger on the screen and solo a track, and it’s almost as fast.

Last month on “Can’t Elope,” I recorded twenty-five tracks for several days. I had four boom operators, nineteen actors, and a playback track. I brought my other 01V96i and cascaded both consoles using ADAT and MIDI.

mixing on the medium cart in Australia
Srdjan Popovic booming a scene.


I wire everybody all the time, just so I don’t miss any comedy, and then we boom everything. I spent time early on really trying to get the wires to match the boom, and I have EQ settings for each actor. I have a mixture of Countryman B6’s and Sennheiser MKE 1’s. I use B6’s just because they don’t give me clothing rustle and I can place them through a buttonhole when we’re on stage. When I’m outside or when a character will be yelling, I use the Sennheiser MKE 1’s.

There are three Lectrosonics Venues on the cart; my boom operators get UM400s and a Denecke power supply. I have HM plug-on transmitters for plants or some hard-to-reach places. We have a mixture of UM’s, SM’s, SMQ’s, MM’s, and SSM’s for the actors. I have so many different transmitters because I have collected the gear over many years and wanted to maximize my investment.

Our job always starts with reading the script and I always attend the production meetings. At the meetings, I listen and create a budget for what’s needed for the following week. I consider whether I need an additional boom operator, help to rig a car, or if we need a playback operator. Next, I figure out the additional gear I will need for the episode. Typically I need additional boom mics, an extra Venue, wires, earwigs with a base station, and Pro Tools playback. I submit my list before production locks the budget.

Mixing “Connection Lost”
three booms working it out, with Srdjan Popovic, William Munroe, and Dan Lipe


Our primary boom microphone is the Schoeps CMIT 5U and for small reverberant sets, the Schoeps CMC and even the Sennheiser MKH 50. We also use the Sennheiser 8060’s when there is lots of fan noise like a practical restaurant kitchen set.

Ninety-nine percent of the time we are using two booms and then anytime we have a big scene, I get a third boom operator. I usually bring Dan Lipe in three days a week. We’ll watch the scene and then I write ones, twos, threes, and ‘W’—that’s my road map for the scene. The boom operators usually have to play zones and for those areas a boom can’t reach, we will use a wire.

In the “Connection Lost” episode, everyone was FaceTiming. The producers and director wanted to shoot the whole thing live on the stage at the same time and on all the sets. I needed seven boom operators and an IFB person for that day. I set up in the middle of the stage with four separate channels of earwigs so that each actor could hear the other actors on the different sets but not themselves. I used eleven earwigs in all and had an additional ten as backups. We also set up the director and script supervisor with handheld mics to talk to the cast over the earwigs.

Julie Bowen was in front of a green screen, stuck at the airport. Phil, Alex, and Haley were in the Dunphy house, where I had two boom operators. Then at Mitch and Cam’s house, I had two boom operators. In the Pritchett household, we had Jay, Gloria, Manny, and Luke and two more boom operators in there. I’m very happy that it all worked out!

William Munroe booming Season 10
Steve in the front seat mixing a scene on Disneyland’s Thunder Mountain.


Most of the driving shots you see on the show are on an insert car. Fortunately, the majority of the time, the windows are up because we’re shooting through the front windows. I’m able to use the Schoeps BLM03’s. If the windows are down, I’ll switch to a Schoeps 41 on a Collette or a Sennheiser MKE 1, which ends up being quite nice. However, I’ll also try out the BLM’s to see what they sound like. I like to put in more mics than I need and then see what works. If the director changes something, I’m ready for it. It may have taken a bit more time to rig the additional mics but it works.

Chindha (RIP) built my minicart years ago. It was actually created for our Dude Ranch episode. I wanted the cart to hold a Zaxcom Mix-8, a Deva 5, and a Lectrosonics Field Venue. It became kind of a ‘lunch box’ that can come off the cart and go into an insert car. It has power distribution, two monitors, and it fits on the trays of all of the insert cars. It’s my fast mini-rig that I can pop off, walk across a field, go in a four-wheel drive, or up stairs, allowing me to have faders anywhere I go.

I adapt the gear based on the episode and what we’re doing. When we did our episode in Australia, I had to send everything out early but I needed my studio cart in Los Angeles because we were doing a big episode. In Australia, I used my medium cart, a Deva Fusion, a Nomad, a Zaxcom Mix-12, and a Mix-8. I rented another Venue there to augment what I sent. Frequency coordination was a problem since Blocks 25-29 worked well in Australia, but were no longer available here.

Santa Anita Park with Preston Connor and Dan Lipe
insert car setup


I don’t like the Sound Department being the center of attention. I just like production noticing, “wow, it sounds good.” But if I need to speak up, I speak up. I try to address an issue by asking a question. For example, “I’m not hearing this line or not understanding it. What is she trying to say?” Or, “Is there music going on, is there a Walla, how do you intend to play it?” or “Do you want them speaking up over this stuff?” Many times I’ll ask for a wild line.

If there is ADR needed, we’ll do it on the stage. We’ll play back the scene that needs to be looped on the set that we actually recorded the scene on.

We try to keep the actors from spending extra time commuting to the ADR stage. The show is mixed over at Smart Post in Burbank and we are on Stage 5 on the Fox Lot. I also have to say, the post team is stellar and it is great to have open lines of communication with them. The Supervising Sound Editor is Penny Harold and Dialog Editor is Lisa Varetakis. Dean Okrand CAS and Brian R. Harman CAS are the Re-recording Mixers.

Does it ever get stale? No, and we are going on to our eleventh season! I really enjoy doing the show and it has become really fun. We still have moments where I think, “How are we going to pull this off?” But when we do, it feels amazing!

Steve Tibbo CAS mixing on his sound cart.

Jean Pierre Beauviala

A Tribute to Jean Pierre Beauviala

by Agamemnon Andrianos

I met JP Beauviala at a CAS seminar in 2004. He had this special-looking Cantar recorder on display to show its functions and unique form. Wow, impressive and innovative for the times; he was ahead of everyone else in design and construction. We had several discussions about the preamps, ergonomics, and features of the recorder that had to step in and replace the traditional Nagra after all those years.

His passionate enthusiasm and insights were remarkable, and he helped me choose the Cantar as my recorder for the Digital Age. I was curious why a camera manufacturer would design an audio recorder, but I realized JP’s Cantar was truly a work of artistic genius. JP had a sense of the recorder and its function as an extension of your body and your ear! The faceplate actually has ears engraved on the surface.

How we hear was built into the features of the Cantar; his perception and custom engineering made it flow as part of the filmmaking process. The rotary triple-crown access controls for preamps, metering, headphone amps, audio routing, timecode, powering, and line outputs are all hands-on and intuitive. You have to spend time with your Cantar and know its functions with confidence so you can work effortlessly every day.

JP knew how to design something unique and construct a digital recorder with robust longevity! As I used the recorder, I kept in email communication and had direct access to Aaton & JP. He always answered my questions, and I felt that his artistic sense and personality were special for a manufacturer.

He will be missed; he was one of a kind. JP Beauviala’s contributions to camera and sound technology are legendary. As an engineer, he had that dogged persistence and determination to creatively innovate and design incredible instruments for our industry.

When I started in production sound in 1972, the cameras were tied to the Nagra 3 by a sync cable. JP was responsible for developing the electronics of crystal sync for the camera motors and the time base for the recorders. He later brought timecode to the cameras, and his forming of Aaton as a camera company was enormously innovative for documentary filmmaking.

All the DPs and cameramen of that era were Aaton-centric and made great films because of JP’s innovation. We are all so grateful for his engineering accomplishments and especially the advancements to digital sound recording!

In 2004, I bought my X1 from LSC with Michael Paul, and my second, an X2, in 2006; they are still working and are incredible recorders!

The Cantar X3 is the current model and improves everything that you could want in a production recorder.

JP passed away on April 9 at 81 years old.

Cinema Audio Society Awards

The 55th Annual CAS Awards ceremony was held on Saturday, February 16, 2019, at the Intercontinental Los Angeles Downtown Hotel—Wilshire Grand Ballroom, Los Angeles, CA.

We congratulate all the winners!

Motion Pictures – Live Action

Bohemian Rhapsody
Production Mixer – John Casali
Re-recording Mixer – Paul Massey
Re-recording Mixer – Tim Cavagin
Re-recording Mixer – Niv Adiri CAS
Production Sound Team: Chris Murphy, Joe Nattrass

Motion Pictures – Animated

Motion Pictures – Animated winners Simon Rhodes, Peter Persaud CAS

Isle of Dogs
Original Dialogue Mixer – Darrin Moore
Re-recording Mixer – Christopher Scarabosio
Re-recording Mixer – Wayne Lemmer
Scoring Mixer – Xavier Forcioli
Scoring Mixer – Simon Rhodes
Foley Mixer – Peter Persaud CAS

Motion Pictures – Documentary

Motion Pictures – Documentary winners Joana Niza Braga, Ric Schnupp, Tom Fleischman CAS, Jim Hurst, and Tyson Lozensky


Free Solo
Production Mixer – Jim Hurst
Re-recording Mixer – Tom Fleischman CAS
Re-recording Mixer – Ric Schnupp
Scoring Mixer – Tyson Lozensky
ADR Mixer – David Boulton
Foley Mixer – Joana Niza Braga

Television Series – One Hour

Television Series – One Hour winners George A. Lara CAS, Ron Bochar CAS, Mathew Price CAS, David Boulton


The Marvelous Mrs. Maisel
“Vote for Kennedy, Vote for Kennedy”

Production Mixer – Mathew Price CAS
Re-recording Mixer – Ron Bochar CAS
Re-recording Mixer – Michael Miller CAS
ADR Mixer – David Boulton
Foley Mixer – George A. Lara CAS
Production Sound Team: Carmine Picarello

Television Series – Half-Hour

Television Series – Half-Hour winners Andy D’Addario, Chris Jacobson CAS


Mozart in the Jungle “Domo Arigato”
Production Mixer – Ryotaro Harada
Re-recording Mixer – Andy D’Addario
Re-recording Mixer – Chris Jacobson CAS
ADR Mixer – Patrick Christensen
Foley Mixer – Gary DeLeone

Television Movies or Limited Series

American Crime Story: The Assassination of Gianni Versace (Part 1)
“The Man Who Would Be Vogue”

Production Mixer – John Bauman CAS
Re-recording Mixer – Joe Earle CAS
Re-recording Mixer – Doug Andham CAS
ADR Mixer – Judah Getz CAS
Foley Mixer – Arno Stephanian
Production Sound Team: Kevin Cerchiai

Television Non-Fiction, Variety, Music Series or Specials

Anthony Bourdain: Parts Unknown “Bhutan”
Re-recording Mixer – Benny Mouthon CAS

Student Recognition Award

Student Recognition Award winner Anna Wozniewicz with presenters Xiang “Lisa” Li, Sherry Klein, Sam Fan


Anna Wozniewicz
Chapman University – Orange, CA

Steven Spielberg with his 2019 CAS Filmmaker Award, flanked by Ron Judkins CAS, Andy Nelson, Bradley Cooper & Gary Rydstrom CAS
MaryJo Lang recipient of the President’s Award at the 55th CAS Awards

CAS Career Achievement Award winner Lee Orloff CAS
Kishore Patel, Ed Capp, Dan Dugan, Jon Tatooles

Academy Awards

The 91st Academy Awards ceremony, presented by the Academy of Motion Picture Arts and Sciences (AMPAS), honored the best films of 2018 and took place at the Dolby Theatre in Hollywood, Los Angeles, CA.

Best Sound Mixing

L to R: Paul Massey, John Casali, and Tim Cavagin pose backstage with the Oscar® for achievement in sound mixing. ©A.M.P.A.S.


Bohemian Rhapsody

Paul Massey, Tim Cavagin, and John Casali
Production Sound Team: Chris Murphy, Joe Nattrass
 

BAFTAS

The 72nd British Academy Film Awards, the BAFTAs, were held on February 10, 2019, at the Royal Albert Hall in London, and honored the best national and foreign films of 2018.

Sound

L to R: Paul Massey, Tim Cavagin, Nina Hartstone, John Casali, and John Warhurst. Sound – Bohemian Rhapsody pose with their awards at the 72nd British Academy Film Awards, Press Room, Royal Albert Hall, London, UK. Photo by David Fisher/BAFTA/REX/Shutters


Bohemian Rhapsody
John Casali, Tim Cavagin, Nina Hartstone, Paul Massey,
John Warhurst
Production Sound Team: Chris Murphy, Joe Nattrass,
Jerome McCann, Dash Mason-Malik

AMPS

The winners of the Sixth Annual AMPS Awards were announced on February 11, 2019.

For Excellence in Sound for a Feature Film

L to R: Paul Massey, John Casali AMPS, Nina Hartstone AMPS, Chris Murphy, and Tim Cavagin


Bohemian Rhapsody
John Casali AMPS
Nina Hartstone AMPS

John Warhurst
Paul Massey
Production Sound Team: Chris Murphy, Joe Nattrass

Names in Bold are Local 695 members

The Way We Were: Mixers Past & Present (Part 3)

The Nineties
While the 1980’s saw some changes in mixer technologies, it would not be until the nineties, and the transition to digital recording, that any further development took place.

Despite the introduction of the Sony PCM-F1 format in 1981 (adopted by a few brave mixers for use in film production), it would be another six years until DAT was introduced as a consumer medium, and yet another five years before it began to be adapted for professional use. While DAT saw fairly ready acceptance in the music world (being a huge improvement over cassette tapes and cheaper than ¼” tapes), it took the introduction of the Fostex PD-2 in 1992 for it to be taken seriously as a production recording format. However, being a two-channel format (with the exception of the Stelladat II, which boasted four channels), there was really no change in terms of how films were recorded and edited. Sound was still transferred to 35mm mag stock for cutting, and mixed in a traditional manner. It could be argued, however, that it offered a better recording medium than typical 7.5 IPS non-Dolby analog recording, so that some issues relating to noise and other flaws in source material might be more readily apparent than with analog. Still, there was no compelling reason to change mixing equipment. At the end of the day, it was still just two channels of audio. (Yes, there were 4-track, 8-track, 16- and 24-track analog recorders in regular use during the decade, but these were primarily employed on music projects, and not for typical production recording.)

1. PSC M8 console. One of the first portable analog mixers to be introduced with four output busses.

Changes were in the wind, however. The first of these was the introduction of the Nagra D recorder in 1992 (the same year that Fostex debuted the PD-2). While pricey and rather large, it was the first machine able to record four channels of high-quality digital audio in the field. While this offered some advantages in certain production situations, it was never adopted to any significant degree because it was a proprietary format, the technology was expensive, and it required transfers to 4-track fullcoat mag to take advantage of all four channels. It did, however, cause both sound mixers and equipment manufacturers to begin to rethink their approaches to production mixing boards. Although there were relatively small consoles available which could be configured for four (or even eight) busses, most of these were limited to studio production.

2. Audio Developments AD146 18-input 4-buss console.

The next in line in the four-channel sweepstakes was the Deva recorder, introduced in 1996. This marked a wholesale shift in production recording technology from tape to file-based recording, with an attendant change in workflow. If there were ever any doubts that production recording was headed for higher track counts, the introduction of the Deva put them to rest. While still slow to see ready acceptance, the fact that the recorder was capable of generating sound files that could readily be imported into digital audio workstations (such as Pro Tools and Fairlight) meant a huge savings in transfer costs. At this point, the industry began to sit up and take notice.

3. Allen & Heath WZD3 16:2 console, a basic AC-powered board sporting sixteen inputs and two busses, in a 19”-wide frame.

With four channels now at their disposal, production sound mixers started to look at ways to take advantage of them. Four channels didn’t allow an isolated channel for every input, so some sort of mixing was still required. However, most portable mixing consoles of this period were designed with dedicated master sections with two output busses, and perhaps one aux buss. As a stop-gap measure, some manufacturers (such as Cooper and Sonosax) developed an auxiliary module for their existing consoles which would provide two extra outputs.

4. Audio Developments AD149 14-input 2-buss console. (Photo courtesy Scott Smith CAS)

Other manufacturers had already anticipated this demand. In 1992, Professional Sound Corp released its model M8 mixer, a well-thought-out design that incorporated eight inputs and four busses.

5. Sonosax SX-ST 8-input 8-buss console.

Audio Developments followed up with a variation to their AD245 mixers, introducing the AD146 and AD147 in the late 1990’s. While equipped with only two buss meters, these consoles actually had four assignable busses for each input module. An additional 15-pin connector allowed for full metering of all outputs.

Thus began the race for more channels…

6. Studer 269 12-input 4-buss console.

2000
With file-based recording now becoming firmly entrenched as the primary medium for production recording, the ability to record multiple channels of high-quality audio without having to resort to multiple recorders or wide-format analog tape was becoming a reality. Of course, the logical use for the additional channels was to be able to isolate various mics during production (in the same manner that multitrack had been used for music recording for years).

The only hindrance to this approach was that not all portable production mixers of the era had facilities to provide a direct output from each input channel. So, it was once again back to the drawing board for some manufacturers to provide this capability. Notable among the mixing consoles intended to address the issue were the Cooper CS-208 (introduced in 2000), the Audio Developments AD149, and the Sonosax SX-ST series (introduced in 2008). In addition, the Audio Developments AD149 and Sonosax SX-ST could be provided in configurations of up to twelve input channels, a significant departure from the days when four or six inputs were the norm.

7. Two Cooper CS208 consoles shown ganged together for sixteen inputs. (Photo courtesy Alex Riordan)

Another interesting entry to the fray, and one setting the stage for the future, was the introduction of the Zaxcom Cameo mixer in late 1999. While there were other digital consoles on the market (such as the Yamaha 02R, introduced in 1995), Zaxcom was the first to market with a portable digital board designed specifically for production recording. It sported all the features one would expect of a film-mixing console, including communication channels, extensive routing, plus individual channel delay. It was also equipped with both analog and AES I/O. However, despite its six mix busses, it was still limited to an eight-input configuration. While this wasn’t too much of an issue at the time it was introduced, the day was coming soon when eight inputs would be deemed insufficient for the needs of multi-camera shows with large casts.

8. Studer 962 14-input 4-buss console.

While some mixers employed tape-based recorders such as the Tascam DA-88 for multi-channel recording, these were cumbersome rack-mounted recorders requiring an AC supply. File-based recording was, of course, available at this juncture, but it took the introduction of the Fostex PD-6 recorder in 2002 to up the channel count. Still, recording time was limited.

2004 brought the introduction of the Zaxcom Deva IV & V recorders, followed by the Sound Devices 788T in 2008, both capable of eight channels of analog or AES inputs, and ten tracks of recording. For the first time, production crews had portable, battery-operated recorders capable of a significant track count at their disposal.

9. Zaxcom Cameo. One of the first portable digital mixers intended for film production.

Not to be outdone, in March of 2008, Zaxcom introduced the Deva 16 recorder, which, when paired with its digital mixer, could record sixteen tracks of audio.

At this point, the traditional approach used for portable analog consoles was reaching its limit in terms of input capabilities. Except for custom (and pricey) versions of the Sonosax consoles, the only other alternatives for analog consoles with an input count over twelve channels were the Allen & Heath WZD 16:2 console (a no-frills AC-powered 16-input portable board aimed primarily at the live sound market), and the Studer 169 and 961 series consoles. Beyond that, the only solution was to gang two consoles together to achieve the input count needed. While this was fairly easily achieved with the Cooper and Audio Developments consoles, it was hardly an elegant solution. It took up a wider footprint than what was really needed for sixteen channels, and also required power for two consoles. Still, many mixers took advantage of this approach.

10. Yamaha 01V, one of the earlier entries into small-format digital consoles, and the forerunner to the 01V96.

Clearly, the time had come for some different solutions to the increasing track counts demanded of some productions.

Next up, “Where we stand today.”
 –Scott D. Smith CAS

Failed Storage

RECOVERING
a Failed Storage Unit

by James Delhauer

In 2005, Hurricane Katrina made landfall along the Gulf Coast of the United States, cutting a swath of devastation in its path and inflicting incalculable damage on those left in its wake. Though we continue to mourn the loss of life and livelihood that this natural disaster caused, the residents of New Orleans and other affected areas have spent more than a decade rebuilding. In that time, hundreds of personal hard drives have been sent to data recovery centers across the nation in the faint hope that the data contained within could be salvaged. Devices that were battered by the storm and then submerged in murky waters for days, weeks, or even months would be deemed a lost cause by almost anybody. But they weren’t. In what can only be described as a miracle of technology, survivors began to see their personal data recovered and returned to them intact. This proved quite definitively that digital data is more robust than most would have suspected.

Digital storage devices are bigger and faster than ever before, but the risk of failure and data loss is just as daunting as it was when the first hard disk drive was introduced in 1956. Today, digital storage mediums exist along every link in the chain of the film and television business.

Petabytes of information are created, acquired, distributed, and archived every year. We rely on digital storage almost as much as we rely on craft services, but there is a very real danger of drive failure, data corruption, and loss of work. An estimated 0.3 percent of flash storage devices sold each year will suffer some sort of fault or accidental damage resulting in data loss. For mechanical devices that still utilize moving parts, that number increases to approximately 1.7 percent. While these numbers may sound comfortingly low, the sheer number of storage devices used within the industry means that drive failure comes up more often than one might think. While it is always advised that media be backed up to multiple storage units as soon as possible, mistakes can happen or failure can occur before that is possible. So, what should a Local 695 sound mixer or video engineer do if they find themselves holding a faulty memory card or storage drive?
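
To put those percentages in perspective, here is a back-of-the-envelope calculation using the failure rates cited above and a purely hypothetical media pool (the card and drive counts below are illustrative assumptions, not industry figures):

```python
# Back-of-the-envelope expected failures per year, using the failure
# rates cited above and a purely hypothetical media inventory.
flash_cards = 500        # hypothetical number of flash cards cycled per year
spinning_drives = 200    # hypothetical number of mechanical drives in service

expected_failures = flash_cards * 0.003 + spinning_drives * 0.017
print(f"Expected failures per year: {expected_failures:.1f}")  # prints 4.9
```

Even a modest media pool, in other words, can be expected to produce a handful of failures every year.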

The first and most important step is to stop using the faulty unit immediately. Attempted use could exacerbate problems and make data recovery more difficult. When not attempting troubleshooting, the device should remain powered off and unplugged. The next step is to attempt to deduce the sort of problem that has caused a drive to fail. Broadly speaking, issues can be divided into the categories of logical failure, mechanical failure, and complex failure. Each of these groups presents its own symptoms and has its own set of troubleshooting steps. So correctly assessing the type of problem is essential.

Logical failure is the most prevalent and is the result of digital damage to the device’s partition—the file system that a computer uses in order to communicate with a storage medium. When a drive ceases to function as a result of logical failure, it remains physically sound and viable but cannot be read or written to by the computer’s operating system. More often than not, this means that all of the files that a user has on the device are safe and sound but simply cannot be accessed until the partition is repaired. Reasons for logical failure include malware, bad or degraded software sectors, overworking the drive, improper ejection during data transfer, or the deletion of necessary system files. Prior to a complete partition crash, users may notice sluggish behavior from their device, a high number of read/write errors, and frequent unprompted mounting and un-mounting of the drive. If the problem drive is acting as a computer’s primary boot drive, regular lockups and computer crashes are another warning sign. When logical failure occurs, connected storage devices will usually still power on and light up but will not mount and will appear to be absent from the computer’s Finder (macOS) or Explorer (Windows). If the user opens the macOS Disk Utility or Windows Disk Management system, the problematic unit will still appear in the list of connected devices.

Before attempting any direct troubleshooting steps, users should check a device’s manual or product support page and make sure that any necessary firmware or drivers have been installed on their computer. If that does not resolve the issue, macOS users can open the Disk Utility application and use it to attempt partition repairs. Find the storage device that will not mount and look to see if any partitions are listed. If one is listed, select it and click “First Aid.” The software will assess the unit’s file system and attempt to make repairs. If it is successful, the computer will automatically mount the repaired storage device, allowing the user to access their files. Similarly, Windows users can make use of the Windows Partition Recovery Wizard. This program will scan the storage device for any corrupted or lost partitions and, if found, will attempt repairs. If partition recovery is successful, it is highly advised that all data on the drive or card be copied to another storage medium immediately so as to avoid the risk of data loss again in the future.
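
For those comfortable in a terminal, the same check-and-repair pass that First Aid performs can be scripted. The sketch below is only an illustration, assuming macOS with the built-in diskutil tool and a placeholder volume name; substitute the mount point or device identifier reported by diskutil list for the device that will not mount.

```python
# Illustrative sketch: run the command-line equivalent of First Aid on macOS.
# "ProblemDrive" is a hypothetical volume name used only for this example.
import subprocess

VOLUME = "/Volumes/ProblemDrive"  # placeholder; use your own mount point or device ID

# Show all attached disks so the unmountable device can be identified.
subprocess.run(["diskutil", "list"], check=True)

# Verify the volume's file system; a non-zero exit code indicates damage.
verify = subprocess.run(["diskutil", "verifyVolume", VOLUME])

# If verification reports problems, attempt a repair (what First Aid does).
if verify.returncode != 0:
    subprocess.run(["diskutil", "repairVolume", VOLUME])
```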

In more complex cases where more substantial damage to the partition has occurred, repair may not be possible. Simply creating a new partition will not recover the files contained within the original and could, in fact, overwrite valuable data that has become inaccessible. At this point, if recovery is essential, it becomes necessary to bypass the partition altogether. There are several pieces of software available that can perform this task. The two that I have personally used with the best results are Stellar Data Recovery ($79.99 USD) and EaseUS Data Recovery Wizard ($89.95 USD). Both applications can scan storage devices sector by sector and locate files within the damaged partition. Once located, said files can be recovered and transferred to a second external storage device. Because the software works around the standard communication between the operating system and the partition, scans and recovery periods can be quite time-consuming. Larger capacity drives containing multiple terabytes of information can require scans of more than twenty-four hours. On a more positive note, both companies allow users to try before they buy. A free trial is available for both, which will allow users to scan and preview their recoverable files before spending money—eliminating the concern of spending without any guarantee that data will be found. This method can also be used to recover files that were accidentally deleted by a user—a mistake that occurs far more frequently than actual device faults.
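
Under the hood, recovery applications like these rely on scanning the raw device for recognizable file signatures rather than trusting the damaged partition table, a technique often called file carving. The sketch below is a bare-bones illustration of that idea, assuming the failed drive has already been imaged to a hypothetical file named disk.img and searching only for JPEG markers; commercial tools add filesystem-aware scanning, fragment reassembly, and previews.

```python
# Minimal file-carving illustration: scan a raw disk image for JPEG
# start/end markers and write each hit out as a candidate file.
JPEG_START = b"\xff\xd8\xff"
JPEG_END = b"\xff\xd9"

def carve_jpegs(image_path="disk.img", out_prefix="recovered"):
    with open(image_path, "rb") as f:
        data = f.read()  # fine for a small demo image; real tools stream in chunks
    count = 0
    pos = data.find(JPEG_START)
    while pos != -1:
        end = data.find(JPEG_END, pos)
        if end == -1:
            break
        with open(f"{out_prefix}_{count:04d}.jpg", "wb") as out:
            out.write(data[pos:end + len(JPEG_END)])
        count += 1
        pos = data.find(JPEG_START, end)
    return count

if __name__ == "__main__":
    print(f"Carved {carve_jpegs()} candidate JPEG files")
```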

Mechanical drive failure occurs when there is a physical issue with a storage device. It can occur due to manufacturer error, physical degradation, or damage. When plugged into a computer, devices suffering from mechanical failure may not be discoverable at all. Though less prevalent than logical failure, mechanical failure is far more difficult to troubleshoot and best practice is to take steps to prevent it altogether. There are two subsets of mechanical failure: electrical failure and bad sector failure.

Electrical failure occurs when the drive does not receive the necessary power to run properly. Oftentimes, the device will not power on at all, though it may generate heat if it remains plugged in. If this is the case, remove all power cables immediately, as heat buildup can result in further damage and, in extreme cases, fire. Impact damage, such as a fall or drop, can disrupt electrical flow, resulting in electrical failure. It can also happen as the result of a power surge, which can burn out the circuitry of the device in a manner that prevents electricity from reaching the whole of the unit. To avoid this, it is best to always run devices in conjunction with a surge-protected uninterruptible power supply, such as APC’s Backup Battery ($169.99 USD). When using memory cards or external hard drives, damage to the connector cables, card readers, or drive enclosures can present as electrical failure. For memory cards, it is always advisable to try using a second card reader before assuming electrical failure. In the case of external drives, users with the correct tools can open a drive’s enclosure, extract the unit inside, and attempt to mount it using another enclosure or mounting system, such as the iDsonix Hard Drive Docking Station ($20.99 USD).

The second subset of mechanical failure, bad sector failure, is a worst-case scenario. It is what occurs when the portion of the drive where information is written cannot be accessed at all by the unit. It is most common in spinning disk drives, where the actuator arm inside of the drive is used to retrieve information from rapidly spinning platters. If the arm is knocked out of alignment, it may be unable to access part of the platter in order to retrieve its contents and send it to the computer. Or, if dust settles on the platter, it can act as a barrier between the platter and the arm, also interrupting communication. In extreme cases, the actuator arm may make direct contact with a platter, scratching it and permanently damaging the data in the same manner as a scratched DVD or Blu-ray. In this case, recovery of damaged sectors may be impossible. If this occurs, users may hear a distinct clicking or scraping sound coming from the drive when it is powered on. This is a screaming red flag and the device should be powered off immediately as each clicking or scraping noise is the sound of data being permanently destroyed. In the case of solid-state media, bad sectors can occur when memory cells age and fail as a result of constant use, similar to the lithium-ion batteries found in cellphones.

Unlike logical drive failure, where a variety of consumer options exist to resolve the issue and recover media, mechanical hardware failure is almost always beyond the means of a user to fix on their own. Advanced technicians utilize sterile clean room environments to perform surgery on damaged drives. Functional components are removed from the damaged devices and transplanted into new units. Dirty or corroded platters need to be chemically treated in order to clean them. The entire process is incredibly delicate, as dust or fingerprints on a physical disk are more than enough to ruin the entire transplant procedure. As a result, this process can be expensive, and estimates can vary from a couple of hundred dollars up to several thousand. Fortunately, unless the actuator arm has actually scratched a drive’s platter, spinning disk drives currently have an estimated ninety-nine percent successful recovery rate, with a success being defined as a recovery of at least ninety-seven percent of a user’s data.

The last category, complex failure, is simply a combination of any of the above errors. A drive can fall from a table, yanking it out of a computer mid-transfer in a manner that damages the partition and creates logical failure, before it crashes to the floor, knocking its actuator arm out of alignment and causing bad sector mechanical failure. At this point, the unit would require multiple troubleshooting steps in order to recover the information within. Unfortunately for most users, the outcome is the same as if the device had simply suffered mechanical damage, and the unit will almost certainly need to be sent to data recovery experts for repair.
 
In the event of a drive failure on set, Local 695 members should never attempt troubleshooting or repair procedures without first discussing the matter with their head of department or a producer and informing them of the potential cost and the risks involved. In the event of logical failure, it is possible to salvage a production’s data and save both time and money—always a good thing when negotiating your next rate. However, if mechanical or complex failure is the suspected culprit, it is probably best to turn the faulty drive over to someone with decision-making power and recommend that they consult advanced recovery specialists.

VICE

Power Play

Meticulous prep allows sound to track Writer-Director Adam McKay’s Vice

by Daron James

Adam McKay with Sam Rockwell and Christian Bale

When Production Sound Mixer Ed Novick got the call about Adam McKay’s film Vice, a fictional drama uncovering the unwavering power of Dick Cheney (Christian Bale), who served as Vice President under George W. Bush from 2001 to 2009, the decision to say yes was an easy one.

Novick previously wrapped the pilot for McKay’s Succession, HBO’s must-watch series about a filthy rich and dysfunctional family trying to keep its media empire alive. In Vice, the writer-director follows his 2015 film The Big Short, another Bale-starring allegory focusing on the 2008 financial crisis, with a biopic of Cheney from adolescence to his rise in the political ranks.

Cast and crew between takes inside the Oval Office

Sitting in the sound recordist’s Los Angeles home, Novick admits he enjoys the “freewheeling” directing style of McKay. “Adam has a way with actors where he lets them explore. As long as I have enough mics and tracks, I’m good to go,” says Novick, who’s been sliding faders since the early eighties and won an Oscar for Christopher Nolan’s Inception.

Pre-production is where Novick puts in the brunt of the work to give sound its best chance during filming. “The tech scout is the most important day of prep,” he says. “Going to look at the physical locations and finding out what the problems are in advance is going to allow you to solve them much better than you would on the day.” Besides the locations’ natural sound elements to contend with, the conversation involves other departments, especially grip and electric, and deciding where to station things like generators and cables.

McKay directs a scene as Steve Carell and Christian Bale look on

Tapping Boom Operator Randall Johnson and Sound Utility Ryan Farris, Novick shot the dialog-driven script over roughly sixty days, with production ramping up in the Jefferson Park neighborhood of Los Angeles to stand in for 1950s Casper, Wyoming, where Cheney grew up. It’s during this time that we learn how influential then-girlfriend and future wife Lynne (Amy Adams) was on Cheney.

Dick Cheney (Bale) and Bush (Rockwell) at the President’s desk.

In a scene filmed in Newhall Orchard west of Santa Clarita, Cheney is driving drunk, singing along to the Hank Locklin song “Send Me the Pillow You Dream On.” As the camera gets closer, sound drives the performance using an earwig, recording the live vocals sung by Bale with a plant mic and lav. His eventual crash and arrests force Lynne’s hand; she tells Cheney over the phone that she doesn’t want to marry a nobody, sending him on a completely different path.

Sound Utility Ryan Farris, Boom Operator Randall Johnson and Sound Mixer Ed Novick (photo courtesy of Ed Novick)

Phone conversations are a recurring theme in the film and Novick used different methods to record performances. The sound mixer will use a JK Audio BlueKeeper to connect cellphones, a Viking Ringdown and Gentner box to connect landlines, and, for playback, Soundplant, an application that allows the user to load audio into the program and trigger playback through a QWERTY keyboard. “It’s important for actors to have someone to talk to and I think they benefit more when they can have the other actor on the line rather than an AD or a script supervisor reading the material,” he admits.

Bush and Cheney at the Bush residence

As Cheney starts his political path, his first stop is the Congressional Internship Program, where Rumsfeld makes a speech to the inductees at a podium inside a large echoic room—something that doesn’t bother the sound mixer. “We’re making sound for picture,” says Novick. “The most important tool I have on my cart is the video monitor. It tells me what the shot is. If we’re in a big echoic room, we try to make it sound like what it looks like.” The sound team also took the time to make every on-camera microphone practical, getting a lot of help from Property Master Matthew Cavaliero. In addition to the practical microphone recordings, an added lav and/or overhead mic provided multiple options for post.

Cheney with wife Lynne (Amy Adams)

For most of the show, overhead wireless booms danced between multiple cameras shooting multiple angles, and lavs were placed on everyone. For Cheney, Bale’s performance featured a low voice, coupled with him burying his chin into his neck. The actor also went through nearly one hundred costume changes. Sound used either a Sanken COS-11D underneath a necktie or a Countryman B6 through a buttonhole in the absence of one.

Cheney giving a speech to the media

When George W. Bush (Sam Rockwell) entered the picture, a similar mic’ing strategy was put in place. One scene that did challenge sound was a walk and talk between Bush and Cheney at the Bush family home. The short stroll takes place right after Cheney accepts the VP position. Bush’s costume entailed a very scratchy shirt and a tight-fitting hat. Sound needed to place a lav on Bush as the frame was very wide, and Rockwell suggested putting the mic underneath the hat. “It’s something we normally don’t do. Not only because of the weight of the transmitter on the actor’s head but the lav can affect how that hat fits,” says Novick. “But since Sam suggested it, we didn’t think twice about it.”

Cheney walks the halls of the White House; Cheney and Lynne wait to be introduced as VP nominee

Another challenge was recreating the interview between ABC News journalist Martha Raddatz and Cheney that takes place near the end of the movie. The scene was recorded simultaneously in different formats: 35mm and NTSC video. “Because they were going to have video cameras as props, they went ahead and made them working cameras,” says Novick. To record audio, two Deva 16 recorders were used. One slated at 24fps for the film cameras, the other slated at 29.97fps for the video cameras. Identical audio was then passed through AES to both Deva 16’s, and two iPads were used for notetaking. A 12-channel Sonosax served as the mixer.

While the majority of scenes made the final cut, one in particular did not and it just so happened to be sound’s biggest day. It was a musical dance number set inside a cafeteria. Filmed at Santa Anita Park, the scene was between Cheney and Rumsfeld. It starts off in a cafeteria line and the two are talking about how things work in Washington. As they are about to sit down, Brittany Howard, from the musical group Alabama Shakes, stands up and starts singing about how things are supposed to work in Washington. The sequence served as a cutaway element that folded back into the story as if it never happened.

The cast looks on as George H.W. Bush loses out on his second term.

After a weekend of rehearsal and two shooting days that included dozens of extras and choreographed dances, sound provided live vocal recording via a lav hidden in Howard’s hair, playback speakers, thumpers, earwigs, and brought in a separate Pro Tools operator to record the sequence. “It’s the best scene. They just couldn’t find a place for it in the movie,” notes Novick.

For the sound team Novick says, “We’re there to exclude the sounds that don’t belong and include the ones that do belong. We know we’re creating sound for picture and every piece of audio will be manipulated in some way. It’s our job to capture the best performance and I think we did a pretty good job.”

All Photos: Matt Kennedy/Annapurna Pictures, except as noted.

First Man

Mission Critical

Ryan Gosling as Neil Armstrong in First Man.

Sound Mixes an Emotional Journey for Damien Chazelle’s First Man

by Daron James

On July 16, 1969, Apollo 11 launched from Florida’s Kennedy Space Center carrying three astronauts—Neil Armstrong, Edwin Aldrin, and Michael Collins—their destination: the moon, a mere 240,000 miles away. Four days later, at 10:56 PM ET, Neil Armstrong stepped onto the lunar surface, uttering his now-famous words to a billion people listening at home.

Inside the Apollo 11 capsule

That’s one small step for man, one giant leap for mankind.

In First Man, Director Damien Chazelle (Whiplash, La La Land) viscerally explores the story behind the mission to the moon, immersing us in the life of Neil Armstrong (Ryan Gosling)—his marriage to Janet (Claire Foy), being a father of three, and the tribulations leading up to the historic event.

Visually, Chazelle and Cinematographer Linus Sandgren leaned on a dynamic style tapping three different film formats to distinguish story elements. 16mm emphasized Armstrong’s early life and spacecraft interiors. 35mm captured their El Lago, Texas, home, NASA, and spacecraft exteriors. When the Apollo 11 door opens onto the moon, the film shifts from 16mm to 70mm IMAX.

Damien Chazelle and Costume Designer Mary Zophres review mock-ups

“The film was broken into two halves,” says Production Sound Mixer Mary H. Ellis CAS, known for her work on Prisoners and Baby Driver. “The first half was Armstrong’s life on the ground and the second half was the spacecraft work and moon landing.” Along with Ellis were Boom Operator James Peterson, Sound Utility Nikki Dengel, and Sound Playback Alexander Lowe and Raegan Wexler.

An early rehearsal before production introduced the shooting style to the sound crew. Chazelle, wanting a realistic portrayal, proposed that all camera movement—except when on the moon—be handheld, cinéma vérité style.

One rehearsed scene intimately placed Armstrong’s two-year-old daughter Karen (Lucy Stafford) in his arms, hugging her as he circled. Only the two actors, director, cinematographer and Peterson were allowed in the room. To record audio, Peterson was given a Sound Devices 788T to place around his neck to track the rehearsal, which ended up in the final version of the film. “We put a lav pretty much on everyone all day every day, but we never wired Lucy. Damien didn’t want her being aware of any of us,” notes Ellis. “The rehearsal helped a lot. It allowed James to get used to Linus’s body language operating the camera as he would spin 180° and widen out the lens often.”

Cinematographer Linus Sandgren on a catwalk with actor Ryan Gosling

Shot primarily in Atlanta, Georgia, on practical locations and stages, production did travel to Edwards Air Force Base in California to recreate Armstrong’s X-15 flight take-off and landing that opens the film, and spent another day at the Johnson Space Center in Houston.

The busiest days for sound took place on the mission control set, a vast replica of the Johnson Space Center by Production Designer Nathan Crowley. The complex scene brought us inside the command center as the astronauts rocket toward the moon. As many as twenty-three actors needed to be wired at once in order to cover the dialog. Ellis brought in Production Sound Mixer Michael P. Clark (Stranger Things, The Walking Dead) to help head the task.

The crew filming the Apollo 11 walk

“I insisted on a rehearsal two days prior to shooting as there would be a limited amount of time on production days,” says Ellis. “We wanted to just slap the mic on the actors and find out how everything was.” It allowed the sound mixer to create a seating chart of the actors where Ellis mixed the top eight and Clark took the other fifteen.

“I knew once Damien got going, he would upgrade non-speaking parts to speaking ones, so I warned everyone about it. We had to be careful when stealing a microphone from someone to be sure they were not going to play in a specific part,” Ellis continues. “All the actors were fitted with a Sanken COS-11D. Each wire had its own ISO track and the mix was kept consistent no matter what happened on the page.”

Filming outside the famous X-15 flight

To record the dialog and for communication between the director and actors, an intricate setup was configured that included off-camera readers. “Alex [Lowe] was fundamental in all of this,” says Ellis. Lowe created three different mix options to route through. Sound also accounted for each actor’s preference in terms of who they wanted to hear and what they wanted for playback. For instance, Ben Owen, who played John Hodge, wanted to hear all twenty-three microphones at once in his earwig to feel the sense of urgency and chaos in the room.

Recording dialog inside each spacecraft was a different technical story. An early concern for sound was the multitude of spacesuits and helmets as wardrobe. The film moves from 1961 through 1969 and details five missions, including the X-15 flight, Gemini V, Gemini VIII, Apollo 1, and Apollo 11. Costumer Mary Zophres researched and duplicated each look, even creating two suits for Apollo 11, one for each actor and another for their stunt double.

Damien Chazelle inside mission control set

“In prep I spoke with Whit Norris and Mark Weingarten, people who had done helmet movies before to find out what they’ve accomplished, but I learned they didn’t have to worry about the period piece part of it like we did. We didn’t have as many wiring options so we planned different strategies for when we could get our hands on the helmets,” says Ellis. “We ended up buying four new mics and had a quick release made right at their neckline because the minute the actors could, they would take off their helmets.”

The spacecraft modules were built to actual size. They were tiny, and once an actor was inside, it was impossible to adjust the wireless microphone. “Our other concern was about airflow and how loud it was going to be inside the helmet. You have to have enough air for the actors so they don’t pass out but it can lead to condensation,” says Ellis.

“Instead of a regulated system, they had an air compressor pushing air into the helmet. It was all or nothing and very loud,” notes Lowe. “Mary sent me separate feeds that I gated open when they talked or reduced air noise. I sent the actors a feed of their own off-camera reader, the feed of the other actors, but not themselves, and mission control comms to all. When Damien talked, it shut down every feed, including their own so they could hear his direction. I also routed the First AD’s voice of god to any one of them if needed. All this was done each day. I had to break it all down every night and set it up again the next day. It took two hours.”

Earwigs were not used inside the helmets because if they went out, 108 dB of white noise would blast into the actor’s ear. Instead, Comteks were hardwired inside a prop earwig and set to the earwig frequency, a surveillance-style system that sound had complete control over. “The great thing about this was the batteries lasted all day as the actors could be in the capsules for up to seven hours. Also, I could change a battery without taking off the suit in case one failed, though that never happened,” says Lowe.

Additionally, the original launch day recordings from NASA came into play on set when actors wanted to listen to the delivery. “Ryan was very particular about mimicking Neil’s inflections, specifically when we were on the moon,” notes Lowe. “I fed Ryan a recording of Neil and he would work out his moves with the dialog.”

On set with Sound Playback Alexander Lowe

When it came to finding the right mic placement, Gosling was all about experimenting and finding the right levels. “Ryan doesn’t like to do any ADR, so we needed to find the right balance between the air level and audio level so he wasn’t looping two months of capsule work,” says Ellis. Another point of emphasis was placing plant mics outside the gimbals, recorded as ISO tracks for post, as the rigs got creakier.

For the moon landing, production took over the Vulcan Rock Quarry south of Atlanta. The shoot took place outdoors in December and at night. Sound approached the work utilizing wires instead of booms to give the actors solitude. “It was a real internal moment for them so we wanted to give them as much space as possible,” says Ellis.

Reflecting on the show, Ellis admits Sandgren gave them some challenging situations. “He would always come over to say sorry but he didn’t have to. He had an amazing team and we were able to have this really great dance together thanks to the crew I had around me.”

All photos: Daniel McFadden/Universal Pictures and DreamWorks Pictures

The Way We Were: Mixers Past & Present (Part 2)

The 1970’s

While the 1960’s saw some further advances in the techniques of both production sound recording and re-recording, it wasn’t until the 1970’s that some of the nascent technologies developed for music recording began to make inroads into the film industry. Although stereo and surround sound were nothing new (going all the way back to the early 1950’s), the films released in either four-track 35mm Cinemascope mag or six-track 70mm mag were limited to major roadshow titles like 2001: A Space Odyssey and Woodstock. Prints were extremely expensive, and the number of theaters equipped to run either 35mm mag or 70mm was typically limited to major cities. And even with the advent of these technologies, theater loudspeaker systems hadn’t really evolved much past the technologies of the late 1940’s and early 1950’s. Despite the extraordinary quality of 70mm magnetic, the Academy curve was still the norm, with its severe rolloff of high frequencies.

Other changes were beginning to take place in the 1970’s as well. Audiences had become more sophisticated in relation to sound. A new generation of music listeners had become accustomed to high-quality home sound systems, FM radio began to take off, the quality of the compact cassette improved, and those with the means invested in recorders to listen to four-track reel-to-reel releases. Audiences of this generation were not going to be satisfied with the sound of a theater system developed two decades earlier. Commensurately, theater attendance was in decline, and studios were looking for ways to attract a younger audience.

It was against this backdrop that a number of advances in film sound took place. Most notable among these was the introduction of Dolby noise reduction in the post-production stages (first used on Stanley Kubrick’s A Clockwork Orange in 1971). While the film was originally released in Academy mono (due primarily to Kubrick’s concern regarding how many theaters would be able to play stereo optical), it was clear from the tests done at Elstree Studios that the quality of sound could be markedly improved if the process could be applied to the optical track itself. Further development was done at Dolby Labs over the next few years, which culminated with the release of Lisztomania in 1975 and Star Wars in 1977.

Also notable was the introduction of Sensurround by Universal Studios, which was first used for the movie Earthquake in 1974.
And perhaps most important in the realm of production sound recording, 1975 marked the year that Robert Altman’s movie Nashville was released, significant both for its use of multitrack dialog (with stellar work by mixer Jim Webb) and for its live multitrack music recording (utilizing a remote truck built by the author).

While multitrack dialog recording was not exactly new per se (having been used for the production of three-channel Cinemascope films), the use of multitrack for production sound would mostly be limited to Robert Altman films for nearly two decades. It did, however, help to spur a move to a more sophisticated approach to production sound, which was still largely done on mono Nagra recorders (despite the introduction of the stereo Nagra in 1970).

With the introduction of op-amp technologies, mixer design philosophy began to change significantly during the 1970’s. These advances, along with more sophisticated printed circuit board designs and smaller components, made possible more compact mixers with less current draw than their predecessors. They also heralded the adoption of a modular approach to console design, with components separated into input modules, master modules, buss assignment modules, and monitor modules. While these approaches were at first destined for the music and broadcast world, it wasn’t long before they were adopted by manufacturers engaged in designing mixers for the film industry. This was due in no small part to the channel counts of film dubbing stages, which were beginning to increase with the advent of Dolby Stereo in 1975.

The same approach was also used for smaller production sound mixers, with more limited facilities. The 1970’s would also mark an era that would see a more ready adoption of European film sound equipment by US sound mixers. Although companies such as Sennheiser and Neumann had made inroads into the United States with their microphones (primarily for music recording), and Nagra with portable recorders, up until the seventies, if you walked into most film sound operations, nearly everything you saw was of US manufacture.

The Sela 2880-BT mixer, introduced in 1967, paired with a Nagra III recorder. The industry standard for many years. Photo courtesy Film Sound Sweden

In the early seventies, there were still not many choices when it came to lightweight production mixers (the Nagra BM-T and Sela 2880-BT notwithstanding). For stage work, it was still common in Hollywood to see mixers made by both Westrex and RCA dating back at least a decade (with many custom variants) used on set. As the move to location shooting became more prevalent, sound mixers started looking for alternatives to the bulky production boards typically used for stage work.

However, there were some alternatives for those who wanted to take a bit different approach, which in many cases involved doing a bit of customizing. Notable among these were the following:

  • The Sennheiser M101 mixer, a four-input, mono-output mixer with built-in battery supply and T power, which was first introduced in 1969, but took a little time to catch on in the US market. Some enterprising individuals would also customize these boards into a six-input configuration.
  • The Stellavox AMI mixer, a five-input, two-output mixer introduced in 1971. Designed by former Nagra engineer Georges Quellet, this was intended as a companion piece to the Stellavox SP7 recorder.
  • The Audio Developments AD031 “Pico” mixer, which could be supplied in a few different configurations, and utilized a 24-volt power supply.
  • The Neve 5422 “suitcase mixer” brought to market in 1977, and intended primarily for use in location music recording and broadcast.
  • The Studer 169/269 series mixers, introduced circa 1978, and which could be ordered in a variety of configurations. Intended primarily as a location broadcast console for the European market, this console could be either AC- or battery-powered. While prized for its sonics by music engineers, it was only used by a handful of production mixers in the States (due in no small part to its size and weight).       
The Neve 5422 “suitcase mixer.” This mixer was the first entry that Rupert Neve made into the “portable” market. Featuring classic Rupert Neve mic preamps and EQ, it was prized by many music and production mixers for its sound.

As amplifier technology evolved and components became smaller, it allowed designers the luxury of adding more features, including three-band equalization, better high-pass filters, better mic preamps, and more sophisticated signal routing. It also marked the move away from the traditional four-input mixer, which had dominated production sound recording for nearly four decades. Still, production sound equipment had to be portable, which limited the sort of features that would be found standard on even fairly rudimentary re-recording consoles of the period.

A custom fifteen-input eight-buss mixer conceived by Jim Webb, and built by Jack Cashin. Designed specifically for eight-track recording on Robert Altman films. Note the individual VU meters for the iso outputs. Photo courtesy James Webb

A very few ambitious sound mixers also took it upon themselves to build or commission mixers to their liking from scratch, or to perform significant modifications to mixers that were designed for other purposes.

The 1970’s also saw an extensive adoption of straight-line faders, which had moved from wire-wound designs to carbon composition resistive elements. While early straight-line faders were prone to problems when used under unfavorable conditions, the new faders were both smaller and more reliable. In addition, sound mixers who began their careers in music recording or post production were more receptive to using them for production work. By the end of the decade, nobody except Sela was manufacturing location mixers with rotary faders.

The 1980’s


Despite the fact that the seventies saw a host of developments in film sound recording, it didn’t translate into very many changes in the sound mixing techniques and equipment used for production work. Most of the advances made in the previous decade were in the area of re-recording, as well as the advent of Dolby Stereo on optical tracks (Stereo Variable Area), which allowed studios to release titles in L/C/R/S stereo without the need for four-track magnetic release prints. Since the optical tracks could be printed and processed on standard laboratory equipment, it greatly reduced the costs associated with making a stereo release.

As such (with the notable exception of Robert Altman), most production sound packages still consisted of a four- or six-input mixer, perhaps mated with a Nagra stereo recorder with Dolby noise reduction, and four channels of wireless. And for most productions, this was sufficient. Even with the introduction of the Sony PCM-F1 in 1981 and DAT in 1987 (both being two-channel formats), there was no compelling reason to change the basic approach used for production recording.

Sonosax SX-S mixer, designed by Jacques Sax, and introduced in 1983. Available in six, eight, or ten inputs, this mixer became a favorite of sound mixers who needed a small, lightweight mixer. Photo courtesy Sonosax

While some sound mixers (including this author) opted to use somewhat larger consoles intended for broadcast and remote music recording, there weren’t really many options available to the industry until the introduction of the Sonosax SX-S in 1983, and the Cooper CS-106 mixer in 1989. Like most equipment destined for the highly specialized film market, these mixers were designed by individuals who had a dedication to producing high-quality sound recording equipment specifically for the film industry.

The Sonosax SX-S mixer was the brainchild of Swiss engineer Jacques Sax, who had begun his career as a live sound mixer. Frustrated with what was available on the market at the time, he took it upon himself to design something that was more to his liking, beginning with the SX-B mixer in 1980, and culminating with the current SX-ST series consoles.

The Cooper CS-106 mixer, designed by Andy Cooper, and introduced in 1989. Could run off of internal batteries. Featured both 12-volt T power and Phantom power for mics. Two-stage high-pass filter. Comprehensive talkback and monitoring facilities.

The Cooper 106 was designed and built by Andy Cooper, who, besides being a bright designer, was also cognizant of the particular needs of the film production market. So instead of designing something that he “thought” represented the needs of production mixers, he actually went out and took the time to talk with notable mixers of the era (a lesson that some manufacturers still have yet to learn).

The Cooper CS-106 marked a fairly significant departure from anything else available at the time. With straight-line faders, the option for seven inputs, three-band EQ, a lightweight chassis, DC powering, sophisticated monitoring and signal routing functions, the Cooper mixer embodied much of what production mixers had been looking for at the time.

Film sound being a very small slice of the overall worldwide audio market, larger manufacturers simply weren’t interested in developing a highly labor- and design-intensive console for a small market segment, when there were much bigger rewards to be reaped in the studio, sound reinforcement, and broadcast markets. Many of the consoles built by Sonosax and Cooper Sound are still in use nearly three decades later, which attests to Jacques and Andy’s strengths as careful designers who understood the rigors of film production.

The Audio Developments AD031 mixer was one of the first products introduced by this venerable British firm. Available in a variety of input configurations, it became very popular in the UK.

There were, of course, other options available in the eighties. Audio Developments continued with their line of portable mixers, which included the AD 062 and AD 075 series. Sony actually introduced a twelve-input mixer, the MXP-61, which had features such as 12-volt T-powered mic inputs clearly aimed at the film production market, but it didn’t generate a lot of sales.

There were also some entries in the portable “bag rig” market, most notably by the British company SQN, which introduced the SQN-4S mixer.

Being the highly individual craft that production recording is, many sound mixers weren’t content with what was offered on the commercial market, and opted to design something that suited their personal approach to production recording, or make extensive modifications to stock consoles. Not everyone who sat at a mixing board had the kind of electronic background to undertake this sort of task, however.

Highly customized mixer designed and built by Bruce Bisenz in the 1980’s, utilizing Nagra mic preamps. Note the modified Altec graphic equalizer, with one octave band intended for dialog EQ, and the group buss assignments. Photo courtesy Bruce Bisenz

An example of the Studer 169/269 series consoles, introduced in 1977. Conceived originally as an all-around broadcast mixer for European radio and television broadcasters, it also became a favorite of many music recording mixers, and eventually found its way into film production work.

Among the few who took on this challenge during the seventies and eighties were David Ronne, an Academy Award-winning production mixer (who also designed the RollLogic remote control); Bruce Bisenz, who built a highly customized console from the ground up; and Jim Webb, who commissioned a console to his liking that was built by Jack Cashin. The list goes on…

There were also sound mixers such as Nelson Stoll, Ray Cymoszinski, Michael Evje, and others who decided they loved the big sound of the Neve consoles, and took it upon themselves to modify the boards to their liking for film work. Others (including the author) opted for the modular configurations offered by the Studer 169/269 series consoles.

The important thing to note in this regard is that every one of these sound mixers had a particular approach to the challenges of doing production sound under all kinds of conditions, and wanted a console that would give them the most flexibility and best sound quality for their style. In a world that has now become defined by the stock offerings of various manufacturers, the “signature sound” that many mixers sought to achieve during this period has been lost.

Next up, “The Nineties.”
 –Scott D. Smith CAS

With sincere appreciation to Jeff Wexler CAS for invaluable contributions in style and content.

The Rise of Server-Based Recording

by James Delhauer

On a set, the job of the person tasked with acquiring the content that is shot throughout the day is incredibly stressful. Whether we’re discussing the tape operators of days gone by or the most modern media recordists, there are challenges that have stood the test of time. Somewhere between hundreds of thousands and hundreds of millions of dollars are spent assembling the production. Countless man-hours contribute to making it the very best that it can be. Literal blood, sweat, and tears are spilled to create what we all hope will be a veritable work of art. Then, after all of that, everything falls on the shoulders of the one person tasked with handling the media. They are handed the very delicate assets that have been created throughout the day, assets that represent the sum total of the production as a whole. Just about anything can go wrong. Data can be corrupted. Hard drives can be damaged. Video tape can tear. Fortunately, these risks are being minimized by the advent of a new method of media acquisition: server-based recording.

Though different productions utilize a vast array of workflows, every single one since the Roundhay Garden Scene was first filmed in 1888 has come down to the media. And every single production needs someone to manage it. In today’s digital era, the most common workflow goes a little something like this. Cameras or external recorders capture video and audio data to an internal storage device of some sort. When that unit is full, it is ejected and turned over to a media manager. The production continues with another memory card while the media manager takes the first one and offloads, backs up, and verifies the files on it. This is usually done with an intermediate program such as Pomfort’s Silverstack or Imagine’s ShotPut Pro—programs that can do file comparisons to ensure that what was on the source media is identical to what ends up on the target media. When all of that content is secured on multiple external hard drives, the original memory card is returned to the production so that it can be wiped and reused. Rinse and repeat. At the end of each day, the media manager turns over at least one set of drives containing the day’s work to someone who will bring it to a post-production facility.

There, the work is moved from these temporary storage drives onto work servers, where assistant editors can begin their work.
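Under the hood, that verification step is essentially a checksum comparison: hash the source file, copy it, re-read the copy, and flag any mismatch. The sketch below is a minimal, generic illustration of the idea in Python; the paths are hypothetical, and dedicated tools like Silverstack and ShotPut Pro add faster hash algorithms, MHL reports, and far more robust error handling.

    import hashlib
    import shutil
    from pathlib import Path

    def checksum(path: Path, chunk_size: int = 8 * 1024 * 1024) -> str:
        """Return an MD5 digest of a file, read in large chunks."""
        digest = hashlib.md5()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def offload_and_verify(card: Path, targets: list[Path]) -> None:
        """Copy every file on the card to each target drive, then re-read and compare."""
        for src in card.rglob("*"):
            if not src.is_file():
                continue
            source_sum = checksum(src)
            for target in targets:
                dst = target / src.relative_to(card)
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)
                if checksum(dst) != source_sum:
                    raise IOError(f"Checksum mismatch on {dst}")

    # Hypothetical example: one camera card copied to two shuttle drives
    # before the card is handed back to be wiped and reused.
    # offload_and_verify(Path("/Volumes/A001"), [Path("/Volumes/SHUTTLE_1"), Path("/Volumes/SHUTTLE_2")])

Only after every copy verifies cleanly is the original card safe to hand back for reuse.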

While prevalent, this workflow does come with a few inherent drawbacks. Most notably, the process is both fragile and time-consuming. Digital storage, no matter how sophisticated, is vulnerable to failure, damage, or theft. When the media manager receives a card with part of the day’s work on it, that card is often the only raw copy of the work in existence. Careers could end in a heartbeat if anything were to happen to it. So it becomes his or her job to create multiple copies. Unfortunately, the time during which data transfers from one storage system to another is the time at which it is most vulnerable. An accidentally yanked cable or sudden power surge is all it takes to corrupt the open files as they are transferring over. This vulnerability is compounded by the fact that transferring files is time-consuming and becoming ever more so. As our industry continues to push the boundaries of resolution, color science, and bit depth, video files are getting bigger and bigger. As such, they require more time to offload, duplicate, and verify, meaning that the period of vulnerability is growing longer.
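To put rough numbers on that window of vulnerability, consider a single card offloaded and verified sequentially. The figures below are illustrative assumptions only, not benchmarks of any particular camera or drive.

    # Back-of-the-envelope math; card size and drive speed are assumed values.
    card_size_mb = 1_000_000      # a 1 TB camera card
    drive_speed_mb_s = 250        # sustained throughput of a shuttle drive
    copies = 2                    # two verified backups before the card is wiped

    copy_minutes = card_size_mb / drive_speed_mb_s / 60 * copies
    verify_minutes = card_size_mb / drive_speed_mb_s / 60 * copies  # re-read each copy to hash it

    print(f"~{copy_minutes:.0f} min copying plus ~{verify_minutes:.0f} min verifying")
    # Several hours of handling before the card can safely be wiped and reused.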
 
But emerging technologies are creating new workflows that circumvent these drawbacks. Among the most promising is server-based recording.

Rather than relying on disparate components that must be passed back and forth between different individuals on a set, server-based recording allows productions to streamline their workflows and unify everything through one interconnected system. All of the components can be plugged into a single network switch and communicate with one another directly. Cameras and audio devices send uncompressed media directly into the switch. The network feeds them into a digital recording server (such as Pronology’s mRes or Sony’s PWS 4500), which takes the uncompressed data and encodes the signals into ready-to-edit files. These files are then sent back into the network, which in turn sends them to any desired network-attached storage devices (such as SmallTree’s TZ5 or Avid’s ISIS & NEXIS platforms). The moment the recordist hits the Stop button, he or she can open the files on a computer and bring the newly created clips into a nonlinear editing application in order to assess their viability. This method eliminates the intermediate process of utilizing memory cards, transfer stations, and shuttle drives in favor of writing directly to external storage and thus removes both the time and risk associated with manual offloading. It also offers instant peace of mind to both the person handling the media and the production as a whole that the work that has been done throughout the day is, in fact, intact and ready for post-production.

And this is only the most basic of network-based workflows.

By utilizing advanced encoder systems, such as the aforementioned mRes platform, multiple tiers of files can be distributed across multiple pieces of network-attached storage. This gives the recordist the ability to simultaneously create both high-quality and proxy-grade video files and to make multiple copies of each in real time as a scene is being shot. This eliminates the potential need for time-consuming transcodes after the fact and, more importantly, this instant redundancy removes the key period of danger in which only a single fragile copy of the production’s work exists. As a result, recordists can now unmount network drives mere minutes after productions wrap and turn them over for delivery to post with one hundred percent certainty that there are multiple functioning copies of their work from the day. There is no need to spend several hours after wrap each day offloading cards and making backups.
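Conceptually, the encoder is performing a fan-out: one incoming signal, several simultaneous quality tiers and destinations. The sketch below is only a schematic illustration of that pattern; the mount points and the encode_* helpers are hypothetical placeholders, and a real encoder such as mRes performs its ProRes or DNxHD encoding in dedicated hardware rather than in a script like this.

    from pathlib import Path
    from typing import Iterable

    # Hypothetical NAS mount points; a real facility would define its own.
    HIGH_RES_TARGETS = [Path("/mnt/nas_a/online"), Path("/mnt/nas_b/online")]
    PROXY_TARGETS = [Path("/mnt/nas_a/proxy")]

    def encode_high_res(chunk: bytes) -> bytes:
        return chunk                      # placeholder for a full-quality encode

    def encode_proxy(chunk: bytes) -> bytes:
        return chunk[: len(chunk) // 10]  # placeholder for a low-bitrate proxy

    def record(clip_name: str, chunks: Iterable[bytes]) -> None:
        """Write every incoming chunk to all destinations as it arrives, so
        redundant high-res and proxy copies exist the moment recording stops."""
        handles = []
        for root in HIGH_RES_TARGETS + PROXY_TARGETS:
            root.mkdir(parents=True, exist_ok=True)
            handles.append((root, (root / f"{clip_name}.mov").open("wb")))
        try:
            for chunk in chunks:
                hi, lo = encode_high_res(chunk), encode_proxy(chunk)
                for root, handle in handles:
                    handle.write(lo if root in PROXY_TARGETS else hi)
        finally:
            for _, handle in handles:
                handle.close()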

Or, to take things a step further, productions can take advantage of the inherent beauty that is the internet to skip the need for the shuttle process altogether. It is possible to create files in a manner that sends them directly to a post-production edit bay. With low bitrate files or a high-capacity upload pipeline, recordists can set up their workstations using transfer clients (such as Signiant Agent or File Catalyst) to take files that are created in a particular folder on their network-attached storage and automatically upload them to a cloud-based server, where post-production teams can download them for use. This process has the distinct advantage of sending editors new files throughout the day in order to accommodate a tight turnaround.
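Dedicated transfer clients use their own accelerated protocols and management layers, but the underlying watch-folder pattern is easy to picture. As a rough, hypothetical sketch (the folder, the bucket name, and the choice of an S3-compatible destination are all assumptions, not how Signiant or File Catalyst work internally), a recordist’s workstation could poll the NAS and push each finished clip to cloud storage:

    import time
    from pathlib import Path

    import boto3  # assumes an S3-compatible bucket as the cloud destination

    WATCH_DIR = Path("/mnt/nas_a/proxy")   # hypothetical watch folder on the NAS
    BUCKET = "dailies-uploads"             # hypothetical bucket name
    s3 = boto3.client("s3")
    uploaded: set[Path] = set()

    def is_stable(path: Path, wait: float = 2.0) -> bool:
        """Treat a clip as finished once its size stops changing."""
        size = path.stat().st_size
        time.sleep(wait)
        return path.stat().st_size == size

    while True:
        for clip in WATCH_DIR.glob("*.mov"):
            if clip not in uploaded and is_stable(clip):
                s3.upload_file(str(clip), BUCKET, clip.name)
                uploaded.add(clip)
        time.sleep(10)  # poll the folder every few seconds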

Conversely, for productions where the post-production team may be located on site, a hard line can be run from the recording network directly to the edit bays. By assigning the post team’s ISIS server (or comparable network attached server) as a recording destination, editors gain access to files while they are recording. In cases such as this, the production may opt to use “growing” Avid DNxHD files. This format takes advantage of Avid’s Advanced Authoring Format in order to routinely “close” and “reopen” files, allowing editors to work with them while they are still being recorded. For productions with incredibly tight turnarounds, this is the single fastest production to post-production workflow possible.

All of this makes server-based recording an incredibly versatile tool. However, it is not without its limitations. At this time, network-based encoders are limited to encoding widely available intermediate or delivery codecs, such as Apple ProRes or Avid DNxHD. Without direct support from companies with their own proprietary formats, they cannot output in formats such as REDCODE or ARRIRAW. Furthermore, setting up a network of this nature requires persistent power and space. It is also worth considering that, like most new technologies, server-based recording often comes with a hefty price tag. These limitations make the process unsuited for productions hoping to take advantage of the full range of Red and Arri cameras, productions in remote or isolated locations, and low-budget productions.

So when is it most appropriate or necessary to take advantage of this emerging technology? While it can be of use in a single-camera environment, this method of recording truly shines in live or the archaically termed “live to tape” multi-cam environments, where anywhere from three to several dozen cameras are in use. After all, if a show records twelve cameras for one hour, the media manager suddenly has to juggle twelve hours’ worth of content. It is much easier to write all twelve to a network-attached storage unit than to offload all twelve cards one by one. Also, because network-attached storage drives can be configured to store hundreds of terabytes, the process is ideally suited for live events or sports broadcasts, where stopping and starting the records risks missing key one-time-only moments. But above all, it is best used when time is critical. The ability to bring files into a nonlinear editing system as they are being recorded and work in real time is a game changer for media managers, producers, and editors alike.
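The arithmetic behind that juggling act is sobering. Assuming, purely for illustration, that each camera records ProRes 422 HQ at 1080p29.97 (roughly 220 Mb/s), an hour of twelve-camera coverage adds up quickly:

    # Rough sizing only; the 220 Mb/s ProRes 422 HQ figure is an illustrative assumption.
    cameras = 12
    hours = 1
    mbit_per_s = 220

    gb_per_camera_hour = mbit_per_s / 8 * 3600 / 1000        # about 99 GB
    total_tb = gb_per_camera_hour * cameras * hours / 1000   # about 1.2 TB per copy

    print(f"{gb_per_camera_hour:.0f} GB per camera-hour, ~{total_tb:.1f} TB for the hour")

Writing that terabyte-plus straight to network storage as it is shot, rather than offloading a dozen cards afterward, is exactly the time savings this approach promises.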

This technology is already revolutionizing the way television productions approach on-set media capture and it is still in its infancy. It will continue to grow and evolve. Given time, it is my sincere hope that it will find its way into the feature film market and become more practical for smaller productions to adopt. For the time being, Local 695 Video Engineers should begin to take note of what is available and familiarize themselves with the technology so that they are prepared to take advantage of the technology in the future.

BlacKkKlansman

Truth and Action

Laura Harrier as Patrice and Corey Hawkins as Stokely Carmichael.

Sound mixes a moving palette for Spike Lee’s new joint

by Daron James

The Civil Rights movement in 1950s and 1960s America was a tinderbox ready to explode; in the ’70s, it continued with the emergence of the Black Power movement. The latter is the historic setting for BlacKkKlansman, a taut sociopolitical film from director Spike Lee.

Based on the book Black Klansman by Ron Stallworth, the first African-American detective in the Colorado Springs Police Department, the adapted screenplay follows the true story of Stallworth’s (John David Washington) infiltration of the Ku Klux Klan and his eventual takedown of an extremist hate group.

Jasper Pääkkönen, Director Spike Lee, Ryan Eggold, and Adam Driver on set.
Ashlie Atkinson (plays Jasper’s wife) and Lee.

Production Sound Mixer Drew Kunin, Boom Operator Mark Goodermote, and Utility Marsha Smith took on the project, a paced schedule lasting from October to December 2017 that included a glut of filming locations in New York and a brief VFX jaunt to Colorado to blend Centennial State exteriors into the Big Apple.

The movie opens in black-and-white, featuring Alec Baldwin as a bigotry-spewing racist; a 16mm projector beams images of his message overlapping his face and onto the wall behind him—the noise of the machine pierces through the soundscape. On set, everything was done live with no visual effects. It meant the clacking of the projector would compete with Baldwin’s dialog. “I didn’t know how loud Alec was going to be, which turned out to be very,” says Kunin. “I was a bit surprised when he launched into it—it was a little bit of a wild ride to keep his level from overloading, but we were lucky he had enough volume that it overrode the projector.” Sound ran a Schoeps CMIT-5U as boom and placed a lav for dialog.

Kunin uses a mix of DPA, Sanken, and Countryman lavs with Lectrosonics wireless on projects. His general mantra is to wire only when necessary; however, on BlacKkKlansman, they went ahead and wired everyone to be safe. Multiple roaming cameras shooting wide and tight coverage were an acting catalyst, and since this was also the team’s first project with Lee, they didn’t want to interrupt the rhythm by having to add a lav after the fact.

We first meet Stallworth working in the file room of the police department, but he soon receives his first undercover assignment: to attend a lecture delivered by Black Panther Party philosopher Kwame Ture, aka Stokely Carmichael (Corey Hawkins). It’s here he meets Patrice (Laura Harrier), a fiery activist Stallworth warms to. Inside the assembly hall, hundreds of extras look on, shouting out praise during Ture’s moving speech.

John David Washington as Ron Stallworth with Laura Harrier.

Sound couldn’t place a microphone overhead of Ture because of the wide camera shots. Instead, the period microphone at the lectern was made live and an additional plant mic was hidden. Another boom captured the reactions of the crowd.

For that scene in post, Re-recording Mixer Tom Fleischman, a longtime collaborator of Lee’s, augmented the speech with a bit of reverb to add to the size of the room and layered dozens of responses from the crowd. “Drew turned in production tracks that were really well recorded. When you have a good track, it makes my job easier,” says Fleischman.

(l-r) Director Spike Lee, actors Topher Grace and Adam Driver on the set of Spike Lee’s BlacKkKlansman, a Focus Features release. Credit: David Lee/Focus Features
Duke and Flip meet for the first time.

The challenge of the Ture speech scene in post was making sure the audience callouts felt like they were actually in the room and not ADR. Mixing in 7.1 made it easier for Fleischman; then it became a matter of going through it to make it sound natural.

“It could have easily become quite a noisy scene, but my philosophy at any given moment in any film is that there is one sound that needs to be in the forefront,” says Fleischman. “We needed to balance the scene in a way that made sure every line was intelligible so that the underlying track sounded real to the audience and not like they were hearing something in a vacuum. The whole idea for me is story and keeping the audience involved and not letting them be distracted by any sound element.”

If you listen closely to the scene, you will hear a “boom shakalaka” from the crowd. That’s actually Lee’s voice. During the mix, the director asked Fleischman to grab a mic so he could record the audio, and they found spots for it in the scene.

David Duke (Topher Grace) welcomes a new chapter of members.
Ron (John David Washington) spies on Felix’s residence.

After that initial assignment, Stallworth starts his own undercover operation after stumbling across a newspaper ad from the Ku Klux Klan seeking new members. Stallworth calls the number on the notice and David Duke (Topher Grace), the leader of the hate group, actually picks up the line. With Stallworth disguising his voice, Duke thinks he is a racist white man and invites him into the inner circle.

To shoot these scenes, Production Designer Curt Beech built two sets outfitted with period-appropriate props on NY stages so production could shoot Stallworth and Duke simultaneously. Sound recorded both actors’ dialog simultaneously as well. “We placed mics overhead on both ends, plus we tapped the telephone line on a separate track for editorial to play with,” says Kunin, whose cart setup is based around a Sonosax mixer and an Aaton Cantar X-3 recorder. All Fleischman had to do was “filter down the track a little” to make it sound more like it was coming from a phone.

Stallworth, now a member of the KKK, has one problem: he’s black. To be the face of the operation, he asks fellow detective Flip (Adam Driver) to pose as Ron, where he meets the leader of the local chapter, Walter (Ryan Eggold). Stallworth asks Flip to wear a wire and Kunin was able to find a few in the style of the old BCM 30 to put on camera, which added to the complexity of lav placement.

Ron (John David Washington) gives Flip (Adam Driver) his official member card of the KKK.

Costumes from designer Marci Rodgers posed a different challenge as they celebrated the fashion of the time. Stallworth wore lush colors, jazzy prints, and mixed textures of denim, velvet shirts, silky button-downs, suede vests, and leather jackets. To lav Washington, center chest became the default position to avoid material movement leaking into the track.

Patrice was dressed in long leather jackets, dark turtlenecks, mini-dresses, and knee-high boots, among other looks. “Laura was a little tricky to mic,” admits Kunin. “Not because of the material she was wearing but because it was hard to hide a mic. Marsha worked with wardrobe to sew in special compartments to place the bodypacks in.”

Sound had to pay close attention during the nightclub scenes where Ron and Patrice go to hang, talk, and dance. For the dialog to be mixed cleanly, Kunin dropped the song out to capture the lines and used a thumper to keep the beat. Music is a big part of Lee’s storytelling. Besides the musical soundtrack that includes “Too Late to Turn Back Now” by Eddie Cornelius and “Oh Happy Day” by Edwin Hawkins, the director tapped Terence Blanchard (Malcolm X, 25th Hour) for the score.

“When it comes to writing music, I let the film tell the story,” says Blanchard. “The first thing I thought about when I saw a cut was Jimi Hendrix playing the National Anthem on guitar. Being an African-American, you’re constantly bombarded with issues of bigotry every day. This story is a reaffirmation of what we’re going through. Jimi was a primal scream for all of us, so it’s why the electric guitar plays a prominent role in the sound of the score.”

Flip, now deep in the local Klan chapter, attends meetings at the home of Felix (Jasper Pääkkönen), a follower who wants to put words into action. It’s here the undercover detectives learn Felix is plotting to spoil another activists’ meeting, one that involves a character played by Harry Belafonte, an icon of the Civil Rights movement.

Felix’s residence, a practical location found upstate in Ossining, New York, was small, and scenes were filled with multiple actors and multiple cameras. At times, squeezing in a boom operator was not possible, especially when Felix puts Flip through a lie detector test in a broom closet of a room. It meant sound had to rely on plant mics and lavs to cover the dialog. In other tight locations, gaps in the wall allowed the boom to be placed in the room, but not Goodermote.

Leading up to the climax of the movie, the picture intercuts two story lines. On one side, you have Flip being initiated into the Ku Klux Klan, where a crowd gleefully cheers during a screening of 1915’s The Birth of a Nation. On the other, a group from Patrice’s African-American student union peacefully sits around Belafonte as he delivers the most galvanizing moment in the movie: a recounting of the lynching of Jesse Washington that he witnessed as a young man. “To see him was a very powerful moment,” says Kunin. “Working on that scene had so much gravity to it, we took extra precaution in our approach.” In recording Belafonte’s dialog, sound let the cameras set up their shots, then strategically placed an extra in front of a plant mic for additional recording.

Though backdropped in the mid-’70s, the film is not only about the past but about how we’re living today. Nothing could be more evident than the film’s final sequence: a collection of uncensored videos from the Charlottesville protests. Lee left the material untouched. “What you hear is the sound straight out of the smartphones and online videos. There’s no foley or effects added,” says Fleischman. “We only mixed in the score and it plays against the raw sound really strongly.” Blanchard notes, “That’s classic Spike. He makes a statement about what’s going on in our country and leaves you there to think about it.”

A Star Is Born

Photo by Clay Enos. Courtesy of Warner Bros. Pictures

by Steve Morrow

From my very first meeting, it was obvious that Bradley Cooper wanted A Star Is Born to feel real and immersive. He certainly achieved that in this film; with handheld camera work and live vocals, he leads us into a world that feels simultaneously epic and intimately authentic.

As a director, Cooper cultivated a great atmosphere on set that was familial and inspiring to work in. Communication and collaboration were paramount for him, which fostered an environment where every member of the cast and crew could perform at their best.

Photo by Neal Preston. Courtesy of Warner Bros. Pictures

There was never any doubt that Bradley Cooper and Lady Gaga would be singing live. Neither of them wanted the film to feel like a traditional musical and there would be no lip-syncing to playback. For me, this was a dream come true, recording a music-based film and capturing the performances live, with the production sound being the vocal track instead of a studio recording.

For the next few months, I ran through different concept setups to figure out exactly how to get the best vocals and tracks possible for the various scenarios we would be shooting in. To ensure that we had the best system in place, we set up a mini-concert during prep to run through and test the concepts. We ultimately landed on having the band perform to playback with just the vocals being recorded live. Jason Ruder, Music Editor and one of the re-recording mixers, did a quick mix of the prep mini-concert, and with Warner Bros., Cooper, and Gaga happy, it was clear that we had landed on the right method.

The movie opens on Jackson Maine’s performance at Stagecoach; the entire scene was filmed live at the festival in only eight minutes. We shot between two concert acts, Jamey Johnson and Willie Nelson. Moving only between their sets meant we got our gear up in the few minutes allotted before Johnson’s set, and then filmed and broke down in the minutes before Nelson’s set. One of the biggest concerns for everyone was the potential of music being leaked while filming at these venues with live crowds. To counter that, we came up with an earwig playback setup with no amplification. The performers could hear the music and the singing was recorded live, but the crowd couldn’t hear the vocals or music beyond the first couple of rows. We modified this system for Glastonbury, where we had to be super mobile; we were a skeleton crew with only four minutes to set up and shoot. We had the festival’s monitor mixer put the instrument playback into Cooper’s wedge at a low level, and he sang live (unamplified) in front of one hundred thousand festival goers. The crowds were always so fantastic and excited even though they couldn’t hear much of anything. There were some fun headlines at the time about technical difficulties causing the lack of amplification, but it was all a part of the plan to keep the music as secret as possible.

Stagecoach opening scene.

We shot performances all over, from Coachella and Stagecoach to Glastonbury, the Shrine Auditorium, the Greek Theatre, the Orpheum, and the Palm Springs Convention Center, plus a few small nightclubs and a drag bar. We had to be prepared to record absolutely everything live, which meant up to sixty-one tracks of audio at any given time. I used two Midas M32R mixers with a digital stage snake; each mixer has thirty-two inputs, and by combining them via Dante into the Sound Devices 970, I could record all the tracks needed. We muted the musicians’ instruments through the amps but still recorded the feeds for post to use. To help capture the atmosphere, we used a DPA 5100 surround sound mic in the crowd and two shotgun mics, one each at stage left and right, aimed at the crowd for their reactions. Whenever possible, we created a room map by recording an impulse response so post could take the original studio instrument recordings and balance them to sound as if they were recorded live with the vocals in each venue.
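The standard way a recorded impulse response gets used is convolution: the dry studio recording is convolved with the venue’s response so it takes on the room’s reverberant character. What follows is only a minimal, hypothetical sketch of that idea; the filenames are placeholders, and the film’s actual mix was of course built with far more sophisticated tools.

    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    def to_mono(x: np.ndarray) -> np.ndarray:
        return x.mean(axis=1) if x.ndim > 1 else x

    # Placeholder filenames for a dry studio track and a venue impulse response.
    dry, sr = sf.read("studio_guitar.wav")
    ir, sr_ir = sf.read("venue_impulse_response.wav")
    assert sr == sr_ir, "resample one file so the sample rates match"

    # Convolving the dry signal with the impulse response imprints the venue's
    # acoustics onto the studio recording.
    wet = fftconvolve(to_mono(dry), to_mono(ir), mode="full")
    wet = wet / np.max(np.abs(wet))   # normalize to avoid clipping

    sf.write("guitar_in_venue.wav", wet, sr)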

Courtesy of Warner Bros. Pictures

For the track “Always Remember Us This Way” (one of my favorites in this film), Gaga requested a digital grand piano so we could record the piano isolated from the vocals. We were able to take a stereo feed from the digital piano to track her playing, record the vocals cleanly, and keep the music from being heard by the crowd.

For regular production days, we were a three-man crew: myself, Craig Dollinger (Boom Operator), and Michael Kaleta (Sound Utility). On music days, we added Nick Baxter as our Pro Tools music editor and Antoine Arvizu as monitor mixer. Each music day was like throwing a concert, and the whole team was needed to set up and break down the mics, cabling, stage snakes, in-ears, and earwigs. On these days, we generally had a three-hour pre-call to set up the wedges, mic all the instruments, and get everything patched to a Midas digital stage box. We used the stage box to split all the feeds, one set to me at the cart and the other to a monitor mixer at the side of the stage. The monitor mixer would control the reverb that was heard by Gaga and Cooper via Phonak Earwigs. We would also route the singing and music to the band through earwigs for the concerts with live crowds since the music and singing were not amplified. At the cart, I worked off of the two Midas M32R’s recording to two Sound Devices 970’s. I prepared for sixty-four tracks; our highest track count was sixty-one. We were also joined on set by Jason Ruder, there to observe our recording process during music performances to make handling all the tracks in post as easy as possible.

Music interacting with the script was so important to Cooper’s vision of the film. He wanted the songs to be a character in the film and every word of the music to propel and reflect the story. To help achieve this, multiple songs would often be performed in one setup so that he could have the options available when he went into editing. On our end, we built each Pro Tools session to include nearly all of the film’s music so we could easily and quickly switch from song to song at a moment’s notice, with some songs being added the day of shooting.

There’s always a sense of fun on set when you do a music-driven film. In between setups, the band would jam out, and occasionally Matthew Libatique would grab a camera and start rolling on the action. I was often glued to my seat staring at the monitors as we could just start rolling at any minute. This film is one of the most rewarding I’ve worked on to date, filled with plenty of technical challenges and lots of fun. Bradley Cooper’s directorial leadership always let you know you were an integral part of something special. It was an honor to work with both him and Lady Gaga, two incredibly talented and passionate artists.

The Cart
Venue 2 and 970’s
Digital stage snake for surround recording
All the ins and outs from the stage
Track Count 970 Screen

Mission: Impossible – Fallout

Tom Cruise as Ethan Hunt in MISSION: IMPOSSIBLE – FALLOUT, from Paramount Pictures and Skydance.

Mission: Impossible – Fallout marks my second outing in the series and third film with Tom Cruise. I had previously worked on Mission: Impossible – Rogue Nation, but collaborated with a new team for this latest installment. My longstanding colleagues, Steve Finn and Anthony Ortiz, who worked on Mission: Impossible – Rogue Nation, both decided to make the break to mixing. I’m pleased to say both were successful, and hopefully, they learned as much from me as I learned from them in the years we spent together.

by Chris Munro

Tom Cruise as Ethan Hunt and Henry Cavill as August Walker in MISSION: IMPOSSIBLE – FALLOUT, from Paramount Pictures and Skydance.

Previous experience has taught me to expect the unexpected; after all, anything can happen. So, upon starting pre-production, it came as no surprise when one of my first meetings with Tom Cruise was at the London Heliport in Battersea. It took place as Cruise piloted the helicopter that took us to an airfield close to the studios. He explained there was a plan for a helicopter-chase sequence in the film where he needed to be able to pilot the helicopter with no visible headset or helmet. At this stage there was no script, and for some weeks, we worked with Writer/Director/Producer Christopher McQuarrie, who verbally explained the storyline. The Mission films are all about practical stunts and FX, so everything has to work in real-life situations.

Fortunately, I have worked on a number of films featuring helicopters and used them as an essential means of reaching challenging locations. The most notable project, Black Hawk Down, garnered me an Academy Award for Best Sound.

Henry Cavill in MISSION: IMPOSSIBLE – FALLOUT

I had previously considered using bone conduction technology, most recently on Mission: Impossible – Rogue Nation for the sequence where Tom Cruise is on the outside of a giant Airbus A400M in flight. The technology has been around for years, but I learned that the military had adopted it, which greatly improved the audio quality. The challenge with older technology is that many of the sounds in speech are made in the mouth and not all of them are transmitted through bone conduction. Without these sounds, speech can sometimes be less intelligible. The research process was quite extensive, as not all information was readily accessible. I eventually came upon a company that was developing bone conduction headsets for commercial use, but the caveat was that the headsets needed to be custom made. Thus, we needed to arrange for an audiologist to take impressions of Cruise’s ears and create concealed bone conduction headsets specifically for him.

Tom Cruise as Ethan Hunt in MISSION: IMPOSSIBLE – FALLOUT from Paramount Pictures and Skydance.

Photo: Chiabella James.

The next stage was to test if and how they would work. I set up four large powered speakers in a studio office and played back helicopter sounds at a level in which you could not hear someone speak. We then invited Tom Cruise to sit in the room with the earpieces fitted and connected to a walkie-talkie. Incidentally, the earpieces also offered a high degree of hearing protection, which would be important for anyone spending hours in a helicopter without a headset. I went outside the room with another walkie-talkie, and we were able to communicate perfectly. With the first stage complete, I now had to work out how the system would function in a helicopter.

At this stage, the model of helicopter had not been determined, though we knew that it would be one made by Airbus Industries. Speaking with Airbus engineers, I established that different helicopters may use different avionics systems, and it was not possible to modify or interfere with these in any way, as doing so might affect the airworthiness of the aircraft.

As a result, I called upon long-term collaborator Jim McBride, who has been the technical wizard on many films that I have been involved with in the past such as Black Hawk Down, Gravity, and Captain Phillips. McBride has worked in varying technical capacities on films, as well as in music, and even in a nuclear power station. McBride and I decided we needed to build totally independent self-powered interfaces that were isolated from the helicopter avionics, yet could still use the same PTT (push to talk) button on the helicopter cyclic or control stick.

Once we had prototype interfaces made, we were ready to start testing with helicopters. With the assistance of aeronautical engineers, we went back to Denham Airfield and installed our equipment in a helicopter. Pleased with the results, we sought approval from Tom Cruise and asked him to give it a try. The first thing Tom did before take-off was to call the control tower for a radio check. ATC reported good quality but were suspicious. The controller remarked he didn’t believe we were in a helicopter because it was too clear, as he couldn’t hear any engine or rotor noise. This was because the bone conduction units fitted in Cruise’s ears picked up minimal background noise.

The next step was sorting how we would connect to a recorder. The limited space within the helicopter prompted us to find something that could be easily hidden when cameras were fitted.
 
We originally thought about using a radio link to connect to a hidden multitrack recorder that would also be recording 5.1 FX, but we wanted to avoid radio transmission within the helicopter if possible. I decided to record to Lectrosonics PDRs, which would have timecode sync with the cameras and could be easily hidden. We made an output on the helicopter communication interfaces for them to connect to.

Even with production not yet underway, I had already made a substantial contribution to the film. This level of prep was essential for sound efficiency and to ensure all ran smoothly. In some respects, there are similarities to sound design in the theatre, and perhaps production sound designer would be a more appropriate title, considering we no longer mix to a mono or stereo Nagra. The mixing component of our job has become less important; however, our responsibilities have increased proportionately with the advancements in technology.

The helicopter sequences were not at the start of the schedule so we still had a little time to perfect the systems. Shooting began in Paris in April of 2017 at the Grand Palais, with car and motorbike chases throughout central Paris. I was joined by UK assistants Lloyd Dudley and Jim Hok, as well as Paris-based assistant Gautier Isern, who had recently finished working with Mark Weingarten on Dunkirk.

I needed a small multitrack capability at this stage and experimented with the Zoom F8 and the DPA 5100 surround mic, which we could easily hide in the BMW M5 cars. Given that we had several cars with different camera rigs, the relatively low cost of these kits made it possible to hide one in every car. We used radio mics on the actors, primarily so that we could record a rushes track for editorial purposes. This also allowed McQ to monitor performances in a follow vehicle. Supervising Sound Editor James Mather, his dialog editors, and Re-recording Mixer Mike Prestwood Smith would later decide which of the mics worked best. We also mounted transmitters on various parts of the car exteriors with DPA 4160 mics to get sound FX.

We followed the chases in a specially adapted high-speed-chase vehicle with antennae mounted for sound and video, and remote camera heads. The van was rigged to carry McQ, the DP (who also operated a remote head camera), video assist, another camera remote, and the 1st ACs. We would chase the cars or motorbikes whilst shooting either from cameras mounted on the action vehicles, tracking vehicles, or, very often, an electric bike. We always had a team pre-rigging the next car or bike to be used after a shot was complete. Our team rigged mics on every camera tracking vehicle, whether it be the Russian Arm, an electric camera bike, or an ATV. The rear-mounted mics on the cars and bikes were rigged close to the exhausts, while others were mounted close to the engines. I was particularly looking for FX that sounded real, knowing that sound FX editors could use these later as a base to create a much bigger soundscape. I was not usually looking for super-clean FX but for something raw that sounded more documentary in style. That said, I did usually try to record clean interior ambiences in 5.1 with the DPA 5100 surround mic.

The Grand Palais sequences were set at a big music event with lasers, light shows, and projected graphics that all had to be in sync, so we worked closely with an AV company to provide audio playback locked to a timecode-based timeline, ensuring continuity of graphics, sound, and lighting from shot to shot. Several weeks were spent working in Paris shooting a major action sequence alongside the River Seine. The sequence is where Ethan Hunt (Tom Cruise) once again meets Solomon Lane (Sean Harris). Sean had previously established that the voice of his character was quiet yet menacing. It was not easy to record during the mayhem of the wild action sequence, but we managed to capture it while retaining his performance.

Photo: David James

The following location was Queenstown on New Zealand’s South Island, where we started with scenes set in a medi-camp before moving on to the helicopter sequences. Steve Harris joined our crew here; he had worked on several films in New Zealand and was thus very familiar with the landscape. Many of our locations required travel by helicopter, so I needed an ultra-small shooting rig that could give me the facilities of my normal rig yet fit easily into a helicopter. I built a rig on one of the larger all-terrain Zuca carts, fitted out with a Zoom F8, a custom-built aux output box able to send feeds to video and comms, an IFB transmitter, and a Lectrosonics VR field with six radio mic receiver channels, all powered by a lead acid block battery in the base.

On our first helicopter shooting day, the 1st AD told me with a grin that we would be starting with dialog on the first shot: dialog to record from Henry Cavill hanging outside a helicopter. Fortunately, I had already taken the precaution of having bone conduction earpieces made for Henry. These were invaluable when connected to a Lectrosonics PDR to record his dialog in flight. I also needed to record sound FX inside the helicopter, and I used the smallest multitrack with timecode that I could find, the Zoom F8. We fixed a DPA 5100 5.1 microphone to the interior roof and hid the F8 under a seat. This was done so that if the 5100 were caught on camera, it could pass as part of the helicopter. We were able to start/stop and adjust levels on the F8 without needing to access it, using the F8 controller app on an iPhone. As with the Paris chase sequences, I wanted the FX to sound real and not like enhanced library recordings; thus, a little wind on Henry’s mic often added to the reality. Additionally, because the bone conduction units were largely unaffected by background noise, the 5.1 recordings could be useful even if only certain elements of the six tracks would ultimately be used.

Photo: David James

Tom Cruise always piloted his own helicopter with cameras mounted on it. Director Christopher McQuarrie would fly in another and film from at least one other camera helicopter. Because the helicopters would take off and shoot for at least thirty minutes at a time, Editor Eddie Hamilton asked if I could record McQ’s helicopter comms in addition to what I was recording of Tom and Henry for the takes. This would allow him to align the material with exactly what McQ was intending for each shot. I used another PDR for this because of its timecode sync capability and small size. At the end of each day, we became data wranglers, downloading the various SD cards and making sound reports.
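
For anyone curious what that end-of-day wrangling can look like, the sketch below is a generic illustration rather than the actual reports we delivered: it walks a copied card folder, reads each WAV’s basic properties with Python’s standard library, and writes a simple CSV. A real sound report would also pull timecode and track names from the Broadcast Wave metadata, which needs a BWF-aware tool; the folder and file names here are hypothetical.

```python
# Illustrative, bare-bones sound report: scan a card offload folder and
# tabulate each WAV's channel count, sample rate, and duration.
import csv
import wave
from pathlib import Path

def write_sound_report(card_folder: str, report_path: str) -> None:
    rows = []
    for wav_path in sorted(Path(card_folder).rglob("*.wav")):
        with wave.open(str(wav_path), "rb") as wf:
            frames = wf.getnframes()
            rate = wf.getframerate()
            rows.append({
                "file": wav_path.name,
                "channels": wf.getnchannels(),
                "sample_rate": rate,
                "duration_s": round(frames / rate, 2),
            })
    with open(report_path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["file", "channels", "sample_rate", "duration_s"])
        writer.writeheader()
        writer.writerows(rows)

# Example usage with hypothetical names:
write_sound_report("CARD_001", "sound_report_CARD_001.csv")
```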

After New Zealand, we returned to the UK to shoot in the studio and at various London locations, eventually joined by Hosea Ntaborwa. I had mentored Hosea on behalf of BAFTA and Warner Bros. creative talent when he was at the National Film and TV School. This was one of his first jobs after graduating and, since then, he has become part of my regular team, working with us on The Voyage of Dr. Dolittle and Spider-Man: Far from Home. We had some rather challenging locations in London, and it was whilst shooting on one of these that Cruise injured his ankle.

The company went on hiatus for a few weeks, which gave me an opportunity to prepare for the HALO jump sequence. That is a High Altitude Low Opening jump, which involved Tom Cruise jumping from an aircraft at an altitude high enough to require oxygen. We had a huge wind tunnel built on the backlot at the studio so that the skydive team could plan and practice manoeuvres while experimenting with camera angles. The wind tunnel was also useful for my first assistant Lloyd Dudley and me to develop the system for recording Cruise during the jump. We were able to set up communications between the skydive team, McQ, the camera operator, and ground safety.

The wind tunnel was incredibly noisy, so we were appreciative of the fact that the bone conducting earpieces offered a high degree of hearing protection. The skydive team were amazed that we managed to achieve audible communication in the wind tunnel, which made it much easier to plan shots and make adjustments. The team warned that skydivers had been trying for years to get good recordings during dives, always battling against wind noise, and that it would be impossible to record. I reminded them that this was Mission: Impossible!

When we began shooting again, many of the locations were very inhospitable to sound, but nothing I had not come across before. The team also continued on studio sets at WB studios at Leavesden.

Jim McBride testing the comms
Lloyd and Hosea
Lloyd and Chris Munro on the way to the set

During this time, I continued to prepare for the HALO jump sequence, which was to be shot in Abu Dhabi. We did a number of tests with Cruise and the skydive team jumping from a Cessna Caravan at various UK sites. What we couldn’t test was what would happen at the highest altitudes, when oxygen was required and the jump was from a giant C17 aircraft. I was concerned with safety and with ensuring that the equipment we were using in both Tom’s and the skydivers’ helmets was intrinsically safe. The dive helmets contained lighting which could potentially ignite the oxygen, so we arranged for tests to be done in an RAF lab with all of the equipment used for the HALO jump. We also had dialog to record inside the C17, as the scene progressed from dialog directly into the jump. We shot some exteriors of the C17 and interiors on the ground at RAF Brize Norton near Oxford in the UK. This at least gave us a chance to consider what we would be up against.

Eventually, the time came to travel to Abu Dhabi. My crew there was Lloyd Dudley and Hosea Ntaborwa. Lloyd concentrated mainly on looking after Tom Cruise’s bone conduction headsets and fitting Lectrosonics PDR recorders to actors. Hosea was in charge of comms for the skydive team and recording 5.1 FX in the aircraft. I was particularly interested in the sound of breathing and how it can add tension. Once we played some of these sounds for Tom, he immediately wanted them to be a major part of the soundtrack during the jump sequence. I was not looking for pristine recordings that sounded like they were shot in a studio; instead, I was interested in the raw sound of the helmet mics and bone conducting units, which could give a more realistic, documentary-type feel. I was not opposed to some wind noise and realised this could add to the reality. Having original sound adds to the feeling of reality, even if the audience is only subconsciously aware of it. Post production was greatly contracted due to the hiatus in shooting. Supervising Sound Editor James Mather was under time constraints and thus appreciated all of the raw sound FX we could give him as elements, enabling his team to create the final audio to be mixed by Mike Prestwood Smith. Both have been collaborators of mine on several previous films.

In conclusion, this film involved huge leaps of faith from all parties involved. I greatly appreciate the support from Tom Cruise, Chris McQuarrie, the production team, and my team throughout. My previous experience and intuition, along with an incredibly well-respected new team, proved immensely valuable. Much of the sound was unmonitored, being captured on PDRs and other hidden recorders. Chris McQuarrie and the producers trusted us to deliver on Mission: Impossible – Fallout using never-before-used technology. Most importantly, Tom Cruise trusted that we would develop technology that would allow him to perform stunts efficiently and safely and, wherever possible, avoid ADR in order to enhance the reality.

HALO COMMS
The original intention was to use a radio mic TX with a recording facility on TC, and to have each of the additional divers and the camera operator wear a receiver connected to helmet-fixed earpieces. When we were safety testing, one of the requests was that we not transmit anything inside the aircraft, though it was OK to transmit as soon as the divers had exited. I argued that the radio mics would be fairly low power and on legal frequencies, but then realised that it may have been a mistake to try to achieve recordings and comms with the same device. Additionally, we needed ground contact when the divers reached a lower altitude, so that they could be given safety information about wind or any other issues, and a radio mic TX might not have enough range.

I decided to use Motorola walkie-talkies for comms mainly because they were reliable and we were familiar with them. We used finger-operated PTT (push to talk) with custom-made interfaces to connect to the Motorolas. The PTT was run inside the sleeve of each diver and operated with a finger and thumb.

For TC (shown as EH on the diagram), we used a bone conduction headset in each ear. One ear was talk/listen, connected to the Motorola via the PTT, and the other ear was connected directly to a Lectrosonics PDR. Another PDR connected to a da-capo mic (Que audio in US?) mounted in the helmet. I chose the da-capo mic mainly because I just happened to have some, and also because these are what I had successfully used in helmets on Gravity. I had to send the mics for safety testing to ensure they were intrinsically safe when used in the helmet, which also had oxygen being pumped into it. I immediately thought that the da-capo mics might be well sealed, as they are waterproof. It was not a particularly scientific decision.
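
Laid out as a simple routing table, the signal paths described above look roughly like the sketch below. This is documentation-in-code for illustration only, not software that was on the rig; the device names follow the description in the text and nothing beyond that is implied.

```python
# Illustrative routing table for the HALO rig described above; a documentation
# aid only. Device names mirror the text; the structure is an assumption.
halo_routes = [
    {"source": "EH bone conduction earpiece (talk/listen ear)",
     "via": "custom interface + finger-and-thumb PTT in the sleeve",
     "dest": "Motorola walkie-talkie (comms with divers, camera op, ground)"},
    {"source": "EH bone conduction earpiece (other ear)",
     "via": "direct connection",
     "dest": "Lectrosonics PDR #1 (clean dialog track)"},
    {"source": "da-capo mic mounted in the helmet",
     "via": "direct connection",
     "dest": "Lectrosonics PDR #2 (helmet mic / breathing)"},
]

for route in halo_routes:
    print(f'{route["source"]} -> {route["via"]} -> {route["dest"]}')
```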

Tom Cruise as Ethan Hunt in MISSION: IMPOSSIBLE – FALLOUT, from Paramount Pictures and Skydance.

Tom Cruise’s Bone Conductive Earpieces & Microphone

I had custom-moulded bone conduction units in each of TC’s ears. It was necessary to have one in each ear to give hearing protection, but it also allows use of the aircraft PTT, as one ear is for talking and one for listening. There is a switch on the interface that selects either mode, and the interface was connected via the pilot’s headset connectors in the helicopter, using a US NATO plug or, on some aircraft, a Lemo connector. Trim pots on the interface, adjusted with a miniature screwdriver, set the talk and listen levels. The interface is powered by an internal battery and has transformer isolation on its connection to the helicopter, keeping it independent of the helicopter avionics.

The PDR records directly from the “talk” ear, giving a clean track of TC that is pre-PTT. That is to say, even if TC is not transmitting through the aircraft radio, his voice is still recorded.

An output from the co-pilot comms socket goes to track eight of a Zoom F8, which is primarily recording 5.1 ambience within the helicopter. Track eight records all comms: the director, any communication with other aircraft, ATC, and so on.
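
As a quick way to visualise that recording plan, here is one plausible track map for the F8’s eight inputs. Only the 5.1 ambience bed and the comms feed on track eight come from the description above; the order of the 5.1 legs and the spare track are assumptions made purely for illustration.

```python
# Hypothetical Zoom F8 track map for the helicopter rig. Track 8 (comms) comes
# from the text; the 5.1 leg order on tracks 1-6 and the spare track 7 are
# illustrative assumptions.
f8_track_map = {
    1: "DPA 5100 - Left",
    2: "DPA 5100 - Right",
    3: "DPA 5100 - Centre",
    4: "DPA 5100 - LFE",
    5: "DPA 5100 - Left surround",
    6: "DPA 5100 - Right surround",
    7: "(spare)",
    8: "Co-pilot comms socket (director, other aircraft, ATC)",
}

for track, source in sorted(f8_track_map.items()):
    print(f"Track {track}: {source}")
```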

The DPA 5100 was fixed to the inside roof of the helicopter cabin toward the rear, so that if it were ever inadvertently caught on camera, it would look like part of the structure. The recorder was hidden and operated via the F8 control app on an iPhone.

The PDR was stopped and started using dweedle tones from the Lectrosonics PDR remote app, also on an iPhone.

The Way We Were: Mixers Past & Present (Part 1)

Overview

The past ninety years of sound recording for motion picture production have seen a steady evolution in the technologies used both on set and in studios for post production. Formats used for recording sound have changed markedly over the years, with the major transitions being the move from optical soundtracks to analog magnetic recording, from analog magnetic to digital magnetic formats, and finally, to file-based recording. Along with these changes, there has been a steady progression in the mixing equipment used both on set for production sound and in re-recording. From fairly crude two-input mixers in the late 1920’s to current digital consoles boasting ninety-six or more inputs, mixing consoles have seen vast changes in both their capabilities and the technology used within. In this article, we will take a look at the evolution of mixing equipment, and how it has impacted recording styles.

In The Beginning

If you were a production mixer in the early 1930’s, you didn’t have a lot of choices when it came to sound mixing equipment. For starters, there were only two manufacturers, Western Electric and RCA. Studios did not own the equipment. Instead, it was leased from the manufacturers, and the studio paid a licensing fee for the use of the equipment (readily evidenced by the inclusion of either “Western Electric Sound Recording” or “Recorded by RCA Photophone Sound System” in the end credits). Both the equipment and the related operating manuals were tightly controlled by the manufacturers. For example, Western Electric manuals had serial numbers assigned to them, corresponding to the equipment on lease to the studio. These were large multi-volume manuals, consisting of hundreds of pages of detailed operating instructions, schematics, and related drawings. If you didn’t work at a major studio, there was no way you could even obtain the manuals (much less comprehend their contents).

Western Electric film sound operating manuals ca. 1930

Early on, both Western Electric and RCA established operations that were specifically dedicated to sound recording for film, with sales and support operations located in Hollywood and New York. Manufacturing for RCA was done at its Camden, NJ, facilities. Western Electric opted to do its initial manufacturing at both the huge Hawthorne Works facility in Cicero, on the western edge of Chicago, and its Kearny, New Jersey, plant. These facilities employed thousands of people already engaged in the manufacturing of telephone and early sound reinforcement equipment, as well as related manufacturing of vacuum tubes and other components used in sound equipment.

The engineering design for early sound equipment was done by engineers who came out of sound reinforcement and telephony design and manufacturing, as these areas of endeavor already had shared technologies related to speech transmission equipment. (Optical sound recording was still in its infancy at this stage though, and required a completely different set of engineering skills.)

Western Electric 22C console. While this particular console was designed for broadcast, with some modification to monitoring, it was also the basis of film recording mixers.

With the rapid adoption of sound by the major studios beginning in April 1928 (post The Jazz Singer), there was no time for manufacturers to develop equipment from the ground up. If they were to establish and maintain a foothold in the motion picture business, they had to move as quickly as possible. Due to the high cost and complexities of manufacturing, engineers were encouraged by management to adapt existing design approaches used for broadcast and speech reinforcement equipment, as well as disc recording, to the needs of the motion picture business. As such, it was not unusual to see mixing and amplifier equipment designed for broadcast and speech reinforcement show up in a modified form for film recording.

Examples of these shared technologies are evident in nearly all of the equipment manufactured by both RCA and Western Electric (the latter operating through its Electrical Research Products division). While equipment such as optical recorders and related technology had to be designed from the ground up, when it came to amplifiers, mixers, microphones, and speakers, manufacturers opted to adapt what they could from their current product lineup to the needs of the motion picture sound field. This is particularly evident in the equipment manufactured by RCA, which had shared manufacturing facilities for sound mixing equipment, microphones, loudspeakers, and related technology used in the broadcast and sound reinforcement fields.

It was not unusual to see equipment originally designed for broadcast (and later, music recording) show up in the catalogs of equipment for film sound recording all the way through the early 1970’s.

Design Approaches

While the amplifier technology used in early sound mixing equipment varied somewhat between manufacturers, much of the overall operational design philosophy for film sound mixers remained the same up through the mid to late 1950’s, when the stranglehold that RCA and Western Electric had on the motion picture business began to be eaten away by the development of magnetic recording. (The sole standout was Fox, which had developed its own Fox-Movietone system.) New magnetic technologies (first developed by AEG in Germany in 1935) began gaining a foothold after the end of WWII, and players such as Ampex, Nagra, Magnecord, Magnasync, Magna-Tech, Fairchild, Stancil-Hoffman, Rangertone, Bach-Auricon, and others began to enter the field. Unlike RCA and Western Electric, these manufacturers were willing to sell their equipment outright to studios, and didn’t demand the licensing fees that were associated with the leasing arrangements of RCA and Western Electric.

RCA BC-5 console. Another of the consoles made by RCA for broadcast, but adapted in various configurations for film use.

Despite these advances, RCA and Western Electric were still the major suppliers of film sound recording equipment for the major studios well into the mid-sixties and early seventies, with upgraded versions of their optical recorders (which had been developed at significant cost in the late 1940’s) still being used to strike optical soundtracks for release prints. Both RCA and Western Electric developed “modification kits” for their existing dubbers and recorders, whereby mag heads and the associated electronics were added to film transports, thereby avoiding the cost of a wholesale replacement of all the film transport equipment. Much of this equipment remained in use at many facilities up until the 1970’s, when studios began taking advantage of high-speed dubbers with reverse capabilities.

Westrex RA-1485 mixer in “tea cart” console. Note the interphone on the left for communication with the recordist.

The 1940’s

After the initial rush to marry sound to motion pictures, the 1940’s saw a steady series of improvements in film sound recording, mostly related to optical sound recording systems and to solving problems with synchronous filming on set, such as camera noise, arc light noise, and poor acoustics. In 1935, Siemens in Germany had developed a directional microphone, which provided a solution to sounds coming from off the set.

Disney also released Fantasia, a groundbreaking achievement that featured the first commercial use of multi-channel surround sound in the theater. Using eight(!) channels of interlocked optical sound recorders for the music score, along with numerous equipment designs churned out by engineers at RCA, the “Fantasound” system can safely be said to represent the most significant advance in film sound during the 1940’s. However, except for the development of the three-channel (L/C/R) panpot, the basic technology utilized for the mixing consoles remained mostly unchanged.

Likewise, the functionality of standard mixing equipment for production sound saw few advances, except for much-needed improvements to amplifier technology (primarily in relation to problems with microphonics in early tube designs). Re-recording consoles, however, began to see some changes, mostly in regards to equalization. Some studios began increasing the number of dubbers as well, which in turn increased the number of inputs required. For the most part, though, the basic operations of film sound recording and re-recording remained as they were in the previous decade.

Nagra BMII mixer. This was one of the first portable mixers to include powering for condenser mics as part of the mixer design.
Sela portable mixer designed for use with Nagra recorders. Incorporating T-power for mics and low-frequency EQ, this was an industry standard for years.

The 1950’s

While manufacturers such as RCA and Western Electric attempted to extend the useful life of the optical sound equipment they had sunk a significant amount of development money into, by the late 1950’s the technology for production sound recording had already begun making the transition to ¼” tape with the introduction of the Nagra III recorder. Though other manufacturers such as Ampex, Rangertone, Scully, RCA, and Fairchild had also adapted some of their ¼” magnetic recorders for sync capability, all of these machines were essentially studio recorders that simply had sync heads fitted to them. While the introduction of magnetic recording significantly improved the quality of sound recording, it would remain for Stefan Kudelski to introduce the first truly lightweight battery-operated recorder capable of high-quality recording, based on the recorders he had originally designed for broadcast field recording. This was a complete game-changer, and it eliminated the need for a sprocketed film recorder or studio tape machine to be located off-set somewhere (frequently in a truck), with the attendant need for AC power or bulky battery boxes and inverters.

Westrex RA-1424 stereo mixer. This mixer, introduced in 1954, was made in six different configurations, equipped with either four or six inputs, and variations on buss assignments. This was most likely developed in response to the need for true 3-channel stereo recording.

Later, Uher and Stellavox would also introduce similar battery-operated ¼” recorders that could record sync sound. Up until this point, standard production mixing equipment had changed little in terms of design philosophy from the equipment initially developed in the early 1930’s (the exception being some of the mixing equipment developed for early stereo recording during the early 1950’s for movies such as The Robe). Despite the development of the germanium transistor by Bell Laboratories in 1951, most (if not all) film sound recording equipment of the 1950’s was still of vacuum tube design. Not only did this equipment require a significant source of power for operation, it was, by nature, heavy and bulky as a result of the power transformers and audio transformers that were a standard feature of all vacuum tube audio designs. In addition, it produced a lot of heat!

Most “portable” mixers of the 1950’s were still based largely on broadcast consoles manufactured by RCA, Western Electric (ERPI), Altec, and Collins. Again, all were of vacuum tube design. The first commercial solid-state recording console wouldn’t come around until 1964, designed by Rupert Neve. A replacement for the venerated Altec 1567A tube mixer didn’t appear until the introduction of the Altec 1592A in the 1970’s.

French-made Girardin tube mixer

A common trait amongst all of these designs was that nearly all of them were four-input mixers. The only EQ provided was a switchable LF rolloff or high-pass filter. There were typically no pads or first-stage mic preamp gain trim controls. The mic preamps typically had a significant amount of gain, required to compensate for the low output of most of the ribbon and dynamic mics utilized in film production at the time (while condenser mics existed, they also tended to have relatively low output).

All had rotary faders, usually made by Daven. And except for the three-channel mixers expressly designed for stereo recording in the 1950’s, all had a single mono output.

Re-recording consoles were of course much larger, with more facilities for equalization and other signal processing, but even these consoles seldom had more than eight to twelve inputs per section.

The 1960’s

While the 1950’s had seen some significant advances in the technology of sound recording and reproduction, there had been no comparable change in recording methods since the transition from optical to magnetic recording, with the exception of the introduction of stereo sound (typically reserved for CinemaScope roadshow releases). Power amplifiers and speaker systems had somewhat improved, boosting the performance of cinema reproduction. However, most mixing consoles relied on circuit topologies that were based on equipment from the 1940’s and 1950’s, with some minor improvements in dynamic range and signal-to-noise ratio.

Westrex RA-1518-A stereo mixer. Note the early ingenious use of straight-line faders, which are actually connected to Daven rotary pots on the underside of the panel.

It was during this period that technologies developed for music and broadcast began to seep into the area of film sound, and the approach to console designs began to change. The most notable shift was the move from the tube-based designs of the 1950’s to solid-state electronics, which significantly reduced the size and weight of portable consoles and, for the first time, allowed for a design approach that could use batteries as a direct source of power, without the inverters or generators required by traditional vacuum tube designs. This opened up a significant range of possibilities that had not existed before.

Sennheiser M101 mixer. Another of the early entries to solid-state portable mixers.

With the introduction of solid-state condenser mics, designers began to incorporate microphone powering as part of their overall design approach to production mixers, which eliminated the need for cumbersome outboard power supplies.

Some mixers also began to include mic preamp gain trims as part of the overall design approach (also borrowed from music consoles of the era), which made it easier to optimize the gain and fader settings for a particular microphone and the dynamics of a scene.

The 1960’s would also see the introduction of straight-line faders (largely attributed to music engineer Tom Dowd during his stint at Atlantic Records in New York). In the film world, straight-line faders showed up first in re-recording consoles, which could occupy a larger “footprint.” However, they were slow to be adopted for production recording equipment. This was due in part to some resistance on the part of sound mixers who had grown up on rotary faders (with some good-sized bakelite knobs on them!), but also to the fact that early wire-wound straight-line faders (such as those from Altec and Langevin) fared rather poorly in harsh conditions, requiring frequent cleaning.

Still, even by the end of the 1960’s, not much had changed in terms of the overall approach to production recording. Four-input mixers were still the standard in most production sound setups, with little or no equalization. But the landscape was beginning to shift. While RCA and Westrex were still around, they had lost their dominance in the film production world (although RCA still had a thriving business in the theater sound service arena).

Things were about to change, however.

Part 2 will continue in the next edition.
–Scott D. Smith CAS
