IATSE Local 695

Production Sound, Video Engineers & Studio Projectionists

Features

VICE

Power Play

Meticulous prep allows sound to track Writer-Director Adam McKay’s Vice

by Daron James

Adam McKay with Sam Rockwell and Christian Bale

When Production Sound Mixer Ed Novick got the call about Adam McKay’s film Vice, a drama chronicling the unwavering power of Dick Cheney (Christian Bale), who served as Vice President under George W. Bush from 2001 to 2009, the decision to say yes was an easy one.

Novick previously wrapped the pilot for McKay’s Succession, HBO’s must-watch series about a filthy rich and dysfunctional family trying to keep its media empire alive. In Vice, the writer-director follows his 2015 film The Big Short, another Bale-starring feature, this one focused on the 2008 financial crisis, with a biopic of Cheney from adolescence to his rise through the political ranks.

Cast and crew between takes inside the Oval Office

Sitting in his Los Angeles home, Novick admits he enjoys McKay’s “free-wheeling” directing style. “Adam has a way with actors where he lets them explore. As long as I have enough mics and tracks, I’m good to go,” says Novick, who’s been sliding faders since the early eighties and won an Oscar for Christopher Nolan’s Inception.

Pre-production is where Novick puts in the brunt of the work to give sound its best chance during filming. “The tech scout is the most important day of prep,” he says. “Going to look at the physical locations and finding out what the problems are in advance is going to allow you to solve them much better than you would on the day.” Besides the locations’ natural sound elements to contend with, the conversation involves other departments, especially grip and electric, and decisions about where to station things like generators and cable runs.

McKay directs a scene as Steve Carell and Christian Bale look on

With Boom Operator Randall Johnson and Sound Utility Ryan Farris on the team, the dialog-driven script was shot over roughly sixty days, with production ramping up in the Jefferson Park neighborhood of Los Angeles to stand in for 1950s Casper, Wyoming, where Cheney grew up. It’s during this time we learn how influential then-girlfriend and future wife Lynne (Amy Adams) was on Cheney.

Dick Cheney (Bale) and Bush (Rockwell) at the President’s desk.

In a scene filmed in Newhall Orchard west of Santa Clarita, Cheney is driving drunk, singing along to the Hank Locklin song “Send Me the Pillow You Dream On.” As the camera gets closer, sound drives the performance using an earwig, recording the live vocals sung by Bale with a plant mic and lav. His eventual crash and arrest force Lynne’s hand; she tells Cheney over the phone she doesn’t want to marry a nobody, sending him on a completely different path.

Sound Utility Ryan Farris, Boom Operator Randall Johnson and Sound Mixer Ed Novick (photo courtesy of Ed Novick)

Phone conversations are a recurring theme in the film, and Novick used different methods to record the performances. The sound mixer used a JK Audio BlueKeeper to connect cellphones, a Viking Ringdown and Gentner box to connect landlines, and, for playback, Soundplant, an application that lets the user load audio into the program and trigger playback from a QWERTY keyboard. “It’s important for actors to have someone to talk to and I think they benefit more when they can have the other actor on the line rather than an AD or a script supervisor reading the material,” he admits.
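
Soundplant itself is a standalone application, but the underlying idea of mapping audio cues to computer keys is easy to illustrate. Below is a minimal sketch in Python using the pygame library (an assumption on my part, not the production’s actual tool), with hypothetical cue files standing in for whatever a playback operator might need to fire:

```python
# A minimal keyboard-triggered playback sketch in the spirit of Soundplant.
# Each QWERTY key is mapped to an audio cue; pressing the key fires the cue.
import pygame

# Hypothetical cue files -- ring, pickup, and dial tone for a phone scene.
KEY_TO_CUE = {
    pygame.K_q: "ring.wav",
    pygame.K_w: "pickup.wav",
    pygame.K_e: "dial_tone.wav",
}

pygame.init()
pygame.display.set_mode((200, 100))  # a window is needed to receive key events
pygame.mixer.init()
sounds = {key: pygame.mixer.Sound(path) for key, path in KEY_TO_CUE.items()}

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN and event.key in sounds:
            sounds[event.key].play()  # trigger the cue on its own mixer channel
pygame.quit()
```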

Bush and Cheney at the Bush residence

As Cheney starts his political path, his first stop is the Congressional Internship Program, where Rumsfeld makes a speech to the inductees at a podium inside a large echoic room—something that doesn’t bother the sound mixer. “We’re making sound for picture,” says Novick. “The most important tool I have on my cart is the video monitor. It tells me what the shot is. If we’re in a big echoic room, we try to make it sound like what it looks like.” The sound team also took the time to make every on-camera microphone practical, getting a lot of help from Property Master Matthew Cavaliero. In addition to the practical microphone recordings, an added lav and/or overhead mic provided multiple options for post.

Cheney with wife Lynne (Amy Adams)

For most of the show, overhead wireless booms danced between multiple cameras shooting multiple angles, and lavs were placed on everyone. As Cheney, Bale played a low voice coupled with burying his chin into his neck. The actor also went through nearly one hundred costume changes. Sound used either a Sanken COS-11D underneath a necktie or a Countryman B6 through a buttonhole in the absence of one.

Cheney giving a speech to the media

When George W. Bush (Sam Rockwell) entered the picture, a similar mic’ing strategy was put in place. One scene that did challenge sound was a walk and talk between Bush and Cheney at the Bush family home. The short stroll takes place right after Cheney accepts the VP position. Bush’s costume entailed a very scratchy shirt and a tight-fitting hat. Sound needed to place a lav on Bush because the frame was very wide, and Rockwell suggested putting the mic underneath the hat. “It’s something we normally don’t do. Not only because of the weight of the transmitter on the actor’s head but the lav can affect how that hat fits,” says Novick. “But since Sam suggested it, we didn’t think twice about it.”

Cheney walks the halls of the White House; Cheney and Lynne wait to be introduced as VP nominee

Another challenge was recreating the interview between ABC News journalist Martha Raddatz and Cheney that takes place near the end of the movie. The scene was recorded simultaneously in two formats: 35mm and NTSC video. “Because they were going to have video cameras as props, they went ahead and made them working cameras,” says Novick. To record audio, two Deva 16 recorders were used, one slated at 24 fps for the film cameras, the other at 29.97 fps for the video cameras. Identical audio was then passed through AES to both Deva 16s, and two iPads were used for notetaking. A 12-channel Sonosax served as the mixer.

While the majority of scenes made the final cut, one in particular did not and it just so happened to be sound’s biggest day. It was a musical dance number set inside a cafeteria. Filmed at Santa Anita Park, the scene was between Cheney and Rumsfeld. It starts off in a cafeteria line and the two are talking about how things work in Washington. As they are about to sit down, Brittany Howard, from the musical group Alabama Shakes, stands up and starts singing about how things are supposed to work in Washington. The sequence served as a cutaway element that folded back into the story as if it never happened.

The cast looks on as George H.W. Bush loses out on his second term.

After a weekend of rehearsal and two shooting days that included dozens of extras and choreographed dances, sound provided live vocal recording via a lav hidden in Howard’s hair, plus playback speakers, thumpers, and earwigs, and brought in a separate Pro Tools operator to record the sequence. “It’s the best scene. They just couldn’t find a place for it in the movie,” notes Novick.

For the sound team, Novick says, “We’re there to exclude the sounds that don’t belong and include the ones that do belong. We know we’re creating sound for picture and every piece of audio will be manipulated in some way. It’s our job to capture the best performance and I think we did a pretty good job.”

All Photos: Matt Kennedy/Annapurna Pictures, except as noted.

First Man

Mission Critical

Ryan Gosling as Neil Armstrong in First Man.

Sound Mixes an Emotional Journey for Damien Chazelle’s First Man

by Daron James

On July 16, 1969, Apollo 11 launched from Florida’s Kennedy Space Center carrying three astronauts—Neil Armstrong, Edwin Aldrin, and Michael Collins—their destination: the moon, a mere 240,000 miles away. Four days later, at 10:56 PM ET, Neil Armstrong stepped onto the lunar surface, uttering his now-famous words to a billion people listening at home.

Inside the Apollo 11 capsule

“That’s one small step for man, one giant leap for mankind.”

In First Man, Director Damien Chazelle (Whiplash, La La Land) viscerally explores the story behind the mission to the moon, immersing us in the life of Neil Armstrong (Ryan Gosling)—his marriage to Janet (Claire Foy), being a father of three, and the tribulations leading up to the historic event.

Visually, Chazelle and Cinematographer Linus Sandgren leaned on a dynamic style, tapping three different film formats to distinguish story elements. 16mm emphasized Armstrong’s early life and spacecraft interiors. 35mm captured their El Lago, Texas, home, NASA, and spacecraft exteriors. When the Apollo 11 door opens onto the moon, the film shifts from 16mm to 70mm IMAX.

Damien Chazelle and Costume Designer Mary Zophres review mock-ups

“The film was broken into two halves,” says Production Sound Mixer Mary H. Ellis CAS, known for her work on Prisoners and Baby Driver. “The first half was Armstrong’s life on the ground and the second half was the spacecraft work and moon landing.” Along with Ellis were Boom Operator James Peterson, Sound Utility Nikki Dengel, and Sound Playback operators Alexander Lowe and Raegan Wexler.

An early rehearsal before production introduced the shooting style to the sound crew. Chazelle, wanting a realistic portrayal, proposed that all camera movement—except when on the moon—be handheld, cinéma vérité style.

One rehearsed scene intimately placed Armstrong’s two-year-old daughter Karen (Lucy Stafford) in his arms, hugging her as he circled. Only the two actors, director, cinematographer and Peterson were allowed in the room. To record audio, Peterson was given a Sound Devices 788T to place around his neck to track the rehearsal, which ended up in the final version of the film. “We put a lav pretty much on everyone all day every day, but we never wired Lucy. Damien didn’t want her being aware of any of us,” notes Ellis. “The rehearsal helped a lot. It allowed James to get used to Linus’s body language operating the camera as he would spin 180° and widen out the lens often.”

Cinematographer Linus Sandgren on a catwalk with actor Ryan Gosling

Shot primarily in Atlanta, Georgia, on practical locations and stages, production also traveled to Edwards Air Force Base in California to recreate Armstrong’s X-15 take-off and landing that opens the film, and spent another day at the Johnson Space Center in Houston.

The busiest days for sound took place on the mission control set, a vast replica of the Johnson Space Center facility by Production Designer Nathan Crowley. The complex scene brings us inside the command center as the astronauts rocket toward the moon. As many as twenty-three actors needed to be wired at once in order to cover the dialog. Ellis brought in Production Sound Mixer Michael P. Clark (Stranger Things, The Walking Dead) to help head the task.

The crew filming the Apollo 11 walk

“I insisted on a rehearsal two days prior to shooting as there would be a limited amount of time on production days,” says Ellis. “We wanted to just slap the mic on the actors and find out how everything was.” It allowed the sound mixer to create a seating chart of the actors where Ellis mixed the top eight and Clark took the other fifteen.

“I knew once Damien got going, he would upgrade non-speaking parts to speaking ones, so I warned everyone about it. We had to be careful when stealing a microphone from someone to be sure they were not going to play in a specific part,” Ellis continues. “All the actors were fitted with a Sanken COS-11D. Each wire had its own ISO track and the mix was kept consistent no matter what happened on the page.”

Filming outside the famous X-15 flight

To record the dialog and for communication between the director and actors, an intricate setup was configured that included off-camera readers. “Alex [Lowe] was fundamental in all of this,” says Ellis. Lowe created three different mix options to route through. Sound also accounted for each actor’s preference in terms of who they wanted to hear and what they wanted for playback. For instance, Ben Owen, who played John Hodge, wanted to hear all twenty-three microphones at once in his earwig to feel the sense of urgency and chaos in the room.

Recording dialog inside each spacecraft was a different technical story. An early concern for sound was the multitude of spacesuits and helmets in the wardrobe. The film moves from 1961 through 1969 and details five missions: the X-15 flight, Gemini V, Gemini VIII, Apollo 1, and Apollo 11. Costume Designer Mary Zophres researched and duplicated each look, even creating two suits each for Apollo 11, one for the actor and another for the stunt double.

Damien Chazelle inside mission control set

“In prep I spoke with Whit Norris and Mark Weingarten, people who had done helmet movies before, to find out what they’d accomplished, but I learned they didn’t have to worry about the period piece part of it like we did. We didn’t have as many wiring options so we planned different strategies for when we could get our hands on the helmets,” says Ellis. “We ended up buying four new mics and had a quick release made right at their neckline because the minute the actors could, they would take off their helmets.”

The spacecraft modules were built at actual size. They were tiny, and once an actor was inside, it was impossible to adjust the wireless microphone. “Our other concern was airflow and how loud it was going to be inside the helmet. You have to have enough air for the actors so they don’t pass out, but it can lead to condensation,” says Ellis.

“Instead of a regulated system, they had an air compressor pushing air into the helmet. It was all or nothing and very loud,” notes Lowe. “Mary sent me separate feeds that I gated open when they talked to reduce the air noise. I sent the actors a feed of their own off-camera reader, the feed of the other actors, but not themselves, and mission control comms to all. When Damien talked, it shut down every feed, including their own, so they could hear his direction. I also routed the First AD’s voice of God to any one of them if needed. All this was done each day. I had to break it all down every night and set it up again the next day. It took two hours.”
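
As a rough illustration of the gating Lowe describes, here is a minimal sketch in Python (numpy and soundfile are assumptions, as are the file names and threshold values). The real work was done live on the mixing rig; this offline stand-in simply attenuates a recorded feed whenever its short-term level falls below a speech threshold:

```python
# A minimal noise-gate sketch: open the feed when speech rises above a
# threshold, duck the compressor air noise otherwise.
import numpy as np
import soundfile as sf

audio, sr = sf.read("helmet_feed.wav")   # hypothetical helmet feed
if audio.ndim > 1:
    audio = audio.mean(axis=1)           # work on a mono feed

frame = int(0.02 * sr)   # 20 ms analysis windows
threshold = 0.05         # open the gate above this RMS level
floor = 0.1              # attenuation applied while the gate is closed

out = np.copy(audio)
for start in range(0, len(audio), frame):
    seg = audio[start:start + frame]
    rms = np.sqrt(np.mean(seg ** 2))
    if rms < threshold:
        out[start:start + frame] = seg * floor  # duck the air noise

sf.write("helmet_feed_gated.wav", out, sr)
```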

Earwigs were not used inside the helmets because if they went out, 108 dB of white noise would blast into the actor’s ear. Instead, Comteks were hardwired inside a prop earwig, set to the earwig frequency, and run as a surveillance system that sound had complete control over. “The great thing about this was the batteries lasted all day, as the actors could be in the capsules for up to seven hours. Also, I could change a battery without taking off the suit in case one failed, though that never happened,” says Lowe.

Additionally, the original launch day recordings from NASA came into play on set when actors wanted to listen to the delivery. “Ryan was very particular about mimicking Neil’s inflections, specifically when we were on the moon,” notes Lowe. “I fed Ryan a recording of Neil and he would work out his moves with the dialog.”

On set with Sound Playback Alexander Lowe

To find the right mic placement, Gosling was all about experimenting and finding the right levels. “Ryan doesn’t like to do any ADR, so we needed to find the right balance between the air level and audio level so he wasn’t looping two months of capsule work,” says Ellis. Another point of emphasis was placing plant mics outside the gimbals, recorded as ISO tracks, so post could contend with the rigs as they got creakier.

For its moon landing, production took over the Vulcan Rock Quarry south of Atlanta. The shoot took place outdoors, in December, at night. Sound approached the work using wires instead of booms to give the actors solitude. “It was a real internal moment for them, so we wanted to give them as much space as possible,” says Ellis.

Reflecting on the show, Ellis admits Sandgren gave them some challenging situations. “He would always come over to say sorry but he didn’t have to. He had an amazing team and we were able to have this really great dance together thanks to the crew I had around me.”

All photos: Daniel McFadden/Universal Pictures and DreamWorks Pictures

The Way We Were: Mixers Past & Present (Part 2)

The 1970’s

While the 1960’s saw some further advances in the techniques of both production sound recording and re-recording, it wasn’t until the 1970’s that some of the nascent technologies developed for music recording began to make inroads into the film industry. Although stereo and surround sound were nothing new (going all the way back to the early 1950’s), the films released in either four-track 35mm Cinemascope mag or six-track 70mm mag were limited to major roadshow titles like 2001: A Space Odyssey and Woodstock. Prints were extremely expensive, and the number of theaters equipped to run either 35mm mag or 70mm were typically limited to major cities. And even with the advent of these technologies, theater loudspeaker systems hadn’t really evolved much past the technologies of the late 1940’s and early 1950’s. Despite the extraordinary quality of 70mm magnetic, the Academy curve was still the norm, with its severe rolloff of high frequencies.

Other changes were beginning to take place in the 1970’s as well. Audiences had become more sophisticated about sound. A new generation of music listeners had become accustomed to high-quality home sound systems, FM radio began to take off, the quality of the compact cassette improved, and those with the means invested in recorders to listen to four-track reel-to-reel releases. Audiences of this generation were not going to be satisfied with the sound of a theater system developed two decades earlier. Commensurately, theater attendance was in decline, and studios were looking for ways to attract a younger audience.

It was against this backdrop that a number of advances in film sound took place. Most notable among these was the introduction of Dolby noise reduction in the post-production stages (first used on Stanley Kubrick’s A Clockwork Orange in 1971). While the film was originally released in Academy mono (due primarily to Kubrick’s concern regarding how many theaters would be able to play stereo optical), it was clear from the tests done at Elstree Studios that the quality of sound could be markedly improved if the process could be applied to the optical track itself. Further development was done at Dolby Labs over the next few years, which culminated with the release of Lisztomania in 1975 and Star Wars in 1977.

Also notable was the introduction of Sensurround by Universal Studios, which was first used for the movie Earthquake in 1974.
And perhaps most important in the realm of production sound recording, 1975 marked the year that Robert Altman’s movie Nashville was released, significant both for its use of multitrack dialog (with stellar work by mixer Jim Webb) and for its live multitrack music recording (utilizing a remote truck built by the author).

While multitrack dialog recording was not exactly new (having been used for the production of three-channel Cinemascope films), the use of multitrack for production sound would mostly be limited to Robert Altman films for nearly two decades. It did, however, help to spur a move to a more sophisticated approach to production sound, which was still largely done on mono Nagra recorders (despite the introduction of the stereo Nagra in 1970).

With the introduction of op-amp technologies, mixer design philosophy began to change significantly during the 1970’s. These advances, along with more sophisticated printed circuit board designs and smaller components, made possible more compact mixers with less current draw than their predecessors. It also heralded the adoption of a modular approach to console design, with components separated into input modules, master modules, buss assignment modules, and monitor modules. While these approaches were at first destined for the music and broadcast world, it wasn’t long before they were adopted by manufacturers engaged in designing mixers for the film industry. This was due in no small part to the channel counts of film dubbing stages, which were beginning to increase with the advent of Dolby stereo in 1975.

The same approach was also used for smaller production sound mixers, with more limited facilities. The 1970’s would also mark an era that would see a more ready adoption of European film sound equipment by US sound mixers. Although companies such as Sennheiser and Neumann had made inroads into the United States with their microphones (primarily for music recording), and Nagra with portable recorders, up until the seventies, if you walked into most film sound operations, nearly everything you saw was of US manufacture.

The Sela 2880-BT mixer, introduced in 1967, paired with a Nagra III recorder. The industry standard for many years. Photo courtesy Film Sound Sweden

In the early seventies, there were still not many choices when it came to lightweight production mixers (the Nagra BM-T and Sela 2880-BT notwithstanding). For stage work, it was still common in Hollywood to see mixers made by both Westrex and RCA dating back at least a decade (with many custom variants) used on set. As the move to location shooting became more prevalent, sound mixers started looking for alternatives to the bulky production boards typically used for stage work.

However, there were some alternatives for those who wanted to take a bit different approach, which in many cases involved doing a bit of customizing. Notable among these were the following:

  • The Sennheiser M101 mixer, a four-input, mono-output mixer with built-in battery supply and T power, which was first introduced in 1969, but took a little time to catch on in the US market. Some enterprising individuals would also customize these boards into a six-input configuration.
  • The Stellavox AMI mixer, a five-input, two-output mixer introduced in 1971. Designed by former Nagra engineer Georges Quellet, this was intended as a companion piece to the Stellavox SP7 recorder.
  • The Audio Developments AD031 “Pico” mixer, which could be supplied in a few different configurations, and utilized a 24-volt power supply.
  • The Neve 5422 “suitcase mixer” brought to market in 1977, and intended primarily for use in location music recording and broadcast.
  • The Studer 169/269 series mixers, introduced circa 1978, and which could be ordered in a variety of configurations. Intended primarily as a location broadcast console for the European market, this console could be either AC- or battery-powered. While prized for its sonics by music engineers, it was only used by a handful of production mixers in the States (due in no small part to its size and weight).       
The Neve 5422 “suitcase mixer.” This mixer was the first entry that Rupert Neve made into the “portable” market. Featuring classic Rupert Neve mic preamps and EQ, it was prized by many music and production mixers for its sound.

As amplifier technology evolved and components became smaller, it allowed designers the luxury of adding more features, including three-band equalization, better high-pass filters, better mic preamps, and more sophisticated signal routing. It also marked the move away from the traditional four-input mixer, which had dominated production sound recording for nearly four decades. Still, production sound equipment had to be portable, which limited the sort of features that would be found standard on even fairly rudimentary re-recording consoles of the period.

A custom fifteen-input eight-buss mixer conceived by Jim Webb, and built by Jack Cashin. Designed specifically for eight-track recording on Robert Altman films. Note the individual VU meters for the iso outputs. Photo courtesy James Webb

A (very few) ambitious sound mixers also took it upon themselves to build or commission mixers to their liking from scratch or perform significant modifications to mixers that were designed for other purposes.

The 1970’s also saw an extensive adoption of straight-line faders, which had moved from wire-wound designs to carbon composition resistive elements. While early straight-line faders were prone to problems when used under unfavorable conditions, the new faders were both smaller and more reliable. In addition, sound mixers who began their careers in music recording or post production were more receptive to using them for production work. By the end of the decade, nobody except Sela was manufacturing location mixers with rotary faders.

The 1980’s


Despite the fact that the seventies saw a host of developments in film sound recording, these didn’t translate into many changes in the sound mixing techniques and equipment used for production work. Most of the advances made in the previous decade were in the area of re-recording, as well as the advent of Dolby Stereo on optical tracks (Stereo Variable Area), which allowed studios to release titles in L/C/R/S stereo without the need for four-track magnetic release prints. Since the optical tracks could be printed and processed on standard laboratory equipment, it greatly reduced the costs associated with making a stereo release.

As such (with the notable exception of Robert Altman), most production sound packages still consisted of a four- or six-input mixer, perhaps mated with a Nagra stereo recorder with Dolby noise reduction, and four channels of wireless. And for most productions, this was sufficient. Even with the introduction of the Sony PCM-F1 in 1981 and DAT in 1987 (both being two-channel formats), there was no compelling reason to change the basic approach used for production recording.

Sonosax SX-S mixer, designed by Jacques Sax, and introduced in 1983. Available in six, eight, or ten inputs, this mixer became a favorite of sound mixers who needed a small, lightweight mixer. Photo courtesy Sonosax

While some sound mixers (including this author) opted to use somewhat larger consoles intended for broadcast and remote music recording, there weren’t really many options available to the industry until the introduction of the Sonosax SX-S in 1983, and the Cooper CS-106 mixer in 1989. Like most equipment destined for the highly specialized film market, these mixers were designed by individuals who had a dedication to producing high-quality sound recording equipment specifically for the film industry.

The Sonosax SX-S mixer was the brainchild of Swiss engineer Jacques Sax, who had begun his career as a live sound mixer. Frustrated with what was available on the market at the time, he took it upon himself to design something that was more to his liking, beginning with the SX-B mixer in 1980, and culminating with the current SX-ST series consoles.

The Cooper CS-106 mixer, designed by Andy Cooper, and introduced in 1989. Could run off of internal batteries. Featured both 12-volt T power and Phantom power for mics. Two-stage high-pass filter. Comprehensive talkback and monitoring facilities. Still in use today.

The Cooper 106 was designed and built by Andy Cooper, who besides being a bright designer, was also cognizant of the particular needs of the film production market. So instead of designing something that he “thought” represented the needs of production mixers, he actually went out and took the time to talk with notable mixers of the era (a lesson that some manufacturers still have yet to learn).

The Cooper CS-106 marked a fairly significant departure from anything else available at the time. With straight-line faders, the option for seven inputs, three-band EQ, a lightweight chassis, DC powering, and sophisticated monitoring and signal routing functions, the Cooper mixer embodied much of what production mixers had been looking for at the time.

Film sound being a very small slice of the overall worldwide audio market, larger manufacturers simply weren’t interested in developing a highly labor- and design-intensive console for a small market segment, when there were much bigger rewards to be reaped in the studio, sound reinforcement, and broadcast markets. Many of the consoles built by Sonosax and Cooper Sound are still in use nearly three decades later, which attests to Jacques and Andy’s strengths as careful designers who understood the rigors of film production.

The Audio Developments AD031 mixer was one of the first products introduced by this venerable British firm. Available in a variety of input configurations, it became very popular in the UK.

There were, of course, other options available in the eighties. Audio Developments continued with their line of portable mixers, which included the AD 062 and AD 075 series. Sony introduced a twelve-input mixer, the MXP-61, with features such as 12-volt T-powered mic inputs that were clearly aimed at the film production market, but it didn’t generate a lot of sales.

There were also some entries in the portable “bag rig” market, most notably by the British company SQN, which introduced the SQN-4S mixer.

Being the highly individual craft that production recording is, many sound mixers weren’t content with what was offered on the commercial market, and opted to design something that suited their personal approach to production recording, or make extensive modifications to stock consoles. Not everyone who sat at a mixing board had the kind of electronics background to undertake this sort of task, however.

Highly customized mixer designed and built by Bruce Bisenz in the 1980’s, utilizing Nagra mic preamps. Note the modified Altec graphic equalizer, with one octave band intended for dialog EQ, and the group buss assignments. Photo courtesy Bruce Bisenz

An example of the Studer 169/269 series consoles, introduced in 1977. Conceived originally as an all-around broadcast mixer for European radio and television broadcasters, it also became a favorite of many music recording mixers, and eventually found its way into production sound work.

Among the few who took on this challenge during the seventies and eighties were David Ronne, an Academy Award-winning production mixer (who also designed the RollLogic remote control); Bruce Bisenz, who built a highly customized console from the ground up; and Jim Webb, who commissioned a console to his liking that was built by Jack Cashin. The list goes on…

There were also sound mixers such as Nelson Stoll, Ray Cymoszinski, Michael Evje, and others who decided they loved the big sound of the Neve consoles, and took it upon themselves to modify the boards to their liking for film work. Others (including the author) opted for the modular configurations offered by the Studer 169/269 series consoles.

The important thing to note in this regard is that every one of these sound mixers had a particular approach to the challenges of doing production sound under all kinds of conditions, and wanted a console that would give them the most flexibility and best sound quality for their style. In a world now defined by the stock offerings of various manufacturers, the “signature sound” that many mixers sought to achieve during this period has been lost.

Next up, “The Nineties.”
 –Scott D. Smith CAS

With sincere appreciation to Jeff Wexler CAS for invaluable contributions in style and content.

The Rise of Server-Based Recording

by James Delhauer

On a set, the job of the person tasked with acquiring the content that is shot throughout the day is incredibly stressful. Whether we’re discussing the tape operators of days gone by or the most modern media recordists, there are challenges that have stood the test of time. Somewhere between hundreds of thousands and hundreds of millions of dollars are spent assembling the production. Countless man-hours contribute to making it the very best that it can be. Literal blood, sweat, and tears are spilled to create what we all hope will be a veritable work of art. Then, after all of that, it falls on the shoulders of the one person tasked with handling the media, who is handed the very delicate assets created throughout the day, assets that represent the sum total of the production as a whole. Just about anything can go wrong. Data can be corrupted. Hard drives can be damaged. Video tape can tear. Fortunately, these risks are being minimized by the advent of a new method of media acquisition: server-based recording.

Though different productions utilize a vast array of workflows, every single one since the Roundhay Garden Scene was first filmed in 1888 has come down to the media. And every single production needs someone to manage it. In today’s digital era, the most common workflow goes a little something like this. Cameras or external recorders capture video and audio data to an internal storage device of some sort. When that unit is full, it is ejected and turned over to a media manager. The production continues with another memory card while the media manager takes the first one and offloads, backs up, and verifies the files on it. This is usually done with an intermediate program such as Pomfort’s Silverstack or Imagine’s ShotPut Pro—programs that can do file comparisons to ensure that what was on the source media is identical to what ends up on the target media. When all of that content is secured on multiple external hard drives, the original memory card is returned to the production so that it can be wiped and reused. Rinse and repeat. At the end of each day, the media manager turns over at least one set of drives containing the day’s work to someone who will bring it to a post-production facility.
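
The offload-and-verify step is straightforward to sketch. The following Python stand-in (with hypothetical paths; the real tools add checksum manifests, reports, and multiple simultaneous destinations) copies every file from a card and confirms each copy by comparing hashes, which is the essence of the file comparison the article describes:

```python
# A minimal offload-and-verify sketch in the spirit of Silverstack or
# ShotPut Pro: copy each file, then re-read both copies and compare hashes.
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Hash a file in chunks so large camera files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            digest.update(block)
    return digest.hexdigest()

def offload(source: Path, dest: Path) -> None:
    """Copy every file from the card to the target, verifying by checksum."""
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        target = dest / src_file.relative_to(source)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src_file, target)
        # Only a matching hash on both copies counts as a safe offload.
        if sha256_of(src_file) != sha256_of(target):
            raise IOError(f"Verification failed for {src_file}")

# Hypothetical mount points for the camera card and the shuttle drive.
offload(Path("/Volumes/CARD_A001"), Path("/Volumes/SHUTTLE_1/A001"))
```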

There, the work is moved from these temporary storage drives onto work servers, where assistant editors can begin their work.

While widespread, this workflow comes with a few inherent drawbacks. Most notably, the process is both fragile and time-consuming. Digital storage, no matter how sophisticated, is vulnerable to failure, damage, or theft. When the media manager receives a card with part of the day’s work on it, that card is often the only raw copy of the work in existence. Careers could end in a heartbeat if anything were to happen to it. So it becomes his or her job to create multiple copies. Unfortunately, the time during which data transfers from one storage system to another is when it is most vulnerable. An accidentally yanked cable or sudden power surge is all it takes to corrupt the open files as they are transferring over. This vulnerability is compounded by the fact that transferring files is time-consuming and becoming ever more so. As our industry continues to push the boundaries of resolution, color science, and bit depths, video files are getting bigger and bigger. As such, they require more time to offload, duplicate, and verify, meaning that the period of vulnerability is growing longer.
 
But emerging technologies are creating new workflows that circumvent these drawbacks. Among the most promising is server-based recording.

Rather than relying on disparate components that must be passed back and forth between different individuals on a set, server-based recording allows productions to streamline their workflows and unify everything through one interconnected system. All of the components can be plugged into a single network switch and communicate with one another directly. Cameras and audio devices send uncompressed media directly into the switch. The network feeds them into a digital recording server (such as Pronology’s mRes or Sony’s PWS-4500), which takes the uncompressed data and encodes the signals into ready-to-edit files. These files are then sent back into the network, which in turn sends them to any desired network-attached storage devices (such as SmallTree’s TZ5 or Avid’s ISIS & NEXIS platforms). The moment the recordist hits the Stop button, he or she can open the files on a computer and bring the newly created clips into a nonlinear editing application in order to assess their viability. This method eliminates the intermediate process of memory cards, transfer stations, and shuttle drives in favor of writing directly to external storage, and thus removes both the time and risk associated with manual offloading. It also offers instant peace of mind, to both the person handling the media and the production as a whole, that the work done throughout the day is, in fact, intact and ready for post-production.

And this is only the most basic of network-based workflows.

By utilizing advanced encoder systems, such as the aforementioned mRes platform, multiple tiers of files can be distributed across multiple pieces of network-attached storage. This gives the recordist the ability to simultaneously create both high-quality and proxy-grade video files and to make multiple copies of each in real time as a scene is being shot. This eliminates the potential need for time-consuming transcodes after the fact and, more importantly, this instant redundancy removes the key period of danger in which only a single fragile copy of the production’s work exists. As a result, recordists can now unmount network drives mere minutes after productions wrap and turn them over for delivery to post with one hundred percent certainty that there are multiple functioning copies of their work from the day. There is no need to spend several hours after wrap each day offloading cards and making backups.

Or, to take things a step further, productions can take advantage of the inherent beauty that is the internet to skip the need for the shuttle process altogether. It is possible to create files in a manner that sends them directly to a post-production edit bay. With low bitrate files or a high-capacity upload pipeline, recordists can set up their workstations using transfer clients (such as Signiant Agent or File Catalyst) to take files that are created in a particular folder on their network-attached storage and automatically upload them to a cloud-based server, where post-production teams can download them for use. This process has the distinct advantage of sending editors new files throughout the day in order to accommodate a tight turnaround.
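
Signiant and FileCatalyst use their own proprietary agents, but the watch-folder pattern itself can be sketched generically. The Python stand-in below (the folder, extension, and upload call are all hypothetical placeholders) polls a folder on the network-attached storage and hands off each new file once it has finished writing:

```python
# A minimal watch-folder sketch: poll a NAS export folder and hand each
# new, fully written file to an upload step (a placeholder here).
import time
from pathlib import Path

WATCH_DIR = Path("/mnt/nas/outbox")   # hypothetical NAS export folder
POLL_SECONDS = 10

def is_settled(path: Path, wait: float = 2.0) -> bool:
    """Treat a file as complete once its size stops changing."""
    size = path.stat().st_size
    time.sleep(wait)
    return path.stat().st_size == size

def upload(path: Path) -> None:
    print(f"uploading {path} ...")  # replace with the real transfer client call

seen: set[Path] = set()
while True:
    for f in WATCH_DIR.glob("*.mov"):
        if f not in seen and is_settled(f):
            upload(f)
            seen.add(f)
    time.sleep(POLL_SECONDS)
```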

Conversely, for productions where the post-production team may be located on site, a hard line can be run from the recording network directly to the edit bays. By assigning the post team’s ISIS server (or comparable network-attached server) as a recording destination, editors gain access to files while they are still recording. In cases such as this, the production may opt to use “growing” Avid DNxHD files. This format takes advantage of Avid’s Advanced Authoring Format to routinely “close” and “reopen” files, allowing editors to work with them while they are still being recorded. For productions with incredibly tight turnarounds, this is the single fastest production-to-post-production workflow possible.

All of this makes server-based recording an incredibly versatile tool. However, it is not without its limitations. At this time, network-based encoders are limited to encoding widely available intermediate or delivery codecs, such as Apple ProRes or Avid DNxHD. Without direct support from companies with their own proprietary formats, they cannot output in formats such as REDCODE or ARRIRAW. Furthermore, setting up a network of this nature requires persistent power and space. It is also worth considering that, like most new technologies, server-based recording often comes with a hefty price tag. These limitations make the process unsuited for productions hoping to take advantage of the full range of Red and Arri cameras, productions in remote or isolated locations, and low-budget productions.

So when is it most appropriate or necessary to take advantage of this emerging technology? While it can be of use in a single-camera environment, this method of recording truly shines in live or, archaically termed, “live to tape” multi-cam environments, where anywhere from three to several dozen cameras are in use. After all, if a show records twelve cameras for one hour, the media manager suddenly has to juggle twelve hours’ worth of content. It is much easier to write all twelve to a network-attached storage unit than to offload all twelve cards one by one. Also, because network-attached storage can be configured to hold hundreds of terabytes, the process is ideally suited for live events or sports broadcasts where stopping and starting the records risks missing key one-time-only moments. But above all, it is best used when time is critical. The ability to bring files into a nonlinear editing system as they are being recorded and work in real time is a game changer for media managers, producers, and editors alike.

This technology is already revolutionizing the way television productions approach on-set media capture, and it is still in its infancy. It will continue to grow and evolve. Given time, it is my sincere hope that it will find its way into the feature film market and become more practical for smaller productions to adopt. For the time being, Local 695 Video Engineers should begin to take note of what is available and familiarize themselves with the technology so that they are prepared to take advantage of it in the future.

BlacKkKlansman

Truth and Action

Laura Harrier as Patrice and Corey Hawkins as Stokely Carmichael.

Sound mixes a moving palette for Spike Lee’s new joint

by Daron James

The Civil Rights movement in 1950s and 1960s America was a tinderbox ready to explode; in the ’70s, the struggle continued with the emergence of the Black Power movement. The latter is the historic setting for BlacKkKlansman, a taut sociopolitical film from director Spike Lee.

Based on the book Black Klansman by Ron Stallworth, the first African-American detective in the Colorado Springs Police Department, the adapted screenplay follows the true story of Stallworth’s (John David Washington) infiltration of the Ku Klux Klan and his eventual takedown of an extremist hate group.

Jasper Pääkkönen, Director Spike Lee, Ryan Eggold, and Adam Driver on set.
Ashlie Atkinson (plays Jasper’s wife) and Lee.

Production Sound Mixer Drew Kunin, Boom Operator Mark Goodermote, and Utility Marsha Smith took on the project, a briskly paced schedule lasting from October to December 2017 that included a glut of filming locations in New York and a quick VFX stop in Colorado to blend Centennial State exteriors into the Big Apple.

The movie opens in black-and-white, featuring Alec Baldwin as a bigotry-spewing racist; a 16mm projector beams images of his message overlapping his face and onto the wall behind him, the noise of the machine piercing through the soundscape. On set, everything was done live with no visual effects, meaning the clacking of the projector would compete with Baldwin’s dialog. “I didn’t know how loud Alec was going to be, which turned out to be very,” says Kunin. “I was a bit surprised when he launched into it—it was a little bit of a wild ride to keep his level from overloading, but we were lucky he had enough volume to override the projector.” Sound ran a Schoeps CMIT-5U on the boom and placed a lav for dialog.

Kunin uses a mix of DPA, Sanken, and Countryman lavs with Lectrosonics wireless on projects. His general mantra is to wire only when necessary; on BlacKkKlansman, however, they went ahead and wired everyone to be safe. Multiple roaming cameras shooting wide and tight coverage were an acting catalyst, but this also being the team’s first project with Lee, they didn’t want to interrupt the rhythm by having to add a lav after the fact.

We first meet Stallworth working in the file room of the police department, but he soon receives his first undercover assignment: to attend a lecture delivered by Black Panther Party philosopher Kwame Ture, aka Stokely Carmichael (Corey Hawkins). It’s here he meets Patrice (Laura Harrier), a fiery activist Stallworth warms to. Inside the assembly hall, hundreds of extras look on, shouting out praise during Ture’s moving speech.

John David Washington as Ron Stallworth with Laura Harrier.

Sound couldn’t place a microphone overhead of Ture because of the wide camera shots. Instead, the period microphone at the lectern was made live and an additional plant mic was hidden. Another boom captured reactions of the crowd.

For that scene in post, Re-recording Mixer Tom Fleischman, a longtime collaborator of Lee’s, augmented the speech with a bit of reverb to add to the size of the room and layered in dozens of responses from the crowd. “Drew turned in production tracks that were really well recorded. When you have a good track, it makes my job easier,” says Fleischman.

Director Spike Lee with actors Topher Grace and Adam Driver on the set of BlacKkKlansman, a Focus Features release. Credit: David Lee/Focus Features
Duke and Flip meet for the first time.

The challenge of the Ture speech scene in post was making sure the audience callouts felt like they were actually in the room and not ADR. Mixing in 7.1 made it easier for Fleischman; then it became a matter of going through it to make it sound natural.

“It could have easily become quite a noisy scene, but my philosophy at any given moment in any film is that there is one sound that needs to be in the forefront,” says Fleischman. “We needed to balance the scene in a way that made sure every line was intelligible so that the underlying track sounded real to the audience and not like they were hearing something in a vacuum. The whole idea for me is story and keeping the audience involved and not letting them be distracted by any sound element.”

If you listen closely to the scene, you will hear a “boom shakalaka” from the crowd. That’s actually Lee’s voice. During the mix, the director asked Fleischman to grab a mic so he could record the line, and they found spots for it in the scene.

David Duke (Topher Grace) welcomes a new chapter of members.
Ron (John David Washington) spies on Felix’s residence.

After that initial assignment, Stallworth starts his own undercover operation after stumbling across a newspaper ad from the Ku Klux Klan seeking new members. Stallworth calls the number on the notice and David Duke (Topher Grace), the leader of the hate group, actually picks up the line. Stallworth disguises his voice; Duke, thinking he is a racist white man, invites him into the inner circle.

To shoot these scenes, Production Designer Curt Beech built two sets outfitted with period-appropriate props on NY stages so production could shoot Stallworth and Duke simultaneously. Sound recorded both actors’ dialog simultaneously as well. “We placed mics overhead on both ends, plus we tapped the telephone line on a separate track for editorial to play with,” says Kunin, whose cart setup is based around a Sonosax mixer and an Aaton Cantar X-3 recorder. All Fleischman had to do was “filter down the track a little” to make it sound more like it was coming from a phone.

Stallworth, now a member of the KKK, has one problem: he’s black. To be the face of the operation, he asks fellow detective Flip (Adam Driver) to pose as Ron, and it’s Flip who meets the leader of the local chapter, Walter (Ryan Eggold). Stallworth asks Flip to wear a wire, and Kunin was able to find a few in the style of the old BCM 30 to put on camera, which added to the complexity of lav placement.

Ron (John David Washington) gives Flip (Adam Driver) his official member card of the KKK.

Costumes from designer Marci Rodgers posed a different challenge, as they embraced the fashion of the time. Stallworth wore lush colors, jazzy prints, and mixed textures of denim, velvet shirts, silky button-downs, suede vests, and leather jackets. To lav Washington, center chest became the default position to avoid material movement leaking into the track.

Patrice was dressed in long leather jackets, dark turtlenecks, mini-dresses, and knee-high boots, among other looks. “Laura was a little tricky to mic,” admits Kunin. “Not because of the material she was wearing but because it was hard to hide a mic. Marsha worked with wardrobe to sew in special compartments to hold the bodypacks.”

Sound had to pay close attention during the nightclub scenes where Ron and Patrice go to hang, talk, and dance. For the dialog to be mixed cleanly, Kunin dropped the song out during takes and used a thumper to keep the beat. Music is a big part of Lee’s storytelling. Besides the musical soundtrack that includes “Too Late to Turn Back Now” by Eddie Cornelius and “Oh Happy Day” by Edwin Hawkins, the director tapped Terence Blanchard (Malcolm X, 25th Hour) for the score.

“When it comes to writing music, I let the film tell the story,” says Blanchard. “The first thing I thought about when I saw a cut was Jimi Hendrix playing the National Anthem on guitar. Being an African-American, you’re constantly bombarded with issues of bigotry every day. This story is a reaffirmation of what we’re going through. Jimi was a primal scream for all of us, so that’s why the electric guitar plays a prominent role in the sound of the score.”

Flip, now deep in the local Klan chapter, attends meetings at the home of Felix (Jasper Pääkkönen), a follower who wants to put words into action. It’s here the undercover detectives learn Felix is plotting to spoil another activist meeting, one that involves a character played by Harry Belafonte, an icon of the Civil Rights movement.

Felix’s residence, a practical location found upstate in Ossining, New York, was small, and scenes were filled with multiple actors and multiple cameras. At times, squeezing in a boom operator was not possible, especially when Felix puts Flip through a lie detector test in a broom closet of a room. It meant sound had to rely on plant mics and lavs to cover the dialog. In other tight locations, gaps in the wall allowed the boom to reach into the room even when Goodermote couldn’t.

Leading up to the climax of the movie, the picture intercuts two story lines. On one side, you have Flip being initiated into the Ku Klux Klan, where a crowd gleefully cheers during a screening of 1915’s The Birth of a Nation. On the other, a group from Patrice’s African-American student union peacefully sits around Belafonte as he delivers the most galvanizing moment in the movie: a recounting of the lynching of Jesse Washington that his character witnessed as a young man. “To see him was a very powerful moment,” says Kunin. “Working on that scene had so much gravity to it, we took extra precaution in our approach.” In recording Belafonte’s dialog, sound let the cameras set up their shots, then strategically placed an extra in front of a plant mic for additional recording.

Though backdropped in the mid-’70s, the film is not only about the past but about how we’re living today. Nothing could be more evident than the film’s final sequence: a collection of uncensored videos from the Charlottesville protests. Lee left the material untouched. “What you hear is the sound straight out of the smartphones and online videos. There’s no foley or effects added,” says Fleischman. “We only mixed in the score and it plays against the raw sound really strongly.” Blanchard notes, “That’s classic Spike. He makes a statement about what’s going on in our country and leaves you there to think about it.”

A Star Is Born

Photo by Clay Enos. Courtesy of Warner Bros. Pictures

by Steve Morrow

From my very first meeting, it was obvious that Bradley Cooper wanted A Star Is Born to feel real and immersive. He certainly achieved that in this film: with handheld camera work and live vocals, he leads us into a world that feels simultaneously epic and intimately authentic.

As a director, Cooper cultivated a great atmosphere on set that was familial and inspiring to work in. Communication and collaboration were paramount for him, which fostered an environment where every member of the cast and crew could perform at their best.

Photo by Neal Preston. Courtesy of Warner Bros. Pictures

There was never any doubt that Bradley Cooper and Lady Gaga would be singing live. Neither of them wanted the film to feel like a traditional musical and there would be no lip-syncing to playback. For me, this was a dream come true, recording a music-based film and capturing the performances live, with the production sound being the vocal track instead of a studio recording.

For the next few months, I ran through different concept setups to figure out exactly how to get the best vocals and tracks possible for the various scenarios we would be shooting in. To ensure that we had the best system in place, we set up a mini-concert during prep to run through and test the concepts. We ultimately landed on having the band perform to playback with just the vocals being recorded live. Jason Ruder, music editor and one of the re-recording mixers, did a quick mix of the prep mini-concert, and with Warner Bros., Cooper, and Gaga happy, it was clear that we had landed on the right method.

The movie opens on Jackson Maine’s performance at Stagecoach; the entire scene was filmed live at the festival in only eight minutes. We shot between two concert acts, Jamey Johnson and Willie Nelson. Moving only between their sets meant we got our gear up in the few minutes allotted before Johnson’s set, and then filmed and broke down in the minutes before Nelson’s set. One of the biggest concerns for everyone was the potential of music being leaked while filming at these venues with live crowds. To counter that, we came up with an earwig playback setup with no amplification. The performers could hear the music, and the singing was recorded live, but the crowd couldn’t hear the vocals or music beyond the first couple of rows. We modified this system for Glastonbury, where we had to be super mobile: we were a skeleton crew with only four minutes to set up and shoot. We had the festival’s monitor mixer put the instrument playback into Cooper’s wedge at a low level, and he sang live (unamplified) in front of one hundred thousand festival goers. The crowds were always fantastic and excited even though they couldn’t hear much of anything. There were some fun headlines at the time about technical difficulties causing the lack of amplification, but it was all part of the plan to keep the music as secret as possible.

Stagecoach opening scene.

We shot performances all over, from Coachella and Stagecoach, to Glastonbury, the Shrine Auditorium, the Greek Theatre, the Orpheum, the Palm Springs Convention Center, a few small nightclubs, and a drag bar. We had to be prepared to record absolutely everything live, which meant up to sixty-one tracks of audio at any given time. I used two Midas M32R mixers with a digital stage snake; each mixer has thirty-two inputs, and by combining them via Dante into the Sound Devices 970, I could record all the tracks needed. We muted the musicians’ instruments through the amps but still recorded the feeds for post to use. To help capture the atmosphere, we used a DPA 5100 surround sound mic in the crowd and two shotgun mics, one each at stage left and right, aimed at the crowd for their reactions. Whenever possible, we created a room mapping by recording an impulse response, so post could take the original studio instrument recordings and balance them to sound as if they were recorded live with the vocals in each venue.
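
As a rough sketch of how such an impulse response can be used downstream (an illustration only, not the production’s actual post workflow), the following Python snippet convolves a dry studio stem with a venue impulse response. The numpy, scipy, and soundfile libraries are assumptions, and the file names are hypothetical:

```python
# Place a dry studio recording "in" a venue by convolving it with an
# impulse response captured in that room.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("studio_guitar.wav")     # dry studio instrument stem
ir, sr_ir = sf.read("venue_impulse.wav")   # impulse response recorded on set
assert sr == sr_ir, "resample one of the files so the sample rates match"

# Work in mono for simplicity; a real session would convolve per channel.
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

# Convolution applies the room's reverberant signature to the dry signal;
# normalize afterward to avoid clipping when writing the result.
wet = fftconvolve(dry, ir)
wet /= np.max(np.abs(wet))
sf.write("guitar_in_venue.wav", wet, sr)
```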

Courtesy of Warner Bros. Pictures

For the track “Always Remember Us This Way” (one of my favorites in this film), Gaga requested a digital grand piano so we could record the piano isolated from the vocals. We were able to take a stereo feed from the digital piano to track her playing, record the vocals cleanly, and keep the music from being heard by the crowd.

For regular production days, we were a three-man crew: myself, Craig Dollinger (Boom Operator), and Michael Kaleta (Sound Utility). On music days, we added Nick Baxter as our Pro Tools music editor and Antoine Arvizu as monitor mixer. Each music day was like throwing a concert, and the whole team was needed to set up and break down the mics, cabling, stage snakes, in-ears, and earwigs. On these days, we generally had a three-hour pre-call to set up the wedges, mic all the instruments, and get everything patched to a Midas digital stage box. We used the stage box to split all the feeds, one leg to me at the cart and the other to a monitor mixer at the side of the stage. The monitor mixer controlled the reverb heard by Gaga and Cooper via Phonak earwigs. For the concerts with live crowds, where the music and singing were not amplified, we also routed the singing and music to the band through earwigs. At the cart, I worked off the two Midas M32R’s, recording to two Sound Devices 970’s. I prepared for sixty-four tracks; our highest track count was sixty-one. We were also joined on set by Jason Ruder, there to observe our recording process during music performances to make handling all the tracks in post as easy as possible.

Music interacting with the script was so important to Cooper’s vision of the film. He wanted the songs to be a character in the film and each word of the music there to propel and reflect the story. To help achieve this, multiple songs would often be performed in one setup so that he could have the options available when he went into editing. On our end, we built each Pro Tools session to include nearly all of the film’s music so we could easily and quickly switch from song to song at a moment’s notice, with some songs being added the day of shooting.

There’s always a sense of fun on set when you do a music-driven film. In between setups, the band would jam out, and occasionally, Matthew Libatique would grab a camera and start rolling on the action. I was often glued to my seat staring at the monitors, as we could start rolling at any minute. This film is one of the most rewarding I’ve worked on to date, filled with plenty of technical challenges and lots of fun. Bradley Cooper’s directorial leadership always let you know you were an integral part of something special. It was an honor to work with both him and Lady Gaga, two incredibly talented and passionate artists.

The Cart
Venue 2 and 970’s
Digital stage snake for surround recording
All the ins and outs from the stage
Track Count 970 Screen

Mission: Impossible – Fallout

Tom Cruise as Ethan Hunt in MISSION: IMPOSSIBLE – FALLOUT, from Paramount Pictures and Skydance.

by Chris Munro

Mission: Impossible – Fallout marks my second outing in the series and my third film with Tom Cruise. I had previously worked on Mission: Impossible – Rogue Nation, but collaborated with a new team for this latest installment. My longstanding colleagues Steve Finn and Anthony Ortiz, who worked on Rogue Nation, both decided to make the break to mixing. I’m pleased to say both were successful, and hopefully, they learned as much from me as I learned from them in the years we spent together.

Tom Cruise as Ethan Hunt and Henry Cavill as August Walker in MISSION: IMPOSSIBLE – FALLOUT, from Paramount Pictures and Skydance.

Previous experience has taught me to expect the unexpected; after all, anything can happen. So, upon starting pre-production, it came as no surprise that one of my first meetings with Tom Cruise took place at the London Heliport in Battersea, as Cruise piloted the helicopter that took us to an airfield close to the studios. He explained there was a plan for a helicopter-chase sequence in the film in which he needed to be able to pilot the helicopter with no visible headset or helmet. At this stage there was no script, and for some weeks, we worked with Writer/Director/Producer Christopher McQuarrie as he verbally explained the storyline. The Mission films are all about practical stunts and FX, so everything has to work in real-life situations.

Fortunately, I have worked on a number of films featuring helicopters and used them as an essential means of reaching challenging locations. The most notable project, Black Hawk Down, garnered me an Academy Award for Best Sound.

Henry Cavill in MISSION: IMPOSSIBLE – FALLOUT

I had previously considered using bone conduction technology, most recently on Mission: Impossible – Rogue Nation for the sequence where Tom Cruise is on the outside of a giant Airbus A400M in flight. The technology has been around for years, but I learned that the military had adopted it, which greatly improved the audio quality. The challenge with the older technology is that many of the sounds in speech are made in the mouth and are not all transmitted through bone conduction; without these sounds, speech can be less intelligible. The research process was quite extensive, as not all information was readily accessible. I eventually came upon a company that was developing bone conduction headsets for commercial use, but the caveat was that the headsets needed to be custom-made. Thus, we arranged for an audiologist to take impressions of Cruise’s ears and create concealed bone conduction headsets specifically for him.

Tom Cruise as Ethan Hunt in MISSION: IMPOSSIBLE – FALLOUT from Paramount Pictures and Skydance.

Photo: Chiabella James.

The next stage was to test if and how they would work. I set up four large powered speakers in a studio office and played back helicopter sounds at a level at which you could not hear someone speak. We then invited Tom Cruise to sit in the room with the earpieces fitted and connected to a walkie-talkie. Incidentally, the earpieces also offered a high degree of hearing protection, which would be important for anyone spending hours in a helicopter without a headset. I went outside the room with another walkie-talkie, and we were able to communicate perfectly. With the first stage complete, I now had to work out how the system would function in a helicopter.

At this stage, the model of helicopter had not been determined, though we knew it would be one made by Airbus. Speaking with Airbus engineers, I established that different helicopters may use different avionics systems, and that it was not possible to modify or interfere with these in any way, given that doing so might affect the airworthiness of the aircraft.

As a result, I called upon long-term collaborator Jim McBride, who has been the technical wizard on many films I have been involved with in the past, such as Black Hawk Down, Gravity, and Captain Phillips. McBride has worked in varying technical capacities on films, as well as in music, and even in a nuclear power station. McBride and I decided we needed to build totally independent, self-powered interfaces that were isolated from the helicopter avionics yet could still use the same PTT (push-to-talk) button on the helicopter cyclic, or control stick.

Once we had prototype interfaces made, we were ready to start testing with helicopters. With the assistance of aeronautical engineers, we went back to Denham Airfield and installed our equipment in a helicopter. Pleased with the results, we sought approval from Tom Cruise and asked him to give it a try. The first thing Tom did before take-off was call the control tower for a radio check. ATC reported good quality but were suspicious; the controller remarked that he didn’t believe we were in a helicopter because the signal was too clear, with no audible engine or rotor noise. This was because the bone conduction units fitted in Cruise’s ears picked up minimal background noise.

The next step was sorting how we would connect to a recorder. The limited space within the helicopter prompted us to find something that could be easily hidden when cameras were fitted.
 
We originally thought about using a radio link to connect to a hidden multitrack recorder that would also record 5.1 FX, but we wanted to avoid radio transmission within the helicopter if possible. Instead, I decided to record to Lectrosonics PDRs, which would hold timecode sync with the cameras and could be easily hidden. We made an output on the helicopter communication interfaces for them to connect to.

Even with production not yet underway, I had already made a substantial contribution to the film. This level of prep was essential for sound to run efficiently and smoothly. In some respects, there are similarities to sound design in the theatre, and perhaps “production sound designer” would be a more appropriate title, considering we no longer mix to a mono or stereo Nagra. The mixing component of our job has become less important; however, our responsibilities have increased proportionately with the advancements in technology.

The helicopter sequences were not at the start of the schedule so we still had a little time to perfect the systems. Shooting began in Paris in April of 2017 at the Grand Palais, with car and motorbike chases throughout central Paris. I was joined by UK assistants Lloyd Dudley and Jim Hok, as well as Paris-based assistant Gautier Isern, who had recently finished working with Mark Weingarten on Dunkirk.

I needed a small multitrack capability at this stage and experimented with the Zoom F8 and the DPA 5100 surround mic, which we could easily hide in the BMW M5 cars. Given we had several cars with different camera rigs, their relatively low cost made it possible to hide one in every car. We used radio mics on the actors, primarily so that we could record a rushes track for editorial purposes. This also allowed McQ to monitor performances in a follow vehicle. Supervising Sound Editor James Mather and his dialog editors, along with Re-recording Mixer Mike Prestwood Smith, would later decide which of the mics worked best. We also mounted transmitters on various parts of the car exteriors with DPA 4160 mics to get sound FX.

We followed the chases in a specially adapted high-speed-chase vehicle with antennae mounted for sound and video, plus remote camera heads. The van was rigged to carry McQ, the DP (who also operated a remote head camera), video assist, another camera remote, and the 1st ACs. We would chase the cars or motorbikes whilst shooting from cameras mounted on the action vehicles, tracking vehicles, or, very often, an electric bike. We always had a team pre-rigging the next car or bike to be used after a shot was complete. Our team rigged mics on every camera tracking vehicle, whether it be the Russian Arm, an electric camera bike, or an ATV. The rear-mounted mics on the cars and bikes were rigged close to the exhausts, while others were mounted close to the engines. I was particularly looking for FX that sounded real, knowing that sound FX editors could use these later as a base to create a much bigger soundscape. I was not usually after super-clean FX, but rather something raw that sounded more documentary in style. That said, I did usually try to record clean interior ambiences in 5.1 with the DPA 5100 surround mic.

The Grand Palais sequences were set in a big music event with lasers, light shows, and projected graphics that all had to be in sync, so we worked closely with an AV company to provide audio playback locked to a timecode-based timeline, ensuring shot-to-shot continuity of graphics, sound, and lighting. Several weeks were spent working in Paris shooting a major action sequence alongside the River Seine, in which Ethan Hunt (Tom Cruise) once again meets Solomon Lane (Sean Harris). Sean had previously established that the voice of his character was quiet yet menacing. It was not easy to record during the mayhem of the wild action sequence, but we managed to capture it while retaining his performance.

Photo: David James

The following location was Queenstown, on New Zealand’s South Island, where we started with sequences set in a medi-camp prior to shooting the helicopter sequences. Steve Harris joined our crew here; he had worked on several films in New Zealand and was thus very familiar with the landscape. Many of our locations required travel by helicopter, so I needed an ultra-small shooting rig that could give me the facilities of my normal rig yet fit easily into a helicopter. I built a rig on one of the larger all-terrain Zuca carts, fitted out with a Zoom F8, a custom-built aux output box able to send feeds to video and comms, an IFB transmitter, and a Lectrosonics VR Field with six radio mic receiver channels, all powered by a lead-acid block battery in the base.

On our first helicopter shooting day, the 1st AD told me with a grin that we would be starting with dialog on the first shot: Henry Cavill hanging outside a helicopter. Fortunately, I had already taken the precaution of having bone conduction earpieces made for Henry. These were invaluable when connected to a Lectrosonics PDR to record his dialog in flight. I also needed to record sound FX inside the helicopter, so I used the smallest multitrack recorder with timecode that I could find, the Zoom F8. We fixed a DPA 5100 5.1 microphone to the interior roof and hid the F8 under a seat, so that if the 5100 were caught in camera, it could pass as part of the helicopter. We were able to start/stop and adjust levels on the F8 without needing to access it by using the F8 controller app on an iPhone. As with the Paris chase sequences, I wanted the FX to sound real and not like enhanced library recordings; a little wind on Henry’s mic often added to the reality. Additionally, because the bone conduction units were largely unaffected by background noise, the 5.1 recordings could be useful even if only certain elements of the six tracks were used.

Photo: David James

Tom Cruise always piloted his own helicopter with cameras mounted on it. Director Christopher McQuarrie would fly in another and film from at least one other camera helicopter. Because the helicopters would take off and shoot for at least thirty minutes at a time, Editor Eddie Hamilton asked if I could record McQ’s helicopter comms in addition to what I was recording of Tom and Henry for the takes. This would allow him to align the takes with exactly what McQ intended for each shot. I used another PDR for this because of its timecode sync capability and small size. At the end of the day, we became data wranglers, downloading the various SD cards and making sound reports.

After New Zealand, we returned to the UK to shoot in studio and at various London locations, eventually joined by Hosea Ntaborwa. I had mentored Hosea when he was at the National Film and TV School on behalf of BAFTA and Warner Bros. creative talent. This was one of his first jobs after graduating, and since then, he has become part of my regular team, working with us on The Voyage of Dr. Dolittle and Spider-Man: Far From Home. We had some rather challenging locations in London, and it was whilst shooting on one of these that Cruise injured his ankle.

The company went on hiatus for a few weeks, which gave me an opportunity to prepare for the HALO jump sequence. That is High Altitude Low Opening, and it involved Tom Cruise jumping from an aircraft at an altitude high enough to require oxygen. We had a huge wind tunnel built on the backlot in the studio so that the skydive team could plan and practice manoeuvres while experimenting with camera angles. The wind tunnel was also useful for my first assistant Lloyd Dudley and me to develop the system for recording Cruise during the jump. We were able to set up communications between the skydive team, McQ, the camera operator, and ground safety.

The wind tunnel was incredibly noisy, so we were appreciative that the bone conduction earpieces offered a high degree of hearing protection. The skydive team were amazed that we managed to achieve audible communication in the wind tunnel, which made it much easier to plan shots and make adjustments. They warned that skydivers had been trying for years to get good recordings during dives, always battling wind noise, and that it would be impossible to record. I reminded them that this was Mission: Impossible!

When we began shooting again, many of the locations were very inhospitable to sound, but nothing I had not come across before. The team also continued on studio sets at WB studios at Leavesden.

Jim McBride testing the comms
Lloyd and Hosea
Lloyd and Chris Munro on the way to the set

During this time, I continued to prepare for the HALO jump sequence, which was to be shot in Abu Dhabi. We did a number of tests with Cruise and the skydive team jumping from a Cessna Caravan at various UK sites. What we couldn’t test was what would happen at the highest altitudes, when oxygen was required and the jump was from a giant C17 aircraft. I was concerned with safety and with ensuring that the equipment we were using in both Tom’s and the skydivers’ helmets was intrinsically safe. The dive helmets contained lighting which could potentially ignite the oxygen, so we arranged for tests to be done in an RAF lab with all of the equipment used for the HALO jump. We also had dialog to record inside the C17, as the jump progressed directly from a dialog scene into the jump. We shot some exteriors of the C17 and interiors on the ground at RAF Brize Norton near Oxford in the UK. This at least gave us a chance to consider what we would be up against.

Eventually, the time came to travel to Abu Dhabi. My crew there was Lloyd Dudley and Hosea Ntaborwa. Lloyd concentrated mainly on looking after Tom Cruise’s bone conduction headsets and fitting Lectrosonics PDR recorders to actors. Hosea was in charge of comms for the skydive team and recording 5.1 FX in the aircraft. I was particularly interested in the sounds of breathing and how they can add tension. Once we played some of these sounds for Tom, he immediately wanted them to be a major part of the soundtrack during the jump sequence. I was not looking for pristine recordings that sounded like they were made in a studio; instead, I was interested in the raw sounds of the helmet mics and bone conduction units, which could give a more realistic documentary-type sound. I was not opposed to some wind noise and realised this could add to the reality. Having original sound adds to the feeling of reality even if the audience is only subconsciously aware of it. Post production was greatly contracted due to the hiatus in shooting. Supervising Sound Editor James Mather was under time constraints and thus appreciated all of the raw sound FX we could give him as elements, enabling his team to create the final audio to be mixed by Mike Prestwood Smith. Both have been collaborators of mine on several previous films.

In conclusion, this film involved huge leaps of faith from all parties involved. I greatly appreciate the support from Tom Cruise, Chris McQuarrie, the production team, and my team throughout. My previous experience and intuition, along with an incredibly well-respected new team, proved immensely valuable. Much of the sound was unmonitored, recorded on PDRs and other hidden recorders. Chris McQuarrie and the producers trusted us to deliver on Mission: Impossible – Fallout using never-before-used technology. Most importantly, Tom Cruise trusted that we would develop technology that would allow him to perform stunts efficiently and safely and, wherever possible, avoid ADR in order to enhance the reality.

HALO COMMS
The original intention was to use a radio mic TX on TC with a recording facility, and to have each of the additional divers and the camera operator wear a receiver connected to helmet-fixed earpieces. When we were safety testing, one of the requests was that we not transmit anything inside the aircraft, though it was OK to transmit as soon as the divers had exited. I argued that radio mics would be fairly low power and on legal frequencies, but then realised it may have been a mistake to try to achieve recordings and comms with the same device. Additionally, we needed ground contact when the divers reached a lower altitude, so that they could be given safety information about wind or any other issues, and a radio mic TX might not have enough range.

I decided to use Motorola walkie-talkies for comms mainly because they were reliable and we were familiar with them. We used finger-operated PTT (push to talk) with custom-made interfaces to connect to the Motorolas. The PTT was run inside the sleeve of each diver and operated with a finger and thumb.

For TC (shown as EH on the diagram), we used a bone conduction headset in each ear. One ear was talk/listen, connected to the Motorola via the PTT, and the other ear was connected directly to a Lectrosonics PDR. Another PDR connected to a da-capo mic (Que Audio in the US?) mounted in the helmet. I chose the da-capo mic mainly because I happened to have some, and also because these are what I had successfully used in helmets on Gravity. I had to send the mics for safety testing to confirm they were intrinsically safe when used in the helmet, which also had oxygen being pumped in. I immediately thought the da-capo mics might be well sealed, as they are waterproof. It was not a particularly scientific decision.

Tom Cruise as Ethan Hunt in MISSION: IMPOSSIBLE – FALLOUT, from Paramount Pictures and Skydance.

Tom Cruise’s Bone Conductive Earpieces & Microphone

I had custom-moulded bone conduction units in each of TC’s ears. It was necessary to have one in each ear to give hearing protection, but it also allows use of the aircraft PTT, as one ear is for talking and one for listening. There is a switch on the interface that allows for either; this was connected via the pilot’s headset connectors in the helicopter, on a US NATO plug or, on some aircraft, a Lemo connector. Trim pots on the interface, requiring a miniature screwdriver, adjust the talk and listen levels. The interface is powered by an internal battery and has transformer isolation on the connection to the helicopter to ensure isolation from the avionics.

The PDR records directly from the “talk” ear for a clean track of TC and is therefore pre-PTT. That is to say, even if TC is not transmitting through the aircraft radio, his voice is still recorded.

An output from the co-pilot comms socket goes to track eight of a Zoom F8, which is primarily there to record 5.1 ambience within the helicopter. Track eight records all comms: the director, any communication with other aircraft, ATC, and so on.

The DPA 5100 was fixed to the inside roof of the helicopter cabin toward the rear, so that if it were ever inadvertently caught on camera, it would look like part of the structure. The recorder was hidden and operated via the F8 control app on an iPhone.

The PDR was stopped and started using dweedle tones from the Lectrosonics PDR remote app, also on an iPhone.

Reprint from International Sound Technician, March 1953 issue

The Way We Were: Mixers Past & Present (Part 1)

Overview

The past ninety years of sound recording for motion picture production have seen a steady evolution in the technologies used both on set and in studios for post production. Formats used for recording sound have changed markedly over the years, with the major transitions being the move from optical soundtracks to analog magnetic recording, from analog magnetic to digital magnetic formats, and finally, to file-based recording. Along with these changes, there has been a steady progression in the mixing equipment used both on set for production sound and in re-recording. Beginning with fairly crude two-input mixers in the late 1920’s, up to current digital consoles boasting ninety-six or more inputs, mixing consoles have seen vast changes in both their capabilities and the technology within. In this article, we will take a look at the evolution of mixing equipment, and how it has impacted recording styles.

In The Beginning

If you were a production mixer in the early 1930’s, you didn’t have a lot of choices when it came to sound mixing equipment. For starters, there were only two manufacturers: Western Electric and RCA. Studios did not own the equipment. Instead, it was leased from the manufacturers, and the studio paid a licensing fee for its use (readily evidenced by the inclusion of either “Western Electric Sound Recording” or “Recorded by RCA Photophone Sound System” in the end credits). Both the equipment and the related operating manuals were tightly controlled by the manufacturers. For example, Western Electric manuals had serial numbers assigned to them, corresponding to the equipment on lease by the studio. These were large multi-volume manuals, consisting of hundreds of pages of detailed operating instructions, schematics, and related drawings. If you didn’t work at a major studio, there was no way you could even obtain the manuals (much less comprehend their contents).

Western Electric film sound operating manuals ca. 1930

Early on, both Western Electric and RCA established operations that were specifically dedicated to sound recording for film, with sales and support operations located in Hollywood and New York. Manufacturing for RCA was done at its Camden, NJ, facilities. Western Electric opted to do its initial manufacturing at both the huge Hawthorne Works facility on the south side of Chicago, as well as its Kearny, New Jersey, plant. These facilities employed thousands of people already engaged in the manufacturing of telephone and early sound reinforcement equipment, as well as related manufacturing of vacuum tubes and other components used in sound equipment.

The engineering design for early sound equipment was done by engineers who came out of sound reinforcement and telephony design and manufacturing, as these areas of endeavor already had shared technologies related to speech transmission equipment. (Optical sound recording was still in its infancy at this stage though, and required a completely different set of engineering skills.)

Western Electric 22C console. While this particular console was designed for broadcast, with some modification to monitoring, it was also the basis of film recording mixers.

With the rapid adoption of sound by the major studios beginning in April 1928 (post The Jazz Singer), there was no time for manufacturers to develop equipment from the ground up. If they were to establish and maintain a foothold in the motion picture business, they had to move as quickly as possible. Due to the high cost and complexities of manufacturing, engineers were encouraged by management to adapt existing design approaches used for broadcast and speech reinforcement equipment, as well as disc recording, to the needs of the motion picture business. As such, it was not unusual to see mixing and amplifier equipment designed for broadcast and speech reinforcement show up in a modified form for film recording.

Examples of these shared technologies are evident in nearly all of the equipment manufactured by both RCA and Western Electric (the latter operating under its Electrical Research Products division). While equipment such as optical recorders and related technology had to be designed from the ground up, when it came to amplifiers, mixers, microphones, and speakers, manufacturers opted to adapt what they could from their current product lineup to the needs of the motion picture sound field. This is particularly evident in the equipment manufactured by RCA, which had shared manufacturing facilities for sound mixing equipment, microphones, loudspeakers, and related technology used in the broadcast and sound reinforcement fields.

It was not unusual to see equipment originally designed for broadcast (and later, music recording) show up in the catalogs of equipment for film sound recording all the way through the early 1970’s.

Design Approaches

While the amplifier technology used in early sound mixing equipment varied somewhat between manufacturers, much of the overall operational design philosophy for film sound mixers remained the same up through the mid-to-late 1950’s, when the stranglehold that RCA and Western Electric had on the motion picture business began to be eaten away by the development of magnetic recording. (The sole standout was Fox, which had developed its own Fox-Movietone system.) New magnetic technologies (first developed by AEG in Germany in 1935) began gaining a foothold after the end of WWII, and players such as Ampex, Nagra, Magnecord, Magnasync, Magna-Tech, Fairchild, Stancil-Hoffman, Rangertone, Bach-Auricon, and others began to enter the field. Unlike RCA and Western Electric, these manufacturers were willing to sell their equipment outright to studios, and didn’t demand the licensing fees associated with the leasing arrangements of RCA and Western Electric.

RCA BC-5 console. Another of the consoles made by RCA for broadcast, but adapted in various configurations for film use.

Despite these advances, RCA and Western Electric were still the major suppliers of film sound recording equipment for the major studios well into the mid-sixties and early seventies, with upgraded versions of the optical recorders they had developed at significant cost in the late 1940’s still being used to strike optical soundtracks for release prints. Both RCA and Western Electric developed “modification kits” for their existing dubbers and recorders, whereby mag heads and the associated electronics were added to film transports, thereby alleviating the cost of a wholesale replacement of all the film transport equipment. Much of this equipment remained in use at many facilities up until the 1970’s, when studios began taking advantage of high-speed dubbers with reverse capabilities.

Westrex RA-1485 mixer in “tea cart” console. Note the interphone on the left for communication with the recordist.

The 1940’s

After the initial rush to marry sound to motion pictures, the 1940’s saw a steady series of improvements in film sound recording, mostly related to optical sound recording systems and to solving problems with synchronous filming on set, such as camera noise, arc light noise, poor acoustics, and related issues. Back in 1935, Siemens in Germany had developed a directional microphone, which provided a solution to sounds coming from off set.

The decade also saw Disney release the movie Fantasia, a groundbreaking achievement that featured the first commercial use of multi-channel surround sound in the theater. Using eight(!) channels of interlocked optical sound recorders for the music score, along with numerous equipment designs churned out by engineers at RCA, the “Fantasound” system can safely be called the most significant advance in film sound during the 1940’s. However, except for the development of the three-channel (L/C/R) panpot, the basic technology utilized for the mixing consoles remained mostly unchanged.

Likewise, the functionality of standard mixing equipment for production sound saw few advances, except for much-needed improvements to amplifier technology (primarily addressing problems with microphonics in early tube designs). Re-recording consoles, however, began to see some changes, mostly with regard to equalization. Some studios began increasing their dubber counts as well, which required more inputs. For the most part, though, the basic operations of film sound recording and re-recording remained as they were in the previous decade.

Nagra BMII mixer. This was one of the first portable mixers to include powering for condenser mics as part of the mixer design.
Sela portable mixer designed for use with Nagra recorders. Incorporating T-power for mics, and low-frequency EQ, this was an industry standard for years.

The 1950’s

While manufacturers such as RCA and Western Electric attempted to extend the useful life of the optical sound equipment they had sunk significant development money into, by the late 1950’s, production sound recording had already begun the transition to ¼” tape with the introduction of the Nagra III recorder. Though other manufacturers such as Ampex, Rangertone, Scully, RCA, and Fairchild had also adapted some of their ¼” magnetic recorders for sync capability, all of these machines were essentially studio recorders with sync heads fitted to them. While the introduction of magnetic recording significantly improved the quality of sound recording, it would remain for Stefan Kudelski to introduce the first truly lightweight battery-operated recorder capable of high-quality recording, based on the recorders he had originally designed for broadcast field recording. This was a complete game-changer, eliminating the need for a sprocketed film recorder or studio tape machine located off set somewhere (frequently in a truck), with the attendant AC power or bulky battery boxes and inverters.

Westrex RA-1424 stereo mixer. This mixer, introduced in 1954, was made in six different configurations, equipped with either four or six inputs, and variations on buss assignments. It was most likely developed in response to the need for true 3-channel recording.

Later, Uher and Stellavox would also introduce similar battery-operated ¼” recorders that could record sync sound. Up until this point, standard production mixing equipment had changed little in design philosophy from the equipment initially developed in the early 1930’s (the exception being some of the mixing equipment developed for early stereo recording during the early 1950’s for movies such as The Robe). Despite the development of the germanium transistor by Bell Laboratories in 1951, most (if not all) film sound recording equipment of the 1950’s was still of vacuum tube design. Not only did this equipment require a significant source of power for operation, it was, by nature, heavy and bulky as a result of the power transformers and audio transformers that were a standard feature of all vacuum tube audio designs. In addition, it produced a lot of heat!

Most “portable” mixers of the 1950’s were still based largely on broadcast consoles manufactured by RCA, Western Electric (ERPI), Altec, and Collins. Again, all of vacuum tube design. The first commercial solid-state recording console wouldn’t come around until 1964, designed by Rupert Neve. A replacement for the venerated Altec 1567A tube mixer didn’t appear until the introduction of the Altec 1592A in the 1970’s.

French-made Girardin tube mixer

A common trait amongst these designs was that nearly all were four-input mixers. The only EQ provided was a switchable LF rolloff or high-pass filter. There were typically no pads or first-stage mic preamp gain trim controls. The mic preamps typically had a significant amount of gain, required to compensate for the low output of most of the ribbon and dynamic mics used in film production at the time (and while condenser mics existed, they too tended to have relatively low output).

All had rotary faders, usually made by Daven. And except for the three-channel mixers expressly designed for stereo recording in the 1950’s, all had a single mono output.

Re-recording consoles were of course much larger, with more facilities for equalization and other signal processing, but even these consoles seldom had more than eight to twelve inputs per section.

The 1960’s

While the 1950’s saw some significant advances in the technology of sound recording and reproduction, with the exception of the introduction of stereo sound (which was typically for CinemaScope roadshow releases), there had not been any really significant advances in recording methods since the transition from optical to magnetic recording. Power amplifiers and speaker systems had somewhat improved, boosting the performance of cinema reproduction. However, most mixing consoles relied on circuit topologies that were based on equipment from the 1940’s and 1950’s, with some minor improvements in dynamic range and signal-to-noise ratio.

Westrex RA-1518-A stereo mixer. Note the early, ingenious use of straight-line faders, which are actually connected to Daven rotary pots on the underside of the panel.

It was during this period that technologies developed for music and broadcast began to seep into film sound, and the approach to console designs began to change. The most notable shift was the move from the tube-based designs of the 1950’s to solid-state electronics, which significantly reduced the size and weight of portable consoles and, for the first time, allowed for designs that could run directly from batteries, without the inverters or generators required to power traditional vacuum tube equipment. This opened up a significant range of possibilities that had not existed before.

Sennheiser M101 mixer. Another of the early entries to solid-state portable mixers.

With the introduction of solid-state condenser mics, designers began to incorporate microphone powering as part of their overall design approach to production mixers, which eliminated the need for cumbersome outboard power supplies.

Some mixers also began to include mic preamp gain trims as part of the overall design (another idea borrowed from music consoles of the era), which made it easier to optimize gain and fader settings for a particular microphone and the dynamics of a scene.

The 1960’s would also see the introduction of straight-line faders (largely attributed to music engineer Tom Dowd during his stint at Atlantic Records in New York). In the film world, straight-line faders showed up first in re-recording consoles, which could occupy a larger “footprint.” However, they were slow to be adopted for production recording equipment. This was due in part to some resistance on the part of sound mixers who had grown up on rotary faders (with some good-sized bakelite knobs on them!), but also to the fact that early wire-wound straight-line faders (such as Altec and Langevin) fared rather poorly in harsh conditions, requiring frequent cleaning.

Still, even by the end of the 1960’s, not much had changed in the overall approach to production recording. Four-input mixers were still the standard in most production sound setups, with little or no equalization. But the landscape was beginning to shift. While RCA and Westrex were still around, they had lost their dominance in the film production world (although RCA still had a thriving business in the theater sound service arena).

Things were about to change, however.

Part 2 will continue in the next edition.
–Scott D. Smith CAS

Adapting to 4K & Beyond

by James Delhauer 

When looking at the history of the technology that defines our industry, the acceleration of progress in the recent past is truly staggering. In 1953, the National Television System Committee (NTSC) introduced the color television broadcasting format, colloquially known today as NTSC Standard Definition. Though minor variations on the format were introduced over time, it remained mostly unchanged until the first high-definition television standards were officially adopted in the United States more than forty years later, in 1994. But just twenty years after that, in 2014, that resolution was made obsolete when the first digital streaming services began to widely distribute content in 4K Ultra-High Definition (UHD). Though 4K UHD is the current highest standard of content distribution, current speculation suggests that mainstream adoption of even larger 8K displays will begin in the United States in 2023, with distribution platforms officially supporting it shortly thereafter. If accurate, this would mean that the amount of time between home media standards has roughly halved with each leap forward: about forty years from SD to HD, twenty from HD to 4K, and nine or ten from 4K to 8K. And in the not-so-distant future, that could be a very real challenge.
 
A digital image is made up of what we call pixels—tiny dots that come together in rows and columns to make up a single image. What we call resolution is a measurement of the number of pixels in a given image. NTSC Standard Definition images are six hundred forty pixels across by four hundred eighty pixels high, commonly represented as 640×480. Though a variety of different high-definition formats exist, the one referred to as Full HD increases those figures to one thousand nine hundred twenty pixels by one thousand eighty pixels, written as 1920×1080 or simply 1080 for short. This increase results in substantially more dots being used to make up the same picture, allowing for more detail, precision, and color shading when replicating what a camera’s sensor captures, making for a more nuanced product.
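The arithmetic behind those names is easy to verify; here is a quick sketch in Python using the standard UHD dimensions.

```python
# Pixel counts for the formats discussed above: resolution is simply
# columns times rows (the 4K/8K UHD dimensions are the standard ones).
formats = {
    "NTSC SD": (640, 480),       #    307,200 pixels
    "Full HD": (1920, 1080),     #  2,073,600 pixels
    "4K UHD": (3840, 2160),      #  8,294,400 pixels (4x Full HD)
    "8K UHD": (7680, 4320),      # 33,177,600 pixels (16x Full HD)
}
full_hd = 1920 * 1080
for name, (w, h) in formats.items():
    pixels = w * h
    print(f"{name}: {w}x{h} = {pixels:,} pixels ({pixels / full_hd:.2f}x Full HD)")
```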
 
But while we all love to be dazzled by the absolute clarity, color, and sharpness that high-resolution imagery can offer, there are very real logistical quandaries that filmmakers need to consider. For Local 695 data engineers in particular, whose responsibilities can include media playback, on-set chroma keying, off-camera recording, copying files from camera media to external storage devices, backup and redundancy creation, transcoding, and syncing, digital real estate is a growing concern. With four times as many pixels as Full High Definition, 4K UHD means four times the amount of data captured over the same amount of time. The impending move to 8K will multiply this amount by another factor of four, since 8K doubles both the horizontal and vertical pixel counts of 4K rather than merely doubling the total pixel count. In practical terms, productions will need to spend sixteen times as much on media as they did for HD.
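To put rough numbers on that scaling, consider the back-of-the-envelope sketch below. The HD data rate and hours of footage per day are assumed example figures, not specs from any camera or production.

```python
# Back-of-the-envelope media math for the 4x / 16x scaling described
# above. The HD data rate and shoot hours are assumed example figures.
HD_MBPS = 100        # assumed HD camera data rate, megabits per second
SHOOT_HOURS = 6      # assumed hours of footage recorded per day

def terabytes_per_day(scale: int) -> float:
    bits = HD_MBPS * scale * 1e6 * SHOOT_HOURS * 3600
    return bits / 8 / 1e12          # bits -> bytes -> terabytes

for label, scale in [("HD", 1), ("4K UHD (4x)", 4), ("8K UHD (16x)", 16)]:
    print(f"{label}: {terabytes_per_day(scale):.2f} TB per camera, per day")
# At these assumptions: HD 0.27 TB, 4K 1.08 TB, 8K 4.32 TB per day,
# before backups and redundant copies multiply it again.
```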
 
But drive space isn’t the only issue. Just because our data quantities are increasing does not mean that the films and series we make can accommodate turnaround times that are four or sixteen times longer. Therefore, we need faster drives and more powerful computers.

In simplest terms, a hard drive’s read and write speeds determine how quickly it can access the data stored within and add new data onto itself. For data engineers, this is of critical importance when transferring media from one location, such as a camera card, to another, such as a production shuttle drive. A standard spinning-disk hard drive’s speed is largely determined by how fast the disk inside it spins. Typical work drives spin at seventy-two hundred revolutions per minute when brand new and, in theoretical terms, can transfer between eighty and one hundred sixty megabytes of data per second. Unfortunately, even at this speed, these drives are not always suited for high-definition work—let alone the more intensive labors of 4K and beyond. A convenient way around the problem is a technology known as a Redundant Array of Independent Disks, or RAID. Though RAIDing can be done in a variety of ways, the basic concept is that multiple drives are used to accomplish a single task. By using two or more hard drives instead of one, a task can draw on the performance of every drive simultaneously, as the sketch below illustrates. In live broadcast environments, it is common to use RAID configurations of anywhere from four to sixteen hard drives at once in order to ingest multiple cameras’ worth of media simultaneously.
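The sketch below models that idealized speedup; the per-drive throughput and card size are assumed example figures.

```python
# Idealized model of the RAID speedup described above: with data striped
# across N drives, throughput approaches N times a single drive's rate
# (real arrays lose some of this to overhead).
SINGLE_DRIVE_MBS = 150    # assumed 7200rpm drive throughput, MB/s
CARD_SIZE_GB = 512        # assumed camera card to offload

for drives in (1, 2, 4, 8):
    rate = SINGLE_DRIVE_MBS * drives            # aggregate MB/s
    minutes = CARD_SIZE_GB * 1000 / rate / 60   # GB -> MB, seconds -> minutes
    print(f"{drives} drive(s): ~{rate} MB/s, ~{minutes:.0f} min per card")
# One drive needs close to an hour for a 512GB card; eight striped
# drives bring the same offload down to about seven minutes.
```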

However, while those speeds are impressive and more up to the challenge of high-resolution production, they come with serious drawbacks. Every hard drive introduced into the configuration represents a potential point of failure. In a simple RAID configuration, the loss of a single hard drive can mean the loss of all footage contained within the array. More complex configurations take this into account and create redundancies, but these require additional hard drives, returning us to the issue of real estate. Newer solid-state drives—media devices that have no moving parts and therefore don’t rely on disk speed—may in time represent a solution to the RAID issue. Though they are currently far more expensive (a one-terabyte 7200rpm hard disk drive can be bought for as little as $44, whereas a solid-state drive of the same size and brand will run you $230), they are significantly faster at the same tasks. This theoretically means fewer drives per RAID configuration, reducing the points of failure in the array. Moreover, with no mechanical parts to jam or degrade over time, they may be less prone to failure in the first place. Unfortunately, we will need to wait for production costs to bring retail prices on solid-state media down before this becomes a viable alternative.

This is all assuming that a hard drive is not limited in any way by its connection to the computer with which it is communicating. The physical port that a hard drive uses to interface with a computer may have a speed limitation completely unrelated to the drive’s own. The most common type of port, USB 3.0, has a theoretical limit of five gigabits per second, with one gigabit equaling one-eighth of the more commonly measured gigabyte (so roughly 625 megabytes per second in total). A single spinning-disk drive does not read or write faster than that, so there is no problem.

However, an array of drives working together can easily exceed that limit, at which point the data hits a choke point in the wire connecting the computer to the drives. At the time of writing, the fastest connections on the market are Thunderbolt 3 and 40-gigabit Ethernet, tied at a theoretical maximum of 40Gb/s, though neither has seen widespread adoption within the industry.
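Comparing those ceilings against an example array shows where the choke point lands; the drive figures below are assumed examples.

```python
# Compare an array's aggregate throughput with the theoretical ceiling
# of the port it hangs off. Port figures are the theoretical maximums
# named above; the drive numbers are assumed examples.
PORTS_GBPS = {"USB 3.0": 5, "Thunderbolt 3": 40, "40G Ethernet": 40}

array_mbs = 8 * 150   # e.g., eight ~150 MB/s drives striped together

for port, gbps in PORTS_GBPS.items():
    port_mbs = gbps * 1000 / 8          # gigabits/s -> megabytes/s
    verdict = "port-limited" if array_mbs > port_mbs else "drive-limited"
    print(f"{port}: ceiling ~{port_mbs:.0f} MB/s -> {verdict}")
# USB 3.0 tops out near 625 MB/s, so a 1200 MB/s array is throttled by
# the wire; Thunderbolt 3 and 40G Ethernet (~5000 MB/s) leave the
# drives themselves as the bottleneck.
```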

All of that being said, engineers don’t just need to be able to move data around faster if we are to keep up with the demands of higher resolution. It is of equal importance that we process it more quickly too. Since working with ultra-high definition and larger formats requires a prohibitive amount of computer processing power, our editor friends in Local 700 rarely do it. Instead, they make use of a process known as “Offline to Online Editing,” where they use lower resolution proxy file copies of the camera media when assembling their projects and then swap out those proxies for the original high-resolution camera media when preparing to color grade and deliver. Where do those proxy files come from?

Us.
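Creating a proxy is, at its core, a transcode pass. As one hypothetical illustration, a batch job built on the widely used ffmpeg tool might look like the sketch below; the folder names and ProRes Proxy settings are illustrative choices, not a description of any particular show’s pipeline.

```python
# Hypothetical on-set proxy pass: downscale camera originals to a
# 1080-class ProRes Proxy file that editorial can cut with, leaving
# audio untouched. Folder names and settings are illustrative only.
import subprocess
from pathlib import Path

CAMERA_DIR = Path("camera_originals")   # hypothetical source folder
PROXY_DIR = Path("proxies")
PROXY_DIR.mkdir(exist_ok=True)

for clip in sorted(CAMERA_DIR.glob("*.mov")):
    out = PROXY_DIR / f"{clip.stem}_proxy.mov"
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-vf", "scale=1920:-2",                  # scale width to 1920, keep aspect
        "-c:v", "prores_ks", "-profile:v", "0",  # ProRes Proxy profile
        "-c:a", "copy",                          # pass audio through unchanged
        str(out),
    ], check=True)
```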

Local 695 data engineers can be tasked with creating these proxies on set, which necessitates being able to work with the raw, high-quality footage captured by the camera. This means more powerful and efficient computers are becoming necessary. Several factors determine how powerful a computer is for our purposes: processor speed, memory, graphics memory, hard drive speed, and connection speed all need to be taken into account. For these reasons, and a few others, the majority of the industry has become reliant on Apple computers.

Unfortunately, Apple’s professional line of computers tends to stagnate for long periods of time. The Mac Pro, Apple’s line of professional-grade editing machines, has remained unchanged since 2013. The company has announced a replacement, tentatively to be released in 2019, and a stopgap was introduced in 2017 when the company unveiled the iMac Pro, but these computers are not cheap. An introductory iMac Pro costs $4,999, while a fully upgraded machine can run as much as $13,200. And this assumes the use of only one machine at a time.

While the world of major motion pictures has largely embraced the move from lower viewing resolutions to higher ones, with digital cinema cameras already recording in 6K and 8K resolutions, television has yet to catch up with current Ultra-High Definition standards. In the United States, many series still record in high definition. It’s understandable when the sheer volume of footage is taken into consideration. While a feature film records enough footage to assemble a presentation lasting between ninety minutes and three hours, a television series spanning multiple seasons can run hundreds of episodes. The need to process and preserve all of that footage requires a staggering amount of resources, and doing it all in 4K or 8K makes the challenge even more monumental. But as the cost of 4K televisions, computer monitors, and even cellphone screens continues to plummet, and as 8K displays are introduced into the market, our audience is going to demand that this challenge be met.

These concerns are not new. The jump from standard definition to high definition presented the same obstacles in the late 1990s. The new factor at play here is time. While we, as an industry, could no doubt rise to the growing needs of 4K production given time, the advent of the 8K world is already within sight. How far behind that is the realm of 16K? Another ten years? Or maybe just five? It will be interesting to see at what point the innovation of one avenue of technology collides with the reality of another.

My Life as a Commercial Sound Mixer

by Crew Chamberlain

Crew Chamberlain at Disney Ranch

My name is Crew Chamberlain, and yes, that is my real name, ever since the spring of 1952 when I was born in Fullerton, Calif., at the edge of what we now call the “Thirty Mile Zone.” I am a child of the baby boom, as well as a fourth-generation Californian, from a large family that settled in and around Orange County. Many of them worked the “Oil Patch” for the Standard Oil Company, others had their own businesses, but not one of them worked in Hollywood or even knew someone who did. Media back then was a black-and-white television (five stations), radio (lots of stations), and a Saturday-afternoon trip to the Fox Fullerton to see the latest double feature for thirty-five cents.

As much as I loved the films and TV that I grew up with, it never occurred to me that there was a system of people, places, and companies that made the content I consumed. We just loved grand and silly movies like Journey to the Center of the Earth, The Nutty Professor, and The Great Escape, and TV shows like The Twilight Zone and The Man From U.N.C.L.E. I was totally oblivious to the process of film production. That started changing in high school, where I was a proud C student who only enjoyed having fun: playing drums in my garage band, body surfing, drawing, and painting.

Crew on his first multitrack gig, 14 ball players and two DA88’s

In my sophomore year at Sunny Hills High School, an art teacher gave our class a unique assignment. She had us paint patterns on exposed 16mm film (a clear strip), which we then projected in class and played songs to. We all liked this so much we asked to do it again and again. She next suggested that if we could get access to an 8mm camera, we could try to make a short film. We did, and it was terrible. Not a Spielberg in the bunch, but we had a blast doing it. That little spark started a slow-smoldering interest in film within me. With parental pressure to go to college, the Vietnam War spinning out of control, and a very motivational mandatory draft, I discovered that there was a thing called “Film School.” These were not the ubiquitous film schools of today; at the time, there were only three: NYU, UCLA, and USC. It took me a year at ASU to get my grades up, but with some hustle and a lot of luck, I got into the USC School of Cinema in 1971. I had found my calling.

Nike commercial for Joe Pytka in San Francisco; in the late ’80s (from left) with Tom Stern by the arc, Jane Hampton, script, Roger Daniel, mixer, and Crew.

The USC Film School was a nerdy endeavor in the eyes of the student body back then, until the release of American Graffiti by fellow Cinema School alum George Lucas during my second year. All of a sudden, everyone knew of and thought highly of our little dilapidated barn-and-stable complex on the corner of campus. What a deep dive I took. We lived, breathed, and endlessly discussed cinema. I watched at least two films a day from every genre and era. A very talented group of professors like Mel Sloan, Ken Miura, and Drew Casper engaged and enlightened us. We had to make many films, write a screenplay, and learn the nuts and bolts of the gear. Film cameras! Sound recording on Nagras with microphones! History & criticism!

Editing … sweet editing. A filmmaker’s last chance to make a story into something. That hands-on creativity is what I loved the most. That was going to be my choice of career path after college. I would become an editor and of course, some day a director….

After USC, I landed a job at Wexler Films (not related to Haskell or Jeff), which made medical and health-related shorts for high school education departments. I was an ‘assistant editor,’ but really a PA. On shoot days, I did everything needed, including the sound recording for interviews on a Nagra 4L with a 415 or Sony ECM 50 lavs. I loved production shoot days, and these interactions led to other sound gigs. Soon I was recording sound on docs, interviews, and low-budget films. I also started booming for other mixers, which seemed very natural to me, as I had the coordination and enough sound knowledge to get going. Looking back, the progression was a steady one. I went from working on drive-in-level Corman films, to a John Cassavetes movie (Opening Night), to Albert Brooks’ first film (Real Life).

For the first thirteen years of my career, I pursued films above all else and landed some good ones. I was fortunate to get better and better movies and experience with some of the best mixers of the day: Jeff Wexler, Jim Webb, Art Rochester, and Keith Wester. Between films, I boomed for many good commercial mixers, but mostly Roger Daniel. As much as I appreciated the work, I wasn’t a fan of commercial shoots. No one who ever went to film school wanted to do commercials.

Films were the goal, and I did about as well as possible thanks to Jeff Wexler and Don Coufal mentoring me. Thankfully, Roger continued to hire me between films and showed me many of the valuable aspects of commercials that I had overlooked, like the time/money ratio.

It all came into focus for me in 1986. I was thirty-five years old, with a wife and two kids under five whom I loved and wasn’t able to be a full-time father and husband for. That year, I had been on location for nine of the prior twelve months. I took a good long look around, felt good about where I’d been and all I had learned, and decided that a change was needed: my path forward was going to be in the commercial realm. At least until the kids grew up, I consoled myself.

Commercials. Yep, commercials. Still hard to believe. When I got into the IATSE in 1977, film was the undisputed king of the crop. Television was a different beast than today, closer to the old factory system of the studios, cranking out a steady stream of cop shows and three-camera sitcoms, and commercials were, well, commercials. The stigma was understandable, as commercials were formulaic and square at best. That all started changing in the mid-’80s as a new sensibility and aesthetic took hold. Directors like Joe Pytka, Ridley and Tony Scott, Rick Levine, Bob Giraldi, Adrian Lyne, and others brought a cinematic look and approach to the process. An entertaining thirty- or sixty-second movie/story was the new direction. These directors all did commercials, as well as the emerging music videos and feature films. For them, work was work; another day to practice and perfect their craft. Money was a motivator too. The commercial stigma started to evaporate a bit.

For a Sound Man (that’s what we were called, and mostly were, back then), commercial work left the two-walled sets on quiet stages and more often ventured out to practical locations where the challenges were more difficult, exactly like film and TV. The work was still ninety percent one boom and a mono Nagra, and luckily one camera, so our success rate was high. Roger and I did so well that I was working more days a year than before, with the big difference being that I was home every night with my family. I was happy and seldom looked back. On commercial shoots back then, many a day was short for the Sound Department. We’d have a later call, record our part, and go home early. A lot of six- to eight-hour days in the eighties and early nineties. The glory days, as it were.

Crew (left) with William (Bill) MacPherson working on The Milagro Beanfield War in Truchas, New Mexico, in 1986

Roger had a very loyal clientele of the top players in the commercial world; the LA, NYC, and London production companies and directors called him first. I nicknamed a lot of people, and I called Roger “The Godfather.” Not that original, but true. He handed off many, many gigs to other mixers, and I and others benefited greatly from his doing this to keep his customers satisfied. That’s how I moved up to mixing in late 1987. I started covering for Roger on the weekends when Pytka would fly a small group up to Santa Rosa to do “Bartles & Jaymes” commercials. Simple shoots: a Nagra, a 415, and a radio mic. Only one of the two actors spoke. As much as I loved booming, the handwriting was on the wall, and by the end of 1988, I had bought two used Nagra 4.2’s and a stereo Sela from 20th Century Fox, a Lectro Quad Box with 195’s in it, and six Comteks. I already owned two Schoeps MK 41’s. So there I was, a Commercial Mixer.

Crew on the set of the 1985 film Prizzi’s Honor; the 1986 film Down and Out in Beverly Hills with Matisse, the dog, and Clint Rowe, dog trainer

Hard to believe it was ever that simple. I continued to cover for Roger most of the time but I soon got new clients of my own.

Crew booming Nick Nolte; Jamie Anderson is the camera operator.

We were working nonstop and I learned so much about being a team leader because Roger was my signpost and guru. As a mixer, I rely on my boom ops to run the set while I run the gear, take the calls, and babysit the production people, agency, and clients. I have been fortunate to have so much help from my boom op friends since I started: pros like Steve Bowerman, Randy Johnson, Jim Stuebe, Mychael Smith, later Pam Raklewicz, Dan Kent, my brother Moe, Alenka Pavlin, Anna Delanzo, Peter Commans, Bryan Whooley, and for the last eighteen years, my niece Marydixie Kirkpatrick. My two sons Case and Cole even boom for me now and again, as they are up to speed in our unique world of production sound recording for picture. With that kind of help, it has been easy and fun to do my end of the job all these years. I want two qualities in a teammate: a good work ethic and the ability to find the humor each and every day in the absurd work we do. These women and men are all champs in this regard and I owe them my successful career. While we had some killer days, it never seemed like work; it was always an adventure.
 
The work itself is really no different technically from film or TV. This is as true in today’s multitrack workflow as it was with a mono Nagra and a boom, maybe a radio or two. The biggest difference between long-form or episodic media creation and the thirty-second epic is that sound is a two-person department 99.9 percent of the time. To do our job successfully, we have to pull together to get whatever needs doing done so that “waiting on sound” is never heard. The division of labor is equal: running/wrapping cable, putting down sound blankets, etc. We have to be ahead of the curve as much as possible. On any given day, cooperation with other departments is limited by the time constraints on all of us, so when help is needed, we have to request it early, when it can be accommodated, not after the shot is set up. This is true for all the departments in commercial production. With only a storyboard and a few confused conversations with an often overwhelmed production manager, we all show up at a location with sixty-five other craftspeople and make a commercial. We work not only for the director (like films and TV), but also for the ten to twenty agency people and clients.

This aspect of the work is what took me a long time to come to terms with. Often these fine people have little or no filmmaking experience, or just enough to be dangerous. If you let it, this fact alone will make your life hard, perhaps joyless. Communication is essential in our commercial world as we try to put their fears to rest and assure them it will be great. Then we tear it down and do it all again the next day with a different crew, for a different director and client, in some high-rise or on a nasty pig farm. Like our brothers and sisters in TV and film, we work at every location imaginable. Hot or cold, often both in the same fourteen- to eighteen-hour day, as we do our best to make it sound like it looks. It still surprises me that this system works as well as it does.

Shooting in Downtown Los Angeles.

The simple workflows of the eighties and nineties have given way to the modern style of multi-camera shoots, with all sorts of rigs to move cameras everywhere, and no money or time to do what is on the storyboard. Sadly, we have even less discipline and protocol than before, so sound wires everyone to their own tracks as we try to create useful mixes for the director, agency, and the editor in post-production. Still, we do this with a two-person Sound Department, even though it would be a full day for three people.

The key for my team is the ability to look two, three steps ahead of what we read on paper and are told by production, to stay actively involved on set, and to have the gear to do almost anything that comes our way. What starts as one woman talking on the phone can morph into a car-to-car shot with four people singing a song as they drive down the road, a video village bus trailing along with always helpful suggestions. In my experience, this wouldn’t happen in TV or film without advance warning, permits, and sides. Not so in commercials. When it happens, and it will, the response “I can’t” is not an option as far as I’ve ever seen. Somehow we need to make it happen. Our video assist brothers know this all too well. They deal with unrealistic expectations and demands all the time. We help each other; I’ve always been there for them as they have been there for me. It’s been an education to have witnessed the introduction of video assist in the late seventies and its evolution into the modern-day NLE-based multi-cam systems we have today. A remarkable group of people made this happen, like John Hill, all the Cogswells, the Hawks, Willow Jenkins, Tom Myrick and his gang, and Cal Evans, who makes it fun regardless of the shoot.

Marydixie Kirkpatrick and Crew Chamberlain on the docks in San Pedro in December 2017.

I know for many, the uncertainties of the day in the commercial world can be stressful, but whether it is a personal defect or a talent in me, I really like what I do. I may work twenty days in a row, then have fifteen days off. I can always say I’m booked if I’m called for a rap music night shoot in Palmdale. I always try to pass any job I can’t or won’t do to a member of my close network, but I have little control over where production will go after calling me.

Some in production want a known person to do the sound, others just want to cut a deal, and sadly, there are those in our community who think lowballing on rates and gear is a smart move. There are a lot of sound people in LA, so the competition is always there, at times adversarial and short-sighted on the part of a few. But most of our community is on the up and up and plays by the unwritten rule that if a job comes to us from another mixer, we never actively try to steal the account or undercut their rate.

Crew (right) with his brother Moe Chamberlain

My favorite aspect of working on commercials is the people we get to work with as they pass through our arena, be they the “Hot” new cinematographer or the old “Pro” you’ve known for thirty years, and their talented crews. Also the energetic young women and men just starting their careers in the Art, Hair, Wardrobe, and Production Departments, who are fun to be around and prove that hope springs eternal. The athletes and stars can be fun, but for me, the crews are the best part of the job. Ageism, while very real in the workforce in general, seems less so for a mixer. At sixty-six, I am often a decade or two older than most of those I work with. I enjoy the energy and modern culture they expose me to and, for those who are interested, the sound and production knowledge I’m able to pass on to them.

It would be impossible for someone to follow my career path today; that world is long gone, and with new media expanding, the future of commercials seems destined for a slow demise. Even so, I think a rewarding career can be had, and hopefully the lessons of the past might be helpful for those going forward in what we call Hollywood. At some point in the next four years, I will hang up my headphones and get out of the way. I will do so knowing I met and worked alongside many of the most interesting and talented people in the film business and had a lot of fun doing so. And for the record, I do direct, shoot, and edit personal media projects, so I guess my original game plan worked. Just not the way I thought it would.

Y-16a: The Production Sound/Video Trainee

We don’t hear a lot about Local 695’s Y-16a Production Sound/Video Trainee but it’s a position that has been finding its way onto a wider range of film and television productions as it becomes more useful to both Producers and 695 production crews.

WHAT IS THE Y-16a?
Local 695’s Y-16a Production Sound/Video Trainee can be hired to work alongside any video crew or production sound crew. As the name suggests, this is a Trainee position, but these aren’t newbies. In fact, virtually all Y-16a’s come to the set with solid experience in the responsibilities and job tasks they’re expected to perform. Most of our Y-16a’s have extensive history in production sound and/or video and bring considerable talent to the job. For example, on a production sound crew, Y-16a’s can set up the carts, jam slates, swing an extra fishpole, lay out sound-dampening carpets, set up the red lights and bell, service Comteks, prep the wireless mics, operate music playback equipment, secure gear for safe travel and lots more. Y-16a’s can provide support for video crews in all aspects of setup and video engineering.

One important thing to note is that the Trainee doesn’t replace an existing journeyman position on the crew but can be used as an additional hire to increase the efficiency of a standard-sized crew when the production company can benefit from extra help. Within one year as a Trainee, the Y-16a upgrades their status to a journeyman classification.

HOW PRODUCTIONS SAVE MONEY WITH A Y-16a
We’re very aware of the cost constraints that Producers face in every aspect of production. While one might assume that an extra hire represents additional expense, we have enough experience at this to assert with confidence that adding an extra person to the crew at the Trainee rate will consistently SAVE the company money in the form of improved production efficiency. There are many ways to pick up a few minutes on production. For example, our crews use Y-16a’s so the company doesn’t experience delays when multiple video monitors need to be relocated quickly, when a large number of actors are waiting to get wired, or when the shooting schedule calls for challenging company moves.

Producers and some of our own members have told us that they didn’t know this option existed, but once they hired a Y-16a, it was easy to see the cost benefits when sound and video crews are able to complete the work with greater efficiency on heavily loaded days.

WHAT ABOUT THE INDUSTRY EXPERIENCE ROSTER?
The unique thing about the Y-16a is that, as a Trainee, although they must meet all of our expectations before even being considered for the job, it’s not necessary for a Y-16a to have previously completed all the requirements for placement on the Industry Experience Roster. In fact, for many, it’s the days worked as a Y-16a that satisfy the Roster placement requirements. The big bonus is that 695 members in this job classification have access to first-class apprenticeship training that would be completely unavailable anywhere else.

Y-16a Kendra Bates showing up for work

DIVERSITY IS OUR FUTURE
The film and television industry has long been known for attracting creative and energetic young people, and yet, for reasons we are all aware of, many of them face a variety of hurdles that can block entry into the industry. Given those obstacles, we’re excited when we have the chance to work with studios and production companies who share our commitment to implementing diversity initiatives that extend job opportunities to underrepresented members of our community. For that reason, we want to be sure that Producers know the Y-16a job classification can be used to offer employment opportunities that might otherwise be out of reach. Industry Roster placement can be a significant impediment for many who seek entry into the motion picture business, but since Roster placement is not a requirement for the Y-16a, this path opens doors for women and men who struggle for a way into our industry. Local 695 welcomes the chance to partner with Producers who share the desire to build a more diverse and inclusive production community.

CASE HISTORIES
Production Mixer George Flores uses Y-16a’s whenever he can, including Iris Von Hase and Daniel Quintana on FX’s It’s Always Sunny in Philadelphia. He says, “On this fast-paced ensemble TV show, I believe the Producers were very happy with the Y-16a’s I brought in. One of my reasons to have had a Trainee in the Sound Department is to save time and ultimately save money and that always proves to be true.”

Y-16a Kendra Bates joined Production Mixer Scott Stolz and his crew, Alex Burstein and Cara Kovach, on The Affair for Showtime. Scott says, “Kendra was great to work with and a huge help to us and there’s no doubt that production benefited greatly from having her there.” Just a few months after wrapping, Kendra has already moved up to working as a Utility Sound Technician on Season 15 of Grey’s Anatomy.

Production Mixer Scott Harber gave Set PA Erik Altstadt a chance as a Y-16a on ABC’s Castle Season 7. Scott and the rest of his crew, Chris Walmer, Howie Erikson, and John Agalsoff, were able to share decades of production experience with Erik. And Scott says the Producers at ABC were quick to see the cost benefit in having the extra hand.

SUGGESTIONS TO VIDEO ENGINEERS AND SOUND MIXERS
Pay it forward. Use the Y-16a when you can. Help educate Producers to understand how this extra hire can save them money and open doors for young people who work so hard to get into this business. Make ours an industry of inclusivity and opportunity. And pass on your extensive knowledge and vast experience to an eager Y-16a who shares the energy, excitement, and enthusiasm that first brought you to Local 695.

695 Young Workers Committee

The Local 695 Young Workers Committee
We Work for our Members and our Community

by Aaron Eberhardt and Nathan Whitcomb

The Young Workers Committee (YWC) works actively to better Los Angeles through extensive community service and political action, while also creating a truly positive environment for the younger generation of Local 695 to network with their peers. Whether it’s the annual LA River Cleanup, hosting a local blood drive, or a beautiful Saturday-morning hike with our 695 sisters and brothers, we inspire our members to get out and make a difference.

As the newly appointed co-chairs of the Young Workers Committee, we, Aaron Eberhardt and Nathan Whitcomb, would like to thank you for giving us this wonderful opportunity. We want the YWC to act as a constant resource for the younger members of the Local, and also to help new incoming members make a smooth transition. The Young Workers Committee has worked very hard over the last couple of years to create useful, beneficial activities and services for all the members of Local 695.

The Young Workers Committee at the LA River Cleanup

One of the best opportunities the 695 Young Workers Committee has been able to take part in is the annual LA River Cleanup, which has become a community service tradition. Over the last twenty-nine years, the organization Friends of the LA River has come together to help clean up the entire length of the Los Angeles River, removing over one hundred tons of trash in 2017 alone. Last year, the 695 YWC organized over twenty members, all decked out in Local 695-wear, and headed to the Glendale Narrows to help. It was messy, but loads of fun. Descending into the LA River and grabbing old mattresses, clothes, and trash was an arduous job, but we felt we were doing a huge service for our community. We also had a fantastic time chatting with our sisters and brothers: great stories were told, hard work was performed, and we bonded together to truly make an impact. This community tradition continued in 2018 with a passionate group of Local 695 members and friends who wanted to do fantastic work while enjoying the best company of sound and video professionals around. We will be continuing this great tradition for many years to come.

We hosted a hike at the Stough Canyon Nature Center in Burbank. On a delightfully breezy Saturday morning this past spring, more than fifteen members came to participate in a pleasant hike up the trail. We had a wonderful time talking about our careers, sharing stories, offering useful advice, and most of all, enjoying all nature had to offer. One of the favorite moments of the hike was seeing a younger prospective member, who was hesitant about joining the Local, in conversation with a longtime member about the benefits of joining and what membership has provided over a long career.

Aboard the Local 695 Dinner Cruise

We also continued our work with the annual Stamp Out Hunger Food Drive, which the National Association of Letter Carriers conducts to help get food to those in need across Los Angeles. On May 19, 2018, the official IATSE Volunteer Day, members of Local 695, their friends, and families journeyed to the Salvation Army warehouse in Bell, CA, to take part in the massive sorting of canned food before it was delivered to the many organizations partnered with the food drive. Members of 695 worked with other union members to help sort thousands of cans that day. We even met one of the co-chairs of Local 705’s Young Workers Committee as we were sorting some green beans. It was inspiring to see so many union professionals working together to help the community. People in need received the food within a couple of days, and it was very satisfying to know we played a part.

The 695 YWC has many more events coming up, but we’d also like to discuss our political activity. The Local cannot use our quarterly dues to promote politicians or legislation. The IATSE Political Action Committee, or PAC, was created to raise money for political activism through donations, supporting pro-union legislation and political representatives who work to keep our unions safe and our members’ rights protected. You can sign up for a one-time donation or contribute an amount each month to the fund. We must support pro-union legislation in any way we can, especially in these most trying political times.

We at the YWC decided to support the IATSE-PAC fund and held a fundraiser on the Local 695 Dinner Cruise this past June 16. We set up a booth on the main deck with our trifold describing the benefits of the PAC fund and our newly designed YWC banner. We threw on our Caribbean leis, worked hard to inform our members of the benefits of donating to the PAC, and pulled in two hundred and thirty dollars in donations. We were extremely proud of our efforts to help raise money for this phenomenal fund and know that even the smallest bit can make a difference. A key message we want to stress is that support can be found all around you in Local 695. You will always find a member who is willing to help you in a difficult situation or offer insightful advice.

Spring Hike at the Stough Canyon Nature Center

The YWC hosts these events to help create a bridge between the younger generation and the experienced professionals. We are here for you one hundred percent.

The Young Workers Committee is actively working with Laurence Abrams, Education and Communications Director, to create an introductory course for incoming members. Laurence has done a phenomenal job creating the content for this course, and the YWC will be there for anyone who has further questions on ways to network and become involved in the activities of the union.

Nathan, myself, and the many members of the YWC, such as Ben Lazarus, Eva Rismanforoush, Evan Freeman, and Chris Thueson, encourage you to become involved in the Young Workers Committee events and activities. Please reach out to Aaron Eberhardt via email at ae.sound@yahoo.com or Nathan Whitcomb at Nathan.whitcomb@gmail.com for more information. Thank you to the Board of Directors of Local 695 for the opportunity of a lifetime to work as the co-chairs of the Young Workers Committee. We look forward to creating the most beneficial opportunities for our members.

NAB 2018 Recap

NAB 2018 Recap: What you missed, what we liked, and how future entertainment is shaping new technology.

by Daron James

The annual NAB trade show, held in April at the Las Vegas Convention Center, brought new innovation and technology. Each year, themes emerge from the onslaught of manufacturer announcements, and this year was no different. Two years ago, we saw 4K and HDR flood the market, and they continue to make an appearance. Last year, buzz around the ATSC 3.0 broadcast standard was part of the conversation. In 2018, acquisitions and integrations were the talk of the show floor.

It was another big year for audio as it started to lead in IP and immersive experiences for virtual and augmented realities (VR/AR). Studios and networks are spending boatloads of cash in order to attract audiences to new immersive forms of entertainment. Whether 3D, 360-degree or ambisonic audio, you’ll be hearing a lot more about the technology advancements going forward.

For location sound, the usual suspects in audio manufacturing made their presence felt, but the showing was not as striking compared to previous years. It’s interesting; we’ve grown accustomed to hearing about new gear around trade show time, and when we don’t, it feels like a letdown. But that’s not the manufacturers’ fault. It is of our own making. Not to mention, companies like DPA, Countryman, Lectrosonics, Sanken, Schoeps, Sound Devices, Wisycom and Zaxcom have already pushed the boundaries of what we believe can be done with audio technology. They’ve improved polar patterns, increased track counts, made things dramatically smaller, integrated workflows, enhanced durability and made countless other refinements.

Many audio manufacturers, to their credit, design gear that’s future-proof and doesn’t need to be replaced often. That’s good for the consumer, but it puts the manufacturers in a challenging position to grow. We’ve seen that especially in the last several years. Zaxcom was the first to introduce a wireless system with simultaneous internal recording to a memory card. Now, Lectrosonics is following Zaxcom’s lead, offering wireless that can transmit or record to an internal memory card. Tascam offers similar solutions. There’s little doubt that if Zaxcom didn’t hold a U.S. patent for its simultaneous recording technology, it would be reproduced. Outside North America, it has been replicated by Audio Limited with its A10 Digital Wireless System, a recent acquisition of Sound Devices. Though it’s worth noting, the A10 doesn’t offer simultaneous wireless transmission and internal recording inside the U.S. because of the aforementioned patent.
 
This is not to say drawing inspiration from existing ideas and adapting them as your own is a bad idea. It’s done all the time. But it needs to be done with caution. In speaking with Intellectual Property Lawyer Michael Cohen of Cohen IP Law Group in Beverly Hills, having “some form of IP protection is not only important but it can make or break a company.” While non-practicing entities or “patent trolls” have given patents a bad name, Cohen mentions, “It’s important to realize that for practicing companies it could be their only means of survival.”

And it’s not just the bigger players battling for market position, nor is it limited to those in audio technology. If anything, NAB this year proved that companies are starting to vertically expand their portfolios. Sound Devices made a splash with the aforementioned purchase of Audio Limited, its first venture into wireless microphone systems. The buy allows the company to possibly package the 688, SL-6 and A10 wireless together at an attractive price in the future. It will also be interesting to see if Sound Devices starts integrating receivers directly into its recorders/mixers.

The Wisconsin-based company also announced a partnership with Sennheiser allowing its MixPre-6 and MixPre-10T recorders to capture and monitor 360° spatial audio through Sennheiser’s VR microphone, AMBEO.

Sennheiser showcased its new evolution wireless series, the G4, which replaces its popular G3. The G4 comes in three lines: the 100, 300 and 500. The 100 G4 series is identical in spec to the 100 G3 line but offers a new housing and a monochrome LCD instead of the orange display. The 500 G4 series is its pro line, offering a new chipset with 2,880 selectable frequencies across an 88 MHz band.

Sennheiser also introduced the Memory Mic, which captures audio for smartphones. It’s a lapel mic, though it’s pretty bulky. It’s essentially a microphone with an internal recorder that can capture up to four hours of audio. The recorded audio can then be transmitted to a smartphone via hotspot and synced to video. It might be an interesting solution for a plant mic, possibly a scratch track, but the audio quality remains unknown, as the unit shown was a prototype.

Other quick hits from the show floor included K-Tek’s new “budget-sensitive” Airo line of accessories, Rode’s NT-SF1 ambisonic mic, Rycote’s 3” windscreen for shorter mics, dubbed the Baseball, and word that Zaxcom is shipping its Deva 24 and Mix-16.

For the Video Engineers of Local 695, Flanders Scientific started rolling out 12G-SDI UHD monitors as large as 65”. Vaxis now offers an extensive range of wireless video systems spanning five hundred feet all the way up to three thousand feet. SmallHD and Teradek have teamed up to offer monitors integrated with wireless video: the 703 Bolt is a bright 3,000-nit seven-inch monitor offering five hundred feet of wireless range, while the Focus RX/TX system is an 800-nit five-inch monitor with a range of five hundred feet. On the Teradek side, its new Bolt XT and LT provide a zero-delay wireless video system. Additionally, Atomos unveiled the Ninja V and partnered with Apple to support ProRes RAW on its video recorders.

Company integration is now part of technology’s future. It was the buzz throughout NAB, at least for this observer. We’ll see more companies partnering to simplify workflows. Others will expand into untapped markets like virtual reality or adapt lucrative ideas as their own. Only time will tell which bets will pay off.

The Way We Were

The Way We Were: Adventures in Film Sound

The year is 1938. You’re on Stage 20 at the Twentieth Century Fox lot. It’s 11:45 at night, and you’ve been shooting for 13½ hours.

“Turnover please!” yells the 1st AD.

Optical Recordist c. 1930

You signal the machine room operator to roll, waiting for the red light on your mixer panel. Four seconds later, you call out, “Speed!” “Mark it,” says the 1st assistant cameraman. The 2nd assistant claps the sticks. “Background,” calls the 1st AD, and, from the director, “Action!”

A 2½-minute take ensues. One of the actors goes up on his lines for the third time. “Cut! Cut! Cut!” yells the director. He walks up to the offending actor and whispers to him, “Wayne, did you really read these lines last night, or were you and Marion out carousing together?” Red-faced, Wayne turns away, and says, “I’ll get it in this take, I promise.”

You pick up the interphone and call Bert, the Machine Room Operator. “How much do we have left?” “Three hundred feet,” he replies. You calculate the odds. If they run straight through again, you’ll be fine. On the other hand, if the director decides to pick up again mid-take, you’re screwed. It’s 11:45 at night and the crew is fifteen minutes away from a very expensive night premium. Do you want to go for it, or risk the wrath of the 1st AD for a two-minute reload? He looks at you. The daggers are already in his eyes. “Good to go!” you say, with no small amount of conviction.

And again, “Turnover!” “Speed!” you yell out. “Marker!” calls the second assistant cameraman. You curse him for taking an extra two seconds. What a prima donna… You didn’t even have a chance to ask the script supervisor how long the last take ran. “Background!” and “Action!” The scene ensues. You don’t have a stopwatch. You’re sweating bullets, as the important part of the scene happens in the last thirty seconds, a pivotal moment between the lead actor and actress, which signals a significant change in the storyline. It’s a one-take shot. If you don’t get this, you’re screwed.

“Marion,” calls out the director. “Pick it up at ‘the last time you said that to me, I said no.’” You can’t see her because she’s hidden behind a set flat, but you hear her voice draw up tightly with a hint of exasperation. “OK,” she says softly.

“And … action!” They nail it. Dead on. There’s a moment at the end of the scene which can’t be recreated, which is the moment you know the director was looking for. Despite the fact you couldn’t see them, you know from their voices it was a good performance. The 1st AD looks at you inquiringly. He doesn’t say a word, but you know what he’s thinking. You give him a “thumbs up,” despite the fact you haven’t heard from the machine room operator. You pick up the interphone immediately. A short pause while it rings, and you say to Bert, “Please tell me we got that, or we’re both out of a job.” “Your job is safe,” he replies, “we had thirty feet left.” “That’s a good thing, for both of us,” you reply, with a hint of relief. “Better load another one thousand of ’66. I’m not going to chance the last thirteen minutes of the night to this jerk.”

You get on the com to the boom op. “Larry, I think we should swap out for the 10001.” He responds, “OK, but it will take a few minutes. I’ll need to re-balance the weight box.” You calculate the odds. Is it worth it to incur the wrath of the 1st for a somewhat better track, or live with what you have? Camera is reloading. “Do it,” you tell Larry, “and you better be fast,” praying he’s faster than the 1st assistant cameraman.

You look out the back door of your truck. It’s getting cold out. “Pete,” you call out, “if you smoke one more of those cigarettes, you’re going to blow us all up.” He gives you a look, rolling his eyes. “Really?” he says. “I’ve been doing this for eight years now.” “Whatever,” you think. “Before that, you were a night-shift transmitter operator. Big friggen deal…” “Oh, and while you’re out there, coil up the rest of the Selsyn cable.”

What the hell are these guys talking about?

The term “load another one thousand of ’66” refers to a 1,000-foot load of Eastman Kodak 1366, a highly volatile 35mm nitrate negative stock. Hence the admonishment to Pete regarding smoking. One spark and the truck blows up. “Truck?” you think. Yes, that’s what was used back then, unless you were on a stage. Weight box? The counterweight used to balance the mic boom, which has to be changed based on the weight of the microphone. “Selsyn cable” was the cable used to interlock the sound recorder and camera together.
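
For anyone keeping score on the mixer’s odds-making: 35mm film carries sixteen frames per foot, so sound speed of twenty-four frames per second works out to ninety feet of film per minute. Here is a minimal sketch of that arithmetic, using the footage figures from the story above (the helper function is ours, purely for illustration):

```python
# 35mm sound-speed arithmetic: 16 frames per foot at 24 fps = 90 ft/min.
FRAMES_PER_FOOT = 16
FPS = 24
FEET_PER_MINUTE = FPS / FRAMES_PER_FOOT * 60  # 1.5 ft/s -> 90 ft/min

def runtime_minutes(feet: float) -> float:
    """Runtime of a given length of 35mm film at sound speed."""
    return feet / FEET_PER_MINUTE

print(f"1,000 ft roll of '66: {runtime_minutes(1000):.1f} minutes")  # ~11.1
print(f"300 ft remaining:     {runtime_minutes(300):.1f} minutes")   # ~3.3
# A 2.5-minute take fits in 300 feet with less than a minute to spare,
# which is why a mid-take pickup would have been fatal.
```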

Over the past eighty-eight years, the technology and set practices for sound recording have changed significantly. Soundtracks were not recorded to digital files in 1938. They were not using high-quality condenser mics. The term “10001” refers to the venerable RCA KU-3A (also known as the MI-10001, a highly sought-after ribbon mic). There were no iso tracks. If you didn’t have it in the mix, there was no recourse. They didn’t have sophisticated multi-channel mixers. What they had was one channel of optical sound, a four-channel mixer, and hopes that they could figure out the scene in enough time to get it right, or be replaced the next day.

These are some of the themes that we will explore in the column “The Way We Were: Adventures in Film Sound.” Stay tuned…

–Scott D. Smith CAS

Chinhda

Remembering CHINHDA 1954-2018

by David Waelder

Chinhda Khommarath was the most imaginative, creative, technically savvy, generous, exasperating, shy, opinionated and mercurial individual I’ve known. It was a love of fast motorcycles and rock-and-roll that brought him to our community.

I met him twenty years ago when I sought someone to make custom alterations to my sound cart. There’s not a crowded field of skilled machinists willing to take on cockamamie projects, so I was keenly interested when both Neil Stone and Mike Denecke recommended Chinhda right away. Mike gave the project a boost when he took me in hand and personally introduced me. Chinhda hated meeting with clients, but he had an especially close relationship with Mike, who used to bring him along as his boom operator back when Mike was still doing commercials. Without Mike vouching for me, Chinhda might not even have met with me.

Peter Weibel of KEM Editing Systems was Chinhda’s first employer in California. The KEM was a marvelous but sometimes temperamental beast and Peter needed a service technician. He asked a teacher at the Los Angeles Tradetech to send his best student. Chinhda showed up, was hired, set up a machine shop and serviced KEM editing tables throughout the west for many years. Peter said his work consistently exceeded expectations. After Peter moved on to other endeavors, Chinhda continued to service KEMs as an independent technician and set up a company with Joe D’Augustine to rent Avid systems. He also mixed One Night with You for Joe D’Augustine. Over time, Chinhda became a part of the small film community that settled into an industrial corner of North Hollywood not far from the current Local 695 offices. Mike Denecke, Manfred Klemme, Ray Cymoszinski and Neil Stone would get together often for lunch, an Algonquin Club for movie engineers.

After the Emmy ceremonies, Henry Embry brought his statuette to the shop to share the win with Chinhda. Photo by David Waelder

Chinhda’s work for me also exceeded my expectations. He haunted the aero surplus shops, especially Joe Factor Sales, to find specialty latches and instrument shock mounts to provide protection for the equipment. When I picked up my cart, I mentioned that I thought there was a market for a cart purpose-built to production needs. He nodded and we parted.

Mike Denecke: “Father Time,” also a Local 695 Production Sound Mixer, designed and manufactured the first practical timecode slate. He is profiled in the Fall 2009 issue of the 695 Quarterly. [Fall 2009 – Volume 1 – Issue 3 – PDF]
Manfred Klemme: At various times, Manfred Klemme was a representative for Cinema Products, Nagra and Sonosax. He assisted Mike Denecke in the development of the timecode slate. Later, with Mike’s encouragement, he established K-Tek as a supplier of boom poles and other professional audio tools.
Ray Cymoszinski: Ray Cymoszinski was a Local 695 Sound Mixer working primarily in television. His credits included Everybody Hates Chris, Columbo, Turks and Crime Story. He was also a skilled electrical engineer and designed a portable battery system that Chinhda supplied to local outlets like Location Sound and Coffey Sound.
Neil Stone: An independent service technician, Neil developed a timecode conversion for the Nagra 4.2.

More than three years later, Chinhda called and said, “I’m ready.” We developed the design together but, as we worked, it became clear that he had already given the project considerable thought. He had previously built a cart for Mike Denecke and, later, a special cart for Peter Weibel to showcase the Nagra D. My role in the design was secondary. Every piece in that first cart was crafted by hand. Chinhda took great care to make every surface smooth and every corner rounded so that each touch would bring tactile pleasure.

Custom cart built for Ao Loo. Photo by David Waelder

He had a remarkable ability to see transformative shapes in ordinary objects. An I-beam could be milled to yield a mounting bracket for a recorder; tubing, both round and square, might be cut to produce folding handles and other elements that made the product both attractive and useful. Elegance always had a purpose: folding trays expanded the workstation; gear was securely affixed by custom brackets; crabbing wheels in follow carts permitted straddling the wheel hump in a crowded camera truck. And, his creative energies were not restricted to carts; he also made, from scratch, a portable telecine machine for projects that wanted to shoot film, screen dailies, but also cut on computer.

This perfection presented its own challenges. As features were added to the carts, the cost rose until we were charging $10,000 each but making very little money even though we sold every cart Chinhda could make. His fertile imagination prevented him from locking down designs so we were never able to have components outsourced to CNC fabrication. Everything he made was milled on a Bridgeport with Chinhda moving the work table against the cutting bit by hand. And everything was crafted exquisitely. Creating those parts was his special pleasure.

Many components were like jewelry. Photo by David Waelder

His engineering genius was largely self-taught but he did have some instruction in mechanics and electrics. He told me that, while just a teenager in Laos, he took classes in applied arts and worked part time to repair vehicles damaged by wartime service in Vietnam. But it was the draw of motorcycles that brought him to California. Schoolwork assignments did not bring in enough money to buy fast bikes, but smuggling in a war zone was profitable. Inevitably, this led to a reckoning and police arrived with questions. His mother knew immediately what had to be done—she sent the police on their way and locked Chinhda into his room until she could arrange passage to California where he might join his brother. He was on a plane just two days later.

He joined his brother in mechanical engineering classes that, through the instructor’s friendship, brought him to Peter Weibel and motion picture editing. His youthful passions stayed with him throughout his life. He collected guitars and played in a band. While, as an adult, he no longer rode bikes, many of his friends were sure that he would kill himself on that wicked fast Ducati that occupied the front of his shop for more than a year. Wisely, he never took it out.

Chinhda had strong family connections, as one might expect from someone shanghaied into safety by his mother. He never failed to ask about my son. He was close to his brothers and took special pride in his nephews, who formed the core of the technical workforce in early days at K-Tek. He complained bitterly about how Manfred poached them, but you could tell he was proud. He was proudest of the accomplishments of his daughter Kathy, who studied law, gave him a grandchild and now works as an immigration attorney.

He lost his wife K.K. to cancer a few years ago and he yearned to be reunited with her. We wish them both good karma.

Chinhda Clients

The list of lives he touched, either from the sale of products or other work, is extensive. Cart and accessory clients include Coleman Metts, Glenn Berkovitz, Mark Ulano, Mike Guarino, Michael Barosky, Duke Marsh, Geoff Maxwell, Jan McLaughlin, Eric Rueff, Tom Visser, Mark Weber, Steve Tibbo, Noah Timan, Simon Bishop, Mike Sexton, Tom Stasinis, Walt Hoylman, Cloud Wang, Matt Israel, Adam Jones, Ao Loo, Ludvik Bohadlo, Henry Embry, Christoph Schilling, Caleb Mose, Susan Moore-Chong, Peter Kurland, Chris Duesterdiek, Von Varga, Paul Ledford, Edward Tise and Jeff Jones.

(Apologies to anyone omitted.) Besides the USA, countries represented include China, Poland, England, Germany and Canada.

Names in boldface are members of Local 695; italicized names are believed to be IATSE members in their respective regions. Names in plain type are from outside IATSE membership jurisdiction but are often trade unionists in their respective countries.

ProRes RAW

Introducing ProRes RAW

by James Delhauer

In ProRes RAW, Apple seemingly hopes to introduce a new standard that blends the ease of use of the original ProRes family with the post-production flexibility of a RAW file format. In short, this will give ProRes users the capability of accessing information directly from a camera’s sensor during the intermediate and editorial processes.

Though technology in our industry is an ever-evolving force, there are some constants that we have come to rely upon. In 2007, Apple unveiled the ProRes family of video codecs—a series of lossy video compression intermediate formats intended for use with the company’s Final Cut Studio bundle of post-production applications. The core concept was to introduce an easy-to-use, processor-efficient file type that editors could work with quickly while maintaining a high standard of image quality. The result was a “visually lossless” intermediate codec that celluloid film scans and larger digital files alike could be converted to for the purpose of real-time playback editing in a nonlinear environment. With a wide variety of future-thinking features such as 8K video resolution support, 10-bit sampling depth and variable bitrate encoding, the original four ProRes codecs quickly outshined their predecessor—the simply named Apple Intermediate Codec—and rivaled the capabilities of Apple’s main competitor: Avid Technology’s DNxHD. In 2009, the addition of two new ProRes formats increased these capabilities to 12-bit sampling depth at higher bitrates, making the codec that much more versatile.   
 
Though initially intended for sole use with Apple’s suite of proprietary software, ProRes grew far beyond its original intended purpose. Recognizing the value and efficiency of a high-quality but low-size intermediate codec, other companies began licensing ProRes for use in their own platforms. Adobe Systems quickly integrated it into their Premiere Pro, Media Encoder and After Effects applications—direct competitors to Apple’s Final Cut Pro, Compressor and Motion platforms. Avid Technology, despite having its own intermediate format, added ProRes support into Media Composer—the application that has been the industry standard for nonlinear editing since its release in 1989.  
 
In the last decade, almost the whole industry has followed suit. ProRes has become an integral part of daily life for those of us who work in cinema and television. Platforms critical to Local 695 video engineers, such as Pronology’s mRes, AJA’s Ki Pros, EVS’s XT3, In2Core’s QTake, Atomos recorders, Adobe Systems’ Media Encoder and Blackmagic Design’s DaVinci Resolve, all offer ProRes encoding and decoding as a standard feature. More and more of the camera systems that we provide data management services for offer it as a native capture format. In my personal experience on more than two dozen broadcast series, ProRes has been the preferred format on the majority of them. My experience in this regard hardly deviates from the norm.
 
And that is why it was so exciting when a new version of ProRes was announced at the National Association of Broadcasters (NAB) trade show in Las Vegas in April. Released as part of a joint venture between Apple and Atomos, ProRes RAW hopes to bridge the gap between convenient, easy-to-edit lossy compression formats and their larger, more robust lossless counterparts. A RAW file format is one that contains unprocessed or minimally processed data directly from a digital image sensor. These formats typically result in video files that are untenably large and, in most cases, too dense for real-time playback or processing without expensive top-of-the-line workstation computers. Until now, this sort of capture has been restricted to edit-unfriendly non-intermediate codecs, such as the proprietary formats developed by Red and Arri or the more widely available but processor- and storage-intensive CinemaDNG format. Apple, it would appear, has taken aim to change that.

In ProRes RAW, Apple seemingly hopes to introduce a new standard that blends the ease of use of the original ProRes family with the post-production flexibility of a RAW file format. In short, this will give ProRes users the capability of accessing information directly from a camera’s sensor during the intermediate and editorial processes. This information includes the sensor’s white balance, color tinting, ISO and dynamic range, at up to 12 bits per RGB channel and sixteen bits of alpha. The result is more mainstream access to expensive high dynamic range technology—a process by which multiple exposures can be blended into a single image to maximize both highlight and shadow detail. Moreover, resolution support has been increased to beyond 8K, though the current cap has not been specified by either Apple or Atomos.

Somewhat astoundingly, this has all been done without a significant increase in file size over existing ProRes formats. At release, ProRes RAW comes in two distinct flavors. The first, simply called “ProRes RAW,” is comparable in file size to the original ProRes 422 HQ, with a variable bitrate that averages approximately 220 Mbps at 1080p resolution and thirty frames per second. The second, entitled “ProRes RAW HQ,” is similar in size to the existing ProRes 4444 XQ, at an average bitrate of 500 Mbps at 1080p30. By patterning these new formats on the two most popular variants of the original ProRes family, it appears that Apple plans to integrate this new technology in a manner that is as unobtrusive as possible so as not to disturb users during rollout. As companies adopt ProRes RAW into their platforms, the original line of ProRes products will likely be phased out as obsolete and unnecessary.
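
To put those average bitrates in concrete terms, here is a rough back-of-envelope conversion to storage per hour, using only the 1080p30 figures quoted above. This is a sketch, not a spec; actual files will vary, since the bitrate is variable.

```python
# Convert the quoted average bitrates (megabits per second) into an
# approximate storage footprint in gigabytes per hour of footage.
BITRATES_MBPS = {"ProRes RAW": 220, "ProRes RAW HQ": 500}

for name, mbps in BITRATES_MBPS.items():
    gb_per_hour = mbps * 3600 / 8 / 1000  # Mb/s -> GB/hour (decimal units)
    print(f"{name}: ~{gb_per_hour:.0f} GB per hour at 1080p30")
# ProRes RAW:    ~99 GB per hour
# ProRes RAW HQ: ~225 GB per hour
```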

In practical terms, this could result in major cost savings and increased time efficiency across the industry. With regard to Local 695 specifically, the various responsibilities of a video engineer or technician include media playback, on-set chroma keying, off-camera recording, copying files from camera media to external storage devices, backup and redundancy creation, transcoding with or without previously created LUTs, quality control, and syncing and recording copies for the purpose of dailies creation. This often requires high-end workstation computers with large amounts of processing power and graphics memory, as well as large storage solutions with high-speed media. These tools are expensive, with an introductory iMac Pro (Apple’s current professional-level workstation) costing $4,999. A fully upgraded machine can cost a user upward of $13,000. While this may seem excessive, that sort of computational power is often a necessity as feature films standardize 4K production and push beyond to 6K and 8K workflows—especially for live chroma keying or compositing. Storage costs are not cheap either, with a single-use 4TB shuttle drive costing productions around $170 on the low end. But if widely adopted, ProRes RAW could make the inevitable days of higher resolution productions more manageable for engineers at a fraction of the cost. By allowing productions to maintain levels of quality with smaller and less demanding files, ProRes RAW could reduce these costs all around.
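
Extending that arithmetic to the shuttle-drive example: the sketch below estimates how many hours of 1080p30 footage fit on one 4TB drive at the quoted average bitrates, and what the media cost per recorded hour works out to at $170 per drive. Decimal units throughout; an illustration under the article’s own figures, not a vendor capacity spec.

```python
# Hours of footage per 4TB shuttle drive, and media cost per hour,
# at the average 1080p30 bitrates quoted above.
DRIVE_GB = 4 * 1000   # 4TB drive, decimal gigabytes
DRIVE_COST = 170.0    # quoted low-end price in USD

for name, mbps in (("ProRes RAW", 220), ("ProRes RAW HQ", 500)):
    gb_per_hour = mbps * 3600 / 8 / 1000
    hours = DRIVE_GB / gb_per_hour
    print(f"{name}: ~{hours:.0f} hours per drive, "
          f"~${DRIVE_COST / hours:.2f} per recorded hour")
# ProRes RAW:    ~40 hours, ~$4.21/hour
# ProRes RAW HQ: ~18 hours, ~$9.56/hour
```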

It also has the power to simplify and streamline our workflows. A common complaint from producers and engineers alike over the last few years has been the non-standardization of ingest formats in cinema-grade cameras. At present, a video engineer is required to be familiar with just about every file format and codec out there. On feature productions, commercials and high-budget television series, it is not uncommon to juggle REDCODE RAW, ArriRAW and CinemaDNG files in a given day. Each of these requires its own unique workflow to process clips for editorial so they are ready for offline-to-online editing at the end of post-production, and to create dailies for producers. We are required to be intimately familiar with each workflow and to move seamlessly from one to the next during the course of a single day. The widespread adoption of ProRes RAW could be the standardization of RAW video that we have been waiting for, allowing engineers to work via a single workflow on any given day rather than bouncing back and forth between two or three separate ones.

Currently, ProRes RAW can only be captured through external recorders manufactured by Atomos. Officially, the Atomos Ninja V external HDMI monitor/recorder was the first system to allow ingest of ProRes RAW files from cameras manufactured by Canon, Sony and Panasonic, but a firmware update released the following day granted these same capabilities to the Shogun Inferno and Sumo19 recorders as well. These recorders are capable of ingesting the new format at a resolution of DCI 4K (4096 x 2160) at sixty frames per second. DJI then announced that its Zenmuse X7 camera would receive a firmware update allowing it to capture ProRes RAW at a resolution of 6016 x 3200 at 23.976 frames per second. On the post-production end, the new codecs are currently only supported in Apple’s Final Cut Pro X platform, version 10.4.1 or higher.

That didn’t stop the whole of NAB from asking about the future of this new codec. Representatives from Blackmagic Design, Adobe Systems, AJA, EVS and Canon Inc. were all flooded with inquiries as to when users could expect ProRes RAW support in their systems. Adobe Systems’ Senior Product Manager Patrick Palmer quickly put out a simple statement on the company’s forum: “We’re looking into adding support for ProRes RAW as we speak.” Excitement spread like wildfire throughout the week as everyone from feature filmmakers to video enthusiasts and YouTubers clamored to find out more. There seemed to be an unspoken consensus in the air throughout the Las Vegas Convention Center: the next big thing might have just arrived.

Robert Chartier and Live Synchronous HD Car Process

24frame.com

by Richard Lightstone and Mark Ulano

Before there was green screen, there was blue screen, but well before computer graphics, there was rear projection. The Fox Film Corporation first used it in 1930 on the film Liliom, directed by Frank Borzage.

The best-known example may be Alfred Hitchcock’s North by Northwest (MGM, 1959), with its iconic scene of Cary Grant running for his life as a crop-duster attempts to mow him down.

A scene from North by Northwest. ©MGM Studios

The bulk of that scene was shot on location in Bakersfield, with Cary running well ahead of the low-flying crop-duster. The most dramatic shot was him diving into a ditch to save himself from the swooping aircraft. That cut was shot on an MGM soundstage, with Cary diving into a set-built ditch and the crop-duster images rear-projected. Robert Chartier and his company, 24frame.com, have advanced this technique into an even more efficient process. They were invited to demo their process to the Academy Scientific & Technical Awards Committee for the 90th Oscars in 2018.

A rare still of the famous scene

Mark Ulano CAS AMPS and I paid him a visit at their massive facility just off of Venice Boulevard in downtown Los Angeles.

We’ve all been involved in insert-car work with a process trailer: a time-consuming procedure involving street closures, motorcycle police, safety officers, film permits, stunt drivers and logistics that, hopefully, are well coordinated, as many lives are at risk.

The camera’s view of projected driving inside the bus for an episode of Curb Your Enthusiasm.

Robert’s company removes all those potential obstacles by rear-projecting onto multiple screens surrounding the picture vehicle, with made-to-order, in-sync footage of street scenes, all in the safety and comfort of an air-conditioned soundstage.

Robert Chartier first conceived this process in 1994, using Sony 1080i projectors at eighteen hundred lumens with a DLP engine from Texas Instruments. On a project with Director of Photography Gale Tattersall, Robert did video playback of some plates in the windows of the vehicle, but it only worked for night shots, with the streetlamps going by.

The technique advanced when Christie Digital came out with their 18K projectors. Robert describes his vision: “Basically, the cameras got better, everything’s gotten better, we’ve changed the equipment over and over and over to try to do it right. Then I opted to build our own capture van that had all this equipment in it. Currently, we are using Christie Digital projectors stacked up to fifty thousand lumens.”

Chartier continues, “We are deck operators and we always record everything on decks and that’s for protection but it’s also the fact that we are playback operators and we will play it back on stage, live, all in sync, using four to five projectors at once with five screens.”

They’ll often get a call from a producer or a DP with a script that has car scenes, looking for a better alternative than a process trailer or plain green screen.

Robert explains, “They can manipulate the lighting, they have all the time in the world to do what they do as DPs. It’s a controlled environment. Safe, visually accurate, with no location permits, police, process trailers, insert cars. This is still cheaper than doing it any other way.”

24frame.com can go out and shoot custom plates for a project, or the client can select from Chartier’s vast library of thousands of hours of footage and multiple angles, choosing the exact footage and location they want for live playback on stage.

The capture van and Aashish Gandhi of Locals 600 & 695

If the client needs specific plates, they’ll go out in one of their three proprietary capture vehicles, and Aashish Gandhi, a dual card holder of Local 695 and Local 600 and their staff cameraperson, will man the cameras.

They were recently in Seattle for the new Shonda Rhimes show Station 19 and captured images with seventeen cameras. The entire series is shot in Los Angeles.

Matt Landry (695), Aashish Gandhi (695/600) and Jim McDonald (695) in Seattle for Station 19
Matt Landry (695), Aashish Gandhi (695/600) and Jim McDonald (695) in Seattle for Station 19

“All our footage is in sync, when we show up on the set, we get all the reflections, we get everything going for you,” says Robert. “We become the DP’s best friend with everything we offer. Our Video Engineers do full-color corrections on the fly. We have designed everything to capture an image that will size correctly when we project it. The capture vehicles are built for specific functions that mimic the picture car we are capturing for, whether it is a fire truck, ambulance, bus or passenger car. We come to the stage and all the math and all the physics are correct.”

Recently, Mr. Chartier got a call from an executive post producer at Disney who was about to start a film with a cast of young children with lots of driving sequences.

She was concerned about using a large amount of green screen and then having to figure out later what angle the DP had shot through the car in order to find the correct plate.
 
They brought a busload of Disney executives to 24frame.com, and after Robert showed them the demo, right then and there they said, “This is how we’re doing it.”

When Steve Carell walked on the set, he said, “My God, now we’re making movies. This is what movies are all about. On Little Miss Sunshine, we were stuck in the desert for three days filming the Volkswagen van and, God, did we need this.”

The capture vehicles record to SSD drives, but everything is backed up to multiple rotary drives. All of the footage is owned by 24frame.com and becomes part of their vast location library.

Robert Chartier: “It’s on a 100TB server and they can view through the web and access the actual files. Some of our plates are twenty-two minutes long. They can choose the segment and then our coordinator will make sure that they have the plates requested on the set. We always have the shot before and the shot after, so, if the director or DP doesn’t like the bus pass-by in the shot, we shift forward; they can cue up and re-cue and do anything they want, all live on the set. They can ask for a red Corvette, PCH, dusk, dawn, day or night and if we have it, it’s gonna come up, all in a searchable database.” A commercial agency from Mexico brought its cast up to LA and shot on their stage instead of in Mexico City.
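The searchable library Chartier describes can be pictured as a simple filter over tagged plate records. Below is a minimal, hypothetical Python sketch; the field names and sample entries are invented for illustration and are not 24frame.com’s actual system.

from dataclasses import dataclass

@dataclass
class Plate:
    file: str          # clip stored on the plate server
    vehicle: str       # vehicle featured in or matched to the plate
    location: str
    time_of_day: str
    minutes: float     # some plates run twenty-two minutes

LIBRARY = [
    Plate("pch_0412.mov", "red Corvette", "PCH", "dusk", 22.0),
    Plate("pch_0413.mov", "red Corvette", "PCH", "day", 18.5),
    Plate("dtla_0098.mov", "fire truck", "downtown LA", "night", 12.0),
]

def search(vehicle, location, time_of_day):
    # Return every plate matching the client's request exactly.
    return [p for p in LIBRARY
            if (p.vehicle, p.location, p.time_of_day)
            == (vehicle, location, time_of_day)]

# "A red Corvette, PCH, dusk" comes straight up:
print(search("red Corvette", "PCH", "dusk"))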

Chartier explains, “We provide motion lighting with LED panels in sync with the projected video to reflect onto the vehicle. Because it’s live video, we can pump it into the car, onto the actor’s face, if they go under trees, you see it, if it’s under a bridge, etc.”

Aashish continues, “Because we start every scene from the top of the clip, when the camera turns around for coverage, you still have the same environment. The lighting is consistent from the left or the right, front or back. You have the same truck pass-bys, tree branches and shadows. It’s perfect for any angle you shoot.”

They essentially bring the location to Los Angeles. For the production of Wolves, they drove the capture van to Davenport, Iowa, for two days, brought the footage back, and the production shot all the scenes here, saving a lot of money by not having to bring an entire cast and crew to Davenport. For The Newsroom, a train set was built on their stage and their plates were projected for two days. “It was a lot of work with three cameras, wide and tight angles, handheld, with the actors walking around the train,” explains Hayk Margaryan.

The bus on stage for Curb Your Enthusiasm.

For Curb Your Enthusiasm, they brought a bus onto stage for some driving scenes. Brian Wright describes, “I remember at one point, the effect was so realistic that you could see grips, gaffers, everyone leaning into the turn, but the bus was not moving. It was so interesting to see that.”

Robert continues, “We’ve had complete productions shoot their show here, ten episodes in eight days; it’s all done indoors. They could not have done it in twenty days outside, apart from heat exhaustion and everything that comes with it.”

Hayk Margaryan concludes, “You know, it makes it easier for the crew and the cast to be in a controlled environment, to be in an air-conditioned place, have a place to sit, wait until they relight and then just walk back into the vehicle. You don’t need to drive; it’s safe and a completely controllable situation.”

We want to thank Robert Chartier, Hayk Margaryan, Pacific Winter, Aashish Gandhi, Brian Wright and Jim McDonald for their time and hospitality on a busy day at their facility.

Robert Chartier and his team are pioneers in the video engineering discipline and proud members of Local 695.

Yellowstone

Occupied Land

Production sound joins an ensemble effort to realize Director Taylor Sheridan’s cinematic story Yellowstone.

by Daron James

Thomas Curley CAS casually takes a sip of his iced tea while sitting outside a North Hollywood pizzeria. The sun settles behind a scattering of puffy grayish clouds as a breeze whisks by, reminding him of the weather conditions sound endured while recording Yellowstone, a 10-episode scripted series from creator Taylor Sheridan, whose written material includes Sicario, Hell or High Water and Wind River, the latter of which he also directed.

Curley’s relaxed, approachable demeanor disguises his accolades. A career that started out shadowing Sound Mixer David MacMillan, now a CAS Career Achievement Award recipient, and Boom Operator Duke Marsh has since flourished into Academy Award, AMPS and BAFTA wins for his work on the Damien Chazelle-directed Whiplash. The New York native is as humble as they come, residing in Los Angeles and building up the appropriately named location sound recording company Curley Sound with his brother Brian Curley.

“This was the biggest show in terms of scale I’ve ever worked on,” says the production sound mixer, who was hired on through a Whiplash connection via First Assistant Director Nick Harvard. With its massive scope, Yellowstone was an “enormous undertaking” in terms of its production. The hour-long series was the first greenlit project from the newly formed Paramount Network, which meant “quality was the top priority.”
 

Production preps its next three-camera shot as White looks for a place to boom. Photo: Courtesy of Paramount Network

The allegory follows the Dutton family, led by John Dutton (Kevin Costner), who controls the largest ranch in the U.S., and the people who are trying to take it away from them by any means necessary. Accompanying Costner are Wes Bentley, Kelly Reilly, Luke Grimes and Cole Hauser, who play his sons and daughter, as well as Gil Birmingham as local Indian Chief Thomas Rainwater and Danny Huston as a greedy land developer.

Production shot in parts of Utah and Montana from August to December of last year, utilizing the Utah Film Studios in Park City for its studio work. For crew, Curley tapped veteran Boom Operator Knox White and local Utility Andrew Cier. “Production’s hands were tied in terms of spending extra money to bring out a third, but it turned out very well. Andrew came up working with some good people like Sound Mixer Jonathon Stein (The Sandlot). Plus, he knew the local people, the area and the terrain.” Working with White, on the other hand, was something that had always been in the back of Curley’s mind. “I did a day or two with him a while ago, but for most of my career, he was three tiers above me doing James Cameron films. Our stars finally aligned and I brought him out into the woods. He’s a surgeon with the microphone and very endearing,” says Curley. “Better yet, he has stories for days, which made the tough days a bit easier.”
 
Curley was flown out for a week before production and engaged with Cinematographer Ben Richardson (Beasts of the Southern Wild, The Chi) to find out “how tough he was going to make it” for them, he jokingly says. “In terms of DPs, Ben really knows technology, which is near and dear to me. He made a really good recommendation that turned out to be the best money I think I have ever spent.” The suggestion was the Ambient ACN Lockit system for timecode. Curley fitted Ambient Nano Lockit boxes to three ARRI Alexa cameras and one Alexa Mini, all referenced to the Ambient Master Lockit. “Routing our timecode this way proved to be something I never had to worry about, and it also made camera happy.”
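To picture what jam-synced timecode buys post, here is a minimal Python sketch under stated assumptions: non-drop-frame 24 fps timecode, with hypothetical function names that are not part of any Ambient or ARRI tool. Once every recorder and camera stamps the same running timecode, lining up files is pure arithmetic.

FPS = 24  # assumed non-drop-frame project rate

def tc_to_frames(tc, fps=FPS):
    # Convert "HH:MM:SS:FF" timecode to an absolute frame count.
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def sync_offset(sound_start, camera_start, fps=FPS):
    # Frames the camera file starts after the sound file (negative = before).
    return tc_to_frames(camera_start, fps) - tc_to_frames(sound_start, fps)

# A recorder rolling at 14:03:10:00 and a camera rolling at 14:03:12:12
# line up 60 frames (2.5 seconds) into the sound file, no clap needed.
print(sync_offset("14:03:10:00", "14:03:12:12"))  # 60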
 

Knox White booms a walk-and-talk between two Dutton brothers. Photo: Courtesy of Thomas Curley


A production meeting ironed out basic concerns, and sound found out that, for the most part, they would be out in the middle of nowhere, even when shooting indoors or inside a house. The only sizable stops were locations in Salt Lake City and the small town of Darby, Montana.

A lofty boom finds its place on set.

Shooting style dictated its own set of challenges as Sheridan directed all the episodes and blocked organically, moving through 7-8 pages a day using a three-camera setup. A fourth camera, the Alexa Mini, was stationed on a jib and massaged into the workflow as well. Outdoor sets, which included different areas of the Dutton Ranch, covered hundreds of yards in any direction. Sound had to be ready to cover anything from groups of actors riding horses, often coming from two different directions, to groups traveling uphill, down through valleys and across rivers. Vehicles intermingled with horses, people intermingled with herds of cattle, bison and the occasional grizzly bear.
 
Authenticity was important to the director in order to ground the story and the characters in it. What this meant for crewmembers was turning the whole world around to prepare for the next shot. “Our sets were massive pieces of land so every time we needed to move, we’d often have to push things one hundred yards in another direction over a rocky, hole-filled terrain,” Curley says. Trucks, equipment, crew, video village—everything had to be transported and positioned out of frame. “No one ever felt rushed because everyone was in the same situation. The ADs did a wonderful job handling the logistics to keep the train moving.”
 

The sound team loading the Gator on location. Photo: Courtesy of Thomas Curley

Sound and other departments used Gator utility vehicles for mobility and Curley constructed two sound carts for different types of shooting. His main cart consisted of a Sound Devices 688 and CL-12 linear fader controller with a Lectrosonics Venue 2 (wideband low) and the original Venue (wideband low) to handle larger scenes. Then a PSC Euro Cart dubbed the “Pony Cart” carried a Sound Devices 633 and a PSC RF Multi SR Six Pack with Lectrosonics SRb Dual Receivers. The smaller cart transitioned easily into bag work. A Comtek base station handled the video village feed and a Lectrosonics IFB served boom and utility.

For most of the exterior locations, Costner’s character and the men who worked for him wore thick flannel-styled shirts paired with a large jacket similar to the Carhartt brand and jeans. Sound worked with costumes in pre-production to find a “happy spot” to hide the lav, then gum was placed on the zipper to keep it from slapping around while on horseback. Curley admits he likes to produce “high-quality sound recordings” with a “certain consistency” which meant using the same microphones for the entire run of the project. Sanken COS-11Ds were paired with Lectrosonics SMQV and SMQ transmitters and the occasional Countryman B6 was implemented when a lav needed to be placed outside the wardrobe or when water was in play. Schoeps CMIT-5U, CMC4/MK 41 and Sennheiser MKH 60 and MKH 50 microphones combined for the overhead work. Wireless booms were equipped with a Sound Devices MM1 preamp for easy access to gain control.

The pilot episode opens up with a car accident where John Dutton is left bloody trying to find his bearings as horses scatter. It’s a visual metaphor for the clash between the country life and the impending urban development that overshadows the ranch. Horses are very much part of the narrative and sound was mindful of safety precautions. “It was all about communication,” admits Curley. “Knox was really good about making sure everyone was comfortable on set with any boom movements and tried to buddy up with the animals as much as possible.” Animal Coordinator Paul ‘Sled’ Reynolds, who worked on Dances with Wolves with Costner, ensured everyone’s well-being. Sound would swap out the boom for a lav if issues arose, and other times, Reynolds would change out a horse that was acting up through no fault of sound. “We tried to be cognizant of potential problems. If you are, you can try to get out ahead of it,” adds Curley.
 
The vast landscape created swirling, windy conditions that were tamed with either a Rycote or Cinela windscreen. At peak wind, lavs needed to be buried under wardrobe with fur. Curley felt the brunt of the conditions during the practical car work, when he and a focus puller would hide in the bed of a pickup truck as it zipped through the countryside. The car setup included all the common rigging and lighting, and to record audio, Curley tapped the bag version of his Pony Cart, placed a Sanken CUB-01 boundary mic above the visor and wired the actors. Besides traditional drive-and-talks, cops would pull over vehicles or multiple cop cars would swarm in. In the latter case, additional wires were needed and the boom would join in to pick up sound. “There was a lot of adapting as we went along,” explains Curley. “They eventually changed the driving shots to green screen for more control because many of the roads were extremely bumpy and shook the frame.”

Luckily, it wasn’t all exterior location work, as the sound team found itself inside the Helena, Montana, Capitol Building for a scene where Jamie Dutton (Wes Bentley), a lawyer, tries to defend the family’s position. The sequence had Jamie speaking at a lectern in front of the governor and other officials. To cover the sound, booms were played overhead and the lectern microphones in front of them were turned on and recorded to a separate track as an option for post. Additionally, Curley turned off the PA system inside the chamber so it would not interfere with the lavs or boom mics.

Inside Utah Film Studios & Curley’s setup.

One of the larger interior scenes of the pilot took place at a cattle auction. As John enters the building, the camera tracks him around the auction area where hundreds of extras sit, and up to a second-floor VIP room. He grabs a seat and watches a little girl sing the National Anthem as others approach him with their own concerns. The scene was filmed at an actual cattle auction location in Utah and Curley brought in a live audio person to set up the PA speaker system and stage mics. More than six hundred feet of XLR cable was run for the setup. Curley hid behind a door underneath the VIP room and was able to mix the National Anthem, crowd and cattle noises into his mono track, along with the natural reverb of the auction. Upstairs, multiple lavs and a boom recorded the dialog between the characters.

Thomas Curley, Knox White & Andrew Cier. Photo: Courtesy of Camera Utility Angel Fisher

When mixing, Curley prefers to run a limiter “just in case,” since with digital, “the only thing that can’t seem to be fixed is overmodulation.” “Running the limiter makes me aware of my gain structure. If I find myself hitting the limiter more often than I should, I turn the gain down. It keeps me honest,” says Curley. He also tends not to play with parametric EQ, basing his philosophy on the idea that it’s better applied in a controlled environment where it can be easily removed if it doesn’t work. A low-frequency roll-off around 80 Hz, or as much as 120 Hz, is added to the mix if something noisy is nearby, as in the many scenes that feature a helicopter.
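For a rough picture of that roll-off, here is a minimal sketch using SciPy’s Butterworth high-pass filter. The 80 Hz corner, filter order and 48 kHz sample rate are assumptions for the example, not a description of Curley’s actual signal chain.

import numpy as np
from scipy.signal import butter, sosfilt

def low_cut(audio, corner_hz=80.0, sample_rate=48000, order=2):
    # Roll off rumble (wind, helicopter wash) below the corner frequency.
    sos = butter(order, corner_hz, btype="highpass", fs=sample_rate, output="sos")
    return sosfilt(sos, audio)

# One second of 30 Hz rumble mixed with a 1 kHz tone: the filter strips
# most of the rumble while leaving the dialog band essentially untouched.
t = np.linspace(0, 1, 48000, endpoint=False)
noisy = np.sin(2 * np.pi * 30 * t) + 0.1 * np.sin(2 * np.pi * 1000 * t)
clean = low_cut(noisy)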
 
John Dutton uses a helicopter as a mode of transportation around the ranch. Dialog driven scenes took place either directly outside before boarding or through headsets during flight. To record usable audio, the team employed two methods: one with the helicopter turned off completely and the other with the engine turned off but the rotor still churning.
 
Wides and tights are the norm for most productions, but the sound team faced super-wide shots to show the striking landscape, coupled with extremely close shots that followed the action. “We were getting the audio, not the way we’d like to, but something I learned over the years is that you address your concerns and ultimately you have to let them decide.” To curb larger audio issues, the wide camera would be turned off once they achieved what they wanted visually.
 
Curley pushed the gear to its limits, often running 10 ISO tracks, with some scenes featuring eight different speaking parts. Double booming was the norm and actors were wired the majority of the time, with the exception of some stunt and action sequences. The mixer arranged his wires in dialog order so he could see the progression of the scene on his faders. “It’s easier not to get confused this way instead of going back and forth across the board. If I haven’t brought up a fader, I know the dialog will still be coming.”

Inside Utah Film Studios & Curley’s setup.

In a climactic scene of the pilot, sixteen men were on horseback, others rode four-wheelers, police cars lined the road, gunshots were fired and a helicopter flew overhead witnessing the chaos. The scene was shot over three nights and broken up between action and dialog. While it was one of sound’s biggest track days in terms of wired actors, the way it was scheduled allowed them to keep pace. “Logistically, there were a lot of moving parts, but it was just a matter of jumping in and keeping our high standards,” says Curley.

Realism was a theme throughout the production, whether it was importing trees from Montana to build the log cabins inside the studios, the fight scenes where stuntmen didn’t hold back, or the extensive aerial coverage that enhanced the scale and scope of the story. Eight episodes were shot before weather curbed production; the final two will be finished this year. “Quality is important to Taylor as a filmmaker,” says Curley. “If you watch his other projects, he is very intent on making a real and believable world for his characters to be in. That decision to be authentic doesn’t make our jobs easier, but that’s not why we’re there. We’re here to support him and I really lucked out having Knox and Andrew with me. When you start seeing episodes on screen, you get a feeling that all the hard work you’ve done is worth it.”

Bear boom operating.

The Greatest Showman

by Daron James

Sound channels dialog and music to capture the P.T. Barnum tale.

Musicals can be a difficult task for production sound mixers. There’s spoken dialog, dialog that transitions into live song or playback and vice versa. There are straight playback recordings, thumpers, earwigs, speakers, Pro Tools rigs, music editors on set and other contingencies to manage. The Greatest Showman from Australian Director Michael Gracey is as ambitious as they come and Production Sound Mixer Tod Maitland was up for the challenge.

Maitland has traveled the sound block since the ’70s, earning his Los Angeles union card working on the TV series MASH as a Boom Operator. He eventually found steady work on the mixing board and has since worked with some of the biggest titans in the industry. In 1990, he snagged his first Oscar nomination for Oliver Stone’s Born on the Fourth of July and followed that up with two more noms for JFK (1991) and Seabiscuit, from Director Gary Ross.

Hugh Jackman as P.T. Barnum. All photos by Niko Tavernise/20th Century Fox

One of the films Maitland worked on that featured plenty of music—The Doors (1991)—initiated many of the techniques he uses today. The Greatest Showman is a rags-to-riches tale about America’s original pop culture promoter P.T. Barnum (Hugh Jackman) and how he turned his family’s life around in the most peculiar way.

Before shooting began, there were eight weeks of rehearsals in Brooklyn, which the actors used to become familiar with their lines and songs. As actors gained traction, it provided an opportunity for sound to record those songs in their own voices at the rehearsal space for playback. “We began by already having prerecorded music and temp vocals sung by professional singers. This is what we played back when rehearsals began for dance and music,” explains Maitland. “We built a small recording booth there and once the actors started to get the movements and feel comfortable singing to the prerecorded temp vocals, we began to record temp vocals with the actors. Together with the music editor, we recorded using the booth in the rehearsal space. This is advantageous for two reasons. First, it allows the actor to bring in their inflection and have an idea of how it works with dance. The other being that you want them to be comfortable with their own voice.” During rehearsals, sound also implemented earwigs and a thumper so actors were comfortable using them when production began.

Once an actor had the performance down, they went to the recording studio to record the final track. There the studio would record with three different microphones: the studio mic, plus Maitland’s boom and wireless mic. “Recording the tracks this way allows for a smooth transition in editing the dialog on set to the prerecorded singing. By using the same microphones I use on set, the transition from studio mic to set mic sounds more real,” Maitland notes.

The crew filming the “magical lantern” scene inside the Brooklyn rooftop set

Re-recording Mixer Paul Massey couldn’t agree more. “The songs fall into a few categories. They’re either an all-out performance inside the circus or an auditorium or one that is more reflective. When we transitioned from dialog to song, it had to be seamless. The challenge is in the vocals and to make sure the environment around it doesn’t disappear coming in and out of song. Matching pitch, timbre, quality, voice and reverb all needed to be addressed. Having the studio recordings use the same production microphones helps this tremendously so you can dirty up the studio vocals to match the grit and voice of the production dialog.”

Rounding out Maitland’s sound team were Boom Operator Mike Scott, Sound Recordist Jerry Yuen and Playback Mixer Jason Stasium. Production shot in New York, using historic locations like the Woolworth mansion, the Prospect Park boathouse, the Brooklyn Academy of Music and the Tweed Courthouse, among others. To bring Barnum’s museum of wonders to life, an extensive set was built at Capsys, an old brick factory owned by Steiner Studios.

The script is laced with dialog and more than ten original songs from Academy Award winners Benj Pasek and Justin Paul (La La Land), which sound needed to track. When actors were required to lip sync, they were wired with a microphone so production could listen to the sync. “We set up a dual monitoring system for the sound supervisors and music editors so they can hear the playback in one ear and the live actor singing in the other. This way, they can hear if the actor is on or off sync. We also have them use binoculars to watch live, not off video monitors, which have a two-frame delay. It’s all about keeping it as real as possible,” states Maitland.

The song “A Million Dreams” between Barnum and his wife Charity (Michelle Williams) illustrates the technique. Cinematographer Seamus McGarvey captured the rooftop performance with Alexa 65 cameras skimming over 1800s New York with the Hudson River in the background. Since the rooftop was so big, sound placed speakers so the actors were never more than fifteen feet away from one. “You never want to set up a speaker fifty feet away and blast the actors because you will have time delay. Every frame is important to lip syncing,” Maitland continues. “Hugh and Michelle are dancing everywhere on the roof, so we placed speakers all over so it was equal for them.”
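Maitland’s rule of thumb checks out on the back of an envelope: sound covers roughly 1,125 feet per second, and one frame at 24 fps lasts about 41.7 milliseconds, so playback from fifty feet arrives more than a frame late while fifteen feet keeps the offset to about a third of a frame. A quick sketch, with approximate figures rather than production measurements:

SPEED_OF_SOUND_FT_S = 1125.0   # in air, roughly, at around 68 degrees F
FRAME_MS = 1000.0 / 24         # one frame at 24 fps, about 41.7 ms

def arrival_delay_ms(distance_ft):
    # Milliseconds for playback audio to reach an actor.
    return distance_ft / SPEED_OF_SOUND_FT_S * 1000.0

for d in (15, 50):
    delay = arrival_delay_ms(d)
    print(f"{d} ft -> {delay:.1f} ms ({delay / FRAME_MS:.2f} frames)")
# 15 ft -> 13.3 ms (0.32 frames)
# 50 ft -> 44.4 ms (1.07 frames)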

Phillip (Zac Efron) and Anne (Zendaya) sing “Rewrite the Stars” at Steiner Studios in Brooklyn

Live singing required everyone who needed to hear the music to wear earwigs and the team would mic everyone vocally involved. Scenes that started off in dialog and then transitioned into playback also required earwigs for the actors so they could start singing on cue.

One of those ambitious sequences takes place between Barnum and Phillip Carlyle (Zac Efron). The two sit at a bar having a drink and Barnum is trying to convince Phillip, a sophisticated man of the theatre, to quit and join him in the circus as his protégé. What ensues is a choreographed bar song dubbed “The Other Side” that includes dance, drinking shots and flipping glasses. It’s flashy and fun.

After camera filmed the scene, they would do an additional take for sound, which allowed them to record all of the body movements as a separate wild track. “The actors use earwigs or Comteks with headphones so they can hear the music and then continue to make all the same movements while shooting the scene to playback. This allows post to add the track under the recorded audio to make the sound more believable,” says Maitland. In fact, this technique was used for all the musical numbers involving playback, including “The Greatest Show,” “Come Alive” and the Golden Globe Award-winning “This Is Me.” Similarly, songs filmed inside the circus tent would have the crowd reenact their actions while actors performed with earwigs to create a wild track that didn’t contain any dialog or music.

Sound looked to create a seamless operation for the director, finding ways to make things easier on set. They found themselves switching between Sanken COS-11D and Countryman B6 lavs depending on the wardrobe and went out of their way to record any period props, like the cash machines or typewriters, to create a catalog for post.

Maitland regularly maxed out his two Cooper 208 mixers, admitting he prefers the Cooper over any digital board because of the ability to EQ. “I like the immediacy of EQ. Some people don’t want you to EQ, but when you get an actor’s voice in your head, you want to deliver that same voice through the entire movie. If the actor is wired and they turn their head all the way to their left, they’re going to be off mic, losing the high end. I can adjust that with one hand on my fader and the other on the high-end EQ. You just can’t do that on a digital board.”

Looking back on what was Maitland’s second feature of the year, he admits he couldn’t have done it without the good people around him. “We had a really great crew. When you’re surrounded by people you can trust, it makes it so much easier to get done what you need to accomplish.”
