
IATSE Local 695

Production Sound, Video Engineers & Studio Projectionists


Features

The rNAS .m4 and rTB from Pronology

by James Delhauer

As laborers in an industry that constantly pushes the boundaries of what technology has to offer, members of the IATSE have unique technological needs. Productions demand custom-fit solutions to the challenges they present, often preventing workers from buying off the rack, as it were. Sometimes, artists and craftspeople encounter obstacles for which the market has not presented an adequate solution, and when they do, they take it upon themselves to make their own. Such is the case with the new rNAS .m4 and rTB storage units from Pronology, designed by Local 695 member and Pronology President Jon Aroesty to address the specific challenges faced by 695 engineers and technicians.

The story of the rNAS goes hand in hand with that of mRes, another Pronology product born of frustration with the lack of adequate workflow options. The mRes is a standalone server-based encoder capable of capturing multiple SDI inputs or IP streams and simultaneously writing a high-quality deliverable asset, a ready-to-edit proxy file, and a proxy optimized for web streaming for each input in real time. In practical terms, this means that camera or IP-based media can be ready for every stage of review and post production as soon as the director calls cut. However, this workflow presents the challenge of immense data loads, especially when networking multiple mRes servers together. Moving that much media around in real time requires storage hardware capable of sustained high-bandwidth reads and writes. Furthermore, production environments are often harsh, requiring equipment that can stand up to the rigors of day-to-day use. After experimenting with various storage options on the market, Aroesty and his team concluded that nothing available at the time presented an ideal solution to these problems.

“We couldn’t find anything that could sustain the kind of write speeds we needed while also being portable and rugged enough to be practical in a production environment,” Aroesty commented. “Eventually, we realized that we needed to build our own.”


After studying the shortcomings of existing network-attached storage devices, the team constructed the first rNAS prototype in 2017 using off-the-shelf components: a full ATX-sized chassis outfitted with eight spinning-disk hard drives and a PCIe hardware-based RAID controller. Though several prospective competitors were already offering units with support for RAID (a process whereby multiple storage devices are pooled together to increase speed or create redundancy), all of the units that the Pronology team researched utilized software-based or integrated RAID controllers, making them more vulnerable to failure or corruption because the components controlling the disk pool also had to expend resources on other tasks. By having a dedicated piece of hardware to control the RAID, the team was able to achieve a significantly more stable storage volume. In conjunction with the more robust case that offered improved durability, this prototype was a step in the right direction, but it was not without shortcomings of its own.

Local 695 President Jillian Arnold (then Vice President), member Nick Amico, and I put the prototype rNAS through its paces over the course of the following year. After stress testing it in every way we could imagine in both controlled and production environments, we deemed it too heavy and cumbersome to transport easily. The ATX chassis took up a large amount of space and could not be rack mounted, which made it unsuitable for the cramped confines of a production truck. Moreover, with the number of 4K productions already on the rise, higher write speeds were still going to be necessary for success. As testing progressed, the team began looking for ways to address these challenges.

Fortunately, all of this was happening as the price of solid-state media was becoming more affordable, which presented an opportunity. Aroesty and his team began to experiment with using both consumer and enterprise-grade solid-state drives in conjunction with their hardware RAID configuration and quickly concluded that this approach would be essential to increasing read and write speeds to the point where the rNAS would be up to the task of recording 4K media. This also helped to address the issue of weight, as a standard two-and-a-half-inch solid-state drive weighs in at approximately a tenth of a pound, whereas traditional three-and-a-half-inch spinning-disk units are closer to a pound and a half. This allowed the team to shed more than eleven pounds off the previous design with no discernible drawbacks. However, no existing two-and-a-half-inch form factor chassis available at the time met the necessary durability requirements, forcing the team to commission custom fabrications. After receiving input from a cross section of 695 engineers familiar with a wide array of production environments, the team settled on a compact aluminum and steel design. The result was the rNAS .m3, a lightweight solid-state network-attached storage system ready to take advantage of the complete bandwidth of a 10Gb network environment. For more than a year, 695 engineers have put it through its paces in a diverse variety of broadcast television environments, including the Academy Awards, Grammy Awards, and MTV Video Music Awards.
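As a rough check of that weight figure, using the approximate per-drive weights quoted above and the eight-bay design:

\[
8 \times (1.5\ \text{lb} - 0.1\ \text{lb}) = 8 \times 1.4\ \text{lb} \approx 11.2\ \text{lb}
\]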

Now, with the data collected over the past four years, Aroesty is proud to release the rNAS .m4, a modest redesign that capitalizes on the successes of the .m3 while adding subtle improvements where needed. Newer solid-state drives boast longer life expectancies than those released even a few years ago, giving the .m4 a longer product life than its predecessor without requiring maintenance. The new chassis has been designed to integrate easily into a rack-mounted environment, making the devices convenient for long-term storage while remaining quite portable. The body fits into a carry-on suitcase with a custom-designed insert and has been ruggedized for vibration and impact resistance, allowing multimillion-dollar productions to transport their content without the risks of checked luggage. The device can be networked using either the two 1Gb Ethernet ports or the two 10Gb Ethernet ports, allowing for communication with up to four devices without the need for a network switch. Under the hood, the storage caddies have been tweaked so that all eight solid-state drives sit at the front of the unit and can be accessed from the front. This makes it easy to scale storage capacity as larger solid-state media continues to be produced at lower costs in the coming years. Several additional security updates have been added to the networking protocols, ensuring the digital safety of production content.

Most importantly, the entire system has been optimized for concurrent throughput. With the COVID-19 pandemic forcing productions to decentralize post-production environments, the ability to securely send digital files across a network in bulk has never been more important. This optimization allows Local 695 recordists to write new media to the rNAS while that media is simultaneously read for copying, transcoding, or uploading. The rNAS supports any cloud-based transfer client, allowing recordists to send content directly to editors before the production even wraps. The result is that editors located anywhere in the world can be cutting content within hours or even minutes of it being shot.

Though specific production conditions may impact performance, the rNAS .m4 set an impressive benchmark, recording up to thirty-two streams of high-definition ProRes 422 footage at 29.97fps or ten streams of ultra high-definition content at 59.94fps.
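For a rough sense of the sustained bandwidth that benchmark implies, assuming Apple's published ProRes 422 target data rates of roughly 147 Mb/s for 1080p at 29.97fps and roughly 1.2 Gb/s for UHD at 59.94fps (these rates are approximations, not figures supplied by Pronology):

\[
32 \times 147\ \text{Mb/s} \approx 4.7\ \text{Gb/s} \approx 0.6\ \text{GB/s}
\qquad\text{and}\qquad
10 \times 1.2\ \text{Gb/s} \approx 12\ \text{Gb/s} \approx 1.5\ \text{GB/s}
\]

which is why the unit's dual 10Gb Ethernet ports and solid-state RAID matter in practice.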

The rNAS .m4 and rTB were designed by members for members and are ready to meet the many challenges of our craft.

The rTB evolved from the production of the rNAS to meet the growing needs of productions that have not or cannot migrate to server-based production, including remote and single-camera productions. Nearly identical in design, this direct-attached storage unit boasts the same ruggedized steel and aluminum body but sports two Thunderbolt 3 inputs in lieu of networking ports and an LCD screen. The eight-disk solid-state RAID pool has resulted in write speeds of up to 1469 megabytes per second, bringing rNAS performance to those, such as Digital Media Managers and Video Assist Technicians, who need direct-attached storage solutions. These speeds are ideal in production environments where 4K, 8K, and newly emerging 12K video content is being created and requires rapid ingestion. Moreover, the Thunderbolt architecture’s bidirectional protocol, in conjunction with the customized RAID controller’s optimization for concurrent throughput, means that the rTB is ideally suited for use with platforms that require high simultaneous read and write speeds, such as In2Core QTAKE. When paired together, a set of rTB units represents one of the fastest drive-to-drive transfer solutions in its form factor. This can facilitate the rapid creation of redundant media, shrinking the period of vulnerability in which media exists in only a single location.

Aroesty adds that “695 members were involved at every stage of production. rNAS and rTB were developed in direct response to members’ requests for resilient high-performance storage that can stand up to the rigors of remote production and transportation.”

In the digital era in which we currently find ourselves, we are inundated with data. For Local 695 data engineers, whose responsibilities can include media playback, projection, on-set chroma keying, off-camera recording, copying files from camera media to external storage devices, backup and redundancy creation, transcoding, and syncing, digital real estate is critical.

Video Engineering The Next Generation

by James Delhauer

Jillian Arnold

Our culture is one that places a high value on legacy. As we walk this earth in search of purpose, we often reflect on questions about our impact or how we will be remembered. We build little empires in our own names in the hopes that our legacy will be a positive one and that when it is time to move on, it will be handed down to the next generation. The divine right of kings saw empires passed from father to son with an expectation that that which came before would be preserved. But times have changed and so must we all. Today, as we strive for things like liberty, equality, and diversity within our communities, children are no longer expected to carry on the work carried out by their parents. They are encouraged to walk their own paths in life and to discover a purpose to call their own. So when a child does choose to walk in the footsteps of their family, there is a heightened significance to the decision. No longer is it a matter of duty or responsibility but one of agency; no longer passed down only from father to son but also mother to daughter, mother to son, and father to daughter.

Haley Burnett and stepfather, Gaylon Holloway

I had the privilege of sitting down with three remarkable women, all of whom chose to pursue careers in video and engineering, like their fathers before them. Today, Cheyenne Wood, Haley Burnett, and Jillian Arnold talk about the legacy of video engineering in their families, being among the first women in their fields, and some of the challenges that women have overcome and continue to face within our industry.

Cheyenne Wood and her father, Roger Wood.

Cheyenne is a Record and Playback Specialist who became a member of Local 695 in 2018 and has already landed on high-profile broadcast television shows like American Idol and Who Wants to Be a Millionaire? In the short time since joining the Local, she’s already become proficient in a wide variety of platforms and can often be found behind the racks on set, building or rewiring her own equipment. This, however, should come as no surprise given her history. Cheyenne’s father has spent the better part of the last thirty years working as a Broadcast Truck Driver and Fabricator.

“It’s touching, in a very sentimental way,” she told me. “Growing up from the very beginning, my earliest memories with my dad are of my brother and me running around workshops and going to work with him. He was a single parent, so on late nights when he didn’t have a babysitter, he’d throw us in the camper and go pick up shows and drive us home. Now, he picks up my shows.”

Cheyenne Wood

Cheyenne went on to describe the first time she and her dad had the opportunity to work with one another on set. “I was working on Ultimate Tag as a Media Manager but he had parked the truck before I got to go in for prep, so we didn’t get to see each other. But on the last day, I came out of the truck and he was waiting at the bottom of the stairs just to see me at work. It was a cool moment. And there have been a few times since then.”

Haley Burnett

Similarly, Haley Burnett grew up in a household deeply rooted within the industry. Her father also drove production trucks for several years before becoming a Comms and RF Technician. Meanwhile, her mother and stepfather worked for Viacom (now ViacomCBS) as Director of Operations and Lead Engineer respectively.

“I remember that I didn’t want to work in the industry because so many people in my family had. I was going to do dance therapy for juveniles,” Haley told me. “Then I went to [Country Music Awards] with my mom and got to go backstage. Seeing everyone and everything going on made me think, ‘Huh, maybe I do want to do this after all.’”

When Haley received her Local 695 card in 2017, she began to develop record assist and cloud-based screener protocols that have been essential for contactless delivery and safety during the pandemic. She’s worked on broadcast tentpole productions such as the Video Music Awards and MTV Movie Awards.

Jillian Arnold is a Recording and Workflow Engineer who also followed in someone’s footsteps. “My stepdad was a Tape Duplicator,” she explained. “His claim to fame was the colorized version of It’s a Wonderful Life on VHS. And when I was little, he had five or six tape decks in the basement where he’d duplicate old cartoons for international release. And he taught me how to use those tape decks. Now, I am what I like to refer to as ‘The Artist Formerly Known as the Tape Operator.’”

Jillian Arnold

Since earning her 695 membership in 2012, Jillian has been at the forefront of new server-based network recording technology, as well as the cloud-based migration of large assets. She was recently elected to serve as President of Local 695 and holds dual-card status with Local 600. Her clients include Disney, ViacomCBS, Netflix, Apple Inc., TED Conferences, and the Jet Propulsion Lab.

All three of these women are first-generation card holders within their families. “My mother was from the employer’s side of things. So when I joined, she was happy about the benefits and protections that came with that,” Haley said.

“My dad did tell me that he had been approached about joining the Teamsters union many years ago. But due to having to try to find sitters to help take care of his four children and often having to leave work to take care of us on his own, he was unable to commit the time, ultimately missing out on the opportunity.”

“As a woman, I am very mindful that I have to be on my game. I believe I need to know all aspects of my craft well.” –Jillian Arnold

“Both of their dads thanked me with tears in their eyes when they got their union cards,” Jillian remarked. “There’s that day our fathers have when we surpass their knowledge. And it can be a very proud moment. My stepdad was also very proud. I know because my uncle—who was in Local 2—calls and tells me,” she laughed. “He was the one who really impressed the importance of unions on me. I’m really proud to be part of a collective that, internationally, has more than 120,000 members. And every level I hit, I can’t get over it.”

As engineers working to develop the bleeding edge of production workflows, these three women are pioneers within their fields. In spite of their impressive résumés, however, all three face unique challenges that present obstacles within their careers. “I’ve certainly had bouts of weird comments and things like that,” Cheyenne said during her interview. “I’ve had people staring and one guy who started following me. There was a day when I pinched a nerve in my back and the medic closed the door so he could massage my lower back. Very weird. So safety is a big concern.”

According to a 2018 USA Today survey of 843 women working in the entertainment industry, an alarming ninety-four percent of respondents reported experiencing some form of sexual harassment or assault. The same survey, which was conducted in partnership with The Creative Coalition and Women in Film and Television, went on to report that twenty-one percent of the women interviewed reported having been asked to or coerced into performing a sex act.

“I’ve left the truck crying before because I’ve been uncomfortable,” Cheyenne admitted. “I don’t think people always realize that it can be intimidating for women to walk in sometimes but it’s nerve wracking.”

“Luckily, I haven’t experienced that too often,” Haley commented. “But sometimes when you come on the truck, you can interact with people who will give you the up and down and you can just read on their faces that they aren’t expecting much out of you.”

“There are definitely people who are shocked or dumbfounded that this easy-to-laugh-off, twenty-four-year-old-looking girl comes in and knows her stuff,” Cheyenne agreed.

“People still aren’t used to seeing a woman in our spot. It’s fun when you do know it. My favorite part is blowing them away and doing really well. Then you just look at them and see them crack a little smile,” Haley said. “That’s always pretty great.”

In Rosa Costanza’s “The Best Person for the Job,” an article published in Production Sound & Video in the spring of 2015, Jillian describes her experience in a situation that many women within the industry face. “As a woman, I am very mindful that I have to be on my game. I believe I need to know all aspects of my craft well. Therefore, I spend quite a bit of time training and studying on my days off. I often feel I can’t afford to make a mistake without it reflecting poorly upon me, and my gender. Some may say I’m too hard on myself, but I think that I have to be as good as the best.”

“But we’re put into these positions by people who believe in us,” Cheyenne pointed out. “Tech Managers, the people who hire me, they all say, ‘I have no doubt that you can do it. I’m not worried about you at all.’ I would not be where I am today without the support of so many wonderful men in my career. There are just some people who make me feel like I don’t belong here. And of course, I belong here. I worked really hard to get where I am. And there are definitely people who can see that and who have been encouraging every step of the way.”

Unfortunately, the problems that women face within the industry are not just confined to the set or normal business hours. The unpredictable nature of Hollywood can make balancing work and family life a challenge.

“I think it scares the hell out of me to think about having babies in this industry,” Haley explained. “Growing up, we were fortunate enough to have help. But I can’t imagine getting off from a show and getting home, helping us with a project, putting kids to bed, working until midnight, then getting up, and working a completely different schedule. I think it scares me because we have such an unpredictable schedule. And then there’s the fear and legitimate concern of losing your job. If you need to take time off, there might not be a job waiting for you when you’re ready to come back or someone might think, ‘She’s too distracted to do her job now that she has a kid.’ And I do think that’s something women struggle with more than men.”

“Women are exhausted,” Jillian said frankly. “As someone who doesn’t have children but wants to, this idea of being an exhausted woman is not a workable survivable tactic for me. One of the reasons I became president was to create a world where women can work and have a family. And with the number of women coming into our technical local, we are going to be forced to rethink how certain things happen like flexibility of maternity leave and paternity leave, daycare, and sick pay. My focus is on how we can improve the mental health and wellness of our members beyond just pay structures and healthcare. Because this isn’t just a women’s issue. It’s not in anybody’s best interest for women to be this exhausted all of the time.”

“I can’t imagine trying to have kids right now with this job,” Cheyenne stated. However, she was quick to agree that this is not just an issue impacting women in our field. “I know that we talk about it a lot in terms of single mothers, which is an utterly amazing accomplishment. But I do want to shed light on the fact that there are single fathers out there having to do it on their own too, often raising young women who they want to see succeed as well. I am an example of that. My dad emphasized that it did get very difficult at times and more resources and support would have been wildly helpful as a single parent.”

I would like to thank Jillian, Haley, and Cheyenne for taking the time to sit down and speak with me about these proud and difficult subjects. All three are a credit to their respective fields. As we move forward, Local 695 is committed to nurturing equality within our industry. Members who are interested in being a part of this reform are encouraged to reach out to the Local about joining the Women’s Committee or the Committee for Equity, Opportunity, & Diversity.

“At the end of the day,” Haley concluded, “we need to support women because that’s how we support everybody.”

“As a new female Video Engineer, I am so excited and proud to be supported by strong women. And I want to give credit to the men who continually back and support women in the fields, as I have experienced greatly thus far during my career. It is a very competitive field, but that is also very encouraging and enticing. And as much as it is intimidating at times, I believe that being a woman in this industry is beyond rewarding in so many ways,” Cheyenne told me optimistically.

The Modern Sound Crew

by Simon Hayes AMPS CAS

Filmmaking has changed, and sound crews have had to adapt to new methods of working. Over the last fifteen years, one of the biggest changes we have seen is the introduction of multiple cameras on all projects.

Multiple cameras used to be the domain of high-budget feature films, sitcoms, and soap operas. Production sound crews are now finding multiple cameras being used on all formats and budgets, due to the lower cost of shooting digitally. This has changed how we capture production sound and requires a completely different approach.

Outlander

Two Booms
There was a time when, once or twice a day, the Utility Sound Technician would swing the second boom in a wide shot to capture a line of dialog that boom one was finding difficult to cover, or to cover an over-the-shoulder of the actor whose back is to camera, mouth slightly revealed, who is likely to ad-lib or overlap the onscreen dialog. Running a second boom full time is now the only way of supplying the Dialog Editors with complete coverage on a show shooting two cameras or more.

When the Director is shooting two cameras, we are going to come up against the ‘wide and tight’ scenario that makes getting high-quality boom dialog so difficult. It doesn’t matter how many times the Director and DP promise they will use matching headroom on both cameras and not run wide and tight, it will happen regularly. The last thing a Director wants to hear is a Production Sound Mixer continually asking them to re-adjust the shot or shoot separately, even though the Sound Mixer is trying to save the original performances.

Running a pair of booms means that the two Boom Ops can commit their mics to fewer cast members, allowing each to play the scene significantly closer to the frame line of the wide shot. Let’s say the scene has four cast and the two Boom Operators split the coverage. They can take greater risks swinging closer to the edge of the wide frame, allowing the Sound Mixer to reduce the gain which not only reduces background noise, but also room reflections, and helps to create a closer perspective that is more likely to match the closer shot.

Two booms can help when working with Directors who use multiple cameras to shoot longer sequences. Ridley Scott is well known for using three cameras, and positioning them around a set so his cast can play a whole scene and capture the coverage from three angles in a single take. The best way to cope with this shooting style is to assign a specific boom to a camera, so that it always has a microphone covering what the lens is shooting.

The two Boom Operators will use a combination of these two workflows on a shot-to-shot basis, even changing from scenario one to scenario two within the same take. The two booms provide flexibility and the best chance for the Production Sound Mixer to deliver great tracks adequately covering what the cameras shoot.

When negotiating for two Boom Operators, I try to describe this to Producers, informing them that I will be able to deliver twice as much usable production dialog, and asking whether they can afford not to have two Boom Operators. More and more shows have two Boom Operators in their credits, which is really encouraging, especially on episodic television, where per-episode budgets are rising and feature-film levels of creative quality are expected from every department.

Wireless mics

Wireless Microphones
We used to watch the way a scene was blocked and make a decision about whether radio mics would be necessary. Now, even after watching a scene being blocked, we simply cannot judge accurately when the second camera is going to shoot coverage during a wide shot where we cannot get a boom close enough. This is why many of us now choose to wire the cast as a matter of course, so that when we are presented with an impossible scenario for the booms, we are ready. Even though we dislike having to radio all the cast all day, there are benefits. We don’t have to slow the shoot down and ask to mic the cast after the cameras set up angles the booms cannot cover. It also means less negotiation with on-set costumes when, from the get-go, everyone knows the actors will be lav’d at all times. The question now becomes ‘how’ rather than ‘why.’

Delivering lav tracks gives the Picture and Dialog Editors choices that will really help in post production. However, radio mic’ing the cast for every scene, along with running a second boom full time, creates a much higher workload for the sound team. This can easily turn into an impossible task for the Utility Sound Technician if they are expected to boom all day and also put lavs on the entire cast.

This is why the modern sound crew is more often a four-person team of the Production Sound Mixer, two Boom Operators, and a Utility Sound Technician (UST). This splits the workload and allows the UST to work closely with the cast and the Costume Department on the wiring, as well as carefully watching monitors during shooting to check if any lavs are being exposed.

Lectrosonics Wireless Designer

Frequency Coordination
Gone are the days when all departments could rock up onto a job and simply expect their radio equipment to work. The Sound Department is using many more radio frequencies, including wireless booms, as well as radio mic’ing everyone. Frequency use on film sets used to be confined to only a couple of departments; now, however, we are coping with the Camera Department using remote focus, iris control, and remote heads; the Video Department transmitting and receiving its images via Teradek; the Grips using crane comms; the Lighting Department using Wi-Fi to remote control its lamps; and the Special Effects Department’s walkie-talkies. Even before the Sound Department encounters bandwidth issues on the locations we are shooting in, the film crew has an enormous amount of wireless, Bluetooth, and Wi-Fi equipment trying to be shoehorned onto the set.

The Sound Department is the only department that really understands the subtleties and technical challenges of wireless frequency coordination. Unmanaged frequency plots will result in each department attempting to solve its numerous interference issues by increasing output power on its devices, mistakenly thinking that will cure the problems, without realising they are causing intermods (intermodulation products) across other departments’ equipment.

The way we approach this during prep is to ask Production to contact each HOD and have them nominate a person in their department to collaborate with the rest of the departments regarding radio frequencies. We send an Excel spreadsheet to each department and ask their allocated person to input all of the equipment frequencies they intend to use. This is quite an eye-opener, with some departments completely unaware of the frequencies their gear operates on. The spreadsheet motivates departments who are inexperienced with radio equipment to, at the very least, get up to speed on the frequencies they require. We then work out which departments are potentially going to have frequencies that conflict with other departments, ascertain who is on fixed and who is on variable frequencies, and ask those using equipment on variable frequencies to compromise by moving into some free space to help the departments on fixed frequencies.
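The conflict check itself is simple enough to automate. The sketch below is only an illustration of the idea, assuming a hypothetical CSV export of that spreadsheet with department, device, and low/high frequency columns in MHz; it is not a tool the article describes.

```python
import csv
from itertools import combinations

# Hypothetical CSV columns: department, device, freq_low_mhz, freq_high_mhz.
# Each row describes one frequency range a department intends to use.
def load_plot(path):
    with open(path, newline="") as f:
        return [
            {
                "department": row["department"],
                "device": row["device"],
                "low": float(row["freq_low_mhz"]),
                "high": float(row["freq_high_mhz"]),
            }
            for row in csv.DictReader(f)
        ]

def overlaps(a, b):
    # Two ranges conflict if they share any spectrum at all.
    return a["low"] <= b["high"] and b["low"] <= a["high"]

def find_conflicts(rows):
    # Only flag overlaps between different departments; coordination
    # within a department is that HOD's own responsibility.
    return [
        (a, b)
        for a, b in combinations(rows, 2)
        if a["department"] != b["department"] and overlaps(a, b)
    ]

if __name__ == "__main__":
    for a, b in find_conflicts(load_plot("frequency_plot.csv")):
        print(
            f"{a['department']} ({a['device']}, {a['low']}-{a['high']} MHz) "
            f"overlaps {b['department']} ({b['device']}, {b['low']}-{b['high']} MHz)"
        )
```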

A common issue with Teradeks is not setting a locked, fixed frequency, so the unit channel-hops whenever it experiences a weak signal, causing big issues for all the other equipment in the same bands (often the other equipment the Camera Department is using…). We are finding that the most congested bands are the Wi-Fi frequencies of 2.4 GHz and 5 GHz, and when the transmit power gets ramped up, they begin to conflict with our radio mics, not to mention the control commands of Zaxnet.

I delegate the entire frequency issue to my Utility Sound Technician. However, the job certainly carries an increasingly heavy workload, and the combination of swinging a boom, lav’ing the actors, and dealing with frequency issues is now a much larger task.

VOG
An outgrowth of the COVID-19 protocols is the Sound Department’s responsibility for a permanent ‘Voice of God’ (VOG). This used to be something we’d only encounter on large sets which required the 1st AD’s voice to be amplified louder than the output of a loudhailer. We are increasingly working with Directors who like to use a VOG at all times. Once the Director is using a VOG, then the 1st AD is going to need one too, so now we are suddenly wrangling handheld mics, receivers, and a large 5kW PA which has to be moved from scene to scene and from set to set. The VOG requirement seems to have grown as a direct result of shooting digitally, which has also led to a ‘tent culture’ where Directors and DPs spend far more time in ‘Ezee Up’ tents, off the set, so they can view large LED monitors in darker conditions that are more favourable for judging the DP’s lighting setup.

Comteks
There was a time when the number of Comteks we were required to provide on an average-size film or TV show was ten to fifteen sets. It is not uncommon now for me and my team to be expected to provide forty Comteks on a show. This has increased the number of battery changes we have to do on a daily basis, and the number of receivers we are trying to find at the end of each day. The way my team manages this new phenomenon is to treat Comteks the same way the ADs handle walkie-talkies. We assign a crew member their own unit with their name printed on it using a label maker, and ask them to sign for it. We maintain a record of each crew member’s Comtek serial number, so if the unit is lost, we know who lost it, and add it to the L&D report. Each department is responsible for charging its own rechargeable batteries. Some departments are reluctant to take responsibility for a Comtek full time, protesting that they only need one once in a while. However, it’s guaranteed a Grip or an AD will approach the Mixer or Utility Sound at the last moment needing to hear the dialog for a cue, and we will hear “waiting for sound” from the 1st AD as the Utility scrambles.

An example of a boom pole using Greensleeves

Visual Effects
We are now in an era of advanced VFX technology, which has benefitted Boom Operators in getting their microphones closer over the actors. There are different scenarios, from complete blue screen and green screen work, which allows our boom mics to be in shot all the time, to the more popular setup of real set pieces in the foreground with the deep backgrounds being green screen or blue screen. The challenge for our Boom Operators is that the booms cannot cross behind the foreground sets, or the actor, unless the boom poles are green or blue; otherwise, they will need to be painted out, which is costly.

Popularised by Director David Fincher and known unofficially as the ‘House of Cards’ method, one approach allows the booms to cross into locked-off wide shots to capture dialog matching the tighter lenses of the other cameras. As long as the wide shot delivers a ‘plate’ free of booms, the VFX Departments can remove them in post production. The Boom Operators wait a couple of seconds after the clapper board has left frame before swinging into position and ‘busting’ the frame, which gives each take its own unique couple of seconds of ‘plate’ (particularly important if there is changing light through a window, etc.).

House of Cards

The ability to remove booms from moving shots is also getting less expensive each year. Some Directors recognise that VFX boom removal and ADR both have a financial cost, but ADR carries a potential performance penalty, whereas VFX removal protects the cast’s original performances. Depending on the Director, DP, VFX Supervisor, and budget, the Sound Mixer and Boom Operators must skilfully navigate the discussion. We can either ‘make or break’ boom removal depending on our technical knowledge of what can be achieved, and our ability to articulate it eloquently.

Computing Skills
We are expected to be well versed in the world of computers, not only to keep our equipment up to date, including recorders, radio mics, and timecode, but also to be able to use ancillary tools such as Pro Tools, iZotope, Cedar DNS, Dante, and many other plugins. We have to interface with our sound equipment across firmware updates, hard drives, CF cards, Micro SD cards, formats, and workflows. It really pays to have at least one member of the sound team who is an expert in both PC and Mac, as we encounter dozens of issues over the course of a show that require in-depth knowledge of both operating systems. It would be impossible to run a modern sound team without instant access to that expertise at all times as we problem solve and interface with computer equipment to creatively perform our jobs.

Conclusion
Modern Sound Departments need the old-school skills that previous generations of sound teams had in abundance, but also the ability to keep adding to our knowledge and remain current. Equipment is changing show to show, in our department and in all the other departments collectively. Without collaboration, our ability to provide creative solutions for Directors and cast is limited. The more knowledge we have, not just of our own domain but of other departments, the better placed we are to provide brilliant sound recordings and the all-important performances we have been hired to support and protect.

A Few Notes on the Radio Spectrum and a Brand-New Licensing Guide!

by Jay Patterson CAS

For more than five decades, radio transmissions have played an increasingly large part in the infrastructure of motion picture and television production, worldwide. In the greater Los Angeles production environment, the operation, management, and maintenance of radio transmitters has always been the explicit responsibility of IATSE Local 695. As clearly stated in the “AGREEMENT OF AUGUST 1, 2018, BETWEEN PRODUCER AND I.A.T.S.E. & M.P.T.A.A.C. AND LOCAL #695 THEREOF,” Article 1 Scope of Agreement, Paragraph 5: “It is recognized that the IATSE Constitution grants the following jurisdiction to the IATSE Local #695: Work of any nature in or incidental to the transmission of sound and carrier frequencies…”

The vast majority of bands in the spectrum are only to be used for specific types of transmissions and require a license, granted by the FCC, in order to legally operate a transmitter. There are also bands that allow “Intentional Radiators” (i.e., low-power transmitters), devices that do not require a license to operate. Examples of Intentional Radiators include but are not limited to cordless telephones; remote-controlled cars, planes, and drones; Wi-Fi, video links, and remote focus systems used by camera departments; and the broad range of DMX control signals used by special effects, media servers, and set lighting. Many of these devices operate in the 2.4 GHz and 5 GHz frequency bands. The federal regulations regarding the use of Intentional Radiators are contained in Title 47, Chapter 1, Subchapter A, Part 15, Subpart C, which can be found within the Electronic Code of Federal Regulations website (https://www.ecfr.gov/cgi-bin/text-idx?SID=d8fcd9a4dd2c890b5e400718cac89ab1&mc=true&node=sp47.1.15.c&rgn=div6#se47.1.15_1209).

Frequency coordination occurs on a massive scale every single day in motion picture and television production, be it on a studio lot or on location. Every single walkie-talkie, radio microphone, wireless camera hop, etc., is coordinated, enabling the devices to work together without interfering with one another. On a studio lot, this is usually handled by a Local 695 Coordinator, typically a Y-3A (Supervising Sound Engineer). On a scripted show, more than two dozen transmitters may need to be coordinated every day. Talk shows and reality programs might use upward of fifty. Coordinating a sporting event or an awards show becomes an Olympian task, as more than a hundred different transmitters might be used at any given time.

It is significant to note that devices designed to operate in the 2.4 GHz and 5 GHz bands do not require a license to operate. In fact, there is no license available for the 2.4 GHz and 5 GHz spectrum, which is a portion of the National Wi-Fi Infrastructure Backbone established by the FCC. Due to considerations of range, penetration, and workable antenna lengths, the 2.4 GHz and 5 GHz bands are extremely popular with Intentional Radiator manufacturers. In the last several years, there has been a dramatic rise in the use of remotely controlled everything in motion picture and television production. Generally speaking, devices that approach ‘real time’ operation in these bands rely upon ‘frequency hopping,’ where a device’s carrier is constantly changing. Different devices will use various ‘groups’ of frequencies to hop around on, thus enabling multiple devices to use the same range of available frequencies depending on their moment-to-moment needs.
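As a loose illustration of why hopping lets devices coexist (a toy model only, not any particular manufacturer's scheme), consider two devices hopping pseudo-randomly over mostly separate channel groups; collisions happen only in the occasional slot where both land on a shared channel, and never persist:

```python
import random

# Toy model of frequency hopping in a shared band: each device hops
# pseudo-randomly within its own assigned group of channels.
CHANNELS = list(range(1, 12))   # e.g., eleven channels in a shared band
GROUP_A = CHANNELS[:6]          # group assigned to device A (channels 1-6)
GROUP_B = CHANNELS[5:]          # group assigned to device B (channels 6-11)

def hop_sequence(group, slots, seed):
    rng = random.Random(seed)
    return [rng.choice(group) for _ in range(slots)]

slots = 10_000
a = hop_sequence(GROUP_A, slots, seed=1)
b = hop_sequence(GROUP_B, slots, seed=2)

# Count time slots where both devices landed on the same channel.
collisions = sum(1 for x, y in zip(a, b) if x == y)
print(f"collision rate: {collisions / slots:.1%}")  # brief, scattered hits only
```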

As a historical aside, this technology was co-invented by the actress Hedy Lamarr.

Several years ago, there were significant conflicts in production in the use of 2.4 GHz and 5 GHz devices, primarily between the wireless network-controlled DMX dimming systems and the wireless transmitters used by the camera departments to stream images from the cameras. This was a very real problem, and an ad-hoc committee of users and manufacturers came together to address the issues. These discussions were initiated by the manufacturers of DMX lighting control devices, and all affected crafts and manufacturers were invited to participate, including Local 728, Local 600, and the Technical Trends Committee of Local 695. Various solutions were discussed and models of “coordinated behavior” were identified, primarily by drawing on the practical experiences of our counterparts in the world of theater. With so many theaters nestled so close together on Broadway, the stage industry had overcome the challenge of running fifty channels of wireless microphones, several channels of production communication, DMX control, and Wi-Fi, all without stepping on one another’s toes. The concept of “coordinated behavior” encompasses the practice of frequency coordination, along with the practice of using all transmitters at their lowest acceptable power level. As the 2.4 GHz and 5 GHz bands are open to any properly manufactured Intentional Radiator, the committee could only recommend ‘best practices.’ Shortly thereafter, a practical solution was achieved by the manufacturers—the companies making wireless video for the camera departments would only operate in the 5 GHz band, and the DMX devices would stay in the 2.4 GHz band.

In terms of coordination, a simple phone call in pre-production between those crafts using the unregulated portions of the radio spectrum can effectively reduce the risk of interference between them. Along with identification of the frequency groups used, if participants agree to transmit at the lowest acceptable ERP (effective radiated power, the radio energy transmitted from the antenna), multiple departments can and do share the Wi-Fi bands.

It is also paramount to note that these unlicensed Intentional Radiators are not allowed to cause interference to any licensed operation, such as wireless microphones in the UHF (Ultra-High Frequency) band.

The FCC recently released a new online process for obtaining the Part 74 license, called Form 601, in the FCC’s Universal Licensing System (ULS). It is cross-platform and should run on any modern HTML 5 browser. Thanks to Bill Ruck in San Francisco and our own Laurence Abrams and Tim Holly, Local 695 is again sponsoring a completely rewritten Step-by-Step Application Guide with detailed instructions from start to finish. If you have to obtain a license, please let us know your experience using the guide so we may improve it in future editions and make it as easy as possible to use.

In order to contribute to our database of entertainment professionals holding their FCC Part 74 licenses, please let us know if you hold a license. Send contact info and license number to fcclicense@local695.com

The Little Things

by Richard Lightstone CAS AMPS

Rami Malek as Jim Baxter, Jared Leto as Albert Sparma, and Denzel Washington as Joe “Deke” Deacon. Photo: Nicola Goode

The Little Things, written and directed by John Lee Hancock, is another feature film that had a much-delayed release into 2021. Sound Mixer Jose Antonio Garcia worked on it back in 2019, with Boom Operator Jonathan Fuh and Utility Sound Technician Sheraton Toyota. The film stars Denzel Washington as Deke, a Deputy Sheriff from Kern County who mentors and partners with Detective Baxter, played by Rami Malek, in a cat-and-mouse hunt for a serial killer, Albert Sparma, chillingly portrayed by Jared Leto.

This was the first time Jose worked with Director John Lee Hancock, who he describes as a lovely man and amazing to work with. It was a concentrated forty-seven-day shoot with more than three hundred scenes, and lots of daily moves, so his equipment mostly lived on a stake bed. The locations were predominantly in the cities of Lancaster, Pomona, and Los Angeles, with some stage work in Santa Clarita.

Boom Operators Jonathan Fuh and Sheraton Toyota working the interrogation scene

Jose previously worked with Denzel on Roman J. Israel, Esq. “Denzel is a very intense actor, and always prepared. He was very adamant about not seeing the crew using cellphones on the set because he was in character, and the story takes place at a time when there were no cellphones.”

The routine was to wire everybody that had scripted dialog and always use two booms to cover overlaps. Jose routinely discusses this ahead of shooting with the director, as he believes it just makes the scene more alive. “It works so much better for the actor on the screen having that counterpoint.”

Washington and Leto

An interesting addition to Jose’s sound package came from one of the lead cast wanting an earwig to hear his own voice, with added reverb. Jose had to split the output of his lav microphone into a separate mixer and found a guitar pedal device that would accomplish the reverb, feeding that signal into a separate Comtek transmitter. The actor also wanted his assistant to be able to communicate with him, so Jose fed an extra handheld wireless microphone into that monitor system as well.

Photo: Nicola Goode; Jose Antonio Garcia and Jonathan Fuh off-loading the sound cart for insert car work.

Jose continues, “It was an intense show, with multiple locations. Moving and packing the equipment, and moving again and again, there were days that we had four location moves. Despite that, it was very fluid. I think what most sticks out in my memory is that it flowed really well, and I think it looked very good too.”

“Our DP John Schwartzman was careful with his lighting,” says Jose, “shooting with two cameras, there was matching head room that certainly helped us using two booms.

“There were some days with lots of cast at the police station with eight to ten wires required, but John Lee’s writing is so precise and well thought out. He’s a magnificent screenwriter.”

Washington and Malek. Photo: Nicola Goode

Jose expounded on the ever more crowded radio microphone spectrum, competing with wireless lighting controls and remote camera focus amid shrinking bandwidth. In those situations, Jose would remotely position his powered wireless antenna.

There were many split calls, with full nights, sometimes on Thursdays and always on Fridays, as well as an entire week of nights later in the schedule. “It was very exhausting because you really never get the chance to turn yourself around,” says Jose.

John Lee would block scenes to set the camera moves and hold rehearsals instead of rolling as soon as the cast appeared on set, giving Jose and his crew time to plan wiring and boom positions, and to plant microphones when needed.

There are extensive automobile interior scenes: stakeouts, tailing a suspect, and car-to-car dialog. Boom Operator Jonathan Fuh explained that they would wire the actors but also plant lavs on the header with heavy wind protection when necessary. “The distance from the header plant mic is the same as from the body mic to the mouth.”

Jose Antonio Garcia at his sound cart

The title refers to a common thread throughout the narrative. Joe “Deke” Deacon (Denzel) is always looking for the tiny clues, “the little things” that can solve a crime. He’s constantly observing, revisiting the crime scenes looking for that one nugget that cracks a case wide open.

Jose had a busy 2019, with Da 5 Bloods and Richard Jewell before tackling The Little Things. He’s very grateful, as we all know how 2020 worked out. “That’s the only reason I still have my house, man,” concluded Jose.

Production Sound Crew

Jose Antonio Garcia, Sound Mixer
Jonathan Fuh, Boom Operator
Sheraton Toyota, Utility Sound and 2nd Boom Operator
Michael Herron, Video Assist
Jordan Kadovitz, Video Assist Utility
Matthew Morrissey & Steve Irwin, Video Playback Supervisors

Greyhound: The Audio Story

by David B. Wyman CAS

Tom Hanks stars as U.S. Navy Cmdr. Ernest Krause. Photo courtesy of Apple TV+

Greyhound was to be shot both practically on a real WW2 destroyer named USS Kidd, in a floating dock on the Mississippi River at Baton Rouge, Louisiana, and on stage where the pilothouse, radar/command center, and the sonar room sets were to be built to scale.
After my first phone meeting with Aaron Schneider, our Director, it was apparent that he was not only going to be supportive of the Sound Department but really wanted to explore doing as much real-time audio as possible: as much sonic interaction as possible between on-screen and off-screen actors, including set-to-set communication for twin units shooting simultaneously, simulation of radio broadcasts, real-time ship-to-shore and ship-to-ship conversations, and loud sound F/X of the war action.

During the start of my prep, I watched as many WW2 and naval movies as I could to better understand the scope and process of the work.

USS Kidd during shooting.

Aaron was adamant that everything should be period-correct, including all props, costumes, and sets. Our pilothouse set and CIC set contained all of the same equipment found on the USS Kidd, which made for some really tight quarters for the actors and crew.
Luckily, Ed Borasch was our prop master; he and I have worked on several movies together. With careful coordination, I was kept aware of the key props our actors would be wearing or using during filming, and as each prop was locked in, I was given access to it to study and understand how it could be modified to suit modern filmmaking needs while maintaining period looks.

Sound world during shooting (L-R): Sound Cart, Support Cart, Playback Amps and Mixers for VOG and Sound FX, Comms Utility controller.

The same was true for the construction of our two main set pieces: as equipment was installed, I was given access to see how we could modify it to function practically for seamless audio use while shooting.
I was fortunate that production was onboard and I had enough prep available to make the Director’s dream an audio reality.

Firstly, I spent a couple of days onboard the USS Kidd, scouting and understanding just how the internal communications, radio telephone, sonar, and ship-to-ship systems actually worked in the 1940s. Thanks to a very knowledgeable crew there, I began forming a picture of how to make our 1940s replica ship sets behave as if they were at sea during wartime.

The first challenge was to figure out a way to make the “Bitch-Box” work. This is the internal communication device that allows all critical parts of the vessel to talk to each other. In our world, that was primarily between the pilothouse set (Tom Hanks’ domain) and the CIC set (command center for radar and course plotting during engagement). In real life, these areas are several decks apart; in the movie world, the pilothouse was on a twelve-foot-high gimbal with the ten-foot-high set built on top, which could articulate more than thirty degrees in any direction (to simulate big seas), truly a marvelous thing to see in motion. The CIC set was built on the stage floor some fifty feet away from the gimbal.

The Director wanted to shoot both sets at once and record both sides of the action. The interaction between the actors would be via both the Bitch-Box and the “Sound Phone.”
The Bitch-Box was a two-way communicator with a speaker for broadcast and a microphone on a push-to-talk switch, which would cut out the speaker and activate the internal microphone. None of the units we had actually worked, and most were just shells with ratty 1940s speakers rotting away in them. The best solution I came up with was to make this communicator open so that one set could hear the action in the other as we were shooting simultaneously. My approach was to install a tiny self-powered monitor in all the boxes and place an omnidirectional conference-table-style mic close by to serve as a signal to replace the PTT mic. All the mics and speakers were hardwired from the gimbal via a multichannel stage box (giving us only one major cable to wrangle and protect against the gimbal movement) back to a dedicated Communications Mixer (with its own operator) to mute and open the speaker/mic channels as required. The mics themselves were given to the set painters and painted the exact same color as all the other wires, conduit, and set walls, so they literally disappeared in plain sight. In fact, all of our cables, some five hundred feet in all, were painted so that we could wire the set as needed.

Next was the “Sound Phone,” as the Navy calls it. This was very similar to the Bitch-Box, except it is a private one-to-one connection handset for the captain to talk to the various departments. Again, all 1940s equipment.

I decided that the preferred solution for this would be to rewire the insides of the phones and send the production audio from the actors directly to the earpiece. I took apart the phones, removing the voice module and installing a working headphone driver from some old 7506s, wired in the driver, and ran cable through the handset down to a one-eighth-inch jack so I could plug it into an IFB which could be hidden on set. Now I could route the signals I wanted to whichever phone was needed. This required, however, a discrete IFB channel dedicated to the “Sound Phone.”

The third major piece of equipment used by the Captain (Hanks) on the bridge (pilothouse) was a ship-to-ship radio telephone referred to as the TBS (Talk Between Ships). This allowed the Captain of the convoy to talk directly to the other ships in the convoy so as to help keep the convoy together on this dangerous Atlantic crossing. As our Director wanted these conversations to be live rather than have a script supervisor read the lines, the TBS also had to be fully functional.

Sound World during prep

The TBS is a unique piece of equipment with a unique look and was very hard to source. Any mods would have to be done very carefully so as not to upset the Production Designer and Set Decorators! Again, I must take time to thank all those who helped me get access and were patient as I took the equipment apart to figure out the best way to make it work for our film needs. It was not possible to replace any of the phone cords, and as the unit was totally full of non-working parts, anything I did needed to be remote from the TBS itself. Testing the connections within the unit and the handset, I used the existing 1940s wiring to connect a replacement headphone driver, then added a discreet cable coming through the set wall to another IFB (another channel). The signal sent to the TBS was going to be from voice actors, off screen and off gimbal, so I set up a VO station close to my Comms Mixer with up to three “push to talk” handheld PA-style mics. This would ensure no unwanted signals when not in use, and no cross talk if more than one actor was playing the various convoy ship roles.

Testing comms

The most difficult modification I undertook was that of the “Talker” equipment. Anyone who has seen a period Navy film would recognize the device that looks like a switchboard operator’s mouthpiece worn around the neck and a pair of old school headphones. This device allows any “Talker” to move around the ship plugging in where required and transmitting the orders from the Captain or receiving information from all different parts of the ship. This unit had to function in a duplex style as the script called for multiple overlaps of info coming over the headphones and orders going out via the mouthpiece, as would be the case in a battle scenario.

Modifying the breastplates of 1940s equipment to accept wireless feeds to the headphones and mic/transmitter lines

Again, however, these had to remain period and look unaltered. They would be front and center on camera as they were worn by some of our principal actors. Oh, yes, one more thing: the Talkers operate both inside and outside of the ship so they would be getting soaking wet from the special F/X water sprayers, Ritter fans, and misters.

After taking them apart and drilling access holes, I wired a Countryman B6 into each mouthpiece, added a ton of acoustic foam for wind protection, wired in new headphone drivers, sourced some old-looking cable to replace the existing runs, and created an exit for my signal cables at the chest harness so that each actor could wear a transmitter for the mic and an IFB for the headphones. These IFB’s received the same mix as the director/producer so that the actors were aware of their place in the scene, and luckily, the 1940’s headphone pads still cut out a lot of background noise from our SFX equipment.

soldering new working headphones into 1940’s equipment

The sonar operator had another set of headphones that were sourced from an old aviation store and luckily they worked, so I only had to modify the plug for an IFB. The sonar room also had a Bitch-Box and a dedicated Omni mic.

I placed more painted Omni mics, hardwired, in the pilothouse at the entrances and exits, as the set was so tight that a boom could never get into those spots. Movable plants were used in many locations depending on the scene. Actors were wired in their helmets during battle stations and on their clothing when not.

Greyhound comms

Betsy, my Boom Operator, spent most of the show on an ENG pole with a Neumann KM185 as the pilothouse was so tight (a Schoeps CMC couldn’t handle the moisture), and when not in the pilothouse, she was getting soaking wet using a Sanken CS3E with full Rycote zeppelin with sock, windjammer, and rain-man on the fly bridge.

Hero Helmets awaiting mics and transmitters

We played a lot of sound FX throughout the shoot (many different gunfire sounds, airplanes, sonar pings, etc.). Some were from editorial but most were sourced from free FX libraries, and I modified them on the fly for pitch, speed, and volume, layering multiple samples for loudness. This was done with Steinberg’s WaveLab, and they were cranked through my 2,000-watt six-speaker system mounted all around the ship.
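For readers curious about the mechanics, the sketch below shows the same two tricks in miniature: tape-style resampling, which shifts pitch and speed together, and gain-weighted layering of copies to build loudness. It is only an illustration in Python/NumPy, not the author’s actual WaveLab setup; the filename, gain values, and the assumption of a mono 16-bit WAV are all invented for the example.

```python
import numpy as np
from scipy.io import wavfile

def vari_speed(samples, factor):
    """Tape-style speed/pitch change by resampling.
    factor > 1 plays faster and higher; factor < 1 plays slower and lower."""
    idx = np.arange(0, len(samples) - 1, factor)
    return np.interp(idx, np.arange(len(samples)), samples)

def layer(clips, gains_db):
    """Sum several clips at different gains to build a louder, denser effect."""
    longest = max(len(c) for c in clips)
    mix = np.zeros(longest)
    for clip, gain_db in zip(clips, gains_db):
        mix[:len(clip)] += clip * 10 ** (gain_db / 20)
    return np.clip(mix, -1.0, 1.0)  # keep the result in legal range

# Illustrative filename; assumes a mono 16-bit PCM file.
rate, gun = wavfile.read("gunfire.wav")
gun = gun.astype(np.float64) / 32768.0
stack = layer([gun, vari_speed(gun, 0.9), vari_speed(gun, 1.15)], [0, -3, -6])
wavfile.write("gunfire_stack.wav", rate, (stack * 32767).astype(np.int16))
```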

We controlled all props that were wired for sound at all times; full test of Talker equipment in battle stations gear

I treated the workflow like a mashup of production audio and live sound applications. I gave multiple feeds to multiple IFB channels, sent production audio to the Comms Mixer, received ISO channels back from the comms for recording and to feed into the dailies, and played the sound FX live. I used two Sound Devices 788T recorders (timecode and sample rate locked): the first as a dedicated production audio recorder, the second recording the hardwired Omni mics, the handheld voice actor mics, and the sound FX scratch track for reference.

One unforeseen advantage of all these channels of audio and discrete feeds was that the Director was also given a PTT mic that we could route into any set, any Bitch-Box or phone, or even into the headphones of a Talker, all of which was invaluable, as communication was somewhat difficult when all the Special FX, gimbal, and Sound FX were going.

It was a very ambitious project and required a lot of forethought, but once dialed in, it went very smoothly, thanks to the producers’ and director’s commitment and my crew’s great work.

Photo courtesy of Apple TV+

CAS Award Nominees

Local 695 OUTSTANDING ACHIEVEMENT IN SOUND MIXING FOR 2020

On March 2, 2021, the Cinema Audio Society announced the nominees for the 57th Annual CAS Awards for Outstanding Achievement in Sound Mixing for 2020 in seven categories. The winners will be revealed at a virtual ceremony on Saturday, April 17.

MOTION PICTURE – LIVE-ACTION

Greyhound
Production Mixer – David Wyman CAS
Re-Recording Mixer – Michael Minkler CAS
Re-Recording Mixer – Christian Minkler CAS
Re-Recording Mixer – Richard Kitting
Re-Recording Mixer – Beau Borders CAS
Scoring Mixer – Greg Hayes
Foley Mixer – George A. Lara CAS
Production Sound Team –
Betsy Lindell, Marc Uddo

Mank
Production Mixer – Drew Kunin
Re-Recording Mixer – Ren Klyce
Re-Recording Mixer – David Parker
Re-Recording Mixer – Nathan Nance
Scoring Mixer – Alan Meyerson CAS
ADR Mixer – Charleen Richards-Steeves
Foley Mixer – Scott Curtis
Production Sound Team –
Michael Primmer
David Fiske Raymond

News of the World
Production Mixer –
John Patrick Pritchett CAS
Re-Recording Mixer – Mike Prestwood Smith
Re-Recording Mixer – William Miller
Scoring Mixer – Shawn Murphy
ADR Mixer – Mark DeSimone CAS
Foley Mixer – Adam Fil Méndez CAS
Production Sound Team –
David M. Roberts, Rob Hidalgo,
Adam Bart, David Brownlow,
Zach Sneesby, Jason Pinney

Sound of Metal
Production Mixer – Phillip Bladh CAS
Re-Recording Mixer – Jaime Baksht
Re-Recording Mixer – Michelle Couttolenc
Re-Recording Mixer – Carlos Cortez Navarrette
Foley Mixer – Kari Vähäkuopus
Production Sound Team – Jeremy Eisener, Yanna Soentjens, Hannes Leemans, Francois Goemaere

Yahya Abdul-Mateen II as Bobby Seale in The Trial of the Chicago 7. Photo: Niko Tavernise/NETFLIX © 2020

The Trial of the Chicago 7
Production Mixer – Thomas Varga CAS
Re-Recording Mixer – Julian Slater CAS
Re-Recording Mixer – Michael Babcock CAS
Scoring Mixer – Daniel Pemberton
ADR Mixer – Justin W. Walker
Foley Mixer – Kevin Schultz
Production Sound Team – Ken Strain, James Appleton, Adam Mohundro

MOTION PICTURE – ANIMATED

A Shaun the Sheep Movie: Farmageddon
Dialogue & ADR Mixer – Dom Boucher
Re-Recording Mixer – Chris Burdon
Re-Recording Mixer – Gilbert Lake
Re-Recording Mixer – Adrian Rhodes
Scoring Mixer – Alan Meyerson CAS
Foley Mixer – Ant Bayman

Onward
Original Dialogue Mixer – Vincent Caro CAS
Original Dialogue Mixer – Doc Kane CAS
Re-Recording Mixer – Michael Semanick CAS
Re-Recording Mixer – Juan Peralta
Scoring Mixer – Brad Haehnel
Foley Mixer – Scott Curtis

Soul
Original Dialogue Mixer – Vincent Caro CAS
Re-Recording Mixer – Ren Klyce
Re-Recording Mixer – David Parker
Scoring Mixer – Atticus Ross
Scoring Mixer – David Boucher CAS
ADR Mixer – Bobby Johanson CAS
Foley Mixer – Scott Curtis

(from left) Thunk Crood (Clark Duke), Sandy Crood (Kailey Crawford) and Gran (Cloris Leachman) in DreamWorks Animation’s The Croods: A New Age, directed by Joel Crawford.

The Croods: A New Age
Original Dialogue Mixer – Tighe Sheldon
Re-Recording Mixer –
Christopher Scarabosio CAS
Re-Recording Mixer – Leff Lefferts
Scoring Mixer – Alan Meyerson CAS
Foley Mixer – Richard Duarte
Foley Mixer – Scott Curtis

Trolls World Tour
Original Dialogue Mixer – Tighe Sheldon
Re-Recording Mixer – Scott Millan CAS
Re-Recording Mixer – Paul Hackner
Scoring Mixer – Christopher Fogel CAS
Foley Mixer – Randy K. Singer CAS

MOTION PICTURE: DOCUMENTARY

David Attenborough: A Life on Our Planet
Re-Recording Mixer – Graham Wild
Scoring Mixer – Gareth Cousins CAS

My Octopus Teacher
Re-Recording Mixer – Barry Donnelly
Foley Mixer – Charl Mostert

The Bee Gees: How Can You Mend a Broken Heart
Re-Recording Mixer – Gary A. Rizzo CAS
Re-Recording Mixer – Jeff King

The Social Dilemma
Production Mixer – Mark A. Crawford
Re-Recording Mixer – Scott R. Lewis
Scoring Mixer – Mark Venezia
Foley Mixer – Jason Butler

Zappa
Production Mixer – Monty Buckles
Re-Recording Mixer – Marty Zub CAS
Re-Recording Mixer – Lon Bender

NON-THEATRICAL MOTION PICTURE OR LIMITED SERIES

American Horror Story: 1984 Ep. 9 “Final Girl”
Production Mixer – Alex Altman
Re-Recording Mixer – Joe Earle CAS
Re-Recording Mixer – Doug Andham CAS
ADR Mixer – Judah Getz CAS
Foley Mixer – Jacob McNaughton
Production Sound Team –
Raam Brousard, Ethan Biggers

Fargo Ep. 7 “East/West”
Production Mixer – J.T. Mueller CAS
Re-Recording Mixer – Jeffrey Perkins
Re-Recording Mixer – Josh Eckberg
Scoring Mixer – Michael Perfitt
ADR Mixer – Matt Hovland
Foley Mixer – Randy Wilson
Production Sound Team – Sean Kirkpatrik, Nicholas Price, Doug Ryan, Eric Anthony,
Kelsey Zeigler, Nick Ray Harris

Lovecraft Country Ep. 1 “Sundown”
Production Mixer – Amanda Beggs CAS
Re-Recording Mixer – Marc Fishman CAS
Re-Recording Mixer – Mathew Waters CAS
Scoring Mixer – Brad Haehnel
ADR Mixer – Miguel Araujo
Foley Mixer – Brett Voss CAS
Production Sound Team – Adam Mohundro, Thomas Giordano, Mark Agostino

The Queen’s Gambit Ep. 4 “Middle Game”
Production Mixer – Roland Winke
Re-Recording Mixer – Eric Hoehn CAS
Re-Recording Mixer – Eric Hirsch
Re-Recording Mixer – Leo Marcil
Scoring Mixer – Lawrence Manchester
Production Sound Team –
Thomas Wallis, Andre Schick, Bill McMillan

Watchmen Ep. 6 “This Extraordinary Being”
Production Mixer – Doug Axtell
Re-Recording Mixer – Joseph DeAngelis CAS
Re-Recording Mixer – Chris Carpenter
Scoring Mixer – Atticus Ross
ADR Mixer – Judah Getz CAS
Foley Mixer – Antony Zeller CAS
Production Sound Team – Chris Isaac,
Jesse Parker, Steven Willer,
Patrick Anderson, Colt Logan, Josh Tamburo

TELEVISION SERIES: ONE HOUR

Better Call Saul Ep. 8 “Bagman”
Production Mixer – Phillip W. Palmer CAS
Re-Recording Mixer – Larry B. Benjamin CAS
Re-Recording Mixer – Kevin Valentine
ADR Mixer – Chris Navarro CAS
Foley Mixer – Stacey Michaels CAS
Production Sound Team – Aaron Grice, Andrew Chavez

Ozark Ep. 10 “All In”
Production Mixer – Filipe Borrero CAS
Re-Recording Mixer – Larry B. Benjamin CAS
Re-Recording Mixer – Kevin Valentine
Scoring Mixer – Phil McGowan CAS
ADR Mixer – Chris Navarro CAS
Foley Mixer – Amy Barber
Production Sound Team – Jared Watt,
Akira Fukasawa

The Crown S4, Ep. 1 “Gold Stick”
Production Mixer – Chris Ashworth
Re-Recording Mixer – Lee Walpole
Re-Recording Mixer – Stuart Hilliker CAS
Re-Recording Mixer – Martin Jensen
ADR Mixer – Gibran Farrah
Foley Mixer – Catherine Thomas
Production Sound Team – Steve Hancock,
Liam Cotter, India Clayon-Richards

The Marvelous Mrs. Maisel S3, Ep. 8
“A Jewish Girl Walks Into the Apollo…”

Production Mixer – Mathew Price CAS
Re-Recording Mixer – Ron Bochar CAS
Scoring Mixer – Stewart Lerman
ADR Mixer – David Boulton
Foley Mixer – George A. Lara CAS
Production Sound Team – Carmine Picarello, Spyros Poulos, Egor Panchenko

Westworld S3, Ep. 4 “The Mother of Exiles”
Production Mixer – Geoffrey Patterson CAS
Re-Recording Mixer – Keith A. Rogers CAS
Re-Recording Mixer – Benjamin L. Cook
Scoring Mixer – Ramin Djawadi
Production Sound Team –
Jeffrey A. Humphreys, Chris Cooper, Dean Thomas, Veronica Kahn

TELEVISION SERIES: HALF-HOUR

Dead to Me Ep. 201 “You Know What You Did”
Production Mixer – Steven Michael Morantz CAS
Re-Recording Mixer – Brad Sherman CAS
Re-Recording Mixer – Alexander Gruzdev
ADR Mixer – Jason Oliver
Production Sound Team –
Dirk Stout, Mitch Cohn

Modern Family Ep.1117 “Finale Part 1”
Production Mixer – Stephen A. Tibbo CAS
Production Mixer – Srdjan Popovic
Re-Recording Mixer – Dean Okrand CAS
Re-Recording Mixer – Brian Harman CAS
Re-Recording Mixer – Peter Bawiec
ADR Mixer – Matt Hovland
Foley Mixer – David Michael Torres CAS
Production Sound Team – William Munroe, Dan Lipe, Richard Geerts

Ted Lasso
Ep. 110 “The Hope That Kills You”
Production Mixer – David Lascelles AMPS
Re-Recording Mixer – Ryan Kennedy
Re-Recording Mixer – Sean Byrne
ADR Mixer – Brent Findley
ADR Mixer – Marilyn Morris
Scoring Mixer – George Murphy
Foley Mixer – Jordan McClain
Production Sound Team – Emma Chilton, Andrew Mawson, Michael Fearon

The Mandalorian Ep. 102, Chapter 2 “The Child”
Production Mixer – Shawn Holden CAS
Re-Recording Mixer – Bonnie Wild
Re-Recording Mixer – Stephen Urata
Scoring Mixer – Christopher Fogel CAS
ADR Mixer – Matthew Wood
Foley Mixer – Blake Collins CAS
Production Sound Team – Ben Wienert, Veronica Kahn, Jamie Gamble,
John Evens, Ethan Biggers

The Mandalorian Ep. 205, Chapter 13 “The Jedi”
Production Mixer – Shawn Holden CAS
Re-Recording Mixer – Stephen Urata
Re-Recording Mixer – Bonnie Wild
Scoring Mixer – Christopher Fogel CAS
ADR Mixer – Matthew Wood
Foley Mixer – Jason Butler
Production Sound Team – Patrick H. Martens, Randy Johnson, Veronica Kahn,
Patrick “Moe” Chamberlain, Kraig Kishi, Cole Chamberlain

TELEVISION NON-FICTION, VARIETY, MUSIC or SPECIALS

Beastie Boys Story
Production Mixer – Jacob Feinberg
Production Mixer – William Tzouris
Re-Recording Mixer – Martyn Zub CAS

Bruce Springsteen’s Letter to You
Production Mixer – Brad Bergbom
Re-Recording Mixer – Kevin O’Connell CAS
Re-Recording Mixer – Kyle Arzt
Music Mixer – Bob Clearmountain

Hamilton
Production Mixer – Justin Rathbun
Re-Recording Mixer – Tony Volante
Re-Recording Mixer – Rob Fernandez
Re-Recording Mixer – Tim Latham

Laurel Canyon: A Place in Time Ep. 1
Re-Recording Mixer – Gary A. Rizzo CAS
Re-Recording Mixer – Stephen Urata
Re-Recording Mixer – Danielle Dupre
Re-Recording Mixer – Tony Villaflor
Scoring Mixer – Dave Lynch

NASA & SpaceX:
Journey to the Future
Production Mixer – Erik Clabeaux
Re-Recording Mixer – Michael Keeley CAS

AMPS NOMINATIONS 2021

Greyhound
David Wyman CAS
Michael Minkler CAS
Dave McMoyler
Warren Shaw
Greg Hayes
Production Sound Team –
Betsy Lindell
Marc Uddo
Jason Vowel

Mank
Drew Kunin
David Parker
Kim Foscato
Jeremy Molod
Production Sound Team –
Michael Primmer
David Fiske Raymond

News of the World
John Patrick Pritchett CAS
Mike Prestwood Smith
Rachel Tate
Oliver Tarney
Production Sound Team –
David M. Roberts
Rob Hidalgo
Adam Bart
David Brownlow
Zach Sneesby
Jason Pinney

Soul
Vincent Caro CAS
Ren Klyce
David Parker
Cheryl Nardi

Sound of Metal
Phillip Bladh CAS
Nicolas Becker
Jaime Baksht
Michelle Couttolenc
Production Sound Team –
Jeremy Eisener
Yanna Soentjens
Hannes Leemans
Francois Goemaere

Names in Bold are Local 695 members

The New M1 Processor From Apple

by James Delhauer

In today’s technological zeitgeist, the assembly line of advancement and progress is rarely deterred by anything. Faster processors, nicer screens, and larger storage devices are always just around the corner, ready to supersede last year’s latest and greatest gizmos and gadgets. Annual releases and product refreshes are so much the norm that not even a global pandemic that caused the planet to lurch into lockdown could slow the wheels of change. Few have demonstrated this as dramatically as Apple with the release of their new line of M1 Silicon processor computers. For Local 695 technicians and artists, this could be a game changer.

To understand the significance of this launch, some historical context is necessary. Many of the earliest Apple computers, beginning with 1984’s Macintosh 128K, featured 16- and 32-bit processors designed by Motorola. Though revolutionary for the time, these units quickly began to show their age, and Apple sales lagged behind their primary competitor, Microsoft. In 1991, Apple and Motorola joined with IBM to form the AIM Alliance, a group dedicated to developing the next generation of computer processors to compete with hardware being developed by Intel and AMD for Windows-based personal computers. This alliance led to the unveiling of the PowerPC processor, which Apple adopted into their Power Macintosh line of computers beginning in 1994. These chips would remain the company’s primary units in the Power Macintosh, PowerBook, iBook, iMac, and Xserve lines of computers for more than a decade, but they were not without their drawbacks. The hardware still struggled to keep pace with the competition, and software routinely used by Windows users was difficult to port to Apple machines, limiting user options and product utility. Nonetheless, these chipsets have been credited with bringing the company out of the niche enthusiast market and into mainstream prominence, especially as Hollywood productions began to adopt them into the earliest digital post-production workflows.

However, in 2006, Apple abandoned the AIM Alliance and elected to integrate more commonly used Intel-brand hardware into their computers going forward. The widely known MacBook, iMac, Mac Mini, and Mac Pro machines of the last fifteen years have all been powered by semi-customized Intel central processing units, as well as graphics processing units from Intel, Nvidia, and AMD. These computers have become so ubiquitous within the entertainment industry for their creative and design applications that filmmakers across the globe eagerly await each new release from the world’s first trillion-dollar company.

“…it was so surprising when Apple announced that they would be abandoning Intel processors in favor of proprietary, in-house hardware beginning within the year.”

That is why it was so surprising when Apple announced that they would be abandoning Intel processors in favor of proprietary, in-house hardware beginning within the year. The new Apple Silicon line is derived from the same ARM architecture that has powered Apple’s extensive line of mobile devices since the release of the iPod in 2001, further narrowing the ever-blurring line between phones, computers, and tablets. This allows for direct cross-platform support for apps initially developed and released for iOS devices such as the iPhone and iPad, meaning users can access mobile apps and games on their home computer systems.

The first of these new proprietary processors is the M1 chip, an all-in-one processing unit that streamlines under-the-hood performance in a great number of ways. Traditionally, the various processing devices inside a computer have been segregated from one another, each possessing a dedicated memory pool to cache data during processing. An inefficiency in this system has always been the need for redundant storage of the same data, with CPUs and GPUs requiring separate caches of the same information despite working together to complete a task. By integrating both central and graphics processing units into the same chipset, Apple has removed this limitation and allowed for a shared memory pool between devices. This lets the computer do more work with fewer resources and improves performance per watt. The end result is a chipset that boasts double the performance of both the CPU and GPU, which Apple says translates to 3.9x faster video processing and 7.1x faster image processing across the company’s entire line of Mac products.

The introduction of an entirely new processing architecture presents numerous compatibility challenges from a design and engineering standpoint. In the past, it has largely been the responsibility of software developers to program their applications with support for the various architectures available on the market. In 2006, Apple circumvented this problem with the introduction of Rosetta, a binary translator designed to run software developed for PowerPC processors by emulating that older architecture on the newly designed Intel-based Macs. For the introduction of the M1, Apple has resurrected Rosetta (now branded Rosetta 2) in order to translate Intel’s x86 code for their new line of ARM-based computers. The result is near-universal compatibility with applications designed prior to this migration. Though these applications will not be able to take full advantage of everything the new system has to offer until updated by their respective developers, Rosetta 2 does provide users an immediate means of transitioning to the latest Apple products without the frustrations of generational incompatibility.
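As an aside for the technically curious, macOS exposes a documented sysctl key, sysctl.proc_translated, that reports whether the current process is running under Rosetta 2. The small Python sketch below is one way to query it; it is an illustration rather than anything Apple ships, and it simply returns the machine architecture plus a translated/native flag.

```python
import platform
import subprocess

def rosetta_status():
    """Return (machine architecture, True if this process is translated by Rosetta 2)."""
    arch = platform.machine()  # 'arm64' when native on Apple Silicon, 'x86_64' when translated or on Intel
    try:
        out = subprocess.run(
            ["sysctl", "-in", "sysctl.proc_translated"],  # -i: ignore a missing key, -n: value only
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        translated = out == "1"  # "1" = translated, "0" = native ARM, empty = key absent (Intel Mac)
    except (FileNotFoundError, subprocess.CalledProcessError):
        translated = False       # not macOS, or sysctl unavailable
    return arch, translated

if __name__ == "__main__":
    print(rosetta_status())
```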

Additionally, the M1 chipset contains an emerging technology known as an AI accelerator, which Apple refers to as their Neural Engine. This technology has been present in the company’s line of iPhone products since 2017 but the M1 variant is the first to be integrated into a personal computer platform. Designed to accelerate machine learning applications such as facial recognition and autonomous tasking, this Neural Engine boasts an incredible eleven trillion operations per second, positioning Apple to become the gold standard for the development and use of artificial intelligence applications as those technologies become more mainstream.

The response ranged from profane outrage to skepticism to tears of joy. It was just over a year ago that Apple unveiled its new line of modular and customizable Mac Pro systems, and an emphasis on first-party hardware raised concerns that third-party support for these expensive machines might dwindle, punishing early adopters and devaluing their investments. Consumers old enough to remember the problematic era of the PowerPC were hesitant to embrace another proprietary solution from Apple. DIY enthusiasts decried the expected loss of personalization and customization options, a common criticism of Apple products over the last two decades. User advocacy groups such as the Hackintosh community (users who modify the macOS operating system to run on standard Intel-based PCs) mourned the announcement as the beginning of the end for their practice.

It should be noted that Hackintosh practices both violate the Apple-user license agreement and are not endorsed by Local 695.

Others, more enthusiastically, welcomed the announcement with open arms, citing the impressive capabilities of existing ARM devices such as the iPad Pro and the prospect of downloading any existing mobile app onto a laptop or desktop.

After months of speculation, the company finally released three computers outfitted with the brand-new ARM architecture chipset: the MacBook Air, MacBook Pro 13”, and the Mac Mini. All three come outfitted with an 8-core M1 processor and are configurable with either eight or sixteen gigabytes of RAM and up to two terabytes of storage. While these three lines of products are generally considered to be entry-level computers in the Apple hierarchy, the company made impressive claims as to the performance capabilities of each of these machines. The MacBook Air, widely considered to be the least powerful machine in Apple’s product lineup, boasts the ability to decode and play back 8K-resolution ProRes video files in real time. Real-time 4K video editing is possible both in ARM-optimized applications such as Apple’s Final Cut Pro X and in Intel-based programs like Adobe Premiere Pro. Similar results were achieved on the M1 Mac Mini, possibly making it one of the most affordable editing solutions out there.

But no debut is without its drawbacks. At launch, native M1 application support is largely limited to iOS applications and software developed and distributed directly by Apple, meaning third-party solutions will not perform at their best until their respective developers optimize them for both x86 and ARM architectures. At present, no third-party nonlinear editing platform or digital audio workstation has been optimized for the new chips, meaning Avid, Adobe, and DaVinci users will have to be patient if they wish to take full advantage of their new computers. Most notably, Apple has remained silent on the future of their professional-grade lines of products. The MacBook Pro 16”, iMac, and Mac Pro systems continue to be manufactured using Intel-based processors, meaning power users will also have to wait before machines optimized for their needs become available.

In practical terms, the first generation of ARM products represents an exciting glimpse into the future. These entry-level machines far exceed the performance of their pre-2020 predecessors and dramatically shift Apple’s price-to-power ratio in favor of consumers. For low to moderately processor-intensive tasks like word processing, web browsing, image processing, media management, streamlined offline editing, light transcoding, and live session recording, the MacBook Pro 13” and Mac Mini could represent a low-cost workstation solution. For more demanding tasks such as high-resolution transcoding, online editing, color correction, and audio mastering, we’re going to have to see what news emerges in the coming days.

Intel-powered Macs have been a staple of our industry for nearly fifteen years. The Local 695 Audio Technician has harnessed them to record and mix some of the industry’s greatest hits. The 695 Video Engineer has recorded, played back, keyed, and transcoded everything from commercial spots to major blockbuster motion pictures with them. And while x86 will live on in Windows-based machines, it appears that the sun is setting on the Intel-based Mac. As it does, a moment of appreciation for all that we have accomplished in that time seems appropriate… Great. Now, on to the new.

Bruce Arledge Jr. & Boom-Trac

by Richard Lightstone CAS AMPS

I met up with Bruce Arledge Jr., in the sound booth for Dancing with the Stars, via Zoom. I thought he was rolling, but Bruce said, “Oh, no, we’re just rehearsing the opening number. It’s all playback. I’ve got the faders open.”

Bruce has been on DWTS for sixteen seasons and is a second-generation Local 695 Sound Mixer. His dad, Bruce Arledge Sr., worked at KTLA and ABC, and was one of the first freelance Audio Engineers. Some of Bruce Arledge Jr.’s credits include Grease Live! (2016), Hairspray Live! (2016), and Rent: Live (2019), as the Audio Supervisor.

Bruce began his career as an A-2 and then moved up to Fisher Boom Operator on videotape shows. In those days, they usually had two booms on the stage floor. When he moved on to four-camera live-audience film shows, there was no room for the base of the Fisher booms. The boom arms were moved up to the green beds and mounted onto catwalk stands by Local 80. The booms were locked in place, and they no longer had the flexibility to have the A-2’s dolly the perambulator into an ideal position on the stage floor.

After Bruce put in a couple of seasons on Family Matters over at Warner Bros., he started tinkering with an idea for a device that would allow the booms to be moved anywhere in the green beds.

Boom-Trac

Thus, Boom-Trac was born: an interconnected T-bar track system that can run along the whole front of the stage above the proscenium. It’s essentially a dolly system that sits on the track, allowing the booms to move seamlessly and quietly. You can do on-air moves while the mic faders are open. The Boom Operator can reposition and adjust angles depending on talent blocking, lighting, and shadow issues. You can move straight up and down or as high as you want, giving you the opportunity to place the boom anywhere it’s needed.

In the first year, he got his system on about four shows and it was very well received. Bruce explains, “I had great relationships with all the Boom Operators because I, too, was a Boom Operator. We went from four shows, to eight shows, to thirty shows, within two seasons.”

Bruce Arledge’s Boom-Trac is a hands-on operation. “I’m still involved with every setup and strike. I handle all the clients and producers, many I’ve known for thirty years. We have shows at Warner Bros., Sony, Radford, basically wherever there is a four-camera show that needs our system, we install it.”

At the time of writing, Bruce has installs on eleven shows. “We’re just starting to get back into it. In the last few weeks, we have set up new shows, and a lot of the shows that went down because of COVID just kept their stuff up. We’re picking back up right now. We are doing all of our installs on empty stages. I just want to keep my employees safe and keep the clients safe too.”

  • Bruce Arledge’s sound booth for Dancing with the Stars
  • Boom-Trac installed in the perms
  • The Calrec Apollo console

Bruce is there for every install and setup of Boom-Trac. “I’m there to make sure that everything’s perfect for the twenty-six-foot boom arms.” Bruce continues, “Because nobody does it as good as I would, I care about it, I’m very hands-on. I know how the device works and I want it to be perfect, so when the operator steps up, there are no issues.”

Bruce’s first live mixing experience was on American Idol Gives Back at the Pasadena Civic Auditorium with Elton John; the show received an Emmy nomination. The next year, he was hired for DWTS. “The show is two hours live. That train gets going, and there’s no stopping.”

“I have fifty-five channels of RF microphones and about two hundred inputs and over three hundred outputs. It’s not just a 5.1 mix that I send to the network. There are redundant Pro Tools and videotape machines, and everything is sent via fiber to edit. Each mic is isolated, so it’s quite an undertaking. My book of notes is a binder so I can keep track of it all.”

Bruce equates his job to that of an athlete. He is also a surfer, skateboarder, and snowboarder, so he knows from where he speaks. “I love that edge and not everybody could be in the seat that we’re in.” Bruce expounds, “It takes a certain type of personality, some people hate it. Some say, ‘I would not do a live show.’ They don’t want the stress. I love that. That’s when I’m at peace; when I’m sitting there looking at a rundown and it’s five minutes to showtime and the only thing I gotta worry about is the page in front of me, that’s calming for me.”

Bruce always has a backup plan. For example, he’ll have a hardwired microphone in case the RF mic dies. He also relies on his crew of five A-2’s, an FOH Mixer with a Tech, and a Playback Operator. When there was a live band, there was an additional Monitor Mixer, Monitor Tech, and two more A-2’s.

Currently, the show is working under COVID protocols, and things have changed. Bruce explains, “In the past twenty-eight seasons, the band has been live on stage, but the current COVID-19 situation has forced us to record offsite. All the dance numbers are recorded Thursdays and Fridays, with the final mix on Saturday. After the band is recorded and mixed, our Musical Director, Ray Chew, is on stage during rehearsal and the live show to make any necessary changes to the tracks. Jose Alcantar, the Pro Tools Mixer, has all the recorded stems to make that possible in real time. The completed songs are then uploaded to a server to allow all departments access. All the tracks are striped with timecode for sync with lighting cues and SFX. The system allows us complete flexibility and consistency.”

Bruce uses a Calrec Apollo console with fifty-two inputs each on the A and B side, going twelve layers deep, with enough pres and analog inputs to handle two hundred and fifty channels of audio.

Fisher booms utilizing Boom-Trac

During the prolonged hiatus, Bruce used the time to be with his grandkids and his family. Bruce explains, “Everything slowed down. I had my second grandson and I threw myself into helping my daughter out, which I also did with my first grandson, and got to watch him three days a week. That’s what I did, and that was beautiful.”

But Bruce is very happy to be back at work and doing what he loves after his five-month layoff. Living on the edge and delivering a fabulous mix—LIVE!

The crew for DWTS, standing L to R: Robyn Gerry-Rose, John Protzko, Doug Wingert, Rick Bramlet, Craig Rovello, David Vaughn, Victor Mercado, Brandon Gilbert.
Seated L to R: Bruce Arledge Jr., Steven Anderson.

The Sound Crew
Bruce Arledge Jr. – Live Production Mixer
John Protzko – FOH Mixer
David Vaughn – Playback
Doug Wingert – Audience Sweetener
Jose Alcantar – Pro Tools Mixer 
Steve Anderson – Lead A-2

A-2’s
Victor Mercado
Craig Rovello
Brandon Gilbert
Robyn Gerry-Rose

System Techs
Rick Bramlet
Dave Ingels

Pre-recording Music Mixer – Randy Faustino

Monitor Mixers
Butch McKarge
Pete Kudas

Music A-2
Damon Andres

The Equipment
Production Mixing Console
Calrec Apollo
Monitoring
JBL 6328 5.1 system

Multitrack Record
Pro Tools
Sound Devices 970

Playback System
Spot-on redundant System

Desk microphones
4 – Neumann 185

Wireless units –
Provided and coordinated by Soundtronics Wireless
45 – Lavs Sennheiser 5212 w/ Vt-500
3 – Hand mics Sennheiser SK5200
w/ DPA 4018v

Audience Reaction
Sennheiser 416’s
Neumann KM-184’s
Countryman Isomax Hypercardioid

FOH system –
Provided by ATK (AudioTek Corp.)
Console
Digico SD-5

House PA System Line Arrays
JBL-Vertec VT
W-4 4-way Splitter

Subwoofers – JBL VTX S28

Main PA – JBL V20

Fisher Booms on Superstore

by Richard Lightstone CAS AMPS

Steve Cain and the long reach of the Model 2 arm.

Aside from on-set COVID-19 safety protocols, one of the major health and safety concerns for Boom Operators in Local 695 is shoulder and back injuries caused by the ever-increasing length of takes while using boom poles.

In 1951, James L. Fisher designed a mechanical boom arm and base, known worldwide as the Fisher boom. Fisher booms were in use on most sets and locations for at least forty of the past sixty years. Changes in set design, the construction of four-walled sets, and production’s reluctance to “fly walls” have made the Fisher harder to employ on movie and television shows, although it remains prevalent on sitcoms and other multi-camera productions.

With the use of HD cameras, Boom Operators are forced to hold the boom pole for takes lasting twenty and even forty minutes at a time. This is obviously untenable and unsafe.

Steve Cain and his son Shannon, the Boom Operators on NBC’s Superstore, explained, “The first season we were living on eight- and six-step ladders for the entire day. It was a really hard show, as we shoot with three cameras: two wing cameras and a middle camera. The takes lasted a long time ’cause it was digital. They would reset and do several passes within each take without cutting.” Shannon continues, “You’re fully extended with a sixteen-foot pole pretty much the whole day.”

Sound Mixer Darin Knight went to production to explain that this was a health and safety issue, with the concern that someone could fall from a ladder and/or drop the boom, injuring themselves or others. Takes were lasting fifteen minutes apiece, with half a dozen shots for each scene. Darin successfully lobbied for one Model 3 Fisher boom with the Model 2 arm, which extends to sixteen feet. A second Fisher was added in season three, and they operate with offset arms, extending each boom’s reach to almost nineteen feet. Each boom is equipped with a microphone tilt hanger and Sennheiser 416’s.

The main set occupies two combined sound stages at Universal. Steve describes the scene: “This is a giant show, really, for sound, sometimes we have up to fourteen actors all on wireless mics, and two Fishers to move around.”

Shannon is busy wiring the cast and dealing with all the other equipment needs, and moves up to manage the second Fisher boom. Initially, the booms were hard wired to Darin, but this season they switched to a wireless configuration to avoid repeated returns to the sound cart, to stay within the COVID protocols.

Superstore incorporates a “Phase” system. Phase 1 is where camera will set up the shot and sound can move the booms in place. Phase 2 is for lighting, but often Steve and Shannon need to be there to move the Fisher base to accommodate set lighting. Phase 3 is for setting the background players, and Phase 4 is shooting.

  • Darin Knight’s sound cart and the two Fisher booms
  • Steve and Shannon Cain, both booms with offset arms, monitor system, and correct PPE
  • Steve Cain with an 816 in a blimp, better than a fishpole and a ladder with such a heavy microphone

Steve and Shannon were surprised by the crew’s acceptance of the Fishers. “A lot of the younger trainees and PA’s have never seen a Fisher boom,” Steve explains, “They don’t know what this is, more than half a dozen asked me if this is something I built myself. They had no concept of what this tool could do. I’d tell them, these were around before your parents were born.”

During the first season, seeing Steve and Shannon perched on ladders, the crew understood the need for the Fisher booms. The AD’s made the necessary compromises in placement of the Background Actors, and the Grip and Electric Departments worked to help them with their new tool. The Camera Operators were handheld in season one, then moved to dollies in the following season, making it better for Steve and Shannon on the Fishers.

The show’s DP, Jay Hunter, did some sound work early in his career, so he understood their issues. He was very supportive of Darin and the crew incorporating the Fisher booms. “He’s actually a fan of the Fisher as a piece of film equipment. He understands what a versatile tool it is, and how much more you can do with it than a fishpole,” explains Shannon.

“The hardest part I thought was getting the booms into the right position,” Steve continues, “realizing that this boom was a piece of gear that needed to be there, just like the dollies, just like the cameras. You had to claim your position and not feel awkward about telling people, you’ve gotta move that, as the boom has to be here.”

Unique to the show are the break room scenes with as many as thirteen cast members. Shannon and Steve are pleased at how they can cover those scenes with just the two booms. Steve said, “We have a couple of sets where we have to break it up into zones because of the size, and the way the dialog overlaps.”

The show is very unpredictable with the actors ad-libbing at will. Darin established a workflow of wiring the cast, but utilizing the Fishers in every scene. The booms can be raised high enough to reach over the shelving, so they can cover several aisles at a time. Shannon has often dollied the platform so Steve can cover many ‘walk and talks.’

They use two iPads, one to view the three cameras and the second for the script, using the Scriptation app. They have a talk-back system hooked up to foot pedals, allowing them to communicate with Darin. The Fishers have proved most effective on two-shots, as the actors are now at least six feet apart, although the camera angles cheat them to appear much closer. “Even just two people talking, with COVID placement,” explains Shannon, “then the two overs, our typical setup for two people. We started to split those up, with two booms, just to catch ad-libs.”

Steve has mentored his son Shannon and speaks proudly of him, “He started with us about three years ago. I think the neat part about Shannon’s training on this show is that he’s learned to put mics and coordinate frequencies on the Venues, all the things that a Utility person would do. But he’s also got to watch how we’ve done it throughout our careers with two booms, and who’s covering who, telling a mixer how to set all that up. So, he has a really broad oversight of today’s sound.”

  • A typical Superstore setup: booms on each side of the center camera, with the two wing cameras (not pictured) shooting cross-coverage
  • Two Fisher booms can work even small sets
  • The Model 2 arm comes in handy with so many actors spaced six feet apart.
  • A large split, no problem with two Fisher booms

Darin uses three Lectrosonics Venues, with fourteen wireless and three IFB channels for camera, the writers, and the off-set feeds, with Shannon managing all of the frequency coordination. Due to the COVID protocols, the show is now shooting six-day episodes, but has a shortened order of thirteen from the original eighteen scheduled. Many actors are now wiring themselves, requiring Shannon to show them how to place the mics, switch them on and off, and mount the lavalier while Steve moves the Fishers into position.

Steve and Shannon took the Local’s Fisher boom training course from Production Mixer Eric Pierce, and have happily put their new skills to work on the show. They appreciate the accuracy and the versatility of the Fisher booms, as well as the safety they afford during long takes.

Local 695’s “One-on-One Intensive Fisher Boom Training” program is the only one of its kind, offering hands-on training on all of the Fisher microphone booms, including the 16-foot Model 2 boom and Model 3 Base, and the Model 7 boom and Model 6E Base, which comes in lengths of twenty, twenty-three, twenty-six, and twenty-nine feet. We go through safety, transporting, prepping, setting up weights, stringing, and the use of accessories, and then we guide you while you get feedback from a live mic and work through an extensive set of exercises on the boom. The training is, of course, important for Boom Operators and Utility Sound Technicians, but also for Production Sound Mixers who need to know what the Fisher is capable of. Unfortunately, “One-on-One Intensive Fisher Boom Training” is not available at present due to COVID restrictions, but we hope to be able to bring it back to you soon.

No Time To Die

Bond 25: (Part 1)

by Simon Hayes AMPS CAS

Daniel Craig as James Bond. Photo: © 2019 DANJAQ, LLC AND MGM

The story starts in 1977 when I was seven years old. My father took me to the cinema to see The Spy Who Loved Me. Enthralled by the world of the secret service and the suave and debonair hero who loved cars and gadgets, I was sold. The deal was done when the Lotus Esprit turned into a submarine in the beautifully clear Caribbean, and then drove out onto the beach. I was hooked, from that moment.

I would watch the Bond movies at every opportunity, in the cinema and when they were re-run on television, and each time I watched, I became more interested in the character and the franchise. When I eventually got a job in the film industry, it was my absolute aim to work on a Bond film. This was cemented during my time as a teenage ‘in house runner’ (PA) at a commercials production company. I can clearly remember the respect Producers and Directors had for the crew members they were trying to book for a commercial, when those crews were not available because “they’re on the Bond.” The more time I spent on film sets, the more I would be exposed to stories being told during camera turnarounds, lighting setups, or at lunchtime by crew members waxing lyrical about “when we were on the Bond.”

During my childhood, I built the franchise up to be one of the pinnacles of filmmaking. When I arrived in the industry, I realised that working on a Bond film was seen as a badge of honour; a sign that a technician was at the top of their game. And, boy, did I want to be one of those technicians.

Fast-forward thirty years and I found myself booked for a Bond movie. Not just any Bond movie either; this was to be Daniel Craig’s last outing in the role, on the twenty-fifth Bond film. My crew and I had worked with Daniel on the film Layer Cake before he was cast as Bond, and we really enjoyed working with him. Daniel is a perfectionist, and knowing how hard he works and how much he values production sound made me more excited about the project. We have an easy rapport which extends to my team, especially Arthur Fenn, my Key 1st Assistant Sound, who gets on extremely well with Daniel. I knew based on our previous experience that working with Daniel was going to be a pleasure. As so many of you reading this will know, if the star of the show respects and collaborates with the Sound Department, then the rest of the cast generally will follow suit.

I was invited to the offices of Eon Productions in Mayfair, London, an imposing building in the heart of the city. The production team wanted me to meet Cary Fukunaga; it is always quite intriguing meeting a director for the first time. To get myself up to speed, I watched a bunch of Cary’s work to learn his shooting style and how he uses production sound. I was super-excited by how little ADR there seemed to be in his films. When I arrived, I was warmly welcomed by Producer Chris Brigham and invited to join Cary, as well as Producers Michael G. Wilson and Gregg Wilson. There was an ease to the conversation as soon as we started, and it became clear how interested everyone around the table was in sound, not only production sound, but theatre sound systems, home Hi-Fi, Dolby Atmos; it was literally like talking to other Sound Mixers. Cary asked me if I’d ever recorded on a Nagra. I told him that I was fortunate enough to have spent my first six years mixing on a Nagra, on hundreds of commercials, starting on a IV-S that I had converted to timecode when I’d saved up enough money. Cary looked excited and asked if I still had one; Gregg Wilson cut in, saying, “I’ve got a Nagra, I adore them.” Michael G. Wilson talked about the Swiss workmanship, and at this point I knew I was sitting at a very special table full of real film audio enthusiasts. I told Cary I still had my Nagra at home on display, and he said, “We have a flashback sequence on the film that I’d really like you to record on a Nagra to give it an old school feel.” I told him how interesting I found that, and that I’d also like to run my Deva 24 alongside the Nagra to give him a choice in post. I explained that perhaps when he listens to the Nagra through a modern digital theatre system, he may feel the analog sound is too old school. However, if he wished, he could use the Nagra as a reference and treat the Deva digital recordings with a plugin to give them the warmth of the Nagra analog recording, without going quite as far with the analog tape hiss. Cary said that is exactly how he likes to work: he wants choices in post. I agreed; that is exactly my preference too: give the Director, Supervising Sound Editor, Dialog Editor, and Re-recording Mixer options to choose from. I am fully aware that the way a scene reads in a script may change completely once in picture editorial, and being locked into one specific production sound workflow can be limiting and irritating. As Production Sound Mixers record a scene, we cannot know how loud the score is going to play or how the Director and Picture Editor may intercut the scene with others to match dialog perspectives. Cary and I were speaking exactly the same language, and a burgeoning relationship was developing.
Cary told me that some of the situations were going to be tough, as he wanted to use IMAX cameras for significant sequences during the film. He explained that he was aware they were noisy, but he was also pretty sure the dialog in those scenes was going to be minimal, as they were mainly action and stunt sequences. I spoke to him about signal-to-noise, and how I would try to achieve dialog recordings on the IMAX sequences that would hopefully not need ADR. Cary doesn’t like to use ADR for technical reasons, and if at all possible, he’d like to use the production dialog on the IMAX scenes, which would generally be loud sequences with the cast shouting. They would have a lot of FX and score laid underneath, which we both felt would help to hide the IMAX camera noise without having to go too far with noise reduction in post. We spoke at length about the Schoeps Super CMIT’s I like to use when recording scenes where there is a lot of background noise, and he was impressed when I explained I would be recording two tracks from each boom mic: the processed signal with 10dB of off-axis noise reduction and the unprocessed signal with the usual 4dB off-axis reduction of a standard CMIT microphone.

  • Director Cary Fukunaga, Linus Sandgren, DP, and Simon Hayes
  • Arthur Fenn, Key 1st AS, and Simon
  • Ben getting the DB5’s ready for a wildtrack

Cary and the producers said that they’d like me to run some tests with the IMAX cameras that could be listened to by Supervising Sound Editor Oliver Tarney, so he could assess the camera noise, treat it with some different de-noising plugins, and see what could be achieved. It was a great idea and it would be really helpful for all of us to know exactly what the limits were in terms of proximity of the camera to the dialog, booms versus lavaliers and how each source would react to the de-noising. Cary said when Oliver had worked on the dialog, we could reconvene and listen to the results in a viewing theatre.

When I left the Eon building, I felt like I’d had a really collaborative meeting with filmmakers who deeply care about sound, and wanted to preserve the all-important original performances. I knew we were at a great starting point and rather than seeing the IMAX camera noise issue as a negative, I started planning how I could minimise the issue and make it work for Cary and our cast.

We set up a test where we ran dialog on exteriors and interiors, with different boom positions and performance levels from whispers to shouts. For the exteriors, we used Schoeps Super CMIT’s and DPA 4061 lavaliers. On the interiors, we tested my preferred interior boom mic, the Schoeps CMC6 with the MK41 hypercardioid capsule.

The IMAX camera was loud, but I know that de-noising technology has come on in leaps and bounds in recent years, and what I needed to deliver to Oliver and his sound post team was a good signal (dialog) to noise (camera) ratio. The greater the ratio, the better their chances of successfully cleaning and preserving the original performances. I also knew that a Bond film is generally going to have a driving score and loud sound effects that would help the process of hiding the unwanted camera noise, so the de-noising process would not need to be too aggressive.
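As a rough illustration of that reasoning (with invented numbers, not measurements from the production), the free-field inverse-square rule says level falls about 6dB per doubling of distance, and the Super CMIT’s processing buys roughly another 10dB against an off-axis camera. A back-of-envelope sketch in Python:

```python
import math

def spl_at_distance(spl_at_1m_db, distance_m):
    """Free-field estimate: level drops 20*log10(d) dB relative to the 1 m figure."""
    return spl_at_1m_db - 20 * math.log10(distance_m)

def dialog_to_camera_ratio_db(dialog_spl_1m, dialog_dist_m,
                              camera_spl_1m, camera_dist_m,
                              off_axis_rejection_db=10):
    """Estimated dialog-to-camera-noise ratio at the boom mic, in dB."""
    dialog = spl_at_distance(dialog_spl_1m, dialog_dist_m)
    camera = spl_at_distance(camera_spl_1m, camera_dist_m) - off_axis_rejection_db
    return dialog - camera

# Invented figures: shouted dialog ~88 dB SPL at 1 m with the boom at 0.5 m;
# camera ~70 dB SPL at 1 m, sitting 2 m away and off-axis to the mic.
print(round(dialog_to_camera_ratio_db(88, 0.5, 70, 2.0), 1))  # ~40 dB of working ratio
```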

After Oliver received the tests, I spoke to him at length, and he explained that the camera noise was filterable, but only under certain parameters. The dialog needed to be a close perspective; whether that came from the boom in close-up or from the lavalier didn’t really matter. This ruled out the possibility of a boom being used in a mid-shot or wide position. In those instances, Oliver and his Dialog Editor, Becki Ponting, would use the lavalier, as it had a better signal-to-noise ratio for the cleanup. We also discovered that the Schoeps Super CMIT should be used on interiors as well as exteriors whenever we were shooting IMAX, as the wider pick-up pattern of the Schoeps CMC6/MK41 was unsuitable for reducing the camera noise enough, even in a close-up position. Whenever we were shooting IMAX, we would be using the Super CMIT’s and DPA 4061 or DPA 6061 lavaliers to give sound post the best chance of cleaning the recordings. The Super CMIT’s supplied both processed and unprocessed channels, which gave Oliver and his team choices rather than a ‘one size fits all’ approach; they would decide which channel to use in every situation and scene.

At the viewing theatre at Pinewood Studios, Linus Sandgren, Cary’s wonderful DP, joined us to listen to the tests so he could get a handle on how the noise of the IMAX would impact the performances. Cary’s 1st Assistant Director, Jon Mallard, was also present; he would become an extremely strong ally of the Sound Department. Linus and Jon were absolute gentlemen, really enthusiastic, collaborative filmmakers who treated everything we did throughout the movie as teamwork.

After listening to Oliver’s cleaned-up tracks, it was evident to Cary that the IMAX could work for action sequences that would have loud dialog and a driving score. For softer level drama, we would shoot 35mm film and for larger set pieces, stunt work, and chase/fight sequences, we would shoot IMAX. We all left the theatre confident we had found a workable solution without too much compromise and that we could go ahead and use the IMAX cameras in certain conditions, without having to commit the scenes to ADR.

The next item on my agenda was to start to plan a workflow for lavaliers with Arthur, my Key 1st AS, who is a first-class boom operator and also manages the lavaliers and places them on the cast. A number of years ago, Arthur took on this role when we started shooting multi-cameras and we realised that the boom wasn’t going to be able to be prioritised in every scene. He has become an absolutely excellent radio mic technician who has an ease and ability to interact with the cast members with a very comfortable and confident charm. If you saw Arthur on a set without his boom pole, only his headphones would give away his role in the Sound Department. He carries a bag on his shoulder, with needle, thread, safety pins, and double-sided tape, giving the impression that he is a member of the costume department, and that is exactly how he behaves around the actors.

When we previously worked with Daniel, we were shooting single camera and were at the stage in our filmmaking careers where it was possible to use lavaliers sparingly, as two booms could pretty much cover anything a single camera could throw at us. Arthur and I remembered that Daniel is very particular about how the transmitters can create problems in the way his fitted suits hang on the body if the placement of the pack isn’t specifically planned in advance. We also knew that Daniel likes his tie knots to be uncompromised so he can have them in the fashion that the particular suit he is wearing demands. This was very important because of the amount of time Bond spends wearing a suit. Based on this, we knew we needed to reduce the size of the pack and lavaliers Daniel would wear. We decided that Bond would always be rigged with the newly available and absolutely tiny DPA 6061. Having used the 6061 on a couple of movies, I was happy that its small size would not compromise its ability to deliver extremely rich and clear dialog. As far as I am concerned, it is just as good as my go-to lavalier, the DPA 4061. I could use 4061’s on other cast members who didn’t have such difficult costumes, knowing that the two different mics could be mixed and intercut seamlessly. The really great factor with the 6061 was that we could fit one in Bond’s tie knot without it being seen and without compromising the type of tie knot appropriate for the style of suit Bond was wearing. Even a really modern, slim tie knot could have a 6061 hidden inside it invisibly.

We then started to discuss his radio pack. I generally use Lectrosonics for several reasons: first, the build quality is just phenomenal—if a pack gets dropped, it survives, and I have never had a pack fail from a fall. Second, all of my crew have the Lectrosonics app, LectroRM, on their cellphones, and are adept at quickly changing gain settings when I ask them over our sound crew comms. We start with a base level on a rehearsal, or sometimes the first take, after which I start fine-tuning the gain settings, increasing by 3 or 4dB for whisperers. It is rare for us to reduce gain, as my base-level setting is one at which it is impossible for the human voice to cause a square wave, regardless of how loud an actor shouts. This is helped by the limiter in the Lectrosonics transmitters, but also by the fact that I’m quite conservative with my base-level setting. As I manipulate the gains, I try to achieve a setting that won’t be so high that it engages the limiter on loud parts of the dialog. I try to record without any limiters through the whole recording chain, preferring to deliver raw, uncompressed dialog to Sound Post, so that Re-recording Mixer Paul Massey can choose to use compression later, based on how the dialog will play when mixed with the score and sound effects. I try to keep enough headroom to minimise the limiter kicking in.
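The headroom arithmetic behind that conservative base level is simple enough to show in a few lines. This is only an illustration with assumed numbers (the -18dBFS peak and the +4dB bump are made up), not Simon’s actual settings:

```python
def headroom_db(peak_dbfs, ceiling_dbfs=0.0):
    """How far the loudest observed peak sits below the clip/limiter ceiling."""
    return ceiling_dbfs - peak_dbfs

def peak_after_gain_change(peak_dbfs, gain_change_db):
    """Predicted peak level after a transmitter gain adjustment."""
    return peak_dbfs + gain_change_db

base_peak = -18.0                                         # assumed: a shout peaks at -18 dBFS at base gain
print(headroom_db(base_peak))                             # 18.0 dB in hand before the limiter is touched
print(headroom_db(peak_after_gain_change(base_peak, 4)))  # a +4 dB bump for a whisperer still leaves 14.0 dB
```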

Our go-to radio packs are the Lectrosonics SMB’s, which are really small. However, to really show Daniel we were pulling out all the stops, Arthur and I decided to dedicate the tiny, super-micro Lectrosonics SSM to Bond full time. We generally only use the SSM for specific costumes (bathing suits, bikinis, ball dresses, etc.), as there is a slight compromise in output power and battery life. However, I knew that because Arthur has a great relationship with Daniel, it would be fine if we needed to change a battery at a difficult moment.

Obviously, that was never our intention, but the SMB will do a whole morning until lunch, whereas the SSM runs out about thirty minutes earlier. With the ability to ‘sleep’ the radio pack using the cellphone app, we knew that Arthur would be powering down Daniel’s pack wherever possible. This would not only give Daniel confidence that he had privacy when not on set, but also increase the period between battery changes. Arthur would talk to Daniel before rigging the costume and find out whether he wanted the pack on his ankle, on his calf, in the small of his back, or hidden in his jacket. Daniel could base his decision on the action he was required to do, rather than on where a ‘bulge’ would be less visible, because the SSM simply didn’t cause bulges in the costume. There were times when we asked Daniel to wear two radio mics, especially when he was in military webbing, because of severe head turns in action sequences and the clothing rustle the webbing can create. This generally happens on one side of the body but not the other, meaning that if radio one had a rustle on it, radio two on the other side of Bond’s chest was clean. Each mic was assigned its own track on the Zaxcom Deva 24. We didn’t overuse this strategy. Daniel was being very generous in letting us use two mics, and we didn’t want him to think it was a ‘belt and braces’ situation, so we only asked when we felt we really needed it, explaining why, and Daniel kindly accommodated the request.

The rest of the cast were assigned Lectrosonics SMB transmitters and DPA 4061 mics unless there was a specific costume that required an SSM and a 6061; for instance, Ana de Armas’s stunning ball dress.

Robin Johnson, my other 1st Assistant Sound, is responsible for frequency mapping all the radio mics and comms, so he assigned a general plan that would allow me to run twenty radio mics at all times, only having to adjust and fine-tune specific frequencies if we had issues on location. We were actually incredibly fortunate that during the making of No Time to Die in Norway, Italy, Jamaica, and the UK, we didn’t come up against any negative frequency situations, and apart from a few minor tweaks, our frequency plot remained the same throughout the film.

Vehicles are a huge part of what makes a Bond movie. Oliver Tarney asked me how I was planning to mic up the vehicles and I was happy to use whatever workflow he preferred. Whatever he asked me to do on main unit, I would also ask our Second Unit Sound Mixer, Tom Barrow, to mirror. Oliver asked for a stereo pair of lavaliers on the exhaust region of the cars that were being featured in each scene. He also asked that we use ‘spot mics’ (lavaliers) on any other parts of the vehicle that we felt gave interesting sounds. We would generally try to place a lav in the engine bay and then think about other unique sound effects the particular vehicle would give us. In Jamaica, Bond was driving an old-school Land Rover, and it was the gear shift, with its old, grinding sound, that I thought would mix well with the stereo exhaust tracks and the engine bay track to wrap the theatre audience acoustically in exactly what it sounds like to be driving one. This was how we treated each vehicle—find the stereo sweet spot on the exhaust and then add spot mics to pick up the other effects that would build a unique sound for each vehicle.

I knew on No Time to Die we were going to come up against some huge SPL’s for extended periods, as a lot of the vehicles were highly tuned and would be driving at high speeds with tire squeal, etc. There was also a bunch of motorcycles to consider. This motivated me to buy some specific lavs for the job. I am extremely happy with the famous DPA frequency response—pretty much flat from 20Hz to 20kHz—and I wanted to stick with the brand I knew and trusted, but I also wanted to know I had the headroom to cope with anything, so I purchased some DPA 4062 lavaliers. These are acoustically the same as our favourite 4061, but give another whole 10dB of headroom, with a max SPL of a huge 154dB. As soon as we tested them, I knew they were a great addition to the kit. They could be mounted very close to sound sources to keep the effects we were recording clean of other unwanted noise, and were virtually impossible to square wave. These 4062’s became our ‘vehicle kit.’ Tom Barrow and our second unit sound team did the same.
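
To put that extra 10dB in perspective, the standard sound pressure arithmetic is sketched below; the roughly 144dB figure for the 4061 is simply the 154dB spec quoted above minus the 10dB difference.

def pressure_ratio(delta_db):
    # Convert a level difference in dB SPL to a ratio of sound pressures
    # (the 20 log10 rule for pressure, as opposed to 10 log10 for power).
    return 10 ** (delta_db / 20.0)

# 154 dB (4062) versus roughly 144 dB (4061): about 3.16 times the sound
# pressure before the capsule overloads.
print(f"{pressure_ratio(154.0 - 144.0):.2f}x the sound pressure")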

We were ready to start shooting. We had been through each scene formulating a creative plan for our approach to the sound recording, and then putting together a technical plan to help us achieve our creative aims. The first week would be a pre-shoot in Norway, shooting the flashback scenes. This meant I needed to get my old Nagra IV-S TC from its display in my screening room at home and check it was still working. It was, and just putting batteries in it and loading a roll of quarter-inch tape had me reminiscing about the start of my career. The texture of the alloy case and the feel of the record lever in my hand were so wonderful to experience again. The Nagra had not been used since 1995, so I decided I had better get it checked over. Back in the day, the man who had regularly serviced the machine and converted it to timecode was the famous David Lane, who has since passed away (RIP). I knew there was another famous London-based Nagra technician who collects, repairs, and deals in old Nagras, a former Sound Mixer called Mike Harris. One of the issues was finding quarter-inch tape stock, and thankfully Mike had a source in Paris and ordered some. Mike did a fantastic job servicing the machine and resetting the tape bias for the new brand of tape we would be using, as the old BASF 468 tape is no longer available. Mike said the machine was in perfect condition and saw no reason why it shouldn’t display the same bulletproof reliability in Norway that Nagras have always been famous for.

In Norway, a significant percentage of the scenes would be filmed on IMAX cameras, and for that reason, I wanted to record on our usual Zaxcom Deva 24 alongside the Nagra. We could supply our usual ISO tracks to Oliver and his Post team, and I completely understood Cary’s wish to have an old-school analog feel to the recordings. I also wanted to use two Schoeps Super CMIT booms to give Sound Post the best opportunity to remove the camera noise. Using two Super CMIT’s takes up four tracks, and I also wanted to radio mic every actor, so I knew we would potentially be running up to ten tracks on this part of the film. I recorded my mix onto the Nagra to serve as a reference for Oliver to create a ‘Nagra sound’ from my Deva 24 ISO tracks; this would give him the ability to remix and de-noise individual tracks, rather than being forced into doing a more general de-noise on the Nagra mix track. If I only supplied a mono mix on quarter-inch, I worried that it would potentially lead to ADR. As per my usual workflow, my aim was to supply Sound Post with the most choices possible, giving them the best chance of ending up with usable production sound.

  • Dylan Jones, Video Playback, Simon on main cart, and Robin on Zuca cart,
  • Matera
  • Clockwise from top right: Ben, Arthur, Simon, Frankie
  • Zuca cart contents
  • Simon mixing
    boat to boat, Italy

It was the end of winter when we arrived in Norway. We were shooting in a forest, waist deep in snow, and on a frozen lake, both of which had their own challenges, not least the extremely low temperatures we were working in. One of my methods of working in these conditions is to keep the sound cart and all the equipment on it powered up at all times, to avoid heat cycles and frozen switches. The best way to avoid equipment failure, and the way to keep it ready to roll at any time, is to not power cycle it at all. We would leave the equipment in a sound truck all night on location, with the driver instructed to keep the heating in the rear of the truck turned on. When we arrived at work each morning, we would power up the warm equipment, wheel it out into the freezing cold temperatures, and leave it switched on all day. This is particularly important for the Nagra, as it keeps the brakes from freezing.

It was mainly sound effects we were capturing when we started shooting. It became clear that any ‘camera perspective’ sound would be unusable due to the noise of the IMAX camera, despite the Super CMIT’s, so we concentrated on recording close-up perspectives. The sequence involved one of our lead cast members trudging through the forest in deep snow and arriving at the lake (I don’t want to give away story and plot lines here as you’re potentially reading this article before the movie’s release). I decided that the suspense could be built using the different textures of footsteps through the character’s journey. We placed three lavs on the actor: one on his left calf, one on his right calf, and one on his chest for his breathing, which was heavy from the effort of walking through the deep snow. When I imagined what I would expect a scene in a Bond movie to sound like, it was these effects with the score that I felt would give Cary the ability to build layers of sound and create suspense in the final mix. The reason we mic’d up the actor’s calves rather than his boots is that the snow was so deep that, had we placed them on the boots, the lavs would have been immersed in snow on every footstep. The actor was wearing ‘crampons’ (metal grips used for walking in snow) that really gave us an ominous sound. Mixed with his deep breathing, I was confident we were getting the exact components Cary would use. To make absolutely sure we had everything covered, and were giving the all-important choices to post, we went off into the forest while the rest of the crew were shooting some high-speed MOS shots. We recorded one of my assistants wearing the same boots and crampons, walking and running at different speeds through different depths of snow, and on ice, using the Super CMIT’s; effectively recording a library of on-set Foley for the scene, should Sound Post want to use these additional layers. Snow and ice underfoot are very distinctive sounds and may be difficult to reproduce on a Foley stage, so I felt it was important to make sure we were completely covered.

One night on the way back from our shooting location to the tech trucks, I was wheeling my cart across the frozen lake. I usually move the cart on my own, as it is a relatively lightweight Eurocart. This leaves my crew to manage the rest of the equipment: follow cart, etc. It was pretty dark and unfortunately, there was a dip in the ice that took me by surprise and one of the cart wheels dropped about twelve inches into a watery hole. This resulted in my losing control of the cart and it going down on its side. All of the equipment was strapped tightly to the cart except the Nagra, which I had not strapped up tightly enough since the last reload. It was on top of the Deva 24, and it slipped into the watery hole and was completely submerged for about two seconds until I grabbed it. All of our recordings had also been mirrored on the Deva 24, but I was super concerned I was going to have to tell Cary the next day that the Nagra wasn’t working. The first thing I did was quickly remove the batteries from it to try to avoid shorting it out. We then took the equipment back to the warm sound truck and opened the Nagra up. Luckily, there wasn’t much sign of water ingress, and we left the machine open under an electric fan heater all night on the truck. We told the truck driver how important it was not to unhook from the generator overnight. I feared the worst the next morning. We put the batteries in, I held my breath, and I switched the machine on—it powered up! I played back the previous day’s recordings and they sounded great. We weren’t sending the quarter-inch tapes with dailies, to avoid causing issues in Editorial; we would give the whole batch to Sound Post at the end of the job. We took the Nagra to set and recorded on it, and it worked faultlessly for the rest of the Norway shoot. It reminded me why we used the Nagra, and why it stayed on top as the industry’s machine of choice for so many decades.

Part 2 of “Bond 25: No Time to Die” continues in the spring edition as Simon Hayes and crew travel to Jamaica.

News of the World

by John Pritchett CAS

Helena Zengel as Johanna Leonberger and Tom Hanks as Captain Jefferson Kyle Kidd.
Photo: Bruce W. Talamon/ Universal Pictures

Rising up over a hill, you turn onto the road leading to the Galisteo Ranch outside of Santa Fe, NM, looking across the cactus-filled desert to observe an odd sight. In the distance is a strange caravan being led by a small buckboard wagon pulled by a single horse and carrying a middle-aged man and a ten-year-old girl. They are having a mostly one-sided conversation. Behind the wagon is an even stranger sight, a parade of vehicles starting with a truck-like thing with a large crane and a bunch of bundled-up characters hanging on all sides. That is followed by two very large vans festooned with antennas and such, and then by an extended line of assorted other vehicles comprising several smaller vans, smaller trucks, a trailer carrying supplies and food, and finally, a trailer loaded with two portable toilets. The entire train is moving slowly across the largely uncharted wasteland, dodging cholla and saguaro and gopher holes to get to an end at some point, only to turn around and go back where it started.

The locations in New Mexico offer tons of challenges, unique at least in this country, for each department and cast alike. Many of the sets sat on dirt fields and roads that turned into mud morasses at the least rain, which we got plenty of. The Grips and Transpo were constantly having to assist in getting us out and back to work. And then toward the end came the snow! Years ago, while working on Wyatt Earp in many of the exact same locations, we had a surprise snowstorm as we were setting up to start a multi-day daylight scene. The decision was made to shoot anyway. By the next morning, the snow was all gone. Effects had to find and bring in snow, foam, and ice-making machines so we could continue to shoot. This is a pretty common experience in New Mexico. But the place is just too awesome to resist.

What I’m describing, of course, is the filming of a scene for a movie, in this case, the movie News of the World. This will be part of Paul Greengrass’ (Captain Phillips, The Bourne Ultimatum) latest effort, and his first Western. It stars Tom Hanks as Captain Jefferson Kyle Kidd, and Helena Zengel as Johanna.

(l-r) Steadicam Operator James Goldman, Director Paul Greengrass, and 1st AD Eric Heffron

The movie, set in 1870 after the Civil War, tells the story of Kidd, who was widowed before he could make it back to his home in San Antonio. He then leaves his home to travel to small towns and villages throughout the South and West, bringing news and stories from afar to the people; regaling them with tales of wars, triumphs, joy, and sadness that they would otherwise never know.

Along the way, he is offered money to bring a ten-year-old girl, stolen as a toddler by Kiowa Indians from her murdered family and raised as one of their own, back to her extended family four hundred miles away in San Antonio. She speaks no English, and at every opportunity tries to escape Kidd to get back to her Kiowa “family.” Adventure ensues.

This is also the story of the many challenges that arose in trying, in my case, to record the audio on this enterprise. It turned out to be harder than expected. First of all, there’s the location. This was my third outing in the Santa Fe area. I know many of you who have shot here remember the wind, the dust, the sand, the wind, the rain, the snow, the wind and the ever-present Wind. There is a dust storm sequence where Kidd nearly loses Johanna that is a true brute! But what I hadn’t endured before was the challenge of recording dialog in a real practical buckboard driving over open desert, hard-packed regolith, and unseen pits. It’s a very, very noisy vehicle! The amazing grip crew did yeoman’s work to try and de-rattle it, and often had good success. But some noises were insurmountable. I thought they would have to loop all the scenes in the wagon.

Turns out I was wrong. Thanks to the phenomenal work of Oliver Tarney and crew, the dialog was made usable. It’s here that I need to mention the terrific Second Unit and splinter crew that did so much to aid the soundtrack. David Brownlow came in to do all the insane chase scenes and the physical stuff with Boom Zach Sneesby and Utility Jason Pinney, while the magnificent David Sickles (I owe him money, I think) came in for a few days to cover for me when the company went high into the rock where some old guys (me) dare not venture.

  • Zengel and Hanks
  • Photo: Bruce W. Talamon/Universal Pictures
  • Roberts pondering
  • Boom Operator Dave Roberts; Dave Roberts takes a test drive

This story would not be complete without mentioning the remarkable DIT Ryan Nguyan, who had to keep up with the constantly moving targets, and all the cameras, to give us great images. Finally, the very congenial Video Assist guy, Adam Barth. Adam’s job was especially difficult as we were constantly on the move. He rigged his SUV with monitors on the outside so his “village” could go anywhere it was needed, but he still had to put up other villages as well, for makeup, hair, and costumes. He was a real trooper. These fellows made it all look easy and helped out immeasurably.

As far as my setup is concerned, I’m a tad old school (or just old) in that I still use, and have for many years, the Zaxcom Cameo mixer and the Zaxcom Deva 8-track recorder. All of my wirelesses are Lectrosonics SMA’s and HM plug-ons. Mercifully, the Santa Fe area is a largely conflict-free zone for RF issues. Additionally, the costumes, being soft period garb, made wiring actors mostly problem-free. For all the moving shots, I used the Zax Mix 12 sitting on my lap as we traveled in the caravan. For the boom, I use the amazing Schoeps CMIT and the Cinela Piano. It turns out that the boom was extremely important, as the winds were often hard to deal with on the wires.

My intrepid Boom Operator, Dave Roberts, would do the insane task of walking (jogging really) alongside the buckboard, or the horse Hanks and Zengel rode through a manmade raging “river,” managing not to get hung up on the ubiquitous cactus or fall into a gopher hole, a scary sight to see. Our amazing Third, Rob Hidalgo, kept up with all the wiring and kept the costumers happy. The large transport van was a godsend, saving us from the cold and wind, and gave shelter to me, Director Paul Greengrass, the DP (the brilliant Dariusz Wolski), Video Assist, and DIT.

  • John mixing from the “Beast”
  • the buckboard from John’s perspective

Those of you who have been fortunate enough to work with Hanks know what a joy he is, never having any kind of issue with anything any department might ask of him. Add to that Paul Greengrass’ amazing embrace of sound. Many times during most days, Paul would do something only Oliver Stone had done with me. He would come over to me, or ask me to come to his tent, and request something specific for the sound. He had a very Robert Altman approach to crowd scenes in that he wanted to “let her rip” with everyone vocalizing fully, which, of course, caused Hanks to play to that. It’s always a risky move, but Paul, for all the right reasons, wanted it. There is an energy there that is hard to get doing it the pantomime way.

About our Producer, Gary Goetzman, I cannot say enough good things. He’s one of the most supportive guys in the business. His encouraging words were always there when our confidence might ebb (remember the winds). I’ve had the great privilege of working for him over the years on several Hanks starrers, including That Thing You Do!, Larry Crowne, and Saving Mr. Banks. Many thanks to Gary’s Co-producer, Greg Goodman, who was always there taking care of everyone’s needs.

Wonder Woman 1984

by Peter J. Devlin CAS

On Father’s Day, June 17, 2018, my sound cart was set up on Pennsylvania Avenue in D.C., but I had a feeling all was not as it seemed. Attorney General Jeff Sessions had just visited the set with his entourage, our Director Patty Jenkins had just rehearsed the camera moves with our DP, Matthew Jensen, and our huge cast of extras were dressed from another era. Ben Greaves, my Boom Operator, was standing on top of a fire truck, gas-guzzling cars were idling in neutral, and everybody was waiting for our First AD Toby Hefferman to call action. I was on the set of Wonder Woman 1984, at a time when social distancing was not part of the vocabulary, and the only time masks needed to be worn was when a dust storm hit our production in the aptly named Fuerteventura, in the Canary Islands, later in the year.

  • Peter rolling his cart onto the National Mall in D.C.
  • Chris Pine as Steve Trevor and Gal Gadot as Diana Prince.
    Photo by Clay Enos/DC Comics/2020 Warner Bros. Entertainment

Washington, D.C., was our first month of production on WW84, and we showed off the wonders of the capital circa 1984. However, one of the more challenging locations was the rooftop restaurant opposite the White House. Gal Gadot (Diana) and Kristen Wiig (Barbara) settled into their characters, with the Washington Monument behind them, but it was the sound of jackhammers that made a greater impression on me. With both actors wired for sound, Ben’s Sanken CS-3 sitting on the edge of frame for Diana, and local Utility Nate Sessions on 2nd Boom over Barbara, it was touch and go as far as getting the dialog above that noise floor. Of course, I had to make a request to the Location Department: “Can someone go over to the White House and see if they would work with us on ‘Cuts and Rolls’?” I have to hand it to our great location crew; they tried, but the answer from the WH was “No.” With a Sanken CS-5 pointed toward the White House, capturing stereo ambience, I believed we had managed to capture the performance and hopefully a scene that would not need to be recreated in ADR later.

Boom Operator Ben Greaves and I have known each other for many years. He hails from the UK and is another film craftsman who has spent time in the world of big screen superheroes. We both believe the key to success in radio mic’ing is preparation. It is so important to see the fabrics and costumes early on. Prior to WW84, we had both worked with our Costume Designer, Lindy Hemming, and her Set Supervisor, Dan Grace. Because of this relationship, our attention to detail, and their willingness to understand our concerns, we were able to change the fabric in one piece of wardrobe, as well as resole shoes for actor Chris Pine. With our reliance on radio mics and the importance of clean iso tracks for editorial for all speaking characters, the relationship between Wardrobe and Sound must be truly collaborative.

In that busy first month, we managed to bring to life the 1980s world that Diana found herself in, featuring Watergate, Georgetown, and a nighttime walk and talk with the Lincoln Memorial as a backdrop. Our traveling circus made Leavesden Studios our next place to pitch our tents. Filming in the UK also meant that production licensed all my radio mics through OFCOM for the time I was there. It was here that I started with a new team of Adam Ridge, 1st Assistant, Milos Momcilovic, 2nd Assistant, and our Trainee, Pete Blaxill. I had several days of prep in London and time to get to know the local team. Milos had worked with me on Transformers 5 and so impressed me with a great attitude in challenging situations. Adam had come recommended to me by fellow Belfast Mixer, Mervyn Moore.

We got to know each other at the first order of business, a tech scout of a set that would be at the Royal College of Physicians in central London. I had been warned that it would be a sound problem. It was a critical scene in the film, where characters Diana Prince and Steve Trevor are reunited after a period of almost seventy years. It was a location with lots of background, high heels, and dialog underneath the center point of a semi-spherical ceiling; totally non-conducive to recording quiet dialog. My suggestion was to have acoustical engineers come in and put in some temporary baffling that could be later removed. I was told that nothing could be erected in the space, so our quick-fix solution was to have balloons filled with helium and floated to the ceiling to diminish and break up the reflections. It certainly helped and we managed to get out of that location unscathed. During those three nights, there were some questionable music choices from 1984 that were used as playback to motivate our crew and cast!!!

  • Photo by Clay Enos/DC Comics/2020 Warner Bros. Entertainment
  • Director Patty Jenkins and Peter Devlin
  • From left: Adam Ridge, Peter Devlin, Pete Blaxill, and Milos Momcilovic
  • Ray Milazzo, A Camera 1st AC, and Boom Operator Ben Greaves

As we moved into September the nights grew cooler, and we took advantage of many London locations that would double for Washington, D.C. One particular location, called “Black Gold” in the film, was the home of one of our protagonists, Max Lord, played by Pedro Pascal. It was here that I was introduced to “Silent Wind.” Special FX’s Dave and Mark Holt, brothers, designed a system that would bring a quick blast of air to create movement in our actors’ hair and clothes. It keeps the main body of the wind machine far enough away and carries the air through tubes to the set, thereby minimizing the intrusion of noise on the soundtrack.

This was a welcome relief from an e-fan and a rheostat just off camera. I can’t thank Dave and Mark enough, as they were always mindful of how practical on-set Special FX can impact the Sound Department. They were always accommodating in adjusting for many scenes in the film. On one occasion they did ask, “Isn’t there a system on your cart that takes out that background noise?” Well, that is a discussion for another time, and another place!!!

With carnets done and lithium batteries specially packed, we were on the road again in mid-September, leaving 1984 behind. We were off to Fuerteventura, aka “Themyscira,” for flashback sequences of Diana as a child with the amazingly talented Lilly Aspell. It was here that we encountered some serious dust storms, the residue of which is still making an appearance on my equipment. Fuerteventura is aptly named “strong wind,” yet for many of the dialog sequences we got lucky with a calmness that was uncharacteristic for that time of year.

Milos, who dealt with much of the wiring, was relieved that his rigs designed for 30 mph gusts were not needed. Adam was kept busy with stereo FX recording when we got into crowd sequences, as well as dialog scenes with our Amazonian warriors. Trainee Pete Blaxill did a great job of cleaning and maintaining the equipment. For much of our time there, I was able to set up camp alongside Video Assist Dylan Jones in a Sprinter van that offered some relief on windy days and was also used for our driving sequences. Dylan was great company, kept a cool head at all times, and had a great team of assistants with him.

“Silent Wind”

Once we had finished our work in the Canaries, it was back to Leavesden to concentrate on stage work that would take us through December. Although I didn’t get to see Simon Hayes at the same studio, I know he was very busy in prep for Cats. Fortunately, I was able to sit down with Simon and Chris Munro for the CAS podcast In Conversation. My thanks to them for making the time on a Saturday to go into De Lane Lea in Soho. If you haven’t heard the podcast, check it out along with others on the CAS website.

As I sat in these massive sets so beautifully designed at WB’s Leavesden, it brought me back to being a kid in Belfast, at the Avenue Cinema, watching Christopher Reeve as “Superman” fly for the first time. With each film that I saw (Jaws, Earthquake, The Omen, and so many others), I became more determined to find a way to work in the film industry. I could never have imagined what would lie ahead.

We finally wrapped production of Wonder Woman 1984 on December 19, 2018. It was an epic journey and a wonderful opportunity to work with Patty Jenkins again. Our first outing together was on the film Monster in 2003. Patty is still as receptive to comments and suggestions regarding the soundtrack as she was then. She understands the practicalities of a set, having been a camera assistant. With a camera team that had Matt Jensen as DP, A Camera/Steadicam Operator Simon Jayes, and B Camera Operator Simon Finney, we couldn’t have asked for a greater synergy.

Post-production was handled by Warner Bros. De Lane Lea. Richard King was the Supervising Sound Editor alongside Jimmy Boyle. Re-recording Mixers Gary Rizzo and Gilbert Lake were on an AMS Neve DFC in Theater A. Iain Eyre would be the Dialog Editor, along with many others in the talented editorial crew. On completion of photography, I spoke with them, giving an outline of areas that could be problematic with noise and a rundown of my methodology and the equipment used.

In November of 2019, I set off to London again to sit in with the post team on the final mix, and to watch and listen as those raw tracks became part of something that connects performance to all the other elements. One of the greatest thrills for me was Patty Jenkins inviting me to watch Hans Zimmer at work as they recorded the score for the film. The orchestra and choir were absolutely amazing. It is that circle of talent that makes the connection, becoming one with picture to make a finished soundtrack.

The back of Peter’s sound cart

Equipment List
I decided I wanted two mixers on the cart. I love the pre-amps of the Sonosax but I wanted the ability to integrate the Zaxcom Mix 16 if I went to a bigger track count.

With the Deva 24 having the settings memories to switch between different setups, it was fairly easy to go back and forth or have both mixers working in tandem. The one thing that I find indispensable in using the Zaxcom TRX743 transmitters is the ability to remote gain them. When paired with the technology of NeverClip, it is difficult to be caught by surprise when an actor goes from a whisper to a scream in an unrehearsed scene. ZaxNet offers so much control from the recorder. The ZMT’s onboard record capabilities were essential to cover the expansive areas on Pennsylvania Avenue with Wonder Woman. Driving sequences in Fuerteventura were handled with the knowledge that if the picture car went beyond the range of our follow vehicle, all the performances would be captured by the onboard SD cards in the Zaxcom transmitters for later transfer or remix.

Equipment Package
Exterior Mics – Sanken CS-3 in a Cinela Blimp
Sanken CS-1 in a Cosi Blimp
Stereo FX on a Sanken CS-5 or spaced Sanken CS-3’s
Interior Mics – Sennheiser MKH50’s in Cinela mounts, Sennheiser MKH 8050’s in Cinela mounts
Interior Stereo FX recording, crossed pair of Sennheiser MKH50’s
Lavaliers – Sanken Cos 11’s
Button or exposed mic – Countryman B6
Wireless Transmitters
Zaxcom
TRX743’s
ZMT’s
Lectrosonics
SMQV
SMVL
SSMWB
SSM

Cart Front
Mixer 8 Channel Sonosax SX-ST (Digital Busses)
Zaxcom Mix 16
Lectrosonics Venue 1 Blocks 19, 20, and 25
Lectrosonics Venue 2 Blocks A, B, and C
Zaxcom RX 12 Receivers, Wisycom Powered Antenna Distribution
Wisycom HTP40 Transmitter
Meon Life + Meon Plus
Marshall Link Monitors

Cart Back
Jack Field
Lectrosonics Dual Receiver (Boom Ops Talk Back)
Wisycom MTP40S IFB Transmitter to Boom Ops
Sound Devices 3-Channel Receiver for Boom Op Talk Backs
Comtek BST-25 Transmitter
IFBT4 Transmitter
Zaxnet IFB system for remote control of Zax Wireless
Denecke GR-2

I would like to thank Lorenzo Milan and his crew who joined us for Second Unit in Washington, and Shaun Mills in London, who handled Second Unit there. Paul Munro was on hand to handle additional photography whilst I was on Star Trek Picard, and the many others who joined us on dailies through the course of the shoot in London. You can check out who they all are on IMDb as it’s a long list and would take up this entire magazine.

In particular, I would like to conclude by thanking my crew in the US, Ben and Nate, for starting the show on a particularly exciting morning at the Air and Space Museum in D.C., and in London, Adam, Milos, and Pete, who throughout the course of our many days and nights brought a level of positivity and professionalism that made our time together memorable; especially our night at the Air Museum at Duxford, where clothing noise became an additional character in the scene that wouldn’t take the hint and leave, and when I considered a career break!!!

Recording MTV’s Video Music Awards

Recording MTV’s Video Music Awards from My Bedroom

by James Delhauer

On September 14, 1984, the first Video Music Awards presented by MTV aired live from New York and began a tradition of excellence that continues to this day. Though initially conceived as an alternative network competitor to the popular Grammy Awards, the VMA’s grew in size and prestige to the point that the Moon Person Trophy is now a distinctly coveted prize among artists. The production became an annual spectacle that united creatives and audiences around the world. Year after year, craftspeople innovated creative solutions to new challenges in order to make the spectacle bigger and grander. Then 2020 happened. In light of the SARS-CoV-2 novel coronavirus pandemic, there was a great deal of uncertainty about whether or not the show would go on. Social distancing orders, safer-at-home recommendations, and travel restrictions made it impossible to assemble the necessary talent to put on a live event of that scale. The transition from a live to a pre-taped experience presented new challenges for the Local 695 Video Engineers tasked with recording the show.

Traditionally, this particular awards show would air from a single location. New York, California, Florida, Nevada, and most recently New Jersey have all played host to the VMA’s at one time or another. The pandemic made it impossible to safely achieve that sort of gathering and so artists and craftspeople in different parts of the country had to come together to remotely build a show worthy of the production’s legacy. Performances were recorded by separate teams in California and New York. This normally live event was shot over the course of seventeen rigorous days. COVID testing and contact tracing protocols were put into place. Skeleton crews worked in shifts with sanitation breaks scheduled regularly. Anyone who could work remotely did.

Jillian Arnold worked on location to record the Los Angeles segments of the VMA’s.

Personally, I was surprised that my job could be done from home.

In recent years, Viacom and MTV have opted to record the VMA’s utilizing a server-based media platform called Pronology mRes. Instead of recording a single camera signal to a single memory card that must be offloaded and then transcoded before an editor’s work can begin, server-oriented recording allows engineers to simultaneously record multiple cameras directly to multiple storage units. This unique standalone encoder is capable of recording three tiers of compressed or uncompressed video per input channel—allowing recordists to deliver a high-resolution media file, an edit proxy, and a streamable proxy as soon as the director yells “cut.” This is achieved by routing camera signals into a series of mRes servers and then networking them to multiple pieces of network attached storage via a high bandwidth network switch. In recent years, the go-to network attached server unit for the job has been the solid-state drive-based rNAS, also from Pronology. The video from the SDI inputs is stored in an uncompressed format on the mRes server’s internal drives and then an operator utilizes timecode to select “in” and “out” points to create media files across multiple pieces of storage on the network. In conjunction with a transfer client application, post-production media can be wirelessly delivered from site to post in near real time.
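
The mRes interface itself isn’t described here, but the bookkeeping behind those “in” and “out” points is ordinary timecode arithmetic. A minimal sketch, assuming a non-drop 30 fps project purely for illustration:

FPS = 30  # assumed non-drop frame rate for this example

def tc_to_frames(tc, fps=FPS):
    # Convert an HH:MM:SS:FF timecode string to an absolute frame count.
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def clip_length(tc_in, tc_out, fps=FPS):
    # Number of frames between an "in" point and an "out" point.
    return tc_to_frames(tc_out, fps) - tc_to_frames(tc_in, fps)

# Example: a performance marked from 01:02:10:00 to 01:06:40:15
frames = clip_length("01:02:10:00", "01:06:40:15")
print(frames, "frames, about", round(frames / FPS, 1), "seconds")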

So imagine my surprise when I got the phone call saying, “Yeah, you’re going to be doing all of that from your apartment.”

I had no idea how I was going to do that.

And so, discussions began as to how we would achieve a herculean task. We needed to record every camera of the performances in New York in real time, create a primary and a backup copy on site, forward a copy of the complete show to Geiger Post so they could splice it with content being shot in Los Angeles, and deliver screeners to producers for review—all without me ever leaving my bedroom.

First and foremost, I had to clean my apartment. Potentially historic television couldn’t be made in my home if I was tripping over pandemic clutter. After coming to terms with and conquering a lifelong problem with hoarding, I set to work preparing my space for the show.

I needed to be able to take remote control of up to twelve servers at one time and needed to be able to switch between them rapidly. I utilized two computers as controller devices while a third allowed me to keep up with email, Slack, and text messages—all of which came in rapidly throughout the shoot. Every TV and monitor in my apartment was drafted into service just so I could keep track of all the devices on site.

On day one of prep, the leg of my desk broke off and I was forced to replace it with a series of pop-up tables from my garage. The production mailed a comms PL system to my address, which would allow me to communicate with various parties on site and in remote locations during work. Installing this upstairs in my room required me to run an Ethernet cable downstairs and across the living room to my router. Of course, it wasn’t quite long enough and some furniture had to be slightly rearranged to move things closer together.

In New York, mobile truck engineers on site powered on the twelve required servers, routed the camera signals to them, installed the network attached storage, configured a high-speed internet pipeline, and gave me Wi-Fi access to two of the machines on the network. It quickly became a game of the blind leading the blind as I communicated with NEP Group’s Dave Goodman, Jeff McEntire, and Orin Smith in order to describe what buttons to push and what ports to use when I couldn’t see what they were doing. Likewise, I would often have to work off of equally vague and confusing descriptions from them in order to make things work on my end. Eventually, we found success and I was able to take remote control of those two machines using Remote Desktop Connection, a Windows application for controlling another computer over a network, often used for remote repair work. I was then able to make the computer I was remote controlling take remote control of any other machine on the network, allowing me to control any of the twelve machines I needed through just the two Wi-Fi-enabled ones and transfer data to all necessary destinations.
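
As a rough illustration of that daisy chain (not the exact procedure used on the truck), the stock Windows Remote Desktop client can be launched from a script; the hostnames below are hypothetical stand-ins for the machines on the truck network.

import subprocess

GATEWAY = "vma-gw-01"   # hypothetical: one of the two Wi-Fi-reachable machines
TARGET = "vma-rec-07"   # hypothetical: one of the twelve record servers

# Step 1: from the home computer, open an RDP session to the gateway machine.
subprocess.Popen(["mstsc.exe", f"/v:{GATEWAY}"])

# Step 2: inside that session, the same client is launched again on the gateway
# to hop to the target server, producing the nested remote-control chain
# described above.
# subprocess.Popen(["mstsc.exe", f"/v:{TARGET}"])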

Lady Gaga steals the night

Once I had access, I passed control to Thayne Knop, a networking and IT expert at Viacom, who set up a Signiant Transfer Client and watch folder on the network, giving us the ability to upload files directly to Lead Editor Hector Lopez at Geiger Post. During the show, the First Assistant Director would call for a record to begin via the PL system. Using one remote control, I would begin recording the show for the editors, and with the other, I would make a screener copy for producers and talent. When they called for a cut, I would migrate a copy of the recorded segment into the Signiant Transfer Client watch folder, which would upload a copy of the media directly to the post-production team over a high-speed connection. This allowed editors to begin cutting the show together mere minutes after the performance concluded. Terabytes of data were transferred across multiple pieces of network attached storage, and I never stepped within 3,000 miles of it.
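
The Signiant side of that hand-off was configured on the Viacom end, so from the operator’s chair the step amounts to copying a finished segment into the monitored folder. A minimal sketch, with hypothetical share names and file paths:

import shutil
from pathlib import Path

# Hypothetical locations; the real share names and folder layout aren't given here.
RECORDED_SEGMENT = Path(r"\\storage-01\records\segment_12_performance.mov")
WATCH_FOLDER = Path(r"\\storage-01\signiant_watch")

def queue_for_upload(segment, watch_folder):
    # Copy a finished segment into the transfer client's watch folder.
    # The client monitors this folder and uploads whatever appears in it
    # to the post house over the high-speed connection.
    destination = watch_folder / segment.name
    shutil.copy2(segment, destination)  # copy, so the on-site original stays put
    return destination

if __name__ == "__main__":
    print("queued:", queue_for_upload(RECORDED_SEGMENT, WATCH_FOLDER))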

In 2019, this workflow would have been laughable. Setting up a daisy chain of twelve computers remote controlling one another so that the show could be recorded from home without ever having to get out of bed is a joke that plenty of recordists used to make on set. It was the sort of thing that someone would say to a tech manager, who would laugh it off before telling everybody to get back to work. Now that joke has become a potentially life-saving precaution.

This process was not without its flaws. My home internet dropped out once. Sometimes I’d have to hop on the PL and ask someone to go push a button if a machine needed to be rebooted, which is just so much more inconvenient for all parties involved than it sounds like it should be. Watching a video feed via remote control is inherently unreliable as the image degrades and drops out constantly, meaning I had no true quality control capability from my location. The equipment in New York often reacted at a slight delay to my input commands in California, leading to some delays for production. As more segments were added to the transfer queue, the network on site slowed down considerably and I lost my connection to it several times. That was scary.

For the Los Angeles end of production, a fully remote recording workflow was not possible. My counterpart on the production, 695 Vice President Jillian Arnold, has helped shape and develop the workflow for every VMA show of the last four years. In keeping with its tradition of ever grander spectacle, Viacom enlisted XR Studios to facilitate virtual reality-based screens and projection content—necessitating a more hands-on recording workflow. Jillian’s records featured a mixture of content of varying frame rates and resolutions, as well as a mixture of recording platforms. Content captured using an mRes system could be sent directly to Geiger in a manner similar to my own workflow, but content recorded utilizing AJA’s Ki Pro decks needed to be manually offloaded to local storage before being uploaded. This required several overnight transfers and careful management of digital real estate, as upward of 30 terabytes of data were recorded across the shoot.

But we did it. On August 30, MTV brought its award ceremony back for its 37th annual broadcast and 6.4 million people tuned in to see Lady Gaga steal the night. Now, more productions are following the VMA’s in safely returning us to work.

In spite of the sheer volume of sanity-destroying negatives associated with the COVID-19 pandemic and 2020 as a whole, there is something to be said for the innovation that’s been made in what will (hopefully) be remembered as the worst year of our lives. Emerging communication technology and infrastructure have allowed us to adapt in ways few would have imagined just a year ago. In March, the sets of Hollywood went dark as social distancing protocols, safer-at-home recommendations, and travel bans were implemented nationwide. And while the end of the pandemic is still a ways away, production was able to resume in just six months. Comical, outside-of-the-box thinking is overcoming unprecedented challenges. Remote working solutions and new set procedures agreed upon with the AMPTP in compliance with CDC guidelines have allowed us to resume our craft. And what we learned from early productions such as this year’s MTV Video Music Awards ceremonies has informed our workflows on productions going forward. Moreover, our achievement in adapting to the current crisis and continuing our craft as artists cannot be overstated. New infrastructure, technology, and responsibilities may put us in the enviable position of being more in demand than we were before the pandemic, in spite of the push to move workers off set. Now it’s up to us to prepare for this coming moment of opportunity so that we may grasp it and use it to get our lives back on track and begin to move past this entire dystopian tragedy.

Building Diversity in 695

LEADING BY EXAMPLE: Building Diversity in 695

by Steve Nelson CAS

In March of 2020, the world lurched into lockdown, trying and failing to get ahead of the pandemic caused by the SARS-CoV-2 novel coronavirus, which, as you read this, has taken well over 250,000 lives in this country and more than a million worldwide. The entertainment industry, like most others, was at a standstill. Virtually all production shut down; no one anywhere was working, but your union leadership was very busy trying to figure out how best to serve the membership during this crisis. Dues reduction, how to continue healthcare coverage and pension benefits, and negotiations with the AMPTP on how to get us safely back to work all required our attention at that moment.

Late in May, in the midst of this global health crisis, thousands of people took to the streets in sorrow and rage to protest the Minneapolis death-by-police of yet another unarmed Black person, George Floyd. It was one more in a long, tragic line. After four hundred years, it seemed unlikely that one more could move the needle, but this was different. Caught on cellphone video, the killing was there for the world to see, more than shocking in its cold brutality. Even amidst the pandemic, so many people rose up in solidarity to be seen and heard in every major city, even in small towns with virtually no Black residents, and later throughout the world, in vociferous protest of the never-ending racial disparities, violence, and systemic oppression of people of color in the United States.

Many of our members here in Los Angeles joined the marches, as safely as possible, and our Local 695 Facebook page was boiling over. Where was our statement of solidarity with the Black Lives Matter led movement?

A group of Board members came together (virtually, of course) to craft a statement proclaiming our intention to stand with BLM. Should anyone think that it is a simple thing to put out such a statement, even if it seems so obvious and so right, then that person has likely not been in the position of representing a diverse group of almost 2,000 dues-paying individuals who don’t always agree with one another but are part of a larger and ever watchful organization in an industry where words count and actions have very real consequences.

The Local’s well-crafted and strong statement of support was timely enough to satisfy most of our members but nothing these days is without controversy. Some expressed their unhappiness with this statement; their feelings cannot be discounted.

Then the real question arose. What could we, as a union, actually do to support the goals of this essential movement? Making a “We stand with BLM” statement was necessary, but there is a fine line between meaningful support and mere lip-service; between acting as allies and gentrified “Black-washing.” After all, every corporate oppressor has a public relations department ready to create and publish a full-page statement of support in any newspaper, magazine, or online venue they like. What could we do to make a real difference?

At our next Board of Directors meeting, a new committee was proposed, eventually to be called the Committee on Equity, Diversity & Outreach. Our mission statement:

Recognizing the value in a diverse and inclusive community, the Local 695 ED&O committee works to create an environment where members of all cultural and socio-economic backgrounds can thrive. In the workplace, it focuses on improving access to mentorship and giving new members a chance to forge a career path for themselves.

It was decided that our first action would be a public event on Zoom, during which these issues would be discussed among our members. The proposal was met with resounding support and enthusiasm by the Board.

Beginning on June 23, the ED&O Committee met weekly for about two hours on Zoom. We started small, with just Board members and staff. Word got out and we quickly grew in size as dedicated and enthusiastic participants began to reflect and represent the diversity we seek.

Guests such as Veda Campbell, one of the first women of color to become a 695 mixer, would drop in to join the conversation and lend their unique perspective to the discussion. Without work to occupy their time, our members passionately dedicated themselves to the fight against inequity. This was really the place to be on a COVID Tuesday morning; everyone showed up ready to do the hard work and engage with these challenging and sometimes difficult issues. Our President, Business Agent, Field Reps, Officers, Trustees, and Board members were regular participants. The support given to this work by the Board and membership was so encouraging. As an institution, we are committed to positive and inclusive change.

We learned that Local 695 already has a powerful tool for change, one that has been deployed before. We have in our contract the Y-16A classification of Production Sound/Video Trainee, which allows for the hire of a person without roster status for training purposes. After thirty days of employment, they can join the Local and gain roster status and all that comes with it. Some of you might be familiar with this and might have been beneficiaries of it. We have brought in several members this way, and now it is being formalized as the Sound & Video Opportunity Program for Diversity & Inclusion. The development of this program is already in progress, with the Local partnering with community-based groups in Los Angeles to find and vet candidates and prepare them to begin on the path to a career they might never have considered. This mechanism can create potentially life-changing opportunities where there were none, increase diversity and equity in our Local and in our workplace, and demonstrate to our employers that there is a cost-efficient way for them to achieve their goals and increase productivity. At this time, we have placed five of the first seven candidates on shows going forward. Jamie Gambell and Ben Greaves have been the driving force behind the program since early last year, working with the Local to bring this to fruition, and it is already paying off. Looking forward, this will be a strong tool for change that could bring in an estimated twelve candidates per year. This keeps our system and labor pool from being overwhelmed or oversaturated while ensuring that quality candidates are given the opportunity to join our ranks. It should be understood that these trainees do not do the work of any other sound or video person; they work in addition to existing sound teams and their responsibilities are quite limited until they actually become members.

One of the most important aspects of this committee was the creation of a “safe space”—where we could openly discuss and explore solutions to the issues that brought us together, such as a lack of inclusion, inequity, and the mostly monochrome and male-dominated nature of our Local and our workplace. These are not easy conversations, but these are the issues of our time. There was a lot of learning in these meetings. Those coming from a place inside the dominant power group benefited from the honesty and patience of our members of color and women, and progress has been made. In addition to the Trainee Program, the committee has begun development on a mentorship program that will allow members to partner up, informally, with folks who are further along in their careers and gain the benefits of their knowledge and experience. (This may be a slow start due to COVID restrictions.)

Panelists from left to right: Anna Everett, PhD, Willie Burton, Veronica Kahn, Susan Moore-Chong, Chauncy Godwin, Anthony Ortiz, Douglas Shamburger, Yohannes Skoda

Crucial to the underrepresented being heard is organization. There is now a Black Sound & Video Caucus. Women, Latin-X, and our LGBTQ members are all coming together to be heard within our Local. To create change, you must have a seat at the table. We should be seeing more representation from these groups on our Board. On October 3, the Local followed up its public event with a virtual town hall chaired by Ronald Hairston Jr. This event—which was limited to Local 695 members, persons of color, women, and allies—acted as a venue to engage these issues in more depth than the original open forum allowed.

Meanwhile, work continued on the public livestream event “Diversity in Local 695: A Conversation.” The key to a great panel is a great moderator. It seemed obvious that this person should not be one of our own and should, of course, be a person of color. On the list of potential moderators was Anna Everett, PhD, whose many accolades include professor emeritus of film and media studies at UC Santa Barbara; scholar of Black film history; former Interim Vice Chancellor for Diversity, Equity and Academic Policy; activist; and author of several books and innumerable articles. As an academic, she is a veteran of many panel discussions. As a friend of many years, I knew she would be perfect for this role. Still, it was quite a pleasant surprise when the Board passed over all the big industry names on the list to unanimously select Dr. Everett, who was honored and thrilled to accept the opportunity.

In addition to moderating the panel, which included a fair amount of prep to understand what we do, who we are, and the Y-16A Program (a major talking point), I tasked Dr. Everett with opening remarks in order to provide a broader historical context to frame the discussion. This she did brilliantly, drawing on her research to illuminate the importance of sound in Black cinema since the earliest days and to Black audiences. If you missed the livestream, you can view it here: http://www.local695.com/html/diversity.html

ED&O Committee Zoom on a COVID Tuesday morning

Diversity. That is a word with a lot of possibilities and subject to a great deal of misinterpretation, a word heavily freighted—especially these days. It is one of those buzzwords like “affirmative action,” “empower,” “identity politics,” “quota,” or “minority hire” that, while seeking to describe and remedy the baked-in inequities that permeate our world, have insinuated themselves into the conversation in a manner that serves to inflame and divide.

Nevertheless, this often-controversial word best describes our goal: to increase the heterogeneity of our membership and give voice to those usually unheard and unseen, and to show not only who we are, but who we aspire to be.

The composition of the panel would be critical for a successful discussion and to achieve the representation we’re seeking. It required the right balance of our members, professional people, engineers, and craftspeople at the intersection of race, gender, age/career trajectory, craft, and discipline/classification (so many Y-…’s!). And that is to say nothing of the people willing to put themselves out there, in public, for what could be a challenging discussion on a sensitive subject. From what we’d learned in our meetings, we felt we could extend our safe space to include this panel, and with the help of our experienced and gifted moderator, the conversation would flow.

We did well. A talented, accomplished, eloquent, and brave group representing who we are and what we do opened up about their unique histories, journeys, the challenges they’ve faced and continue to confront as people of color, as women, as professionals, younger, older, Black, Latin-X, Asian-American, Sound, and Video.

One of our panelists had joined via the Trainee Program and could speak to its benefits. We had only one Video Engineer on the panel and it is a major lapse that we had not one Projectionist. It is not easy to distill our essence into only seven little boxes on a Zoom screen!

The committee brought some thematically relevant questions to get the conversation started, which was facilitated in part by the hard work of Eva Rismanforoush and Jennifer Winslow. Dr. Everett skillfully worked the questions to facilitate a dynamic event that encouraged and modulated the flow of conversation to allow our panelists the opportunity to dig deep and bring out aspects of their lives and careers that would otherwise remain unseen. These experiences are essential for the rest of us to understand and appreciate as we move forward. If you missed the livestream, I strongly encourage you to take a couple hours and have a look. You may begin to see things in a different light, which is the first step toward the change we need. http://www.local695.com/html/diversity.html

Over one hundred sixty people registered to attend “Diversity in Local 695: A Conversation,” which took place the morning of Saturday, August 1. Most were 695 members, but there were many from outside, thanks to our publicity effort. This topic is high on the agenda of many organizations in our entertainment industry and most everywhere; I’m proud that we are taking the lead.

Zoom Town Hall: Lifting Up Your Sisters & Brothers in Local 695

This groundbreaking event would not have been possible without the strong support of our Board of Directors, particularly President Mark Ulano and Business Agent Scott Bernard. Much gratitude to all who gave their time to do the work on ED&O for many weeks, to our guests who dropped in to share their experiences, their wisdom, and sometimes outrage. Laurence Abrams did a stellar job of making this a seamless Zoom experience; Vice President Jillian Arnold and Representative Heidi Nakamura expertly handled the Q&A. Much appreciation for our astute and talented moderator, Dr. Anna Everett, and especially to our panelists, consummate professionals all, who brought their wealth of experience to the proceedings. Thank you, Willie Burton, Susan Moore-Chong, Chauncy Godwin, Veronica Kahn, Anthony Ortiz, Doug Shamburger, and Yohannes Skoda for being such a stellar panel.

I’m well into my fourth decade as a Production Sound Mixer, lucky enough to work on some great projects with amazing and talented and brilliant people all over the world. I’ve spent more than a few years serving on the Board of Directors of this Local. This part of the journey has been perhaps the most exhilarating and rewarding ever. Like on a good show, there is the joy of working with a great team, new people, learning new ideas and techniques, meeting challenges, and that difficult-to-describe, very rare sensation of working on something much bigger and more important than oneself; feeling that it might make a difference. Open, supporting, dedicated, courteous, and respectful, together we created a safe space where we could go places I’d never been, where mistakes could be made without fear, and where progress was achieved. Where a more-than-middle-aged well-intentioned white man could blunder his way into becoming an ally and with some gentle but firm guidance, stay on the right path. We are living through a time like no other. Existential crises beset us from every direction. Predating all, sadly, is the matter of race in America. For four hundred years, race has always been the defining issue. From the days of colonialism to a bloody Civil War and through decades of Civil Rights Movements which have led to this very day, we have never achieved the American promise of equality for all. In spite of this, I am proud that, however belatedly, Local 695 is taking steps to address the systemic racism that has characterized our industry for more than a century.

Living With Hearing Loss

Living With Hearing Loss: Attenuation, Isolation & Adaptation

Bruce Beacom performing at The Troubadour. April 2017. Photo by Reid Murphy

by Bruce Beacom

I started my career humbly in 1995, at the age of twenty-five, as a studio PA at a recording studio in New York City called National Sound at the National Video Center. I worked my way up to become an Assistant Engineer, handling the daily load of projects to be mixed, creating music bed playlists to be licensed and used in editing sessions, doing extensive mic setups in sound booths for music and voice-over recording sessions, and handling duplication requests in all formats for nationally broadcast and syndicated TV shows. It was a great way to cut my teeth in the TV industry, but after three years of not seeing the sunshine, I knew I needed to be out in the field. I left post and started ENG mixing, doing corporate freelance work during the day, and four nights a week I worked at a live music venue on the Lower East Side called ‘The Living Room,’ mixing three to five bands a night. It was exhausting, but I was immersed in mixing on two separate fronts and loved it. In 2000, I moved back to Los Angeles, where I had gone to college, transitioning solely to mixing for unscripted and reality TV. Notable productions I've worked on include CBS's The Amazing Race (fifteen consecutive seasons), for which I received three honorary Emmy certificates for my contributions as a Sound Mixer; HBO's Project Greenlight (Season 4), for which I was nominated for an Emmy for Sound Mixing; Bravo's Top Chef; ABC's American Idol; The Bachelor; and Netflix's recent documentary, Jeffrey Epstein: Filthy Rich.

As a singer, songwriter, and guitarist, I have produced two records, and I am currently writing material for my third. My band and I have performed regularly in LA at iconic venues such as the Troubadour, The Roxy Theatre, The House of Blues, The Viper Room, and many more. If it weren't for my love of music, I realize I would not be a Sound Mixer today, as the two are inextricably connected, each rooted in the other.

If anyone had told me at thirty that I would spend the next eight years of my life losing ninety-five percent of my hearing to an invisible hereditary disorder, and fighting to get it back, I would never have accepted the challenge. I had no choice, though. With everything I've been through, I clearly understand that when a challenging event occurs in our lives, it's not the event that defines us but how we choose to handle it. I chose to never give up hope, and I painfully learned what it means to become an advocate for myself and to never stop searching for answers.

Losing my hearing forced me to appreciate what I had lost and to take nothing for granted. It taught me about survival through adaptation, by which I found ways to keep writing music and mixing sound. In the first three-plus years of my hearing loss, I adapted to mixing by monitoring my headphones in a “mono” setting and turning the volume up to a point where I could feel the vibrations. I developed a keen sensibility for paying close attention to the VU meters like never before. As for my music, I adapted by playing my acoustic guitar as an electric. I can feel the acoustic vibrate against my body, which puts me more in tune with the instrument. I continue to play this way even after getting my hearing back, as it led me to develop a sixth sense with my acoustic guitar. It's a style and sound I would otherwise not have discovered.

  • Bruce at The Viper Room, 2019. Photo by Darren Bunkley
  • Booming rapper/recording artist Ludacris on YouTube show
    ’Best Cover Ever’ in 2017
  • ENG bag mixing/booming on American Idol 3. Oct. 2019. Photo by Kako Oyarzun

The ringing in my ears started in my late twenties, and to say it caught me off guard would be an understatement. I had normal, healthy hearing until I was about twenty-nine years old, but that's when I first began to notice signs that something was very wrong. It started slowly, with sporadic ringing, sometimes accompanied by buzzing, clicking sounds, and even pulsating sensations, like crickets chirping, that would sometimes throw me off balance. I didn't think much of it at first, as it was very irregular, happening only one to three times a week and just for a few moments each time. As I reached my early thirties, these episodes became more frequent, more pronounced, and almost constant, to the point where they began interfering with my daily life and my communication with others.

As a musician and Sound Mixer, protecting my hearing has always been one of my highest priorities. I was confused as to what was happening to me, but I was very proactive in searching for answers to uncover the underlying cause. The path to my diagnosis was uncertain, fraught with dead ends and unanswered questions, and at times it was simply terrifying. I spent the next few years going to numerous doctors, ENTs, and specialists, undergoing many forms of diagnostic testing, but not one doctor could give me a clear answer as to why my hearing was in such decline. The vast majority said I had severe tinnitus, and there were many misdiagnoses along the way; one doctor even told me I had lupus, a diagnosis that was later retracted. The entire experience was exhausting and unnerving.

At the age of thirty-two, feeling helpless, I begrudgingly got fitted for a cheap pair of hearing aids; not an easy choice or reality to accept for any musician or Sound Mixer. To say the least, it was humbling. The hearing aids were helpful at first, amplifying external sounds above the volume of the ringing inside my head and allowing me to adapt. It was short-lived, as my hearing continued its rapid decline, so much so that within a year these cheap hearing aids had no benefit in overcoming the ringing at all. Even turned all the way up, they would just feed back in my ears and create more confusion in my head. After years of searching and still no definitive answers, I was in a dark place, isolated and ready to give up.

I felt the only option left was to seek out a better and more powerful pair of hearing aids. With my wife Holly by my side, we found an Audiologist in Culver City by the name of Dr. Sol Marghzar. During my initial screening, he discovered that I now suffered ninety-five percent hearing loss in both ears. To our surprise, he said he wasn't willing to sell me new hearing aids until I had specialized surgery on both of my ears. Holly and I were both stunned, but it was exactly the elusive information we'd been searching for. There were so many new questions: “Surgery?! What kind of surgery? What's it called? How? Where? When?” After all this time of being lost, perhaps there was a reasonable explanation for my dire situation. Honestly, this was the first moment I recall feeling some semblance of hope.

Dr. Marghzar suspected I had a rare genetic hearing disorder called ‘otosclerosis,’ which is inherited and caused by an abnormal overgrowth (mineralization) of bone in the middle ear. This condition stops one of the three bones (the stapes) from vibrating, limiting the transmission of external sound from the outer ear (tympanic membrane/eardrum) through the middle ear (hammer, anvil, stapes) to the inner ear (cochlea), where the signals are then sent along the auditory nerve to the brain.

A1 on baking show at Tastemade Studios in Santa Monica, CA. Feb. 2020. Photo by Wes McLean
Mic’ing Micky Dolenz for Spotify Music Happens Here shoot, Feb. 2017.
Mixing interview with actor/comedian Elon Gold for web docu-series,
The World According to Jeff Goldblum, on March 2, 2020. Photo by Ross Alexander Wilson

If his conclusions were correct, then I could be a candidate for a surgery called a ‘stapedectomy.’ This is a specialized procedure in which the diseased stapes bone in the middle ear is removed and replaced with a titanium prosthetic. Once healed, the new prosthetic allows sound to pass normally through the middle ear to the inner ear.

Dr. Marghzar recommended I go to the House Ear Clinic in Los Angeles. He then put me in touch with Dr. William H. Slattery and got me an expedited appointment with more blood tests and another CT scan to confirm his suspicions. In the meantime, Dr. Marghzar loaned me a pair of higher quality hearing aids, which helped me to continue working and adapt to daily life.

After further testing at the House Ear Clinic, the results confirmed Dr. Marghzar's findings; my prayers had been answered. I was scheduled for the first of four ear surgeries with Dr. Slattery, performed between 2004 and 2007, which restored as much of my hearing as possible. Most cases of otosclerosis require only one surgery for each ear; mine was so bad that I needed two for each ear (an initial surgery and a revision). You can only have one surgery done at a time, with a six-month recovery between surgeries. As they were staggered over four years, it felt like a long, endless road.

To better illustrate what was happening to me, it helps to draw a comparison between the anatomical functions of the ear and the mechanical functions of a live PA system. Think of the eardrum (outer ear) as the diaphragm in a microphone, picking up acoustic energy (sound waves) and transmitting it through a connected XLR cable (the middle ear) into the PA's amplifier (the inner ear), where it is then amplified and made distinguishable (the brain). Now, imagine if you were to impede the signal flow through the XLR cable, or even worse, cut it off altogether with wire cutters. The outcome is obvious: no sound waves can pass between the microphone and the amplifier. This is exactly what happens to a person living with otosclerosis. It's also called “conductive hearing loss,” because the bones in the middle ear (the XLR cable) stop working as a conduit to the inner ear (the amplifier).

As for the internal ringing which accompanies otosclerosis, it’s my understanding that much of mine was generated by the abnormal growth of the stapes bone. In a normal situation, the bones in the middle ear only move when the ear drum vibrates in reaction to external sound waves. As the diseased stapes bone mineralizes, it slowly freezes into place against the cochlea, therefore generating the excruciating internal ringing, screeching, clicking, and buzzing sounds that otherwise do not exist externally. It can be absolutely maddening.

  • First show back since lockdown. Quibi socially distant Zoom interview, June 22, 2020
  • Mixing on a Soundcraft Vi4 console for a live satellite
    feed for Farmers Insurance, January 2017.

Today, I have titanium prosthetic bones in both my ears, and I have regained sixty percent of my hearing. Additionally, Dr. Marghzar fitted me with digital hearing aids, gaining twenty percent more and bringing my hearing to about eighty percent. I remain vigilant in managing my hearing loss and preserving what hearing I have. Otosclerosis is a progressive disease; if left unchecked, and without a daily prescribed dose of a fluoride-calcium supplement called ‘Florical,’ my hearing would revert to where it was before my surgeries, and the disease would do further, non-correctable damage to my inner ears (cochleae).

After everything I'd been going through, the one hurdle I never expected to face—and one that completely blindsided me—was discrimination at work. I know it sounds unimaginable, but the bias is very real and exists within the TV industry. I found this challenge just as hard to cope with as overcoming my hearing loss, specifically because you can't control someone else's lack of compassion, education, or willingness to understand a person's handicap. There were many occasions where I found out, after not getting a job, that I wasn't hired because an Executive Producer or Supervisor didn't want to hire a Sound Mixer who wears hearing aids. I became so dismayed with this recurrence that it led me to ask many new questions and to take note of certain biases within our industry. How many Camera Operators wear eyeglasses? A lot. So I ask this: if it's perfectly OK for a production company to hire a Camera Operator who has “assisted sight,” then why is it not OK to hire a Sound Mixer who has “assisted hearing”? This was an epiphany for me, and it became my rallying cry from that moment forward, a way to open people's minds to this prejudice within our industry. I'll never forget the first time I employed this analogy, speaking to a Line Producer who was clearly on the fence about hiring me and asked, “Aren't you the deaf sound guy? The one who wears hearing aids?” I responded, “Yes, I do wear hearing aids, but how many Camera Ops have you hired who wear glasses? What's the difference?” First, I was met with silence on the other end of the phone, and then a reply I didn't expect but was happy to hear: “Well, you make a very good point there.” It was literally music to my ears, because for the first time I felt like I had gotten through to somebody. I also realized that I was still having to learn to be an advocate for myself, only now in a completely different context.

There's not one day that I'm not thankful to Dr. Marghzar and Dr. Slattery for saving my hearing. The first thing I do every morning when I wake up is put my hearing aids in and turn them on, and I am beyond grateful when the sounds of the outside world begin to filter through and I can simply hear my wife and son's voices. To be able to continue writing music and working professionally as a Sound Mixer is even more than I could have ever hoped for. Without the love and unending support of my wife Holly, I'm keenly aware that I would never have been able to get through any of this. I have so much gratitude for it all and for the lessons I've learned. It is my hope that by sharing my story, I can help inspire others to keep searching, especially in the face of overwhelming circumstances, and in turn, to become advocates for themselves and never give up.

Tenet: A Journey

A Conversation with Willie Burton, Douglas Shamburger, and Rene Defrancesch

by Richard Lightstone CAS AMPS

Elizabeth Debicki and John David Washington film a complicated speedboat race scene.
Photo: Melinda Sue Gordon. © 2020 Warner Bros. Entertainment Inc. All Rights Reserved.
  • Willie Burton on the way to the set
  • Doug Shamburger and Willie enjoying
    a relaxing dinner in Tallinn, Estonia.

The release date of Christopher Nolan's Tenet was delayed three times due to the COVID-19 pandemic; the film finally reached audiences in the United Kingdom on August 26 and the United States on September 3, in IMAX, 35mm, and 70mm. Tenet opened in over fifty territories worldwide and was available to about eighty percent of the screens in the U.S., in the forty-five states that permitted indoor viewing. Unfortunately, moviegoers in both New York City and Los Angeles were denied the opportunity. To date, the film has grossed three hundred forty-one million dollars, which demonstrates the enthusiasm audiences have for a Christopher Nolan film.

I had the privilege of speaking with Production Sound Mixer Willie Burton, his longtime Boom Operator Doug Shamburger, and Utility Rene Defrancesch, in Atlanta, New Mexico, and Glendale, respectively.

Willie, Doug, and Rene began by describing the nearly five-month shooting schedule and the seven countries they filmed in. It went like this:

Rene: We started in Estonia. We went from Estonia to the Amalfi Coast in Italy, then to London, from London to Oslo, Norway, and from there to… Do you remember the name of the city in Denmark? I can’t remember the name of that city.

Doug: Copenhagen.

Willie: We were in Southampton right before London.

Doug: Yeah, the Isle of Wight. We were shooting off the coast. And we forgot Mumbai, India, that was another…

Rene: And then, of course, Indio, California, and Victorville.

I counted over forty-three locations in all; basically, it was Tenet: The World Tour. The initial interview with Chris Nolan went well for Willie, as he explains, “I did say one thing, that I'm a little old school/new school, and I think he liked that because he likes the old school way.” Doug picks up the conversation, “It was really bizarre. We were doing a scene on stage and I come out with the wireless boom, 'cause it's a dolly. We're dollying backward in the corridor with two actors walking and talking, and Chris looks down at me, doesn't see a cable, and says, ‘You're not hardline? I like to do hardline sound, I don't like the compression from wireless.’”

From that point onward, their department had microphone cables, lots of cable. Rene explains, “It’s easy to get a cable in there, you’re not worried about it too much. But there were a couple of takes in a couple different countries where we’re in a big open space, and a ton of background, and crew members working, and we got hundreds of feet of cable out there, and just people dodging it.”

Doug Shamburger continues, “While I'm back-pedaling, Rene is pulling my cable with two or three other PA's all trying to help, as we're doing a Steadicam shot. It was quite a feat.” Willie adds, “There were also times on long dolly shots that I had to dolly my sound cart, pulling the sound cart and mixing. Rene and a couple of other people are helping him out pulling cables, and I'm dollying at the same time. Chris Nolan looks around at me and he says, ‘That wheel on your dolly's makin' more noise than anything.’ We did what we had to do, and there were times that I had to go portable while doing three-sixty shots. Doug and I, we're dancing around, I'm running with the recorder on my shoulder and Doug is getting the boom in there. We made it work because that's how we used to do it, old school style. It was great, it sounded good.”

Willie and his team often had to wrap the gear at the end of a long shooting day and get it ready to ship to the next location, usually another country. “We worked a lot of hours,” says Willie, “we would be in one location sometimes just three days. I think, in Oslo, we were only there for a couple of days, we would finish shooting and we would have to wrap the equipment, and get it ready to ship that night. Then we would go out the next morning. It was a lot of hours spent packing, and unpacking equipment, getting ready to ship.”

The entire schedule was not always like this. They spent six weeks in Estonia, filming in Linnahall and shooting a complicated car scene. Willie spent the bulk of that time in the chase vehicle with rooftop antennas, as well as a Deva Fusion in the picture car. The actors wore wires, and there were hard-wired mics in the vehicle. They also filmed in Tallinn, which doubled for the Opera House in Kiev. Then there was a three-week stint on the Amalfi Coast in Italy, filming on the luxury superyacht Planet Nine; measuring just over two hundred forty feet long, it has six decks and its own helicopter pad.

Himesh Patel, Robert Pattinson, and John David Washington
  • Doug Shamburger, Rene Defrancesch, and Willie Burton on their truck in Mumbai, India.
  • Christopher Nolan laying out the scene with Washington. Photo: Melinda Sue Gordon. ©2020 Warner Bros. Entertainment Inc.
    All Rights Reserved

In Southampton, England, they filmed a complicated speedboat race scene. Willie describes the challenges, “The boat had to be launched by seven o'clock. So Rene and I had to arrive at six to wire the race boat with my Deva 5.8, set it up, and test it out. We used a quarter-watt transmitter at the stern of the boat in order to transmit the sound to the chase boat that we would be on. I would turn on the recorder and from seven in the morning, it ran the whole time until the boat got back in. The Deva on the picture boat transmitted to the chase boat, where I was also recording the dialog. We used the headsets worn by the actors on the speedboat. The speedboat could go much faster than our chase boat, so sometimes they would take off and we'd be trying to catch up. There was so much wind and water hitting them, it was pretty incredible and very challenging. I think we did a really good job on that.” They also had a mock-up of the hero speedboat attached to a picture boat, where Doug was able to boom the dialog, hardlined of course.

As in all Christopher Nolan films, the plot is complicated, with many scenes where the characters move forward and backward in time, often while wearing breathing masks. Rene explains, “Because of the nature of the story, you needed a specific type of oxygen. Whenever you were in reverse mode, they had to have oxygen to breathe.”

Fortunately, Willie and his crew had time to prepare how to mic the masks. Willie said, “We did research with Trew Audio and also Location Sound, and found that the Sennheiser lavalier was the one that had the lower sensitivity. There was a tube that came from the mask to their body, and we would mic the very end of the tube, it worked fine. This was based on all of us testing and testing.” Rene adds, “It had to work with the masks, you couldn’t hear the actors clearly with the boom. This is probably the only time Chris accepted the use of wires on his show.”

Every few days in prep, Chris would have meetings called “results meetings,” with every department attending. “The wardrobe department would let us have the mask and the helmets to take with us,” explains Willie, “Rene and I would be doing tests, as Chris allowed us to work a couple of days testing while they were doing camera tests. It made it so much easier because without that, you start a shoot and you’re cold. We had time to figure it out, which was really most important.”

Nolan filmed with 70mm cameras, and Willie and crew also assisted in engineering blimps for the IMAX camera. The blimp knocked the camera noise down slightly, but Cinematographer/Operator Hoyte van Hoytema would hoist the IMAX camera onto his shoulder to shoot handheld, so the blimp proved too cumbersome. Doug Shamburger would have the actors re-enact their movement and dialog immediately after a successful shot to capture a clean performance wild.

Willie Burton prefers the Zaxcom Deva, with a Fusion, two Deva 5.8's, and a Mix-12. His wireless systems are Lectrosonics, with a varied mixture of microphones: Sennheiser MKH 50's and the Schoeps CMIT 5U, whatever is best for the situation.

I asked them how it was working with Christopher Nolan. Their first comment was that Chris does rehearsals, which gave them the opportunity to figure out how to boom the scene effectively.

Willie elaborates, “You just have to be prepared with Chris. Chris works hard, he's always there and you have to pay attention to what he's doing, 'cause he'll change things. Thanks to Doug and Rene, who were always there diving in. We could be shooting in one place, the next thing is, oh, we're over here. But we had everything that we needed. One thing I can thank Chris for is that he took me to scout all the locations we were filming, with all the department heads. Now how cool is that!”

Willie’s small sound cart

“I had this large sound cart, this huge sound cart,” continues Willie, “but by being able to scout the locations, I sized down to a very small cart and now I would never go back to my large one. It’s small and very simple, we could pick it up, move it around, it really paid off.”

“There's no way you can't know what's going on 'cause you're standing right next to the man,” explains Doug. “I'm talking rain, sun. He's standing, we're all in the rain, he's got his hood off and the rain is pouring down, while he's looking at these little Casio monitors, the only video village to speak of, and that was around his neck. When you're right around camera, he's standing all day, so if anything, you feel like a soldier, and you're gonna stand right there next to him, shoulder to shoulder with the dolly grip, the DP, a Camera Operator, the First AC, all the immediate people that are primarily involved. It's just an old school way of doing it, but it's quite effective. I felt that's the way movies should be made, not fifteen- to twenty-minute takes where no one can reset or adjust. Chris's takes run three minutes, four minutes, five minutes, and then we make our adjustments afterward, and improve upon the next take.”

Rene continues, “There was no video feed, but Willie was often close enough that he could see the action.” Doug jumps in, “There’s a sense of camaraderie working on a Chris Nolan movie. He’s a foot soldier, he’s right in the trenches with you. There’s just such teamwork. You’re out there, you may be on a boat with a camera, the camera operator, focus puller, we’re handing mags over to load the camera. We’re all tugging on the same rope trying to make this quality project and it’s just really a unique set of circumstances. Chris Nolan sees, he sees it all and he’s watching how we all work collectively.

“No one’s disconnected or looking at their phone,” says Doug. “You’re totally one hundred percent vested in every given moment throughout the course of a twelve- to thirteen-hour day. He’s right there with you, and he’s got a good sense of humor too. We laughed a lot. Throughout the course of the day, he’s not uptight, but he’s no nonsense. Chris jokes around, with a dry sense of humor, but he’s still right there, making it. It was really an adventure.”

Willie sums it up: “Obviously, it's a challenging film and when you're working with Chris, it's very challenging. But we all lived up to that. Some films are simple, you're mixing two or three faders and that's all you do. I like the challenge of figuring it out to get the best possible sound and I like the way Chris works. He's definitely demanding, but that's how it should be. I think as a department head, you go to work to give one hundred percent; that's how Chris works, and he won't ask you to do any more than he would do. You do a movie like this (unfortunately, I haven't seen it), and the end result is the thing that counts most. For me, the performance is in the voice, and when it clicks, it's very musical.”

How Important Is the Production Mix?

by Simon Hayes AMPS CAS

This question has been asked again and again over the last two decades, on production sound forums and in conversations between Production Sound Mixers, Picture Editors, and Sound Editors. It is a divisive subject, often leading to heated debate, especially on forums and in professional social media groups. I thought I'd share my thoughts and opinions.

Years ago, we PSM's mixed to a mono quarter-inch Nagra track, and professional reputations were forged or lost by our production sound mix. There were no ISO tracks to save us should we mistime a fader cue or miss an actor's ad-lib. With the careful blending of score and effects, our production sound dialog mix was pretty much what the audience heard in the theatre (give or take some equalization and level changes in post), and if the production mix did not work, the scene would be marked down for ADR.

Change was necessitated by some Directors' shooting styles; a well-known example being Robert Altman, who required multiple tracks of lavaliers so his cast could overlap each other. That drove the advent of multitrack tape-based equipment, the Nagra D and Tascam Hi8, followed quickly by nonlinear systems from Zaxcom, Fostex, Aaton, and later Sound Devices, leading us to where we are today.

Similarly, picture editorial and sound editorial were moving into nonlinear systems with multitrack audio capabilities, the descendants of which are in use now: picture editorial using Avid, with the ability to import multiple audio tracks, and sound editorial using Pro Tools.

On the movies I work on now, I'm finding that the picture editing team is becoming increasingly adept at creating a really great-sounding Avid playout, with score and sound effects added seamlessly to the production sound mix.

I have also found, for the last twenty years or so on the projects I work on, that the Dialog Editor will generally rebuild the production sound mix from the component ISO tracks I deliver.

Is this a bad thing? Does this process compromise the PSM’s importance in the filmmaking process? Has the PSM given up an element of control that we previously had when providing a single track mix? Has the advent of the Dialog Editor rebuilding the mix been helpful or a hindrance to the Production Sound team working on the set? And finally, how important IS the production sound mix in modern times?

I've been party to recent discussions that PSM's have become ‘recordists’ rather than ‘mixers,’ and that the importance of the production mix has been relegated. In my experience, this could not be further from the truth. I have found that the production sound mix is actually becoming more valuable rather than less so, even though the Dialog Editor is likely to rebuild the dialog mix in Pro Tools from the ISO components provided by the PSM.

“When Directors are hearing a really great burgeoning soundtrack in the cutting room from day one, they are more likely to be supportive of production sound rather than feeling ADR is the answer.”

There are a number of factors, the main one being the audio integration of Avid software and the huge increase in audio skills among Picture Editors and First Assistant Editors using the Avid platform. Directors are increasingly expecting their Avid cut to sound polished, like a finished product. Picture Editorial are committing more time to getting the cut sounding great. The First Assistant Editor is literally working on the Avid sound mix in real time on a lot of the films I work on. The Picture Editor is making shot decisions so that when the Director arrives in the cutting room to catch up after the day's shooting, they can watch a cut and be completely immersed in a scene that has added score and sound effects.

Increasingly, I am seeing Picture Editors cutting in 5.1. Ten years ago, this was rare but the phenomenon has become more and more the norm over that time. In my opinion, this attention to audio detail in picture editorial is great for production sound.

Hearing the sonically polished Avid cut from the very beginning of the project promotes confidence in the performances we capture on the set. When Directors are hearing a really great burgeoning soundtrack in the cutting room from day one, they are more likely to be supportive of production sound rather than feeling ADR is the answer. Conversely, a pessimistic view from Directors of the quality of the production soundtrack means the production sound crew is less likely to get that all-important directorial support, the kind of support that leads to more collaboration and respect from our colleagues in other departments.

Another positive for our production sound mix, as the Avid cut has started sounding better and better, is that Directors and studios are more likely to use it for early test screenings with carefully chosen audiences to gauge opinion while still editing. There was a time when it was rare for the Avid cut to be shown without a proper temp mix by the sound editing team closer to the end of picture editing. Nowadays, using the Avid audio mix, the test screening process can begin far earlier.

The Director's ability to show the Avid cut at almost any stage is incredibly positive for the production sound mix. The temp mix is still vitally important when the test screening audience gets bigger as the movie approaches picture lock. Every time the production sound mix is screened, the Director and Picture Editor become more confident in the mix and its ability to support the performance and narrative, and the Director becomes less likely to need ADR for technical purposes. I am fully supportive of the use of ADR for performance or storytelling reasons, but I personally feel it is a shame when performances are re-recorded for technical reasons, unless absolutely necessary due to poor location sound.

The time from picture wrap to theatrical release keeps growing, and the production sound mix remains within the Avid for many months (sometimes over a year!) for a number of reasons, the main one being VFX delivery. Since this is the only reference the Director has to the performances, our production sound mix has to be great and instil confidence, not just in the technical aspect of the recording but in the creative realm as well.

So why is it necessary for the Dialog Editor to rebuild our production sound mix from the ISO's? I always look at it from the Dialog Editors' perspective. They understand that often we are shooting rehearsals: dealing with ad-libs, watching a monitor as we mix, assessing whether we could get more carpet in for the next take to reduce footfalls, giving our boom ops edges of frame through comms as we shoot with two or three cameras, particularly if the cameras are using zooms.

With the additional ISO tracks we record, there is so much more we are having to cope with, along with the critical part of our jobs: adjusting input gains on mic pre-amps as we record. We are having to react to so many more variables during a take than purely mixing our faders. I feel it is more important to get the input gains on my mic pre-amps absolutely dialled in to provide technically excellent ISO tracks than to make sure the modulation on my mix track is perfect to the nearest 1 dB. I am confident that the Dialog Editor will read my sound reports to find out which ISO components I used in my mix track, and work through my ISO's to decide whether my instinctive, fast-paced decision to use a boom for a line of dialog was misplaced and whether the track would have benefitted from the actor's lavalier. We are able to provide choices for the Dialog Editor, and it would be arrogant to assume that we always make the right choices between the lavs and booms when we are in the moment, during the creative process of a take. For that reason, I carefully write the elements of my mix on the sound reports, going into detail if need be. I am confident that I am delivering the best mix possible for what I perceive to be the Director's vision.

I am also conscious that if I was the Dialog Editor, I would use the mix as a starting point, but go through the ISO tracks and listen to other choices. Even if the PSM made all the correct choices, I would still go back and rebuild the fades and balance levels using the ISO’s in the comfort of a quiet cutting room, listening through studio monitors critically, with the time to audition a mix, replay and adjust if necessary. Rebuilding the mix using the ISO components allows the Dialog Editor and the Re-recording Mixer the opportunity to “steal” words or even syllables from previous takes to help performance, or remove unwanted background noise that is unfilterable.

When I spoke to Re-recording Mixers and Dialog Editors about this article, what was most prominent was their use of de-noising plugins in subtle and creative ways on the ISO tracks. They can achieve finer detail with each ISO track, rather than the far broader brush strokes, and the obvious artefacts, that come from using the same process on the mix track. This workflow is incredibly important to reduce ADR and protect the creatively fragile original performances we capture on the movie set.

Mixers on the set have read the script, understand the story, and do their best to convey the narrative in a way that best fits the Director's vision. Once shooting wraps, the Director and Picture Editor may decide that two scenes that are linear in the script will be better served by intercutting, to build tension or other qualities the Director wants to convey.

“It is wonderful that we are able to give creative choices to our Directors and colleagues in picture and sound post, that previously we were unable to.”

Let's say that on the first scene the PSM has mixed using the booms, with beautiful perspective; the wides sound wide, the mid shots sound mid, and the close-ups sound close. In the next scene, there was an ambience or background noise issue with the booms, or perhaps the Director wanted to shoot wide and tight at the same time, so the booms never got close-up coverage, forcing the PSM to prioritise the lavaliers in the mix. This scenario is very common, and it simply doesn't work to cut between the two scenes with one playing air around the mics and camera perspective on the booms, and the other using the forced perspective of the lavaliers. Thankfully, the Dialog Editor has the ability to completely remix the first scene with the lavalier ISO tracks to intercut seamlessly with the second scene. When the audience watches those scenes in the theatre, there are no uncomfortable shifts in audio perspective that take the viewer out of the cinematic experience.

Another pertinent point in this discussion is the picture cut and its relationship with the score. The Director may have been presented with a beautifully written and performed score by the Composer. The Director decides that this new piece of music really enhances the emotional performances in the scene, and he/she asks the Re-recording Mixer to really push the volume. In our example, the PSM has been presented with a beautiful location or set to record in, with no acoustic or background noise issues, and a single camera, or two with matching headroom, allowing us to play the camera perspective, let the acoustics breathe, and use the booms alone in our mix.

Now that the Director has asked to push the volume of the score, they will reach a point, while using the fantastic-sounding camera-perspective boom mix, where the Re-recording Mixer has to say to the Director, “We can't push the score any louder without swamping the dialog on the wide and mid shots, as the room acoustics make the dialog less upfront.” The Re-recording Mixer can offer to use a lav-only mix, which allows the score to be pushed a few decibels higher without swamping the production dialog, as the lav perspective is all close regardless of camera angle. A second bonus is that the louder score will potentially hide any clothing/costume rustle from the lavalier tracks that we were concerned about while shooting the scene!

There are numerous ways in which ISO tracks are being used creatively to support the Director's vision, both visually and acoustically. I am particularly proud that we are able to supply the components, and it is wonderful that we are able to give creative choices to our Directors and colleagues in picture and sound post that previously we were unable to. For this reason, I see no negative in the fact that on films and higher budget television shows our production sound mix is usually re-mixed to support the narrative and the final picture cut. It simply means MORE production dialog is likely to make it into movie theatres, with LESS reliance on ADR for technical reasons, which can only be a good thing for our craft, the theatre-going audience, and the protection of the actors' original performances.

Emmys

Creative Arts Emmy Sound Mixing Winners 2020

OUTSTANDING SOUND MIXING FOR A VARIETY SERIES OR SPECIAL

The Oscars
ABC • The Academy of Motion Picture Arts and Sciences

Paul Sandweiss, Production Mixer
Production Sound Team
Audio Maintenance: Jeff Peterson, Alex Guessard
Monitor Assist: Phil Valdivia
Lead A2: Steve Anderson
A2’s: Bruce Arledge, Debbie Fecteau, Eddie McKarge,
Larry Reed, Craig Rovello, Ric Teller
 
Orchestra Setup: Dan Vicari
Pop-Up Mic Tech: David Mounts
PLs: Keith Hall, Stephen T. Anderson, Juan Gallardo
Tommy Vicari, Orchestra Music Mixer
Biff Dawes, Music Mixer
Pablo Munguia, Pro Tools Mixer
Kristian Pedregon, Post Audio
Patrick Baltzell, House PA Mixer
Michael Parker, Monitor Mixer
Christian Schrader, Supplemental Audio
John Perez, VO Mixer
Marc Repp, Music Mix Engineer
Thomas Pesa, Orchestra Monitor Mixer

OUTSTANDING SOUND MIXING FOR A NONFICTION OR
REALITY PROGRAM (SINGLE OR MULTI-CAMERA)

This image released by Neon/CNN Films shows a scene from the film “Apollo 11.” (Neon/CNN Films via AP)

Apollo 11
CNN • CNN Films, Statement Pictures, Neon

Eric Milano, Re-Recording Mixer

OUTSTANDING SOUND MIXING FOR A LIMITED SERIES OR MOVIE

Watchmen • This Extraordinary Being
HBO • HBO Entertainment in association with White Rabbit,
Paramount Television, Warner Bros. Television & DC Comics

Douglas Axtell, Production Mixer
Production Sound Team: Chris Isaac, Jesse Parker,
Steven Willer, Patrick Anderson, Colt Logan, Josh Tamburo
Joe DeAngelis, Re-Recording Mixer
Chris Carpenter, Re-Recording Mixer

OUTSTANDING SOUND MIXING FOR A COMEDY OR
DRAMA SERIES (HALF-HOUR) AND ANIMATION

The Mandalorian • Chapter 2: The Child
Disney+ • Lucasfilm Ltd.

Shawn Holden, Production Mixer
Production Sound Team: Ben Wienert, Veronica Kahn, Jamie Gambell, John Evens, Daniel Quintana, Phil Jackson
Bonnie Wild, Re-Recording Mixer
Chris Fogel, Scoring Mixer

OUTSTANDING SOUND MIXING FOR A COMEDY OR
DRAMA SERIES (ONE HOUR)

The Marvelous Mrs. Maisel • A Jewish Girl Walks Into the Apollo
Prime Video • Amazon Studios

Mathew Price CAS, Production Sound Mixer
Production Sound Team: Carmine Picarello, Spyros Poulos
Ron Bochar CAS, Re-Recording Mixer
George A. Lara CAS, Foley Mixer
David Boulton, ADR Mixer


*Names in bold are Local 695 members

Back to School Season

Back to School Season:
How to Prepare for the Return to Work in Hollywood

by James Delhauer

Our industry has changed. A disease that would have felt right at home in a dystopian sci-fi flick brought the world grinding to a halt, and Hollywood stopped right alongside everything else. The soundstages, sports arenas, and production sets where we make our art have been quietly empty for months. Debate continues to revolve around the new safety etiquette and protocols to be implemented on set, and while strict regulations have yet to be codified, it has become apparent that all of our jobs will have changed by the time we return to them. For 695 Video Engineers—whose responsibilities can include media playback, on-set chroma keying, off-camera recording, copying files from camera media to external storage devices, backup and redundancy creation, transcoding with or without previously created LUT's, quality control, and syncing and recording copies for the purpose of dailies creation—our role is going to become critical. With what time remains before a full-scale reopening, industry workers should take advantage of current learning opportunities and prepare themselves to meet these new challenges.
The first and most important priority going forward is safety. Advisories from the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO) regarding sanitation, wearing masks, and social distancing are still in effect and should be adhered to strictly. Beyond that, it is imperative that workers familiarize themselves with the industry white papers and documentation being compiled by the IA, DGA, SAG-AFTRA, the AMPTP, the Teamsters, and the studios. These protocols are for the protection of everyone and must be followed consistently and correctly if they are to be effective. To do this, everyone on set must have a thorough understanding of what these new protocols entail. Moreover, it would be wise to look up local city, county, and state health guidelines for any production on which you are hired, as different regions present different degrees of risk.
 
On an equally important note, the coronavirus has either created or exacerbated mental health issues across the world. After months out of work, civil unrest, and seemingly unending uncertainty, emotional burnout is a growing problem. As we strive to navigate this brave new world, it is crucial that personal well-being and care be taken into account. Depression and anxiety, which have become pandemics in their own right, impair executive function and will make returning to work more difficult for many. Fortunately, many mental health resources are available, including services offered through the MPI Health Plan, the Motion Picture and Television Fund, Optum Health Services, and many private health insurance plans. Optum, in particular, is the mental health service provided through the MPI and should be taken advantage of by those to whom it is available. If you do not qualify, services such as BetterHelp and TalkSpace work with individuals to match clients with a licensed counselor at a price they can afford. There are also public and private agencies available to help those in need of low- or no-cost mental health services, which can be found by searching for “Federally Qualified Health Centers” within a local community. For daily guided meditations and assisted relaxation, the app Headspace is currently available for free to those who have lost their jobs as a result of the COVID-19 outbreak.

On the set, the most notable change that we are likely to encounter is decentralization. The days of congregating behind video village after a quick stop at the buffet-style craft services table are over. Social distancing guidelines will require the use of cloud-based and streaming services to communicate information and content quickly between relevant parties on set. While it may sound rudimentary, Skype and Zoom are going to be a part of our lives for the foreseeable future. Learning one or both of these programs now, while things are slow, should be a top priority. Free platforms such as OBS (Open Broadcaster Software) can be used to simultaneously record and stream camera inputs in compressed formats, so that content can be sent directly to the required parties' personal devices in real time and also uploaded to the cloud for future use. This is a practice already commonly used in the creation of gaming content, giving film and television production an ample supply of examples to draw from. In2Core's QTake—a commonly used video assist platform—can be configured to stream media to intended recipients over an end-to-end encrypted cloud service while allowing clients to view metadata, enter annotations, and communicate with Video Assist Operators in real time. Services such as MediaSilo and Frame.io are used to share dailies among the necessary individuals and allow for metadata tagging, notations, and near real-time feedback.
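To make the record-and-stream idea more concrete, here is a minimal, hypothetical sketch of the same concept using the free command-line tool ffmpeg rather than OBS itself. The input source, bitrates, and RTMP address are placeholder assumptions, not a recommendation for any particular production or platform.

```python
# Hypothetical sketch: write a local archive file and push a compressed stream
# to a review endpoint at the same time, using one ffmpeg process with two outputs.
# The input, bitrates, and stream URL are placeholders.
import subprocess
import sys

def record_and_stream(input_source: str, archive_path: str, rtmp_url: str) -> None:
    """Run ffmpeg with two outputs: a local recording and an RTMP stream."""
    cmd = [
        "ffmpeg",
        "-i", input_source,                      # capture device, file, or incoming stream
        # Output 1: higher bitrate local archive (assumes one video + one audio stream)
        "-map", "0:v:0", "-map", "0:a:0",
        "-c:v", "libx264", "-preset", "veryfast", "-b:v", "8M",
        "-c:a", "aac", "-b:a", "256k",
        archive_path,
        # Output 2: lower bitrate stream for remote viewers
        "-map", "0:v:0", "-map", "0:a:0",
        "-c:v", "libx264", "-preset", "veryfast", "-b:v", "4M",
        "-c:a", "aac", "-b:a", "128k",
        "-f", "flv", rtmp_url,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Example: python record_and_stream.py camera_feed.mov take_012.mp4 rtmp://example.com/live/streamkey
    record_and_stream(sys.argv[1], sys.argv[2], sys.argv[3])
```

The same two-output pattern can be extended to additional destinations, such as a second stream for a remote producer or an extra proxy recording.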

Much of this will require the integration of computer networking in ways it wasn't previously utilized. Closed network access can allow productions to collect and distribute digital assets as needed in a manner that secures files from unwanted access. More digital files will require more network-attached storage devices, such as the rNAS from Pronology—a solid-state-based system developed by Local 695 member Jon Aroesty. Server-based recording in conjunction with file-acceleration services such as Signiant and FileCatalyst remains the fastest way to deliver content from set to post-production. As such, a baseline familiarity with storage, network switching, routing, and IP configuration could prove invaluable.
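As one illustration of the kind of storage familiarity described above, the hypothetical Python sketch below performs a checksum-verified offload from a camera card to a mounted network share. The paths are placeholders, and MD5 is used here only because it ships with Python's standard library; facilities differ on which checksum format they require for deliverables.

```python
# Hypothetical sketch of a checksum-verified offload from camera media to a
# mounted network share. Paths are placeholders; swap in whatever hash your
# facility standardizes on.
import hashlib
import shutil
from pathlib import Path

def file_hash(path: Path, chunk_size: int = 8 * 1024 * 1024) -> str:
    """Hash a file in chunks so large camera files never load fully into memory."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verified_offload(card_root: Path, backup_root: Path) -> None:
    """Copy every file from the card to the share and confirm each copy matches."""
    for src in card_root.rglob("*"):
        if not src.is_file():
            continue
        dst = backup_root / src.relative_to(card_root)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)                    # copy data plus timestamps
        if file_hash(src) != file_hash(dst):      # re-read both sides to verify
            raise RuntimeError(f"Checksum mismatch on {src}")
        print(f"verified {src} -> {dst}")

if __name__ == "__main__":
    # Placeholder mount points for the camera card and the network share
    verified_offload(Path("/Volumes/CAMERA_CARD"), Path("/Volumes/NAS/Day_012"))
```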

As many Video Engineers serve as an intermediary between production and post-production, it is also necessary to possess a fundamental understanding of post-production workflows. Remote editing work may continue for some time for our brothers and sisters in Local 700, meaning that there is no room for error when it comes to the files we provide. Nonlinear editing applications such as Avid Media Composer, Adobe Premiere Pro, and DaVinci Resolve form the core of their workflows, and so they must be a part of ours. Each application relies upon or excels with specific video formats or codecs, meaning that engineers will need to be familiar with the most prevalent ones, which include the Apple ProRes suite, the Avid DNx family, REDCODE .r3d files, ARRIRAW, and the various H.264 formats. More recent codecs, such as ProRes RAW, Blackmagic RAW, and HEVC (H.265), have not seen wide adoption on set as of yet but are projected to become more prevalent as product support for them continues to grow. Many of these formats require high-end workstations to process efficiently (or at all), so a basic understanding of computer hardware may prove advantageous.
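On the transcoding side, the hypothetical sketch below shells out to ffmpeg to create a 1080p ProRes Proxy with a viewing LUT baked in, the kind of ready-to-edit file picture editorial often asks for. The clip name, LUT file, and codec settings are placeholder assumptions rather than a prescribed dailies pipeline.

```python
# Hypothetical sketch: generate an editorial proxy with a 3D viewing LUT applied,
# by shelling out to ffmpeg. Source clip, .cube LUT, and codec choices are placeholders.
import subprocess
from pathlib import Path

def make_proxy(source: Path, lut: Path, out_dir: Path) -> Path:
    """Transcode one camera original to a 1080p ProRes Proxy with a LUT baked in."""
    out_dir.mkdir(parents=True, exist_ok=True)
    proxy = out_dir / (source.stem + "_proxy.mov")
    cmd = [
        "ffmpeg", "-i", str(source),
        # Apply the 3D LUT, then scale to 1080 lines while preserving aspect ratio
        "-vf", f"lut3d={lut},scale=-2:1080",
        "-c:v", "prores_ks", "-profile:v", "0",   # profile 0 = ProRes Proxy
        "-c:a", "pcm_s16le",                      # uncompressed audio for editorial
        str(proxy),
    ]
    subprocess.run(cmd, check=True)
    return proxy

if __name__ == "__main__":
    # Placeholder clip and LUT names for illustration only
    make_proxy(Path("A001C003_200901.mov"), Path("show_lut.cube"), Path("proxies"))
```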

There is a wealth of information available online for all of these products and services. White papers, product manuals, workflow guides, and technical support information can be downloaded from most manufacturer websites, and many manufacturers offer certification programs to familiarize users with the ins and outs of their products. LinkedIn Learning (formerly Lynda.com) is an educational platform where users can take classes across a staggering variety of subjects, and it is available for free to all IATSE members at https://www.iatsetrainingtrust.org/lil. Experts across just about every subject imaginable have flooded no-cost platforms like YouTube with videos, overviews, courses, tutorials, and discussions that allow a layman to become an expert in due time. With the majority of the nation's higher education institutions opting not to reopen their doors in the fall, course loads have migrated online, and low-cost community college courses can be taken more conveniently than before the plague. As a bonus, workers enrolled in accredited online courses may be eligible for student discounts on computers and software, potentially nullifying the cost of the class entirely. Many four-year universities offer free online continuing education opportunities to their alumni. In light of the COVID-19 outbreak, the Local 695 Board has allocated funds for continuing education programs for its members. Members who are interested should view the “Education and Training” page of the Local 695 website and keep an eye on the 695 Announcement emails for news of upcoming training. Requests for new training content can be submitted to edu@local695.com.

When all else fails, Google is your friend.
 
But more than just education and learning, we need the two things that Hollywood has always thrived on most: diversity and creativity. In this time of unprecedented change, there are no experts in what the sets of tomorrow will look like. No one person has all of the answers. Our membership is one of diverse backgrounds and experiences, and the lives we have lived have prepared each of us for today's challenges in different ways. Moreover, Local 695 members remain the best in our fields, and we all have different tools at our disposal. As our responsibilities evolve on set, it is up to us to lead the charge in finding solutions to new problems; to find new and unconventional ways of utilizing the resources at our disposal for the safety and betterment of the entire set. This is the time when standing in solidarity with one another is going to matter most.

Soon, productions will begin again. When they do, it is likely that all of us will be facing a deluge of work as content creators strive to make up for lost time and appease a starving audience. This is a very real light at the end of a very dark tunnel. New infrastructure, technology, and responsibilities may put us in the enviable position of being more in demand than we were before the pandemic in spite of the push to move workers off set. Now it’s up to us to prepare for this coming moment of opportunity so that we may grasp and use it to get our lives back on track and begin to move past this entire dystopian tragedy.

