
IATSE Local 695

Production Sound, Video Engineers & Studio Projectionists


Features

Ric Rambles

by Ric Teller

Ric’s Grammy credentials

When I was a kid, I wanted to be an A2.

Said no one. Ever. Perhaps a cowboy, a fireman, or an astronaut. A teacher, a musician, or an athlete. Back then (the 1950s-’60s), maybe a small percentage wanted to be a TV cameraman. But an A2? No way. Admittedly, I didn’t know this job existed until 1981, when I started doing shows at KTLA. By then, I was approaching thirty years old. A few years later, when I left the station to begin freelancing, I had gained knowledge. Not exactly expertise, but I had some skills. You could say I was making a run at Malcolm Gladwell’s 10,000-hour rule. Now, some 60,385 hours into my post-KTLA career, I am happy to report that I’m still learning. Retention is a bit problematic at times, but I still enjoy the challenges, especially of a big, difficult, live TV show. By the way, if anyone reading this is a Gladwellian scholar, forgive me if I have misused the rule. I have enjoyed some of his books, and many articles in The New Yorker, an esteemed publication where my singular contribution can be found in Caption Contest number 355.

2022 Grammys Patch World

This year, because of some complicated COVID shenanigans, the 64th Annual Grammy Awards show was held in Las Vegas, at the MGM Grand Garden Arena. We traveled to begin the setup just a few hours after wrapping the Oscars. You remember the Oscars, doncha? The MGM has been a familiar location for many award shows, but never the Grammys. There was a bit of a learning curve … ’nuff said.

Personally, I learned a couple of things on the technical side, and one big lesson about show organization. The three-and-a-half-hour live show (it comes with the gift of a three-and-a-half-hour dress rehearsal) consisted of nine awards, a couple of pre-taped segments, and sixteen musical performances divided between the A, B, and Dish (house) stages.

The technical lessons were provided by Denali (production truck) Audio Engineer Hugh Healy and ATK/Clair Global Project Manager/FOH Production Mixer Jeff Peterson. Hugh’s lesson was about a piece of equipment called the Lance Box, used to send audio signals in both directions on a single strand of fiber between the split world near stage and the production truck. It is often used for emergency backup mixes in case a console has issues during a live show. Jeff taught us how to properly set up a DiGiCo rack (the brand of audio console used at the FOH and Monitor positions) to send MADI signals to the Pro Tools operators. The MADI streams include all the vocal mics individually, which the Pro Tools operators can access for tuning purposes. Watching these excellent engineers teach is educational. Both follow Einstein’s rule: if you can’t explain it to a six-year-old or an A2, you don’t understand it yourself. For me, retaining the information is a challenge that, on some days, might require a Vulcan mind meld.

The organizational lesson I learned was about the importance of paying attention to the entire show, and how one segment affects another. I’d been looking at each performance as an individual entity. That works fine for rehearsal, but on show day, when everyone is dressed in black (a side note: the only other place you might see this many people dressed in black is at an art gallery), the show becomes the sum of its individual parts.
I won’t go into the details of what I missed or how it affected the action, but thanks to the stellar work of the A-side and B-side A2s, all seventy-five inputs for the performance in question were patched, line-checked, and ready to play right on time. Of course, six minutes later, that one was pushed off stage and two other bands arrived. Live TV, gotta love it.
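For readers who like seeing the idea on paper, here is a minimal sketch of the concept behind that MADI feed: every vocal mic rides its own channel, so a Pro Tools operator can pull up any single voice for tuning. The channel numbers and mic names below are invented for illustration; this is not an actual DiGiCo configuration.

```python
# Hypothetical channel map in the spirit of the MADI feed described
# above. Each vocal mic gets its own MADI channel, so the Pro Tools
# operators can grab any single voice individually. All names and
# channel numbers here are invented for the example.

MADI_VOCAL_MAP = {
    1: "lead vocal 1",
    2: "lead vocal 2",
    3: "background vocal 1",
    4: "background vocal 2",
}

def channel_for(mic_name, madi_map):
    """Return the MADI channel carrying a given mic."""
    for channel, name in madi_map.items():
        if name == mic_name:
            return channel
    raise KeyError(f"{mic_name!r} is not on the MADI stream")
```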

A recent exchange:
Edward Nelson: “Hey Ric, did you know this show is going to be live?”
Ric: “Yes, I know.”
Edward Nelson: “Are you nervous?”
Ric: “Yes, I am.”
Edward Nelson: “Is this your first time?”
Ric: “No, I’ve been nervous before.”

With Eddie McKarge, the original B-side boy
Damon Andres, Stage A lead band A2

In 2000, the Grammys moved from the Shrine Auditorium to the then-new Staples Center (now Crypto.com Arena) and changed from one performance stage to two. Each Grammys since (except the unusual 2021 version) has featured A and B stages. That first year, Eddie McKarge and I were assigned to the B stage and, with a couple of exceptions, have anchored the B-side ever since.

This year, the A-side led by Damon Andres had the more difficult stage. Of course, they did excellent work.

I do wish I had a better memory of more of the wonderful Grammy performances I’ve seen over the years. If only there were some electronic device to provide a reminder…

My first was the 29th Grammys in 1987. Paul Simon was one of the performers; I was excited. At that moment, I wanted to be an A2. A couple of years later, Linda Ronstadt performed a song from the beautiful album Canciones de Mi Padre. In those years, amazing voices like Ronstadt, Whitney Houston, and Luther Vandross were regular contributors. Luciano Pavarotti offered “Nessun dorma,” accompanied by an orchestra, in 1999, the last Grammy Awards at the Shrine Auditorium. The 45th Grammys at Madison Square Garden was filled with challenges, not the least of which was to set up the New York Philharmonic across both stages (on the floor, not on risers) in the middle of the show, so they could play some Bernstein, then make them disappear just as quickly. In 2005, Melissa Etheridge, bald from chemotherapy, sang an amazing tribute to Janis Joplin with Joss Stone. Pull this one up on YouTube.

Go ahead. I’ll wait.

2012 brought Paul McCartney performing songs from the Abbey Road medley with guest guitarists Dave Grohl, Bruce Springsteen, and Joe Walsh. The 57th Grammys began with AC/DC. The best opening in my tenure. Just a few years ago, Bonnie Raitt gave us the gift of performing “Angel From Montgomery” for John Prine, who would pass away a few weeks after the ceremony. I am so grateful to have been there for each of these special moments and many more.

2022 Grammys B-side boys: Eddie McKarge, Ric Teller, Alex Hoyo, Craig Rovello

The Grammy shows, like all live award and music shows, can be physically tiring. The length of the days and the mileage on cement do take a toll. This year, at the MGM Grand Garden Arena, by show day, “my dogs were barking,” an idiomatic phrase attributed to Tad Dorgan, a journalist and cartoonist whose other slang offerings include “dumbbell” (a stupid person); “for crying out loud” (an exclamation of astonishment); “cat’s meow” and “cat’s pajamas” (as superlatives); “applesauce” (nonsense); “cheaters” (eyeglasses); “skimmer” (a hat); “hard-boiled” (tough and unsentimental); “drugstore cowboys” (loafers or ladies’ men); and “as busy as a one-armed paperhanger” (overworked). Personally, to avoid sore feet, I depend on Merino wool socks, mostly from Smartwool, and hiking shoes from Oboz or Merrell.

It so happens that I do have a friend who grew up to be a cowboy and another who is a retired fireman. Childhood dreams. I learned early in my career to take the path of most resistance. Anyone can do the easy stuff. Take the hard road. Make a place for yourself in some part of this business that makes you happy. Push through a barrier or two. I was nearly thirty years old when I found out that I wanted to be an A2. By then, astronaut was out of the question. To have been present for the iconic performances listed above has been very special. Standing on the Grammy stage and telling Paul McCartney that hearing him play songs from Abbey Road is one of the pleasures of my life is certainly a moment I’ll remember forever. But always foremost in my mind are the truly remarkable audio engineers who, for so many years, have made all the long, challenging days worthwhile. I’m grateful for each of you and the time we have spent together, making a lifetime of memories and lifelong friends.

Near the end of the show, after the last B-side performance had been cleared and the close-down wall was in, I walked out onto the empty B stage, realizing that it could be my last time. Got a lump in my throat. It has been a big part of my life. Quite a few years ago, I was in a conversation about how long an A2 should work on the Grammys. The consensus was that age fifty was probably enough. Coming up on twenty years past that, it might be time to pass the torch, or it might not, but I don’t regret even one minute.

Modern Motion Pictures

The Playback Company That’s Also a Software Developer

by Dave Henri

At Modern Motion Pictures, a boutique playback/graphics company run by three Local 695 Playback Supervisors (Chris Cundey, Matt Brucell, and myself), we were as busy as we’d ever been—supervising video playback on The Morning Show, For All Mankind, Black-ish, How to Get Away with Murder, and several other shows. On top of that, I was preparing to fly to London to look into opening our first office outside of the U.S. At the same time, we were also expanding the capabilities of the software tools we had created for our team in-house.

Video Supervisor Chris Cundey (back, center-right) with cast and crew on the set of HBO’s Silicon Valley

The company had grown significantly since I founded it in 2009 as a way to invoice the occasional graphic I designed while working as a Playback Engineer on set for supervisors like Matt Morrissey and Dan Dobson. I had only been in the union for a few years at that point, having started my career as an assistant to directors such as Rob Cohen and Nancy Meyers. As I gained experience on the set, I was also drawn to creating the content that we played back on so many TVs and computer monitors. The company was an outlet that allowed me to do that without giving up my time on set—time which I really loved. Within a few years, Modern Motion Pictures grew, especially as I began supervising shows of my own. It soon became unfeasible to operate as a one-man show, which is when I turned to my good friend Chris Cundey to join as a full partner.

Chris grew up in a film business family (his father is DP Dean Cundey) and had spent several years running his own video assist company, Graphic Nature, which also created graphics for on-set use and post-production visual effects. When Chris joined Modern Motion Pictures, things really took off. We had complementary skill sets and worked really well together. We would tackle large shows such as the third season of HBO’s The Newsroom together (a massive video show, which was originally supervised by Matt Morrissey before schedule conflicts took him to other projects; Steve Irwin was also instrumental to the show’s success as Lead Engineer), and assist each other as we ran some shows more or less individually. For example, Chris would run Silicon Valley while I would work on several projects with Director Steven Soderbergh, such as Contagion and Magic Mike.
As things continued to progress, we added a full-time staff of coordinators, graphic artists, and coders.

In 2018, when we were asked to supervise video for The Morning Show and For All Mankind for Apple’s new streaming service, we knew that we would need even more help and approached another old friend, Matt Brucell, who had recently won his third Emmy Award for broadcast graphic design while at ESPN. Matt had been the playback department PA on The Hulk when I was a Graphics Coordinator, and he and I had stayed in touch in the years since, even after he moved to broadcast television graphic design. We knew The Morning Show was going to need a lot of realistic broadcast graphics, and that Matt had the proper background to understand the specific needs of playback. Matt joined Local 695 as a Playback Specialist and Modern Motion Pictures as our third partner. The Morning Show, much like The Newsroom, was massively complicated to get off the ground. We leaned heavily on the mad skills of Playback Engineers Justin Edgerly and Justin White, and on close coordination with Sound Mixer Bill Kaplan and Utility (and “Master of Comms”) Tommy Giordano, as well as dozens of other video and sound professionals each season.

Video Supervisor Dave Henri, Director Steven Soderbergh, and Zoë Kravitz on set of New Line Cinema and HBO Max’s thriller Kimi. Photo by Claudette Barius

As our company continued to expand, we began to look for ways to streamline our on-set workflows. For example, many Playback Operators use off-the-shelf tools such as Keynote or ProtoPie for creating and cueing phone graphics during a scene. However, we found that both of these programs had serious limitations, both in their capabilities and in the time it took to program and design individual graphics. A custom solution was needed, and by 2020, Chris Cundey had already spent several years developing our proprietary programs, Magic Phone and Scene Builder. These allowed us to effortlessly recreate common smartphone actions so that they could be easily cued in an on-set environment. With these two applications, we could create simple phone calls and texting gags in just a few minutes. Even more complicated actions, like posting a live photo to Instagram or scrolling through Twitter, became straightforward to build and easy for an actor on set to operate under the guidance of a Video Playback Engineer. In fact, the accuracy of the graphics was noticed by phone manufacturers, and Magic Phone quickly became a preferred program amongst product placement teams to ensure the proper look and feel of the most popular smartphone brands. We made plans to license Scene Builder and Magic Phone to other Playback Engineers and companies, but our commitments to film and television clients kept us too busy to devote the necessary resources beyond sending alpha versions of the programs to a few friends.
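As a rough illustration of the cueing idea (not the actual Magic Phone or Scene Builder code, which is not published), a scripted texting gag can be modeled as a list of cues that an actor or engineer steps through one tap at a time:

```python
from dataclasses import dataclass

@dataclass
class TextCue:
    """One pre-scripted message bubble in a texting gag."""
    sender: str
    body: str
    delay_s: float  # simulated typing delay before the bubble appears

# Invented example conversation; in practice this comes from the script.
conversation = [
    TextCue("Alex", "Running late", 1.5),
    TextCue("Me", "Where are you??", 2.0),
    TextCue("Alex", "5 min away", 1.0),
]

def fire_next(cues, shown):
    """Advance one cue: each tap reveals the next bubble, in order."""
    if len(shown) < len(cues):
        shown.append(cues[len(shown)])
    return shown
```

The point of the model is that the actor never improvises the graphic; every state of the screen is a pre-built cue that can be fired, repeated, or reset between takes.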

And then COVID hit.

Like everyone else, we were stunned by how quickly everything shut down. Between Wednesday, March 11 and Friday, March 13 all of our shows pulled the plug. My trip to London was canceled and no one knew what was coming next. Once we realized things were going to be locked down for more than “just a few weeks,” we pivoted to the very large list of tasks on our internal project board. Adding functionality and flexibility to Magic Phone and Scene Builder was Item #1 on our list.

A few months into lockdown, we got an interesting call from Josh Levy, a Video Assist/Video Playback Engineer with whom we’ve worked many times over the years. He had been contacted by an HBO project called Coastal Elites, which was going to shoot a series of remote sets with actors in their homes. They were looking for a solution that would allow them to stream secure, high-quality video from location to the director and other stakeholders in real time. Josh knew we had been developing remote camera software (which would allow remote control of certain camera functionality in addition to the standard video feed) and wanted to know if it could be modified to work for this use case.

After several meetings with Josh and DP Jim Denault, we created SetLink Live, which securely streamed an extremely high-quality video feed with sound to the production team at 200ms–300ms of latency, or approximately the same lag you’d experience on a cellphone call. Additionally, Josh was able to record the feed on his QTAKE system and play back either the live feed or the recording for the director.
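To put those latency figures in production terms, a quick bit of arithmetic (the 200–300ms numbers are from our experience above; the frame math is just unit conversion, not a SetLink Live specification):

```python
def latency_in_frames(latency_ms, fps):
    """Convert an end-to-end latency into frames at a given frame rate."""
    return latency_ms / 1000 * fps

# At 24 fps, 200-300ms of latency means the remote viewer is roughly
# five to seven frames behind the live action.
for fps in (23.976, 24, 29.97):
    lo, hi = latency_in_frames(200, fps), latency_in_frames(300, fps)
    print(f"{fps} fps: {lo:.1f}-{hi:.1f} frames behind live")
```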

(It is worth noting that QTAKE has since added similar functionality to their software.)

The shoot was extremely successful, and we realized that once production came back, a similar service would be needed. We revised the software, added the ability to stream from cameras, and were ready to meet the demand when production returned in the fall of 2020. As expected, there was a huge need for streaming from the set. SetLink Live went on to be used on multiple shows, such as The Morning Show, Black-ish, and Insecure. Even today, nearly two years later, this application continues to see daily use on several shows, often with two Local 695 Engineers running both the software and the safety cameras. It remains a valuable tool that can be used either standalone or in conjunction with video assist tools like QTAKE. Additionally, it can be used for custom remote video playback feeds (such as scripted video chats), whether from one stage to another or from one side of the world to another.

Streaming video from set has become an integral component of production in the post-COVID world and we’re pleased with the adoption of SetLink Live and the opportunity to expand the skill sets of fellow Local 695 members who are now working as full-time streaming engineers. The program is available for license to Video Assist Engineers, Playback Engineers, and directly to productions (where we can provide 695 Engineers).

Our success in the past two years led to the development of several additional programs to make on-set life easier and more productive in the ever-expanding world of the Video Playback Engineer. Modern Gamma is a color-temperature control program used to handle multiple displays, running on either Intel or Apple Silicon processors. Modern Chroma, our green screen generator, allows the operator to quickly dial in a preferred hue and brightness level, place or remove tracking marks, and even use various effects when the camera is shooting over the monitor toward an actor. It can also cue between various colors and effects, allowing an operator to start with a green image at the top of a camera move, automatically change it to gray once the monitor is no longer blocked, and then change again to another effect once the screen is no longer being photographed. Simple Cues, our QuickTime control program, allows a user to quickly jump between multiple clips and then set separate loop, pause, and advance rules for each one. All three of these programs are available to license monthly as part of a package with Scene Builder and Magic Phone.
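The cueing behavior described for Modern Chroma can be sketched as a simple cue list and runner. This is an illustrative model only; the real program’s internals and names are not shown here:

```python
from dataclasses import dataclass

@dataclass
class ChromaCue:
    """One step in a green screen cue list."""
    label: str      # operator-facing name for the moment in the shot
    color: str      # fill color shown on the monitor
    tracking: bool  # whether tracking marks are overlaid

# Invented cue list mirroring the camera move described above.
cue_list = [
    ChromaCue("camera move", color="green", tracking=True),
    ChromaCue("monitor blocked", color="gray", tracking=False),
    ChromaCue("out of shot", color="black", tracking=False),
]

class CueRunner:
    """Steps through cues in order, like an operator (or an automated
    trigger) advancing the monitor from one look to the next."""

    def __init__(self, cues):
        self.cues = cues
        self.index = 0

    def current(self):
        return self.cues[self.index]

    def advance(self):
        # Hold on the last cue rather than running off the end.
        if self.index < len(self.cues) - 1:
            self.index += 1
        return self.current()
```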

Additionally, we have also developed Vitals, an easy-to-control heart rate monitor program for hospital and EMT sets. The program can run multiple custom cues, including various heart rates, blood pressure, temperature, etc. It has multiple looks and can run on monitors of nearly any aspect ratio. Vitals is available as part of a standalone package, which can be licensed on a monthly or per-show basis.
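The idea behind a cue-driven vitals display can be sketched like this (the cue names and numbers below are invented for illustration and are not Vitals’ actual data): each cue is a target patient state, and the display jitters readings slightly so the monitor looks alive rather than frozen.

```python
import random

# Invented cue table: each cue is a target patient state.
VITALS_CUES = {
    "resting":  {"hr": 72,  "bp": (118, 76), "temp_f": 98.6},
    "tachy":    {"hr": 148, "bp": (142, 90), "temp_f": 99.1},
    "flatline": {"hr": 0,   "bp": (0, 0),    "temp_f": 95.0},
}

def jitter(value, spread):
    """Wander a reading a little so the readout doesn't look frozen."""
    return value + random.uniform(-spread, spread)

def render_frame(cue_name):
    """Produce one frame of display values for the current cue."""
    cue = VITALS_CUES[cue_name]
    hr = 0 if cue["hr"] == 0 else round(jitter(cue["hr"], 2))
    return {"hr": hr, "bp": cue["bp"], "temp_f": cue["temp_f"]}
```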

We’re excited to be able to offer the tools we’ve been using for years to our fellow supervisors and engineers. All of our product licenses are priced to be affordable to operators so as to be sustainable within the industry. As for that canceled trip to London, well, we ended up changing plans slightly and have just opened a new office in Berlin to cover European and UK productions. Europe tends to shoot most screens as green screen on set. We’re hoping to introduce productions there to a workflow where skilled engineers can provide a better experience on set for a better price than constantly relying on a burn-in. Two years after the initial COVID shutdown, it’s fascinating to see how our industry evolves and adapts. And it’s exciting for us at Modern Motion Pictures to be a small part of that change.

From Ukraine to the Final Frontier:

The Story of Vadym Medvediuk

We all recognize the hum of a helicopter’s blades slicing through the sky. It’s something that most of us are used to, right? Now imagine getting used to an air raid siren. Picture that surreal noise becoming a part of your daily routine to the point that you just stop reacting to it—you stop going to the bomb shelter. Why so reckless? Because “it might just be another false alarm” or “it’ll probably miss home again” or you are “too tired to walk seven flights of stairs a couple of times a day.” These are the things my mother told me after the first week of the war in Ukraine. She was trying to explain something that I, luckily, have never experienced and hopefully never will.

I was born in 1993, right after the USSR collapsed, in the free, independent country of Ukraine. I studied journalism in Poland and, during summertime, I would go back to my hometown to work as a local news correspondent. You see, being able to study abroad and travel through Europe was very much an eye-opening experience. When I’d go back home and see all the unfairness my own people would go through, my heart would ache. I needed to do something. So I started writing blogs, attending demonstrations and protests, and finding like-minded people to unite with, hoping to help our country fight the internal enemy. This happened right before the revolution that took place at the end of 2013. During those years, I was getting more into the film craft and understanding the possibilities of the art form—the way you can interact with the audience, communicate with them. Speaking your mind is as subjective as it is liberating. It’s up to the viewers to agree or to disagree, after all. I didn’t feel that my message ever went far enough in journalism, so I started working in film. In parallel, I started to have a lot of problems with the government of my country. They did not take kindly to all of the protests over as quaint an idea as “free speech.” Pfff, who needs that anyway? Things got so bad that in 2015, I had to leave Ukraine to protect my freedoms, my liberty, and probably even my life. I was forced to seek political asylum in the U.S.

It took about three years of travels across America for me to experience life here and to have some solid integration into the local society. I knew eventually I’d have to end up in LA. All of my goals were there. As I would explain it to my friends, “Yes, you can make films anywhere, but why hit blood vessels if you can target the heart?” And so the pursuit of the big dream began. Targeting the heart of the industry was a bit of a long and difficult process. As is the case for most of us, I started as a PA, working for free, chasing coffee, driving trucks, and taking out the garbage. I worked on a lot of college and passion projects until my first big breakthrough—reality television! I booked my first solid big gig after almost a year of trying to break in—the overnight success—PA on an MTV reality show. I was proud of myself. Though it may not have been exactly what I aspired to, I now had a massive opportunity to network and develop contacts. And so I did.

While I was hustling on small gigs and nonscripted productions, I spent the next year writing and shooting personal passion projects. While shooting in my apartment complex, I met Michael Kish, one of my neighbors who has since become a close friend. We met in the elevator in the middle of the night after we’d both come home from our long workday.

I said, “Hi, long day at work huh?”

He replied, “Are you the guy who’s shooting porn in his apartment?”

For the record, no, I was not. But after a bit of awkward silence, we both chuckled and introduced ourselves. Mike, as luck would have it, would be my insider into the scripted world. He was a background PA for ABC’s Black-ish at that time, and he would always do his best to bring me on board. I will always be grateful to Mike for the opportunities he gave me. I started developing a new circle of colleagues who would help me find film and TV work, and I started gaining more knowledge and understanding of what I wanted to do in the industry.

Then came March of 2020 and the entire world locked down.

Like most people reading this article, I was absolutely terrified by the pandemic. I lost my job. I felt cut off from my friends. I was afraid for my family and I was afraid for myself. And I had nothing but time on my hands. So I started writing and developing projects with my roommates, Dilek, Kyle, Natalie, and Emre. They may not have been filmmakers, but they were passionate artists, and I am thankful for all of them. It took bravery to go out and shoot in those strange days. We, of course, made it as safe as possible. Experience shooting during the pandemic helped me start working on a German reality/challenge show as early as May of 2020. That job led to another at the reboot of Saved By the Bell, which I believe was one of the earliest shows to resume production. And that show brought me to Star Trek: Picard. Boom baby, I was back at it!

I started working on the set of the second season of Star Trek: Picard in the COVID Compliance Department. I was that safety guy for a whole season of the show. But I was overjoyed just to be in that environment. I could watch and learn so much. Obviously, the safety of the set was the priority, but my job was literally just to observe people’s behavior and compliance. So I made a point to pay attention to the specifics of each department while I was at it. That’s how I met Amber Maher, the Video Assist Operator on the show. The possibility of watching the creation of the show through the QTAKE system sounded absolutely marvelous. She got to see and hear every single take and even strung together light edits on demand. It was the coolest thing. So, naturally, I went up to her and said, “Hi, my name is Vadym. Can I ask what you’re doing on set?” That went a long way.

Amber fought tirelessly to get me placed in the Y-16A Training Program with Local 695. I got my first official day at the end of October 2021. Since then, I have worked as a Y-16A Video Assist Trainee for eight hundred hours, and about one hundred more as a Y-7 Video Assist Engineer. Local 695 has been very supportive, keeping in contact with me almost every week, checking on any extra practical training or theoretical material I might need, and just supporting me in any way they can. I mean, you guys are reading this article right now. I’m blessed that the union has offered such an amazing opportunity just so I could share my story with you, and hopefully, that will inspire something good in you. Thank you, Casey and James.

Star Trek ended up being more than just a job, and the Star Trek family has done so much for me. They gave me experience, they gave me friendship, they gave me support, and a place to call a second home for a while. Were it not for Amber’s decision to become my friend and mentor, I may never have made it into Local 695. And were it not for the compassion of my cast and my crew, I may never have been able to send money back home to evacuate my family to Poland following Russia’s invasion of my homeland. Words cannot express my gratitude to all of them, and particularly to Sierra Haworth, our on-set Camera Utility, for organizing the crew’s GoFundMe campaign to save my family.
My family escaped the acts of genocide that have spread across Ukrainian land and killed thousands of innocent people while forcing millions more to flee their homes. My family was lucky to have the support of my Picard fellowship. They could just as easily be among those we bid farewell to in our prayers each night. Under different circumstances, I could already be among the fallen. Any of us could, really.

That is why I am coordinating with my friends from the Filmanthropy nonprofit organization to gather funds in support of our friends and loved ones still struggling in Ukraine. Our efforts ensure that any donations will go directly to those in need. If you have the means to donate, consider going to https://givebutter.com/SupportUkraine/vadymmedvediuk and making a contribution. With the help of my Local 695 kin, I hope we’ll be able to gather more support and do the right thing, the most humane thing, in order to help the ones in need.

Thank you for reading the glimpses of my life story and all the best to you, reader.

The Queen of Stream

by Amber Maher

In December of 2020, I got the call.

Video Assist Streaming Engineer Amber Maher

I had just finished working on King Richard as the Video Assist Streaming Engineer. Jeb Johenning from Ocean Video called me up and told me he had recommended me for another streaming video assist position. He had described me to the team he was working with as the go-to wizard for anything to do with video assist and streaming media. He and I had developed a good rapport while I was working with Dempsey Tillman, Jeff Snyder, and others over at Man in the Box Video Assist. After working on King Richard, WandaVision, and Space Jam: A New Legacy, I had Jeb on speed dial. He was our go-to QTAKE rep for the West Coast. QTAKE is the gold-standard software platform used by Video Assist Operators in Hollywood to record, play back, composite effects, stream, and live-view everything that we film.
I’d often call him up to discuss QTAKE jobs. He had helped train most of the video assist people in the union on both coasts, including my mentor, Lee Hopp of LH Video in New York City. I was profoundly flattered by his recommendation and his appreciation of my technical know-how, but I had no idea how that call was about to change my entire world. We discussed my workflows on previous projects like King Richard and Space Jam: A New Legacy, and I told him a little bit about what I’d been dealing with.

COVID-19 changed video assist work on a profound level. No longer could executives and Producers huddle behind a single monitor at video village. On King Richard, I had three Apple TVs that needed to be set up and broken down every day in the trailers. There were about five Executive Producers on iPads who could stream footage from wherever they were working in the world. We had about thirty-five iPads with a local stream for the crew, which I had to charge, disinfect, and maintain every day. Everyone was streaming the live and playback feed from the main QTAKE Pro computer system. Getting all these systems to work seamlessly required a lot of trial and error, but we pulled it off and it was a big hit. However, if one screen ever went black, I’d get a text from almost every Executive Producer asking why they couldn’t see picture. From that experience, and from working with the beta test version of QTAKE Pro Stream on Space Jam: A New Legacy, I had added some new tricks to my bag to get it all to work.

Jeb was impressed with what he heard and began to tell me a little bit more about the show he wanted to put me on. I was told I’d have the basics like my cart, cables, gear, etc., and that I’d be moving around different stages and locations. Everything would need to be streamed live. Production required a very complex workflow. The scope of it was almost overwhelming. He said that there would be between one hundred and one hundred fifty clients on the stream, including the executive team and those working remotely in Canada. We would be on six to eight different stages in addition to doing on-location work in downtown LA. The art department, production department, and all of the regular on-set crew would be utilizing the stream and would need service. COVID Zones A, B, and C would all need to be set up for streaming. New Directors would be coming in every other block and would need to be set up, along with their assistants. After doing the math, I realized this would be about ten times more clients than we had on King Richard or any other film I had worked on. And just one video assist person? It was massive! What show was going to need so many people? I had a lot of ideas of how this all could work. Each scenario was a tantalizing puzzle to solve. So I asked Jeb, “Well … when do I start?”

It wasn’t until later that I learned that the show in question was Star Trek: Picard and that we would be shooting the show’s second and third seasons back-to-back. This was a dream come true. I grew up watching Star Trek with my dad. Having very keen knowledge of the show, characters, and episodes, I realized that this was monumental and could showcase all the new technologies that Video Assist Engineers could utilize.

Star Trek: Picard had very strict COVID protocols, as most of the cast was over the age of fifty. The Producers were thorough and safety was their prime concern on the set. We had a very limited crew allowed on set, as well as in Zone A. Everything that production had gotten familiar with over the past fifty years was thrown out the window. It was an odd experience, filming a science fiction story whilst seemingly living in one as well. COVID made its presence known on the set. One day, a co-worker would be gone and someone would inform us that they were working from home in quarantine. At the height of January’s COVID surge, we never knew who would show up to work each day.

Therefore, everyone had to be able to utilize the QTAKE. It became a vital tool for production. In fact, the production became so reliant on it that our Showrunner personally thanked me and told me they couldn’t have done it without my workflow. If video assist went down, we all went down. Jeb and I spoke about how this show, and this role, could really set a precedent for video assist. This was the moment that I could get every Producer, Director, crew member, and the staff on our show to see and experience video assist and the stream like never before. Video assist would become the eyes and ears of production. There’s no other way to describe it. So we started from scratch and created it on the Star Trek: Picard set.

It was exciting to figure out all the possibilities. I got to work on the design and infrastructure of it. I ended up making a lot of flow charts and maps. On a show this large, it would have been impossible to wing it. I needed a complete plan in place. Maybe two or three of them. Thankfully, I was working with Todd Marks of Images on Screen, who was the Video Department Head on Picard. A veteran Video Playback Engineer, he spoke to production about getting me the essential prep time needed to create the ideal streaming setup for this show. He and his team of Video Engineers were able to understand the more technical aspects of our craft and vouch for it if something was needed. I was not alone and knew I had the backup of a full team of tech wizards at the helm of the show. Having such a large amount of Local 695 representation on set was wonderful. If I got a hard no from production and really needed the help, I was no longer a one-person band raising my hand.

For studio work, I decided to create a local QTAKE network for streaming by utilizing the stage’s IT department. I needed a bandwidth of 50 Mbps up and down on every stage. Then I created a secondary network for all the production offices, the art department, and to communicate with the other stages. Each remote stage required its own VPN (virtual private network) so that we could all be on one network that pointed to my system. The Ruckus (my Wi-Fi access point) beams a signal in a radius of about 100-150 feet, so I needed to set up several of them to boost the signal wherever we were shooting. By setting it up this way, everyone on the QTAKE network could stream locally while everyone working remotely could stream from the cloud. This allowed the studio’s network to carry the bulk of the Wi-Fi traffic without having to rely on my individual Wi-Fi access point to supply the stream to the entire team. So, in essence, a Producer or crew member could walk from one stage to another, into the office, out into the art department, through the production offices, and still be connected to the stream, all on the QTAKE network I set up. This let more of the crew work farther from set without relying on their cellphone signals (there was barely any service on our sets) to stream.
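The appeal of this split design is that the stage LAN carries all the on-set viewers, while the 50 Mbps WAN link only ever has to carry one upstream feed to the cloud, which then fans the stream out to the remote clients. A rough capacity sketch of that idea (the per-viewer and per-feed bitrates below are my own illustrative assumptions, not QTAKE figures):

```python
# Back-of-envelope capacity check for the split local/cloud streaming
# design described above. All bitrates are illustrative assumptions,
# not QTAKE specifications.

LOCAL_STREAM_MBPS = 4.0    # assumed per-viewer bitrate on the stage LAN
CLOUD_UPLINK_MBPS = 50.0   # the up/down bandwidth requested per stage
CLOUD_FEED_MBPS = 8.0      # assumed single upstream feed to the cloud relay

def local_viewers(lan_budget_mbps: float) -> int:
    """How many on-set viewers a given LAN budget can carry."""
    return int(lan_budget_mbps // LOCAL_STREAM_MBPS)

# A gigabit stage LAN carries the on-set iPads comfortably...
print(local_viewers(1000.0))   # 250 viewers on a 1 Gbps LAN

# ...while the 50 Mbps WAN link only carries one upstream feed;
# the cloud relay fans it out to the 100-150 remote clients.
print(CLOUD_UPLINK_MBPS >= CLOUD_FEED_MBPS)  # True
```

The point of the arithmetic: remote client count never touches the stage uplink, which is why the design scales from a handful of viewers to well over a hundred without re-engineering the network.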

Local 695 Representatives on the set of Star Trek: Picard

The minute I tapped into the Ethernet plug in the wall and my system was up and running, we could all be streaming. Once we were, I found it worked better than anyone could have imagined. At one point, we had two teams and six cameras on the stream. The team could open up an iPad and see everything being shot at one time. It was actually quite fun to see the look on a Director or Producer’s face when they realized what they had available at their fingertips. They were thrilled that there was one place they could go to see everything happening at once.

Anecdotally, the team got so used to seeing things in real time that whenever cameras were turned off and the signal broke, I’d be inundated with texts asking what was wrong. On one occasion, a battery swap resulted in more than a hundred texts from staff and crew who thought they’d lost the stream. To solve this, I created a handy Star Trek-inspired graphic that read, “Please Standby for Assimilation” so that Producers would know they weren’t missing any of the action. This was appreciated, especially by my fellow Trekkies on the team.

On-location shooting proved much more challenging, especially on the day I found out that we would be set up on an insert car. The Director, DP, DIT, camera crew, and I were to follow the action, and I had to stream it back to the production team while going 45 mph. There had been some debate as to whether or not I’d be joining them, but a word from the Executive Producers settled the matter. They’d grown so accustomed to being able to watch every frame in real time that they weren’t going to let a little thing like high-speed vehicle logistics deprive them of that luxury. I’ll never forget the moment when the DP pulled out his phone, read a text from our Showrunner, and asked, “How can they even see what we’re shooting?!” Then he looked at me.

I was fortunate in being able to take advantage of Local 695’s Y-16A Trainee Program. Early on in production, a member of the COVID compliance team approached me and said, “Hi, my name is Vadym. Can I ask what you’re doing on set?” I soon learned that Vadym Medvediuk was a political asylum recipient from Ukraine and that he was interested in finding his craft on the set. With a little work, he joined the Local’s program and became my trainee on the set. He was fascinated by the video assist system and was eager to help me in whatever way he could, so I decided to train him. Like my previous trainee (Antonio Rodriguez), Vadym was a hard worker and did his best to learn all the ins and outs of the video assist role. In many ways, this was the perfect show, as it threw him into the deep end right out of the gate. By the end of the show’s run, I could entrust him to set up one stage while I worked on another, and even to run the second unit video assist while I pulled his signal and fed it to the Producers alongside my own. This resulted in a seamless integration where the production team could simultaneously watch both units work on a single screen. Now, after completing his stint on Star Trek, Vadym is in the process of becoming a Y-7 Engineer and has already begun doing video assist work of his own.

I must say that in my dealings on this show, the support I received from the Producers, the crew, and everyone involved was like nothing I had ever encountered working in the business before. The Star Trek family really was just that: a family. It was wonderful, and this was one of the best and most professional crews in Hollywood. From the Executive Producers, Producers, Directors, and DP’s to the legendary cast, it was such an honor and privilege to give my all to these two seasons of this remarkable show. Creatively, it was a treasure trove to walk into the Star Trek universe and know that, through my job, I was able to make an impact on one of the most beloved shows this world has seen. The cast was stellar and there were so many laughs. Forever friendships were created, tears were shed, and love could be found all around. Thank you to everyone involved for trusting me and choosing me to do this job. We really did become family on this set during a very unique time in history. We created some amazing storytelling, filmmaking at its best, and I can’t wait to carry that future on to other projects too.

Sound Mixer: A Family Business

by Anna Wilborn

Career Achievement Award honoree Charles Wilborn CAS with daughter at the CAS Awards 2003

The scent of the top drawer on my dad’s sound cart has never left me. An odd combination of black foam rubber padding, electronics, and Juicy Fruit gum. It was the gum I was hunting for every time I visited my dad on set. I can still feel the satisfying way the two side locks unlatched allowing the drawer to slide out on its own to reveal my prize. His crew christened the giant road case cart “The S.S. Wilborn” and printed it across the lid in big cutout stickers. He thought it was hilarious and left it. I loved watching their easy camaraderie.

My dad, Charles M. Wilborn, begged me not to go into production. The hours, locations, and grind were no way to live, he insisted. Oddly enough, on this very rare occasion, I listened. When visiting my dad on location, he’d always arrange a day for me with the Editors who, back then, went with the shooting crew to prep and screen dailies, as well as begin cutting the film. Everything was still on film and 35mm magnetic tape. I’d watch Billy Anderson ACE work the flatbed and puzzle together a scene for Peter Weir’s Dead Poets Society. It blew my mind.

I started my film career right out of USC in post production. I was thrilled to get an unpaid internship on Cruel Intentions, with John Morris, Sound Supervisor at Sony Studios. Pro Tools was in its infancy and besides the manual, there weren’t any books about it yet, nor an internet to learn from. On-the-job training was the only way. I was using version 3.4 to load sound effects and dialog off of DAT tapes in real time onto clunky SCSI hard drives that only held a few gigs. I’d then deliver the drives to the Editors who cut the clips into the soundtrack. Even with this advancement, we still had to lay back their Pro Tools sessions to 35mm mag for the dub stage. I’ll never forget the day Sony threw all their Moviolas in a dumpster out by Cannery Row. The Sound Editors were incensed. Even at 22 years old, I knew this was a tragedy. I got a drive-on and heaved one into the trunk of my big old 300SD. I still have it in my garage, the giant letters “MGM” in white vinyl across its side. How can you throw away that kind of history?

Every day, I’d bring my highlighters and the only books I could find about editing and the new Avid system. John clocked them one day. “Are these your books?” he pointed. “Yes…” I said, not knowing if I was in trouble for reading at work. He nodded and walked away. Next thing I knew, they were helping me get my days to join the Editors Guild. I was over the moon. From there, I got work in assistant sound and film editing, and eventually became a Music Editor. But years of sitting in a room by myself in front of a computer were starting to take a toll. It was lonely work and it just didn’t fit my personality. I needed a big family to work with every day. I needed the set.

Anna on the backlot of Universal Studios,
Chivalry, BBC. Photo: Jake Simon, Sound Utility
Anna in scrubs for the show Scrubs
Joe Foglia, Anna, and Boom Operator Kevin Santy
on the set of Castle

In 1998, Local 695 ceded the jurisdiction of Re-recording Mixers to Local 700 and I was able to switch my card over to Local 695. With one fateful morning phone call from Joe Foglia needing a Utility for his show Scrubs, my career was finally on track. I absolutely loved being a Sound Utility. I now had fifty people to say hi to every morning. I loved getting to know the actors, forging life-long friendships with costumers and camera assistants, ensuring our department was running smoothly, and learning all the new technology that had come out since my dad’s days. We were still on DAT, but DVD-RAM was in the wings. Wiring, cabling, running around, snacks, whatever we needed. It was so much fun.

Here’s a fun fact: I never wanted to be a Sound Mixer. Anytime someone would ask me, my eyes would glaze over. The thought of buying all that gear (twice over) and, worse, keeping track of it all, from tiny BNC barrel connectors to expensive, delicate mixing panels, was like herding a million little kittens. It made my stomach hurt.

No thank you.

Watching mixers wrangle that headache was a cure-all for me. Plus, as a Utility, I basically had no homework, no off-the-clock stress. If the gear broke, I’d offer my sincerest condolences, send it out on a rush order, and run right back with the spare from the truck so we could keep rolling. By the time I hit thirty-eight thousand hours, I’d definitely reached my cruising altitude with a quick fix for any sudden fiasco. Nothing but a sports bra? I can wire that. Director threw their Comtek? Hello Keith, it’s me again.

In early 2020, Joe Foglia picked up a pilot and he, Kevin Santy, and I were all set to have our usual great time for ten days. But on March 13, 2020, something truly terrifying happened. Transpo didn’t show up to pick up the gear. If that doesn’t send a chill down your spine, you’ve never worked in this business. If I ever write a horror movie, that’s how it’s going to start. Life had just been flipped upside down and inside out. COVID had arrived.

I think it’s safe to say we all re-prioritized our lives during the lockdown. Suddenly, we as hyperactive, workaholic film people, HAD to stay home, sleep in, and (hopefully) enjoy time with wherever and whomever we were stuck. We called people we hadn’t spoken to in far too long, spent time in group Zooms with old and new friends, drank too much, or didn’t drink enough.
We rediscovered what was really important. LIVING. I frickin’ loved it.

Script Supervisor Cori Glazer, Anna Wilborn & Misty Conn, Boom Operator, on a
California Almonds commercial.

When things started to loosen up, Misty Conn, Yvette Marxer, and I started meeting for drinks at whatever watering hole would have us. We’d talk about everything, but conversations often revolved around the business. “Anna, when are you going to MIX?!?” Yvette suddenly hollered in her South African accent, slamming her hand down on the table for emphasis. There it was again. That question. Only this time my mind didn’t flood with the millions of tiny kittens I’d have to buy. Misty chimed in. “Dude. You have to do it.” This wasn’t just her third Diet Coke talking. Their sincerity, encouragement, and belief in me was completely overwhelming. From then on it was like I didn’t have a choice in the matter. The kittens were out of the bag and this time there was no herding them back in. Not only did I want the challenge, I needed the challenge. I’ll never forget that moment and will be forever grateful/totally blame those two.

I started workshopping the idea with others whose opinion I really valued. My husband Ted Mayer, Kevin Santy, Tom Williams, Glen Trew, Tom Caton, Chantilly Hensley, Hanna Collins, Forrest Brakeman, Scott Solan, Gunnar Walter, Carrie Sheldon, Michael Reilly, and Scott Farr— all were so supportive. I remember Kevin thoughtfully saying, “I really like this for you.”

Then I had to rip the band-aid off with Joe Foglia; sixteen years of insane locations and hilariously inappropriate jokes. “I’m going to buy a package,” I declared, feeling like I was asking for an unwanted divorce. “Good for you!” he replied. It was definitely the end of an era, but the future was way more exciting than I ever imagined it would be. I honestly didn’t even know how much I wanted it until it finally started to happen. So many hours, so much history, so many sets. I was totally ready.

I excitedly called my parents up in Santa Barbara. “FINALLY!!!!” My mom, also a former Local 695 member, screamed out. “I’m popping the champagne!” She handed the phone to my dad. I honestly didn’t know they cared one way or the other, but turns out they’d both been secretly hoping I’d do this for years. “I’m so happy for you!” my dad exclaimed as the Taittingers burst open in the background. Then he launched into his usual “I should have gotten you into mixing sooner, I could have helped you…” spiel. Nope. I like that I did it my way. My years in post and as a Utility/Boom Op are invaluable to my job and will only make me a better mixer and department head. I know what to fight for and almost more importantly, what not to fight for. I’ve earned my position through decades of training. As we all know, to truly do this job right, there is no shortcut.

Choosing my gear was an absolute no-brainer and my timing could not have been better. A few months later, the shelves at our local sound houses would have been bare. When I’m saying the stars aligned, I mean it was borderline eerie. I’d been curating the perfect package in my mind for years by osmosis, so the money quickly flew out of the bank account. I gave myself six months, but in less than three, I was already off to mix my first feature in Northern Idaho, a horror film called The Outpost, written and directed by Joe LoTruglio from Brooklyn Nine-Nine (Thanks, Kevin Compayre!).

I chose the Sound Devices 888 for my recorder with a CL-16 control surface, fed via Dante from two Lectrosonics DSQD’s, which I absolutely adore, thus leaving me all the free XLR and TA3 inputs I need for playback and extra receivers. A Sound Devices MixPre-6II provides a true standalone backup, as I feed it directly from the DSQD’s XLR outputs. I have a mix of brand-new DBSM transmitters, along with SMWB’s and workhorse SMv’s for when I need a little 250 mW. Bulletproof HMa’s send me the Schoeps CMIT’s and CMC4U’s with MK-41’s, the latter being my dad’s. They still sound fantastic and once sucked up the voices of De Niro and Redford, so it’s fun to have a little history in the kit.

I selected DPA 6060’s for my lavs since I was starting from scratch. “Buy once, cry once,” as they say. They’re just heavenly and have no equal in my opinion. I also have some Sanken COS-11’s and Countryman B-6’s, because they make for a well-rounded lav kit for any scenario. I have Denecke JB-1 timecode boxes that are so small and awesome, and I just love the smiles on the camera crew’s faces when they see I have them. They go along with my dad’s Denecke slates, which, after eighteen years in a dark case, still shockingly powered up with the same batteries that had been left in them. A quick trip to Santa Clarita and they were upgraded to our new 23.98 standard. I went with tried-and-true Comtek PR-216’s and Lectro IFB’s for clients and crew. Power is distributed by a PCS Power Star Life that has definitely saved my life on a couple of occasions.

Anna in Albuquerque, NM, filling in for Joe Foglia on Amazon’s Chambers.
Mixing Daisy Jones and the Six for Amazon.
Anna in the process trailer with Boom Operator Scott Solan and Ted Mayer, Best Boy Grip and Anna’s husband.
Photo: Charee Savedra

It’s all packed into an awesome 80/20 Blackbird cart by Matthew Freed that will roll over anything with ease. Drew Martin custom-built all the cables for a super-clean final touch. I then loaded it up with remote-controlled color LED lights that complement the Sound Devices color scheme for a little flair (and so I can see). It’s a small, powerful cart that fits me perfectly. I affectionately nicknamed her Tina after Ms. Turner, who is also small and powerful, and whose real name just happens to be… Anna. I won’t be writing “The S.S. Tina” across her, however. OK, maybe in a small corner. As I conclude my first year, I’ve stayed pleasantly busy with a substantial amount of series and commercial work, even a couple of Super Bowl spots!

My dad and I went through his gear a couple of years back and that drawer still smells just like it did twenty years ago when he dumped it in the back of his garage, turned out the lights, and walked away for good. Now, I can’t wait for COVID restrictions to end so my girls can come see their mom at work as a Sound Mixer and make some of their own quirky little lifelong memories of what inevitably became the family business.

Top Gun: Maverick

by Mark Weingarten CAS

Tom Cruise plays Capt. Pete “Maverick” Mitchell in Top Gun: Maverick from Paramount Pictures, Skydance and Jerry Bruckheimer Films.

It’s hard to believe that it’s been more than three years since I first began to write this article. A lot has occurred since then which either derailed or postponed many of our plans. We started shooting Top Gun: Maverick in San Diego in May of 2018. We ultimately were in production for more than a year.

In 2018 during pre-production for Top Gun: Maverick, I met the film’s Director, Joseph (Joe) Kosinski. He explained that Top Gun: Maverick would not be a remake. It would be a standalone second act to the very successful Tony Scott-Jerry Bruckheimer-Tom Cruise-Val Kilmer film Top Gun from 1986. Tom Cruise would be returning in the lead role as Maverick, and Val Kilmer would also be coming back to make an appearance as Iceman.

In that meeting, I learned that Top Gun: Maverick would be shooting almost entirely in California, which is where I live. I had been shooting on distant locations for most of my projects over the preceding years. The opportunity to shoot a whole movie at home was very welcome.

Monica Barbaro and Tom Cruise on the set of Top Gun: Maverick from Paramount Pictures, Skydance and Jerry Bruckheimer Films.
Glen Powell plays “Hangman” in Top Gun: Maverick from Paramount Pictures, Skydance and Jerry Bruckheimer Films

I was very pleased when Joe and Executive Producer Tommy Harper decided to offer me the opportunity to be the Production Sound Mixer for Top Gun: Maverick. I could tell it was going to be exciting. It was going to present some very unusual problem-solving situations that would be rewarding to figure out, and it was going to be LOUD, REALLY LOUD! For certain, it was not going to be boring.

Joe and I discussed some of the major sound challenges that he knew we would face, and we kicked around what we thought might be the best approaches to tackling them. The most immediate issue to address was that we were going to film all the actors’ flying sequences actually inflight: real supersonic sorties, while the actors and the sound equipment were being subjected to extreme G-forces. The request was for me to record all the inflight dialog in sync with the six cameras that would be mounted in the planes, at high-enough quality to be used in the final mix. There were to be no green screen-simulated flying sequences for this version! Everything was to be filmed actually in flight. I can’t say I had ever been asked to record in those conditions, but I was certainly game to give it a try… Going forward, I quickly learned that Joe was extremely calm, extremely prepared, and extremely supportive. He is an absolute pleasure to work with.

The TOPGUN Flying Team is a branch of the Navy. As such, we shot primarily at naval air stations, which is where the aircraft that we needed to shoot were hangared. We did shoot a little bit on stage at LA Center Studios, and also at some practical locations in and around Los Angeles and San Diego, but mostly we were at relatively distant locations all over California. It turned out that “shooting almost entirely in California” meant China Lake, San Diego, and Lake Tahoe. Getting to most of the naval air stations took a lot of driving, but at least I was able to make it home for most weekends.

We began shooting in May 2018 in San Diego for a one week pre-shoot at Coronado’s North Island Naval Air Station.

After the pre-shoot week, the whole crew went immediately into a couple of months of intensive scouting and prepping. During this time, all the departments scratched their heads and went hard at the work of figuring out how to approach filming and recording the flying sequences. We all had multiple scouting trips to several different naval air stations to look at, measure, inspect, and learn as much as we could about the F/A-18 supersonic jets that we would be using for filming.

Once each department determined what equipment they thought would be needed to do their inflight work, our Key Grip, the late Trevor Fulks, had to tackle how and where to attach the equipment to the planes. We were told that any gear placed in the planes needed to be able to stay attached at up to 7G’s, and the gear could not interfere with or prevent any aircraft functions or communications, nor could it interfere with the pilot or co-pilot’s ability to eject in an emergency if that ever became necessary. Having figured out how we could meet each of the Navy’s requirements, we began interacting with the Navy to submit our plans for their approval. We were asked to state every detail about each piece of gear: its purpose, its height, length, width, weight, the type of internal batteries it used, etc. The Navy had to sign off on exactly how and where we planned to mount each piece of our equipment for it to safely stay in place while the planes were in the air.

During scouting, we learned that in the Navy, all tasks are broken down and pieced out to very specific departments. Each department is referred to by its own specific abbreviated nickname. It took a lot of hunting to figure out which departments were responsible for exactly which functions. Each of these departments usually consists of a combination of naval personnel and civilian contractors. The mystery for me was: which department(s) could provide me with the information that I would need to figure out how to record our actors’ inflight dialog in a manner that would render it one hundred percent usable for the production? It took me many hours of scouting time to track down which departments were responsible for which aspects of the pilots’ communications: plane to plane, plane to ground, and the archiving of all the planes’ communications.

The planes have an internal communication system (Comms), which is fed to mics mounted inside of the pilot and co-pilot’s oxygen masks, and monitored by headsets built into their helmets. Who was in charge of the mics in the masks and the headsets in their helmets? How and where did the pilots connect the masks and helmets to the planes?

I learned that within the air station’s hangars, there are many doors. Each of these doors leads to the offices of one of the Navy’s various aircraft-related departments. Figuring out who did what and which unmarked door they were behind was both very time-consuming and very confusing. Eventually, I was able to find the right combination of departments to help me.

Monica Barbaro plays “Phoenix” in Top Gun: Maverick from Paramount Pictures, Skydance and Jerry Bruckheimer Films.
Jon Hamm plays Adm. Beau “Cyclone” Simpson in Top Gun: Maverick from Paramount Pictures, Skydance and Jerry Bruckheimer Films.

The mechanics were able to show me all the possible access points where I could tap into the plane’s Comms. Unfortunately, none of those access points would work for what I needed to do. I considered the possibility that I could record all of our planes’ transmissions from the ground (as the Navy does record all of their communications for archiving). However, the archive’s voice quality isn’t very good. Certainly not good enough to use for a final movie soundtrack.

Plus, all of the Navy’s archived communications are classified, so it would be challenging to get clearances to use the recordings. Even if we were to receive clearance, trying to sync up all those un-slated recordings from all those flights would have been a nightmare. Obviously, this was not an option; the dialog had to be recorded in the plane while it was inflight, at the highest quality possible, and it had to be available for us to sync up and use immediately after each flight.

In addition to having the sound recording equipment properly secured to the aircraft, it was also imperative that its operation could not have any effect on the ability of the pilots’ masks to deliver oxygen to them at higher altitudes. Nor could it interfere with any communications between pilot and co-pilot. Given these restrictions, how could the inflight sound be safely captured, in sync, at the quality that the movie deserved?

During prep, I learned that there are specific connectors for fixed wing aircraft, and for helicopters, and that the wiring configurations for those connectors differ for the various branches of the service. All of the connectors involved in aeronautics are unique. They are connectors the likes of which I have never encountered before.

With guidance from the Navy and the help of the Parachute Room Team, I found the right place to tap into the plane’s Comms to be able to get the best possible inflight recording, while still allowing for uninterrupted communications, not interfering with the mask’s oxygen flow or compromising an emergency ejection.

I have made plenty of adaptor cables in my time, but considering the unfamiliar aircraft connectors and the absolutely cannot-fail operation of this mission-critical piece of equipment, I needed to seek out someone one hundred percent familiar with all of the elements involved to fabricate the Y cables I required. A Navy pilot introduced me to Par, at Pilot USA Communications. Pilot USA makes all sorts of cables and adaptors for all the branches of the military. I spoke with Par to explain all the specs for the Y cable I wanted him to fabricate. I explained that it was imperative that the cable not interfere with any aircraft functions while feeding pure, undistorted audio to two Lectrosonics transmitters via TA5F connectors; I would have a dual feed to two SM’s, in case one failed. He asked me to draw up a wiring diagram. I sent him a very crude drawing of the cable, along with a copy of the Lectrosonics TA5F wiring diagram. Par was able to read my drawing and understand it. He said he felt confident that he would be able to fabricate a Y cable that would meet all my requirements.

My initial concept for the flying sequences was to put two Lectrosonics SM transmitters on each actor, with two Lectro 411 receivers attached to a single recorder in the plane, and two additional mics attached somewhere inside the canopy for stereo sounds of the jet’s engines, rattles, and groans inflight. Four tracks would be enough to do the job, so I chose a Sound Devices 744. I chose the 744 because it was the smallest four-channel recorder that I had, and because I thought it was one of the sturdiest, best-built devices I knew of. I figured it was most likely to survive the potential forces of up to 7G’s. Given the intense G-forces involved, I thought it best to eliminate any moving parts, so I replaced the 744’s original spinning hard drive with a solid-state drive.
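The four-track budget above can be tallied out like this (the specific channel assignments are my own illustration of the layout described, not a production document):

```python
# Tally of the in-flight track plan described above: two redundant lav
# transmitters per actor, plus a stereo pair inside the canopy for
# engine and airframe sound. Channel numbering is an assumed example.

tracks = {
    1: "Lectro SM #1 -> 411 RX (actor lav, primary)",
    2: "Lectro SM #2 -> 411 RX (actor lav, redundant backup)",
    3: "Canopy mic L (engines, rattles, groans)",
    4: "Canopy mic R (engines, rattles, groans)",
}

# Exactly fills the four channels of a Sound Devices 744.
assert len(tracks) == 4

for ch, source in sorted(tracks.items()):
    print(f"CH{ch}: {source}")
```

The redundancy is the point: either lav feed alone is usable, so a single transmitter failure at 7G’s doesn’t cost the take.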

I brought the 744 and the 411’s to Trevor Fulks, for him to design and fabricate the mounts needed to secure them to the plane. While Trevor was doing that, I sent our Navy liaison all the requested specifications for the sound gear, which he forwarded to the appropriate Navy departments for approval.

Tom Cruise plays Capt. Pete “Maverick” Mitchell in Top Gun: Maverick from Paramount Pictures, Skydance and Jerry Bruckheimer Films.
Danny Ramirez plays “Fanboy” in Top Gun: Maverick from Paramount Pictures, Skydance and Jerry Bruckheimer Films.
Jay Ellis plays “Payback” in Top Gun: Maverick from Paramount Pictures, Skydance and Jerry Bruckheimer Films.

Tom Cruise had requested a remote control in the plane that would give him the ability to start and cut all the cameras and the sound recorder simultaneously with a single button press. Keslow Camera was building the remote for the cameras. Dan Ming, our incredibly helpful A camera 1st AC, spoke with Keslow for me, and they told him they thought it would be possible to incorporate record-stop control for the 744 into the remote. I sent Keslow a Sound Devices CL-1 remote control interface, along with the schematics for it. They were able to cannibalize the CL-1 and incorporate it successfully into the remote. Once completed, the remote worked perfectly. It was able to start and stop everything with a single button press.

The cameras we used were Sony VENICEs: large-sensor, 6K, IMAX-capable cameras. The VENICE lenses and sensor blocks could be connected by fiber optic cable, which allowed the lenses to be mounted separately from the camera bodies. Ultimately, the Camera and Grip Departments found that there was enough room to safely secure six cameras in each F/A-18. All six camera bodies were mounted in one cluster midway under the plane’s canopy. The separately mounted lenses were placed wherever was ideal for picture.

Curiously, we found that the camera bodies would fit better in one plane than another. We learned that no two plane canopies are the same; each one is handmade and is slightly different from any other.

Things were starting to come together. The Navy was reviewing our equipment lists. The remote was being made. The Y cables were being built. Trevor was fabricating the mounts to attach everyone’s gear to the planes. We just needed to get everything delivered in time to do a little testing before it would be needed for shooting.

The mounts, the Navy’s approval of the gear, and Keslow’s remote all arrived the week before we were to start shooting the flying sequences. In the hangar, we mounted all the cameras, as well as my recorder and radio receivers, into an F/A-18. The Keslow remote worked perfectly; it reliably rolled and cut all the cameras together with the 744. We did one final shakedown test flight without an actor to make sure that everything Trevor had mounted for us stayed in place. It did. Unfortunately, the Y cables hadn’t arrived in time for that test flight. The next opportunity to test would not be until we started shooting, with an actor in the plane. Fingers crossed that the Y cable was going to work!

Time now for the rest of the sound crew to come on board: Tom Huck Caton (Boom Operator), Cara Kovach (Utility to start), Kevin Becker (Utility to finish). Additional sound crew who worked on the film were Mark Agostino, Eric Ballew, Lawrence Commans, Jeff Haddad, Zach Wrobel, and Jeff Zimmerman.

Production officially started, and the day had come for our first flight. Tom Cruise was fully suited up, and the still-untested Y cable had been connected by the indispensable Neville, a civilian contractor who worked for the PR Department. The SM’s were on, the 411’s were powered up, and the 744 was awake with its timecode jammed to the same code as the cameras, ready to record. Cruise climbed into the plane. There was silence; I heard absolutely nothing at the sound cart, where I had tuned its receivers to the same frequencies as the 411’s in the plane. However, once the F/A-18’s system was energized, it came alive. I was able to hear through the Comms as the plane went through all its trouble warnings (it does this every time on startup): “engine fire,” “flap failure,” etc. When the warnings finally ended, Cruise spoke. I heard him loud and clear. Yes! The Y cable was working! Phew!

Soon after the F/A-18 Hornet took off, it disappeared, returning about an hour or so later. We retrieved all the media and brought it back to the hangar, where we downloaded all the camera and sound files, backed them up, then brought the masters to the DIT for us to watch.

The pictures were spectacular. Claudio Miranda’s inflight cinematography is extraordinary. You can absolutely tell that the actors are really flying in the F/A-18’s; it is very clear they are not being shot against green screen. The Keslow remote worked perfectly throughout the flight. Cruise had been able to start and stop all the cameras, along with the sound recorder, as he had requested. The sound the Y cable provided was fantastic: crystal clear, absolutely usable. A triumph. High fives all around… It turned out that on that very first sortie, the plane did, at some point, hit 7 G’s, and nothing came loose. Trevor’s mounts worked perfectly; the 744, the 411’s, and all the cameras and lenses remained solidly in place. In short, everything worked!

Well, so we thought. During dailies, Cruise pointed out that my 744 and the 411’s ended up very much in the frame. The sound gear had to move. However, Trevor told me the sound gear was mounted in “the only place in the plane that it could be” and there was “no other option.” Even if we had been able to find an alternative mounting place, the Navy would have had to approve that change, which would not have been a quick process.

The custom-built Y cables and a Lectrosonics PDR

What to do? We had a week to regroup before we were scheduled for our next flight. I went back to my on-base Navy housing to try to come up with a new strategy. The Day One flight had confirmed that the Y cable that Par had made worked perfectly, but without the 744 and 411’s, how could I capture the inflight dialog? The Y cables were built with two female TA5 connectors to feed two SM’s. I got on my laptop, began looking through the Lectrosonics website, and found that they made a little standalone recorder called the PDR, which accepted a female TA5 as its input. There were two versions of the PDR; the single-AAA-battery model was almost identical in size to a single-AA-battery SM, and the PDR was capable of jamming timecode. It would record two tracks of WAV files onto a removable microSD card at very high quality. Great! With a 16GB card, the PDR would easily record audio for many hours, certainly long enough to cover any length of flight (sortie) we would be filming. According to the specs, the PDR would be able to make recordings of a quality similar to the 744’s. I immediately ordered two of them from Trew Audio. We found that the PDR’s did indeed fit in exactly the same place where the SM’s had been, and when attached to our existing Y cables, they should produce recordings equal in quality to what the SM’s had delivered. Because they would be placed where the SM’s had been, they would not require Navy approval.

However, the change to the PDR’s meant that Cruise would lose the ability to remotely start and stop the sound recorder in flight. With the PDR’s, my plan was that just before the actors were to climb into their planes, either Huck or I would put the PDR’s into “record” and attach them to the Y cable. This meant that the PDR’s would be recording continuously from that point on, throughout the entire sortie. They would not stop recording until the plane was on the ground and one of us could retrieve the PDR to push “stop.”

We explained to Cruise the switch to the PDR’s, the reason for it, and how the change would negate his ability to remotely start and stop the sound recorder. He completely understood the situation and was fine with it. I had never worked with him before; I found him to be extremely pleasant, reasonable, and fabulous throughout.

Our next batch of flying dailies demonstrated that the PDR’s worked perfectly. Their timecode was accurate, they synced up easily, they recorded continuously regardless of G-forces or altitude, and they sounded great.

After our initial single-actor flights, we started having sorties with two planes taking off at the same time, sometimes with plane-to-plane dialog being recorded. We often had two planes going up in both morning and afternoon sorties. We needed more PDR’s and more Y cables, so I ordered a bunch more of both. All the PDR’s performed impeccably; across literally hundreds of sorties, we had only a single PDR failure. Sometimes the sorties would last for hours, but no matter what, the PDR’s kept on recording. We even had an instance when, because of a mechanical problem, a plane had to make a forced landing at a different naval air station, and we weren’t able to retrieve that PDR until it returned the next day. Regardless, we were able to recover the previous day’s recording easily.

Among the many nice features of the PDR’s was their ability to record two mono tracks from a single input at two different levels. I would always record one track at a level a few dB lower than the other. If someone got really loud and started blowing out the hotter track, the lower-level track always stayed undistorted. OK, I can see that I’m starting to sound like a spokesman for Lectrosonics. I’m not; I have no affiliation. But I do love their products, and obviously I am now especially enamored with the PDR.
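That dual-level safety scheme is easy to demonstrate. Here is a minimal sketch (my own illustration, not Lectrosonics code) of why a track padded down by a few dB survives a peak that clips the hot track:

```python
import numpy as np

def record_dual_level(signal, pad_db=12.0, full_scale=1.0):
    """Simulate dual-level recording: the same input feeds a hot track and
    a safety track padded down by pad_db; both tracks clip at full_scale."""
    pad = 10 ** (-pad_db / 20.0)                   # dB pad -> linear gain
    hot = np.clip(signal, -full_scale, full_scale)
    safe = np.clip(signal * pad, -full_scale, full_scale)
    return hot, safe

# A shout that overshoots full scale by 6 dB (linear peak of 2.0)
t = np.linspace(0.0, 1.0, 1000)
loud = 2.0 * np.sin(2.0 * np.pi * 5.0 * t)

hot, safe = record_dual_level(loud, pad_db=12.0)
print(np.any(np.abs(hot) >= 1.0))    # True  -> hot track clipped
print(np.any(np.abs(safe) >= 1.0))   # False -> safety track intact

# In post, the clean safety track can simply be brought back up:
restored = safe * 10 ** (12.0 / 20.0)
```

The padded copy keeps the waveform intact, and post can raise it back to the hot track’s level without the distortion.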

Working around the planes, we learned that there was great concern about something the Navy called FOD, which stands for Foreign Object Damage. (Everything in the Navy is represented by an acronym.) Anything that could get loose in the cockpit inflight would be considered FOD. Any foreign entity on the runway that could potentially interfere with take-off or landing would also be considered FOD. We had to make sure nothing that we were attaching to the planes could ever turn into FOD.

For Navy airmen, every day begins with a “FOD walk.” The entire squadron lines up across the width of the runway and walks its entire length, eyes to the ground, retrieving pebbles, loose screws, etc. The Commander told us that each morning before the FOD walk, he would place a tiny brass button somewhere on the runway. If the brass button was not found and returned to him during the first sweep, he would order a second, then a third. Until that button was retrieved, no planes would be cleared to take off or land. They also do FOD walks every morning on all aircraft carriers. FOD is serious business.

It was surprisingly quiet inside the cockpits of our supersonic fighter planes. After the first flight, we listened back to the recordings from two FX mics I had mounted inside the plane’s cabin. They essentially recorded a steady flow of air. Since the extra mics weren’t really giving us much, and they posed the risk of becoming FOD, I decided to abandon them. The mics in the pilots’ masks gave us perfectly acceptable sound whether the masks were open or closed. Ultimately, there was no need to put a lav on the outside of their survival vests. Eliminating the lav and the FX mics enabled us to use only one PDR per actor, per flight. Given the FOD concerns, the less gear we put in the planes, the better.

I also did a lot of stereo recordings on the ground: the planes starting and warming up, taxiing away, taxiing toward us, taking off, landing, and doing flyovers. Fighter jets are LOUD. I mean crazy LOUD!!! You can feel the heat from their afterburners even when they are nearly an entire runway length away.

The carrier itself was a challenge. Aircraft carriers are enormous; they can be the equivalent of up to twenty stories tall. Everything you bring onto them has to be hand-carried up many flights of very steep, narrow stairs called ladders. Carts can’t be rolled through the passageways because there are multiple watertight doors that constantly have to be stepped over. All the walls, floors, and ceilings are made of metal, which does not make for a great environment for radio reception. Additionally, there is a ton of onboard Navy equipment transmitting a lot of RF, and naturally, on an active ship, anything that’s transmitting cannot be turned off. Somehow we were always able to find enough clear radio frequencies to get the job done; it took a lot of scanning to make that happen.
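The scanning step works roughly like the sketch below. The data and thresholds are hypothetical, and real coordination tools also account for transmitter intermodulation, not just the ambient noise floor:

```python
# Hypothetical sketch of frequency scanning: given a scan of the ambient
# RF floor (frequency in MHz -> measured level in dBm), keep only the
# frequencies quieter than a threshold, then space the picks apart.

def pick_clear_channels(scan, floor_dbm=-95.0, min_spacing_mhz=0.4, count=8):
    clear = sorted(f for f, level in scan.items() if level < floor_dbm)
    picked = []
    for f in clear:
        # small tolerance keeps floating-point spacing comparisons stable
        if not picked or f - picked[-1] >= min_spacing_mhz - 1e-9:
            picked.append(f)
        if len(picked) == count:
            break
    return picked

# Made-up scan of a UHF block in 0.2 MHz steps; every 7th slot is "busy."
scan = {round(470.0 + 0.2 * i, 1): (-70.0 if i % 7 == 0 else -100.0)
        for i in range(40)}
channels = pick_clear_channels(scan, count=6)
print(channels)   # → [470.2, 470.6, 471.0, 471.6, 472.0, 472.4]
```

On the ship, the "busy" slots would be the Navy’s own transmitters, which is why repeated scanning was the only way to keep a working channel plan.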

My main cart for Top Gun: Maverick was several stacked SKB cases. The larger top case contained a Zaxcom Mix 12, a Deva 16, two Lectrosonics Venues, and a Meon power supply, along with Comtek and Lectro IFB transmitters. This section held essentially everything needed to record all of the dialog on the carrier, and it was relatively easy for two people to carry wherever it was needed. By adding a PSC Pelican lithium battery to the onboard Meon, it had all-day power.

When the carrier was docked, I did all the scenes. When the carrier went to sea for a few days, I reached out to Eric Ballew, who had been my go-to consultant for carrier research. Eric is a Sound Mixer who, while enlisted in the Navy, had been deployed on a carrier. Being “deployed” was very familiar territory for him, and he did a great job on the work at sea. Thank you, Eric.

In addition to all the inflight recordings and the work on the carrier, there was one other big sound challenge: a sailboat scene that was not boom-able and had to be covered with lav mics only. This scene was filmed multiple times, in multiple locations. The big issue was that there would be a lot of wind on the lavs. This is one of the places where I’ve found that DPA lavs really shine; to me, they seem to be the most wind-resistant of all the lavs I’ve ever tried. Each time, I would set up down below with a Deva big rig and a couple of 411’s.

Below deck on a boat is where those who get seasick really get seasick. For some reason, I am not afflicted. On Dunkirk, I spent forty-plus days below deck on the Moonstone, a small wooden boat that would pitch like crazy in heavy seas; it was not a boat that was really meant to sail on the ocean. While almost everyone else was incredibly seasick, I found I was impervious. I never even took Dramamine. I don’t know why, but evidently I am immune to seasickness.

Each time we shot the Top Gun: Maverick sailboat scene, we wired the actors up with SM’s and DPA 4071’s. The lavs worked and sounded great, despite the high winds on the water. The DPA’s always worked.

Most of Top Gun: Maverick was shot at several naval air stations that we returned to multiple times. Each location had its own set of particulars that we had to figure out. At one base, one block of frequencies didn’t work; at another, a different block didn’t; or there was a big humming noise in the corner of a hangar that we couldn’t turn off.

Top Gun: Maverick ended up filming for more than a year of principal photography, during which many old friends came and went, and all of us who went the distance had the opportunity to really bond. With all the problem-solving that each department had to do, we all did our best to help each other figure out how best to get the job done. Everyone on the main crew was extremely supportive of one another. It really was a remarkably collaborative undertaking.

For me, it certainly was a learning experience, both technically and philosophically. I came away with nothing but respect and admiration for every service member I met: the pilots and mechanics who helped guide me, the PR Department that showed me the path to connect to the plane’s Comms, and the members of the Navy brass whom I later spent time with. Everyone was kind, thoughtful, and extremely helpful. On more than one occasion, I had an Admiral sitting next to me at the sound cart for most of the day. They were thoughtful, caring, well-educated people whose main objective was to act in a way that would always minimize casualties for those they commanded. None of them took that responsibility lightly. I was very impressed.

Summing up, Top Gun: Maverick was a very challenging shoot for all departments. Everyone, cast and crew, worked extremely well together. We all did our best to support each other and to problem-solve all of the unusual situations we were given. I’m proud to say that every line of the inflight dialog in the movie is from the production tracks. Even the labored breathing you hear from Tom Cruise in the trailer is production track. Everyone who worked on this film went above and beyond, and it shows. In the end, I think Top Gun: Maverick turned out to be a pretty darned good movie.

Now, grab some earplugs and go see it on a big screen in an actual movie theater, with a nice loud Atmos surround sound system!

Over and out, Mark Weingarten.

Clear-Com FreeSpeak on Crater

by Paul Ledford CAS

Spacesuit comms

You get the phone call to do a space movie, and after the first rush of excitement, the mind conjures the thought that a space movie on the moon means ten spacesuits on stages. Spacesuits mean helmets, and the need for no-fail two-way communication across all actors, stunt actors, and many key positions on set. It is a must for safety, a must for the workflow to make our day, and then there is the sound capture thing. The basic concept is not new here, but every year we have new equipment and new techniques to bring forward to fulfill the needs, while working around new equipment from other departments that also has to be factored in or resolved. I have had my share of good turns at single- and double-billed shows with spacesuits, along with a most disastrous project of multi-crew spacesuits that proved to be two-plus weeks of pure stomach-aching hell. That toxic trial provided many lessons, and I was a bit reluctant to repeat that experience, given that this Disney production of Crater had the bulk of its days in spacesuits, with five teenagers and lots of wire work in the air. While not a huge cast count by most production standards, this could not be a show-up, hook-up, and go project. It gets complicated quickly, and the charting of signal flow starts to cross over the lines early on.

There has to be supported homework.

After lots of war stories about past projects facing these same issues, and recommendations about possible solutions, production did the homework of speaking with the folks I had pointed to. With the support of Linda Borgeson of Disney Post Production and Producer John Scotti, they agreed to move forward with adequate support for homework, extra gear rental, and hard targets on testing before shooting was to commence. Safety was the lead path in all decisions.

Helmets waiting for actors or stunt doubles.
Communications Mixer Kyle Lamy at the Yamaha QL1 mixer cart, reacting in real-time as the actors requested level changes to their mix.
The WiSpy app showing Wi-Fi use in the 2.4 GHz range. Remote control of the Shure system occurs over 2.4 GHz. It’s important to manage that spectrum, since it’s shared with Wi-Fi, as well as camera and lighting control. We freed up channels by asking the stage’s IT Dept. to shut down any of their access points’ use of 2.4 GHz.

I was on another show, so from the very beginning, I asked Peter Schneider of Gotham Sound and Communications about his interest in providing the prep design of the comm system and gear rental for the spacesuit scenes. Peter and Gotham have provided support on many other shows for me when the needs were off the normal path, and they made this prep as much of a comfort zone as possible. Peter cleared the way for things like full RF coordination of all departments on set, manufacturer support of the equipment pieces selected, and follow-up on the interfacing of our gear with the suit and helmet design phase being done in Los Angeles by Legacy Effects. We hired Ed Novick for a day of test recordings in Los Angeles at Legacy with the helmets. From those recordings, we selected which lav mic we could use and where each had to play out of sight. The Shure TwinPlex lav was selected for most of the helmets, given its high max SPL, with DPA 6060’s for a couple of actors due to head-versus-helmet space and tonal differences. The stunt team used Sanken COS-11’s with the Clear-Com FreeSpeak II packs.

The Clear-Com FreeSpeak II package was the glue for our full system.

The design had to be a full-duplex comm system through which everything would flow, fingertip-controlled on set by our AD, Benita Allen, our Director, Kyle Alvarez, and the Stunt Coordinator, Dave Macomber, and his stunt team.

All of the leadership needed not only to hear all of the primary actors and stunt actors, but to be able to speak to them as a group and, at times, individually, all controlled from their own belt packs or the base station.

The design needed to be able to move to different sound stages, as well as be duplicated for a separate unit to shoot stunt actors performing on different sets, while our first unit continued or reverted to our more traditional task of dialog capture without helmets for interior set work.

With Shure Axient transmitters, our lav mics, and Shure IEM units selected for the actors, the stunt actors were fitted with Clear-Com FS II units, since we were already using that system for the floor communications. The Clear-Com FS II provided the duplex needed for the stunt actors without an expensive full duplication of what we required for our actors. The FS II could network the stunt actors and on-set leadership through our dedicated comms technician, who used a separate Yamaha QL-1 console. Automixing and CEDAR NRS were used on the comm feeds. We divided up the full task knowing it was too much for one person on one console to focus on, because things were going to change. It is the nature of our work.

Kyle Lamy was hired to be our Comms Tech. Kyle came to us from NCIS: New Orleans, which had just wrapped. He was their Playback Operator and had the skill set from stage and music venues to operate the mixer, the wireless system, and the IEM system with intercoms. From there it was a quick step to fold in the new flavor of the Clear-Com FreeSpeak II on a network via a laptop.

I now cue it over to Kyle for the hands-on experience…

Range and Battery Life

I will admit that I was very skeptical when I first learned this unit ran in the 1.9 GHz DECT band. In my experience, wireless microphones that run in the 2.4 GHz band run out of range quickly. The FS II should never be put in that category.

On our largest stage, with the full cast of stunt personnel and production, I was using up to twelve belt packs on three transceivers. The belt packs quickly jumped to the closest transceiver, and I was able to keep 90%-95% stability on all belt packs throughout the filming.

The range of these belt packs is outperformed only by the incredible battery life they can achieve. Very few rechargeable batteries can stand the test of all-day feature film working hours, but the batteries on the FS II promised and delivered sixteen hours of life. After testing them on our setup days, the crew moved into a routine where the batteries were charged and added to the belt packs first thing in the morning and first thing after lunch. I was in no danger of ever losing a belt pack to a battery, and we had many hours left by the time we wrapped every day.

The 1st AD’s belt pack—she can separately talk/listen to actors in spacesuits, the Director, and stunt people. The “Child Quiet” button triggered the “mute” switch for the corresponding actor channels on the QL-1 desk.
Sound Utility Colin Byer, Spacesuit Technician Jonathan Faber, and Comms Mixer Kyle Lamy place the microphone and earpiece in the helmet.
Screenshot of Shure Wireless Workbench showing perfect RF levels and decoded bitstream quality, despite running the transmitters at only 10 mW.

Network Ability

The FS II really shines when networked together with the console and other transmitters. Peter and I were able to quickly formulate a workflow that included role assignments, battery and range health indicators, and gain structure. Logging into the static IP address was a convenient way to make changes and monitor the different aspects of the belt packs while in use.

On our Splinter+ Unit, the stunt actors all had the FS II wireless belt packs assigned by their roles to be Force Talk and Listen. This allowed all of the stunt personnel to freely talk and listen to our stunt coordinator at all times.

As the day moved on, certain stunt actors not in the setup were quickly reassigned to Force Listen. The off-camera stunt actors were kept in the chain of communication and allowed to listen to the conversation, but with their microphones turned off, clearing that line of fan noise, breathing, and chatter for everyone else.
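In code terms, the role logic amounted to a simple table like the sketch below. This is an invented illustration, not Clear-Com’s actual interface; on the FS II, these assignments are made from the base station’s browser page:

```python
# Invented role table illustrating Force Talk & Listen vs. Force Listen.
roles = {}

def assign(pack, talk, listen):
    """Set a belt pack's comm state: mic open (talk) and/or ear feed open (listen)."""
    roles[pack] = {"talk": talk, "listen": listen}

# Stunt actors in the current setup: full duplex with the coordinator.
for pack in ("stunt_1", "stunt_2"):
    assign(pack, talk=True, listen=True)      # Force Talk & Listen

# Off-camera stunt actors: keep the ear feed, close the mic.
assign("stunt_3", talk=False, listen=True)    # Force Listen

open_mics = sorted(p for p, r in roles.items() if r["talk"])
print(open_mics)                              # → ['stunt_1', 'stunt_2']
```

The point is that a standby performer stays in the loop on the listen side while their open mic drops out of everyone else’s feed.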

All changes could be made on the hardware on set, but being networked turned this into a few clicks of the mouse and did not hold production up in the slightest. We could work from the behind-the-curtain stage position at the carts without having to go onto the set itself, keeping in line with the COVID protocols.

Clarity and Expandability

The FS II provided clarity and level that were always sufficient and quickly adjustable, thanks to the network’s ability to change levels on the fly as each actor or AD requested. Not once did I notice the dialog or direction from our AD in danger of being misunderstood. The preamps were clean, and the headphone outputs were more than loud enough for all the various earpieces and headsets.

In short order, our 1st AD desired a bit more control from her belt pack in order to quiet everyone trying to communicate through the system. Our solution was to have the AD’s FS II belt pack trigger a GPI on the Yamaha QL-1, turning the actors’ mute group on or off. This mute-group trigger allowed the AD to stop all side chatter and gain the attention of everyone in the system. It was particularly important when working with excited teenagers, or when a little focus was lost as the day grew long.
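The mute-group behavior can be sketched like so. Again, this is my own illustration; the real GPI-to-mute-group mapping is configured on the Yamaha desk itself, not in code:

```python
# Toy model of the "Child Quiet" logic: each GPI pulse from the AD's belt
# pack toggles a console mute group covering the actor channels, while the
# AD's own talk path stays live. All names here are invented.

class Console:
    def __init__(self, actor_channels):
        self.muted = {ch: False for ch in actor_channels}
        self.group_on = False

    def gpi_pulse(self):
        """One GPI pulse toggles the actor mute group on or off."""
        self.group_on = not self.group_on
        for ch in self.muted:
            self.muted[ch] = self.group_on

desk = Console(["actor_1", "actor_2", "actor_3"])
desk.gpi_pulse()             # AD presses "Child Quiet": everyone muted
print(all(desk.muted.values()))       # True
desk.gpi_pulse()             # press again: chatter restored
print(any(desk.muted.values()))       # False
```

A toggle (rather than a momentary mute) means one button press quiets the line until the AD deliberately opens it again.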

Deeper into our schedule, the actors and stunt personnel would move from main unit to splinter+ unit, and the belt packs were very quick to drop the base station they were initially linked to and join the network of the new base station once it was in range. This worked seamlessly and was almost forgotten about in the workflow.

Over to Ledford’s side of the cart in pure production recording world…

I got a raw feed of each actor from the Shure Axient receivers for the iso tracks, and a comm feed to monitor the Clear-Com FS II channel for on-set instructions. I got slates and did a mix of our capture on my board for dailies. I could also talk back to our AD’s belt pack via my board if that was needed.

My crew of Zach Lancaster on Boom and Colin Beyer, our Utility Sound, loaded up the spacesuit backpacks with the comm gear and guided the helmet on-and-off routine with the Legacy crew and Costume Department. Actors got a transmitter and an IEM unit in the backpack, connecting a harness to the helmet. The lav mic and ear pad pieces were built into each helmet. The backpacks also held power, lights, and fan controls for the suits. Zach and Colin had my IFB system feeding into the FS II pack to help check the actors and on-set staff for any issues. The Clear-Com FS II packs could be programmed to fit each person’s need.

The original request had the actors being fully dressed, helmets fitted, in a dressing trailer off stage, and then, like all the NASA footage we have seen, walking to the stage and onto the set. Challenges were revealed by the amount of crew in an expanded trailer, with many frantic hands dealing with our young actors, plus the technical challenge of ensuring a good RF signal before we were good to go. A harness was made to run through the stage wall to a small, antenna-tree version of our system inside the trailer out in the parking lot. This extended our ability to hear, talk, and network with the gear and our crew. However, we could not see anything, and what seemed like a good idea did not go smoothly or quickly on the first day of spacesuits.

Sometimes the better solution is not about more gear. The kids were not happy to stand for so long while this whole suit thing took place. Putting the helmets on is a routine of hold in the air, connect the harness, squeeze down, and fit to lock in, without pinching the harness. For the unpracticed, this is another barrier if you are standing up. Listening to all of this and trying to discern the flow while blind was, to put it kindly, a bit riotous.

We evolved to having a simple, long wood bench made for our talent to sit on, within eyesight of our carts on stage. Clothing and packs were done in the dressing trailer; the helmets and final fits were done on the stage. This helped us calm the energized actors and work both the front and back of the actors and stunt performers suiting up. We always got a “good to go” check before anyone left our care and walked onto the set. If there was an issue, we could address it right there with eyes on, all within COVID protocols, with fewer people and more space at each step. Production adjusted its expectations to give us what we needed to make good. Then we all got better at the process together.

On first unit, we had six Shure transmitters going, along with six IEM systems for the mix-minus back to the talent, and then I had some Lectrosonics units running for boom and plants, plus my crew IFB and video village on a Lectro IFB system. At no time did we have interference, thanks to the homework in design and frequency selection Peter Schneider performed for us. We had full scans of all stages ahead of time, and we had the full cooperation of all departments using RF devices: walkies, running lights, camera crew, crane and special effects control, and visual effects capture devices.

As much effort as we put into making the helmets work, we had to consider setups with the helmets off: off due to storyline, and off because the actor was off camera, and the helmets are a pain to put on and take off long into the day. Having the same connectors on the harness break point as my own lav mics was a convenient savings, important because my transmitter system was different from the spacesuit units. Zach, our Boom Op, could spot the first time an actor snuck onto set without a helmet and quickly pull a lav out of his boom stand kit to connect the actor back into our channel lineup, so the actors isolated in the helmets could hear the non-helmet person in the dialog runs, or just in any other interaction. These extra lav mics were good for backups, good for on-set workflow, and good for those extra lines of dialog for post.

I had lived through, and learned more from, other shows having the face-shield-on-or-off issue. Our helmets’ face shields could be removed for reflection issues. That was a bonus for us, giving a more natural voice to record without lots of pressure build-up on those loud lines; post can then decide what the flavor of that is to be in the final. With the shields on for the wide coverage, there is, of course, the enclosed bubble sound of the helmets, and then the fans running to keep the shield clear of moisture. Automixing and CEDAR NRS on that comm channel did improve the clarity of the comm feeds in keeping with the FS II system, and I got decent scratch recordings to hand off to post on those. For the tighter coverage with the shields off, I was happy that we could quickly flip our gain settings for a better baseline voice worthy of the cinema venue. The helmet shields changed often, and at times without much warning, so again, having networked gear to keep up made a difference for us all.

The helmet connections required for audio in and out, as well as multiple power connectors for fan and lighting control.
The TA5F connection made the helmet mics compatible with Lectrosonics transmitters in case a quick line was needed from the actor without the full comms system—useful because both First and Second Unit Mixers use Lectrosonics wireless.
The “Moon Rover” set was completely enclosed on all sides, including top and bottom. Concerned about RF propagation into and out of the set, we worked with the Art Dept. to build antennas into the set, fully decorated and fully functional, with BNC connections made outside and under the rover.
The custom ear pad perfectly encapsulates and isolates the Amazon-purchased motorcycle helmet speaker used for comm audio to each actor.

I want to thank our splinter unit crew of mixer Richard Schexnayder and Boom Op Leonard Suwalski, and Jared Lawrie and Lewis Rhodes as our 2nd Tech Comm Daily Operators.

This was a fulfilling project to work on because we had support in time and resources to select the people and gear suited to the task. Computers and the ability to network the gear were our friends. We were given the time to sort out issues as they revealed themselves. We had a plan and committed to it, identified the bits that were not working, and got rid of those in favor of something better. We had leadership that led by the idea to be fully engaged, stay calm, and carry on, because pumpkin time was coming for our young actors.

No doubt being contained on stages was a big help versus an expansive live location exposed to the weather and denser RF. Without the homework, the networked gear, and the knowledge we had built up, the friendly walls of that stage, keeping our outer-space cold in and the hot summer rains out, could very well have been the same stomachache-inducing prison walls lurking from the past.

58th CAS Awards

We Congratulate All the Winners!

MOTION PICTURE – LIVE-ACTION

CAS Award Winner Ron Bartlett poses at the Cinema Audio Society Awards at the InterContinental Hotel in Los Angeles, CA, on Saturday March 19, 2022 (photo:Alex J. Berliner/ABImages)

DUNE
Production Mixer: Mac Ruth CAS
Re-Recording Mixer: Ron Bartlett CAS
Re-Recording Mixer: Douglas Hemphill CAS
Scoring Mixer: Alan Meyerson CAS
ADR Mixer: Tommy O’Connell
Foley Mixer: Don White
Production Sound Team: György Mihályi, Senior 1st AS,
Áron Havasi, 1st AS, Eliza Zolnai, 2nd AS

MOTION PICTURE – ANIMATED

CAS Award Winners and CAS Members Doc Kane, Paul McGrath, David E. Fluhr, David Boucher and Award Winner Alvin Wee pose at the Cinema Audio Society Awards at the InterContinental Hotel in Los Angeles, CA, on Saturday March 19, 2022 (photo:Alex J. Berliner/ABImages)

Encanto
Original Dialogue Mixer: Paul McGrath CAS
Re-Recording Mixer: David E. Fluhr CAS
Re-Recording Mixer: Gabriel Guy CAS
Song Mixer: David Boucher CAS
Scoring Mixer: Alvin Wee
ADR Mixer: Doc Kane CAS
Foley Mixer: Scott Curtis

MOTION PICTURE – DOCUMENTARY

CAS Award Winners Jimmy Douglass, Emily Strong and Paul Massey pose at the Cinema Audio Society Awards at the InterContinental Hotel in Los Angeles, CA, on Saturday March 19, 2022 (photo:Alex J. Berliner/ABImages)

Summer of Soul (…Or, When the Revolution Could Not Be Televised)
Production Mixer: Emily Strong
Re-Recording Mixer: Paul Hsu
Re-Recording Mixer: Roberto Fernandez CAS
Re-Recording Mixer: Paul Massey CAS
Music Mixer: Jimmy Douglass
Production Sound Team: Alan Chow, Aisha Hallgren,
Rich Mach, Mike Stahr, Emily Strong

NON-THEATRICAL MOTION PICTURE OR LIMITED SERIES

Mare of Easttown winners: (From left) CAS Award Winners Joseph DeAngelis CAS and Chris Carpenter pose at the Cinema Audio Society Awards at the InterContinental Hotel in Los Angeles, CA, on Saturday March 19, 2022 (photo:Alex J. Berliner/ABImages)

Mare of Easttown
S1 Ep. 6 “Sore Must Be The Storm”

Production Mixer: Richard Bullock
Re-Recording Mixer: Joseph DeAngelis CAS
Re-Recording Mixer: Chris Carpenter
Production Sound Team: Tanya Peele, Kelly Lewis

TELEVISION SERIES – ONE HOUR

Yellowstone winners: (From left) CAS Award Winners Diego Gat CAS and Samuel Ejnes CAS pose at the Cinema Audio Society Awards at the InterContinental Hotel in Los Angeles, CA, on Saturday March 19, 2022 (photo:Alex J. Berliner/ABImages)

Yellowstone: S4 Ep. 1 “Half the Money” 
Production Mixer: Andrejs Prokopenko
Re-Recording Mixer: Diego Gat CAS
Re-Recording Mixer: Samuel Ejnes CAS
ADR Mixer: Michael Miller CAS
ADR Mixer: Chris Navarro CAS
Production Sound Team: Andrew Chavez, Danny Gray

TELEVISION SERIES – HALF HOUR

Ted Lasso: S2 Ep. 5 “Rainbow”
Production Mixer: David Lascelles AMPS
Re-Recording Mixer: Ryan Kennedy
Re-Recording Mixer: Sean Byrne CAS
ADR Mixer: Brent Findley CAS MPSE
ADR Mixer: Jamison Rabbe
Foley Mixer: Arno Stephanian CAS MPSE
Production Sound Team – Emma Chilton, Andrew Mawson, Michael Fearon

TELEVISION NON-FICTION, VARIETY, MUSIC SERIES or SPECIALS

CAS Award Winners Alexis Feodoroff and CAS Member Michael Hedges pose at the Cinema Audio Society Awards at the InterContinental Hotel in Los Angeles, CA, on Saturday March 19, 2022 (photo:Alex J. Berliner/ABImages)

The Beatles: Get Back: Part 3
Production Mixer: Peter Sutton (dec.)
Re-Recording Mixer: Michael Hedges CAS
Re-Recording Mixer: Brent Burge
Re-Recording Mixer: Alexis Feodoroff
Music Mixer: Giles Martin
Music Mixer: Sam Okell
Foley Mixer: Michael Donaldson

OUTSTANDING PRODUCT – PRODUCTION

Outstanding Product – Production winners from Shure Incorporated: (From left) Rick Renner, Adi Neves, Nick Wood, and Sam Bicak attend the Cinema Audio Society Awards at the InterContinental Hotel in Los Angeles, CA, on Saturday March 19, 2022 (photo:Alex J. Berliner/ABImages)

Shure Incorporated for the Axient Digital ADX5D Dual-Channel Wireless Receiver

OUTSTANDING PRODUCT – POST-PRODUCTION

Outstanding Product – Post-Production winners: (From left) Dolby’s Andy Potuin, Bryan Arenas, Jim Wright, Bryan Pennington, Jonathan Lesso

Dolby Laboratories for the Dolby Atmos Renderer 3.7 

STUDENT RECOGNITION AWARD

Student Recognition Award Winner Lily Adams poses at the Cinema Audio Society Awards at the InterContinental Hotel in Los Angeles, CA, on Saturday March 19, 2022 (photo:Mark Von Holden/ABImages)

Lily Adams, Savannah College of Art and Design

Career Achievement Award

Paul Massey CAS

Career Achievement Award Paul Massey CAS (from left)
David Giammarco CAS, Bernice Massey, Sean Massey, Andy Nelson CAS

Filmmaker Award

Sir Ridley Scott

Filmmaker Award honoree Sir Ridley Scott

AMPS AWARD WINNER 2022

DUNE
Ron Bartlett, Gyorgy Mihalyi, Doug Hemphill, Mark Mangini, Theo Green

AMPS winners Dune: (From left) Mark Mangini, Ron Bartlett CAS, Doug Hemphill CAS, Theo Green, Mac Ruth CAS

BAFTA AWARD Winner 2022

DUNE
Mac Ruth, Mark Mangini, Doug Hemphill, Theo Green, Ron Bartlett

Sound – Dune – Mac Ruth, Mark Mangini, Doug Hemphill, Theo Green, Ron Bartlett

OSCARS AWARD Winner 2022

Doug Hemphill, Theo Green, Mark Mangini, Ron Bartlett and Mac Ruth pose backstage with the Oscar® for Sound during the live ABC telecast of the 94th Oscars® at the Dolby Theatre at Ovation Hollywood in Los Angeles, CA, on Sunday, March 27, 2022.

DUNE
Doug Hemphill, Theo Green, Mark Mangini, Ron Bartlett, Mac Ruth

Ric Rambles

The new Vega

I’d like to thank the tens of readers who have responded to my articles for Production Sound & Video. (My friend Robyn thinks I am eloquent adjacent.) Please know I appreciate hearing from you.

by Ric Teller

Today we’re gonna talk about RFs.

First, a disclaimer: My viewpoint on the following subject comes entirely from my personal experiences in entertainment television. I readily admit I am not familiar with the history or the current state of RF’s in what is commonly referred to as single-camera work. Rather than try to fake my way through, I’ll sum up my knowledge. They’re mostly Lectrosonics. You hide ’em.

Sherman, set the Wayback Machine for 1981. More than forty years ago, while I was on staff at KTLA, I filled in as the PA mixer on a daytime talk program called Hour Magazine, starring Gary Collins. It was one of my first show calls at KTLA. The audio crew consisted of a mixer, a booth A2 playing music and sweetening with a stack of cart machines, and a PA mixer. At KTLA, boom operators were part of the stage crew, not audio. The Yamaha PM1000 PA board was somewhat familiar; I had seen that console at the iconic Stanal Sound in Kearney (it’s pronounced Car-knee, not Keer-knee), Nebraska. Sitting on top of the mixing board were two Vega RF receivers, the ones that were taller than they were wide; I can’t remember the model number. Neither can Google nor Siri; they’re too young. Almost all the audio for the show was done on a Fisher boom except the occasional exercise demonstration segments that required RF lavs. So, we had two RFs but no A2’s. The PA mixer put mics on the host and guest. That was, I believe, the only show I’ve ever done with wireless, but without an A2. In entertainment these days, every game show, talk show, awards show, and contest show employs one or more A2’s to put mics on the talent, contestants, and guests. I believe we set the record at the Academy of Country Music Awards in March.

Someone recently asked me, “What is the origin of the term A2?” How the hell should I know? That term predates the start of my career. I may be old, but I was not the A2 on The Sermon on the Mount … yes, I have worked with him.

KTLA used those Vega RF mics for talk shows, game shows, and the annual Hollywood Christmas Parade, where they were handed off to celebrities in cars and on floats on Sunset Boulevard. Other RF mic manufacturers included Nady, Samson, Swintek, and Sony, but in those days, Vega was the nearly unanimous choice for entertainment.

By the time I started freelancing, the more familiar Cetec Vega R42 Wireless Microphone Receiver Pro Plus Diversity Dynex II, better known as “the new Vega,” had become the industry standard for our programs.

2021 Kennedy Center Honors RF world. L-R: Tricia Reilly A2, James Stoffo RF Coordinator, Wendy Cassidy A2, Kate Foretek A2, Ali Pohanka A2

They were often delivered in miscellaneous cases and cardboard boxes always requiring the A2’s to create a setup with blocks and boards, not unlike a college dorm bookshelf. These VHF Hi-Band (174 MHz to 216 MHz) units were not frequency agile. Some of you will remember the long antennas on the body packs that were fastened to clothing with a small safety pin or an alligator clip. Although each receiver came with a pair of whip antennas that could be attached to the back panel, the mic package was often accompanied by an additional box filled with antenna splitters and a pile of cables, like a “real find” at a Winegard garage sale. With those parts, we would piece together a useable diversity antenna system that would optimize RF reception.

Members of Local 695 and other Audio Engineers throughout the television landscape have many varied jobs. I don’t know how to do most of them. Quite often, overlapping but different skill sets are crucial in the A2 world while working on complicated shows. You may be an accomplished band tech, a trusted lead A2, the patch master, a specialist in hair mics and hidden lavs, or a proficient troubleshooter. You’re probably a combination (my nephew Jonah Horn is a pharmacist and an expert at tossing pizza dough). The fact is that with successes at work, we get hired over and over for the same specific tasks. In time, we achieve a level of efficiency and try to adapt to new technology, but we are not interchangeable in our audio expertise. You can be a Y-7A, a Y-me, or a Y-not, in the end, you know what you know. One of the rarest of these positions is the RF Coordinator. There are so few, they could barely have a minyan to mourn the loss of many frequencies to government auctions. This job is now part of nearly every entertainment special and awards show because producers have decided everything that moves should have an RF mic. And that everything should move.

In 1987, my first Grammy Awards, the show opened with a little RF kerfuffle. The next year, when we returned to the Shrine Auditorium for the 1988 Grammys, we had an RF specialist. When Andy Strauber came on the scene as the first RF coordinator/technician, things were primitive. Most of us couldn’t spell IFR. He owned one. Much of Andy’s job was inventing, improvising, and sometimes willing the equipment to do his bidding. Calculations to avoid intermodulation were made with pencil and paper. I can’t recall how many RFs were in use at that Grammys, probably not that many compared with the shows we do now, but he made them work flawlessly. We hoped this new RF Coordinator position would catch on. Slowly, it became a reality; more and more shows added the position. For the record, twenty years later in 2008, the Academy of Country Music Awards show, held at the MGM Grand Garden Arena in Las Vegas, still did not have an RF Coordinator. There were thirty-eight RFs, predominantly Shure, a few Sennheiser, and a couple from Audio-Technica, ugh. After many prayers to Our Lady of the Clean Frequency, I’m happy to report all went well. By the way, I recently found out that ugh is not officially part of the Audio-Technica name, maybe just an implied post-nominal.
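
For the curious, the heart of those pencil-and-paper calculations is simple arithmetic: two carriers at frequencies A and B generate third-order intermod products at 2A - B and 2B - A, and no microphone can be parked near one. Here is a toy sketch of that math with invented VHF frequencies; it illustrates the principle only and is not any coordinator's actual software.

```python
# Third-order intermodulation products: two carriers at A and B MHz
# create spurious energy at 2A - B and 2B - A. A frequency plan must
# keep every carrier clear of every such product. All frequencies
# below are made up for illustration.
from itertools import permutations

def third_order_products(freqs):
    """All 2A - B products for ordered pairs of carriers, in MHz."""
    return sorted({round(2 * a - b, 1) for a, b in permutations(freqs, 2)})

def conflicts(freqs, guard=0.2):
    """Carriers sitting within `guard` MHz of one of their own products."""
    products = third_order_products(freqs)
    return [(f, p) for f in freqs for p in products if abs(f - p) < guard]

# Evenly spaced carriers are the classic trap: the middle carrier's
# products land squarely on its neighbors.
evenly_spaced = [174.6, 176.2, 177.8]
print(conflicts(evenly_spaced))  # [(174.6, 174.6), (177.8, 177.8)]
```

Coordination software such as IAS and Wireless Workbench automates exactly this kind of bookkeeping across dozens of carriers, which is why it replaced the pencil.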

In the ensuing years, there is no doubt that the position of RF Coordinator has changed drastically. The hardware has become more stable with numerous technical advancements, including the ability to make changes in the transmitter wirelessly. For example, if an unfriendly signal (a bogey) is detected, with the right system you can now change the frequency of one of your transmitters while it is on talent. When possible, RF Coordinators make advance on-site readings with a spectrum analyzer. That and knowing the characteristics of familiar venues is very helpful in pre-planning the allocation of available frequencies. Software such as IAS (Intermodulation Analysis System) by Professional Wireless Systems, and Wireless Workbench from Shure, assist in frequency selection and provide real-time information about RF signal strength, audio level, battery life, and potential interference. The strategy for a big entertainment show often includes the organization of RF mics by function (production, performance, instruments, etc.), a layout for the physical arrangement of the RF rack or racks, and an antenna design which may range from simple to complex, usually a function of the coordinator and vendor.

As the quantity of RFs has gone up (I had two shows last year with nearly seventy, and this year at ACM’s, oh my!), the usable frequency spectrum has diminished. In a series of government auctions, all of the 700 MHz and most of the 600 MHz bands have been sold to private companies. With the transition to digital television in 2009, a repack, reassigning TV channels to fit into the remaining unauctioned space, further reduced usable frequencies for RF mics. One favorable development is that nearly all RF PL’s have disappeared from the RF mic spectrum; most now use 1.9 GHz DECT systems.

An unexpected change in our use of RFs has come as part of COVID-19 protocols. We no longer use a mic on multiple people without a thorough cleaning. This often requires additional units. In the case of a familiar quiz show, pre-COVID, we would use the same three contestant mics all day moving them from one group of players to the next. Now the A2, Mitch Trueg, mics everyone at the beginning of the day requiring much more hardware. I’ll take a dozen RF lavs for $200.

Near the beginning of this ramble, I reminisced about Hour Magazine and the two Vega RFs. Last September, I worked on The Tony Awards, where we had nearly seventy, and in December, we equaled that amount on The Kennedy Center Honors. The Justino Diaz Tribute at the Kennedy Center featured seven principals and sixteen in the chorus, all singing on hair mics. That was immediately followed by the tribute to Lorne Michaels with another fourteen RFs, almost all lavs. Easily the busiest back-to-back segments in all my years on that show. Or any show.

In Las Vegas, for this year’s Academy of Country Music Awards at the new 65,000-seat Allegiant Stadium, RF technicians Stephen Vaughn and Corey Dodd from Soundtronics Wireless began planning several weeks ahead of the early March show date to put together the RF package. About a month out, Corey traveled to the stadium and did a multiple-location spectrum analysis inside and outside the venue. Using those readings, a frequency plan was developed. The size of the venue and the distances between the four areas assigned for performance and production required two separate filtered antenna systems for the microphones. One was divided into six diversity zones to cover three stages; the other used a Wisycom antenna-over-fiber system to overcome the long runs to the far end of the stadium. Next, hardware was assigned. The show’s thirty-two Shure PSM1000 IEM (in-ear monitor) units were programmed into the quietest available spectrum. The next cleanest frequencies were used for the twenty-four Shure Axient host and production hand mics, followed by performance vocal mics, eight Sennheiser 6000’s in the A1-A4 and A5-A8 bands, and thirty Shure Axients in the G57 and X55 ranges. The last bit of the microphone puzzle was the addition of twenty-five Shure UHF-R bodypacks for instruments like acoustic guitars.

RF racks at ACM Awards

So, let’s review. Stephen and Corey found frequencies for thirty-two IEM’s and eighty-seven microphones. If that wasn’t enough (hint, it was), the comms department added in one hundred twenty-five Bolero wireless RF PL’s made by Riedel, all on the 1.9 GHz DECT system. The two-hour live show, with twenty-two performances, award presentations, and three hosts spread on stages all over Allegiant Stadium, gave the lead A2, Steve Anderson, and his record-setting crew plenty to keep busy.

As my grandma Nellie used to say, “And that’s that.”

ACM crew: Justin, Brandon, Tom, Corey, mystery guest, Ric, Steve A., Chaz, Kirk, Jeff, Damon, Steve C., Ozzie, Bruce, Jamie, Victor, Craig, Greg, Kim, Fred, Robyn, Alex (Eddie was working)

32-Bit Float Audio

by James Delhauer

Having grown up on the cusp of the digital revolution, I sometimes step back and marvel at what has become possible in the last twenty years. Limitations have been toppled like empires, and as technology has disseminated to the masses, the definition of cinematic has shifted. Gone are the days of practical matte paintings and model-based set extensions. Where a grandiose set piece might once have consisted of a few hundred extras running alongside a series of well-timed special effects, it’s now commonplace to see vast armies numbering in the tens of thousands clashing with one another, or alien monsters tossing planets around like dodgeballs. The ante for what we see on screen has truly gone up over the years. But equally important are the developments in what we hear. Though not as obvious as the advances in cinematic visuals, digital audio technology has come just as far, and few innovations demonstrate this better than the rise of 32-bit float audio.

To understand this relatively new technology, some context is needed.

Digital audio is created by taking an analog signal and encoding it as a sequence of numerical samples using a method known as pulse-code modulation (PCM). Each sample represents the amplitude of the signal and individual samples are generated at even intervals so that they can be reassembled to create a facsimile of the original analog sound. A file’s bit depth represents the number of bits of information present in each sample, with larger bit depths resulting in an improved signal-to-noise ratio and dynamic range. In practical terms, this means that a signal captured at a higher bit depth will contain less distortion and can be manipulated to a greater degree than an identical signal captured at a lower bit depth.

Traditional uncompressed 16-bit audio files (the format used to encode music onto audio CD’s) store samples in a sequence with each sample being represented by a 16-digit binary number. The numerical value of this sequence represents a voltage level that corresponds to the signal amplitude, resulting in a dynamic range of 96.3 dB. 24-bit files (the format used most commonly in modern production environments) extend the binary number from 16 digits to 24 digits, resulting in a much greater dynamic range of 144.5 dB. This fifty percent increase in audio resolution has often been compared to the leap from standard-definition to high-definition video, with high-resolution audio allowing for far greater manipulation and signal recovery in post production.
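
The dynamic-range figures above follow directly from the bit depth: each additional bit doubles the number of representable levels, adding about 6.02 dB. A quick sketch of that relationship (the Python below is purely illustrative, not part of any recorder or workflow):

```python
# Dynamic range of an N-bit fixed-point PCM format: an N-bit sample can
# take 2**N distinct values, so the ratio of the largest representable
# amplitude to the smallest step is about 20 * log10(2**N) dB,
# i.e. roughly 6.02 dB per bit.
import math

def fixed_point_dynamic_range_db(bits: int) -> float:
    return 20 * math.log10(2 ** bits)

print(round(fixed_point_dynamic_range_db(16), 1))  # 96.3 dB, as on audio CDs
print(round(fixed_point_dynamic_range_db(24), 1))  # 144.5 dB
```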

However, the jump from 24-bit audio to 32-bit float audio is far greater and more significant for our industry. Both 16-bit and 24-bit formats use fixed-point representation, meaning each sample is stored as a whole integer. The newly emerging 32-bit float format stores data with a “floating” decimal point, allowing a far greater range of values than even a fixed 32-bit format would. The result is a file format with a dynamic range of roughly 1,530 dB, which could very well be the last necessary increase in audio resolution, as the full range of sound believed to be possible within Earth’s atmosphere is about 210 dB.
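
That enormous figure can be sanity-checked from the IEEE-754 single-precision format itself. The sketch below is a back-of-the-envelope check, not a formal derivation:

```python
# IEEE-754 single precision (32-bit float): 1 sign bit, 8 exponent bits,
# 23 mantissa bits. Its dynamic range is the ratio of the largest finite
# value to the smallest positive normal value, expressed in decibels.
import math

f32_max = (2 - 2 ** -23) * 2 ** 127  # largest finite float32, ~3.4e38
f32_min = 2 ** -126                  # smallest positive normal, ~1.18e-38

dynamic_range_db = 20 * math.log10(f32_max / f32_min)
print(round(dynamic_range_db))  # ~1529 dB
```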

As a result, properly recorded 32-bit float files have the ability to recover near-inaudible detail in the signal, as well as to unclip sounds that exceed 0 dBFS (long regarded as the loudest signal level achievable in a WAV file). What’s more, both can be done within a single file, allowing a whisper and a bomb blast to be successfully captured without a change in audio levels.

The accompanying graphic shows the audio waveform of a line of dialog recorded in 32-bit float format. The sound has clipped and become greatly distorted. The second image is the same file after its gain has been reduced by 26.1 dB, just enough to bring the entire waveform back into range. The distortion has been entirely removed and the dialog sounds crisp and clear. The file, despite significant clipping, is still perfectly usable. Compare this to the third image, in which the same adjustment was made after converting the original 32-bit float file down to a 24-bit fixed format. Even at reduced volume, the uniform wave pattern shows that the distortion is still present. This file would not be usable under any circumstances.
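
The recovery described above can be mimicked in a few lines. This is a toy illustration with invented sample values, not the actual waveform data from the graphic:

```python
# Why a clipped 32-bit float take is recoverable: float samples can exceed
# full scale (0 dBFS) and still be scaled back down intact, while a
# fixed-point recorder clamps them at capture, so the overshoot is gone
# for good. Sample values below are invented for illustration.

FULL_SCALE = 1.0  # 0 dBFS

def clamp_fixed(samples):
    """What a fixed-point format keeps: anything past full scale is flattened."""
    return [max(-FULL_SCALE, min(FULL_SCALE, s)) for s in samples]

# A 'too hot' signal peaking at twice full scale (about +6 dB over).
hot = [0.5, 1.2, 2.0, 1.2, 0.5]

gain = FULL_SCALE / max(hot)                        # pull the peak back to 0 dBFS
float_take = [s * gain for s in hot]                # float path: shape preserved
fixed_take = [s * gain for s in clamp_fixed(hot)]   # fixed path: flat top remains

print(float_take)  # [0.25, 0.6, 1.0, 0.6, 0.25]  -- peaks restored
print(fixed_take)  # [0.25, 0.5, 0.5, 0.5, 0.25]  -- distortion baked in
```

The squared-off values in the fixed-point result correspond to the uniform wave pattern visible in the third image: once the overshoot is clamped, no amount of gain reduction brings it back.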

For Local 695 Production Mixers, this technology represents a useful tool in their arsenal. More forgiving files offer the ability to “split the difference” when recording scenes that vary widely in terms of sound levels being recorded on a single microphone, such as recording two performers who are speaking at different volumes off of one boom or capturing dialog that will be interrupted by a practical effect. This could mean the difference between a good take and a bad one, with takes that would have been unusable in a 24-bit format being perfectly acceptable today.

However, there are some misconceptions regarding this technology that must be addressed. On a recent film set, a producer asked whether recording 32-bit float files meant that we’d no longer need to be “quiet on set” in order to get good audio. The answer to this is a resounding NO. The files can’t magically distinguish between an actor giving a performance and the idle chatter of a conversation behind the camera. A cellphone dinging at the wrong time can still blow a take. Professional etiquette is still a must. That same producer went on to ask if production mixing would become unnecessary since levels could be adjusted in post production. Again, the answer is absolutely not. This is akin to suggesting that lighting is unnecessary now that cameras can capture high dynamic range images. Sure, an unlit scene can go through a degree of brightening and manipulation in post production, but at exorbitant cost and to the detriment of the final product. An incompetent production mix (or worse, an unmixed production track) would make dailies of little use outside of visual purposes, would hinder our brothers and sisters in Local 700 when they have to stop to adjust audio levels multiple times during each shot, and would extend the re-recording mix period. The relationship between production and post production has always been that post’s life is cheaper and easier when production does its job well, and this technology, impressive though it may be, will not change that.

On a similar note, multiple articles that I read when researching this piece suggested that productions would be able to get away with capturing all of their on-set audio by planting a single microphone to capture an entire scene. That is also patently false. This technology changes how computers process audio signals in digital files, not the physics of how sound carries through the air on set. Someone being picked up by a microphone across the room is never going to sound the same as someone speaking into a dedicated mic that is being boomed directly in front of them or clipped to their lapel. In short, this technology acts as a safety net during difficult environments for capturing sound. It does not erase almost a century’s worth of best practices.

However, looking to the future, this technology will become critical as the world moves into virtual reality production. Spatial sound requires audio files to be manipulated in real time by whatever algorithm determines the listener’s proximity to a supported sound source. When in a virtual environment, listeners can be exposed to just as many disparate sound sources as they can in the real world, and having the ability to work across the entire scope of human hearing with any given audio source will be a necessity when crafting an immersive virtual soundscape.

At this time, only a handful of production recorders support the capture of 32-bit floating audio files, with those being the Sound Devices MixPre-3 II, MixPre-6 II, MixPre-10 II & A20 Mini; the Zoom F2 & F6; and the Tentacle Track E—though more are set to hit the market in 2022 and their prevalence will only continue to grow as the global chip shortage comes to an end. In the meantime, Local 695 mixers interested in investing in 32-bit float recorders are encouraged to download sample files and explore the benefits in a hands-on manner so that they are ready to work with producers whose productions might benefit from them.

As someone who remembers listening to the muffled sounds of VHS tapes when watching his favorite movies and playing video games with 8-bit audio, the distance we’ve traveled is truly staggering. While we may not find ourselves in need of an audio resolution greater than 32-bit until we figure out how to make movies on other planets, I find myself looking to the future with wonder and curiosity. How will the stories of tomorrow sound and, more significantly for us, what sort of tools will our brothers, sisters, and kin in Local 695 use to capture them?

Standing with Steve Evans: Battling Blood Cancer

by James Delhauer

A core belief at the heart of the labor movement is that we are stronger together than we are apart; that what would be impossible for one to accomplish alone becomes possible through the collective. We have a responsibility to look out for and take care of one another, secure in the knowledge that when it is our time of need, our brothers and sisters in the union will be there for us. Now is such a time when one of our own needs our help. In July of last year, brother Steve Evans, a boom operator with more than thirty years of service within Local 695, was diagnosed with Myelodysplastic Syndrome, a rare form of cancer that targets the bone marrow and causes blood cells to become abnormal. This is a call for help to all 695 members and to anyone else who might be reading.

Myelodysplastic Syndrome is a complicated illness to treat, as it works by targeting the sponge-like marrow deposits in the bones where blood cells are produced, thereby affecting the development of new blood within the body. Newly produced blood cells are unable to mature into healthy cells, entering the body at reduced efficiency and effectively poisoning it over time as contaminated marrow continues to produce more corrupted blood. If untreated, the condition can develop into acute myeloid leukemia. Treatment options range from lifestyle changes to drug regimens to radiation treatment. Steve has been undergoing routine chemotherapy injections three times a day for a week out of each month, which have helped to keep his condition stable. However, what Steve truly needs is a bone marrow transplant.

This is a process by which healthy stem cells are extracted from the marrow of a donor and used to replace the contaminated cells of the recipient. The patient is required to undergo radiation treatments in order to weaken the immune system to the point where the body will not reject the donor marrow as a foreign substance, requiring a period of isolation within the hospital and several more months of near isolation at home while the immune system recovers. During this time, the patient becomes highly susceptible to even the most common of infections. It is a difficult process that impacts every aspect of life. However, it is also the only surefire cure for a case of Myelodysplastic Syndrome such as Steve’s.

However, bone marrow transplants are difficult to facilitate. Unlike other organ transplants, where blood type is the primary factor in compatibility, marrow transfers require a close genetic match. A close blood relation is typically considered the best candidate, though even this is no guarantee, as only thirty percent of transplants worldwide use a close blood relation as the donor. According to the U.S. Health Resources & Services Administration, approximately eighteen thousand patients each year are diagnosed with life-threatening illnesses for which a bone marrow transplant represents their best chance of recovery. Unfortunately, according to data from the same agency, only around two thousand transplants are performed in the U.S. each year. For his part, Steve does not have any close family members who meet the strict age requirements, and so another donor must be found. This is where we can help.

U.S. and international donor registries are in desperate need of more volunteer donors. If you are between the ages of 18 and 49, you can go to www.bethematch.org and www.dkms.org and sign up to become a marrow donor. Testing is as simple as requesting a swab kit and the results will be added to the registries going forward. If a match for Steve is found, that information will be forwarded to his doctors and steps toward scheduling the procedure can begin. However, when I spoke with Steve on the phone, he was adamant that this is about more than just him.

A new patient is diagnosed with blood cancer every twenty-seven seconds, and he is hopeful that if a member of our Local is matched with another patient, we will step up to save a life—any life—that can be saved. This is especially important for our members of color, as the ability to find a matching donor is highly skewed around ethnic backgrounds. According to the data on bethematch.org, while there is a seventy-nine percent likelihood of finding a compatible match for Caucasian patients, that probability drops to sixty percent, forty-eight percent, forty-seven percent, and twenty-nine percent for Native American, Hispanic, Asian & Pacific Islander, and African American patients respectively. All of these groups are dangerously underrepresented across donor registries, making it far more difficult for patients of color to receive the life-saving treatment that they need.

If you do not meet the donor age requirements or are not healthy enough to become a donor, there is still plenty that you can do. In January of this year, the American Red Cross declared a national blood shortage amidst the omicron variant surge of COVID-19, with distributors being forced to ration supplies for fear of running out. For obvious reasons, this is of particular concern for blood cancer patients like Steve, many of whom undergo routine blood transfusions in order to compensate for the abnormal cells their bodies produce. Signing up to become a blood donor does not come with the same strict restrictions as becoming a marrow donor, and so I am encouraging everyone reading this to do so if they can. According to the Red Cross, a single blood donation can save as many as three lives, meaning that if every person receiving this magazine were to donate, up to seventy-five hundred lives could be saved.

There’s also a need for monetary donations. Donations of any size to either Be the Match or DKMS can mean the difference between life and death for blood cancer patients, as they facilitate outreach to expand the registry databases, cover the costs of registering new donors, and go toward research for new methods of treating blood cancer. Both organizations have longstanding reputations of good faith conduct when it comes to handling donor money and are relentless in their shared mission to fight blood cancer across the globe.

As far as Steve is concerned, this difficult battle will continue until a match can be found. Between the heightened risk posed by COVID-19 and the toll chemotherapy takes on him, it has been difficult for him to work in the last year. In January, he spoke about his condition at our Local 695 General Membership Meeting.

“I want to live,” he told us. “I don’t like asking for help, but I’m asking for everyone to help get the word out. If we find a match, that would be wonderful. But if we can help save someone else’s life too, that would make me very happy.”

Getting the word out is the very least that we can do. The thoughts of everyone at Local 695 will continue to be with Steve as his fight continues. If anyone would like to reach out to him to offer support of any kind, please reach out to the Local 695 office to be put in contact. In the meantime, sign up to become a donor if you can. Sign up to give blood if you can. Sign up to give money if you can.

Save a life if you can.

Musicals Aren’t What They Used to Be

by Tod A. Maitland CAS

While that may be true of the visual style of musicals, it is absolutely true of sound. Think back to movies like Singin’ in the Rain and how they were filmed—big master shots on giant studio stages with brightly lit sets. Compare that to West Side Story (WSS) or tick, tick…BOOM! (TTB) or virtually any of the more recent musicals. Of course, cinematography on musicals has evolved immensely, but it pales in comparison to the transition in sound that has taken place on musicals since the early ’80s.

I was there in those early days (meaning the late ’70s and early ’80s) when practically every musical number was filmed using loudspeaker playback. The only sound that post received from production to fill the speakers’ void was Room Tone. The rest was up to Foley, working hard to make the best of things with a lack of in-sync ambience. Audiences just accepted the “canned” sound of vocals. 

Of course, there were exceptions. I remember working with my father, Dennis Maitland, a prolific Production Mixer for forty years who was always on the edge of innovation. For the film The Tempest, he used a very early version of Earwigs to capture live vocals. They were so new that, to make them function, you had to run speaker wire around the entire set twice and attach the leads to a powerful amplifier, creating an induction loop. Actors/singers had to be inside the loop, and Earwig sound quality varied greatly depending on where you were in it. It was all very time-intensive. Live songs were a rare breed.

  • Andrew Garfield as Jonathan Larson in tick, tick…BOOM!
  • Re-recording mixer Andy Nelson visiting
    Tod Maitland on West Side Story
  • Jerry and Terence walking wireless battery speakers to keep close to Ansel for playback

Today, it’s entirely different. It’s all about creating reality, making musicals sound “natural” and not canned. The days of simply turning on the playback machine on set are over. Paul Hsu (Re-recording Mixer, Supervising Sound Editor, TTB) summed up our goal very well during a recent interview, calling it “hyper-reality.” Musicals live in a real/non-real world. Some songs are recorded live on set, some are prerecorded playback (PB), and many are a mix of both. Our goal is to weave between live singing and PB while maintaining an in-sync ambience, so the audience never knows the difference. But that’s so much easier said than done.

First, a Production Mixer must be equipped with quite a sizeable sound package to accommodate and anticipate everything that can (and will) happen on set. The old saying, “Large package, large rentals,” is true, but being prepared is the name of the game.

Over the last ten years, particularly in the past three, significant advancements in sound technology have made it easier for us to go smaller. Unfortunately, the more compact size hasn’t reduced the price of the equipment or made using it any easier (i.e., every component supports a different operating system!). However, the evolution of our technology has made us more mobile, increasing what we can do exponentially. 

Case in point: West Side Story. When I was first offered the film, it was immediately apparent I was way, way underequipped. Recording production sound on WSS was akin to recording a Broadway show on the sweltering summer streets of NYC for seventy-eight days. The enormity of the project alone required us to build a brand-new $300k state-of-the-art sound cart from the ground up, capable of recording thirty channels of wireless inputs and sending outputs to twenty-four IFBs and forty Comtek wireless units using seven different mixes. The system also incorporated Earwigs, Thumpers, and wireless playback speaker systems.

  • Me and Steven at my cart. A lighter moment on one of our few studio days.
  • Mike Scott booming on West Side Story

The film has massive dialog scenes, and musically, it is a mix of live and prerecorded PB vocals. Many songs required us to prepare for live vocal recording and PB simultaneously. For example, Steven Spielberg would shoot a few takes to PB, then a few takes live with very little time for a sound switchover. As you can imagine, each approach has vastly different needs and requirements. We had to prep for every possible scenario—ready for anything with little room for error.

A typical day on WSS entailed wiring up to 22 actors, swinging 3 booms, hiding up to 4 wireless effects mics, deploying up to 50 earwigs, a Thumper system, and many of our 15 wireless speakers. To further complicate things, NYC has sound restrictions. We overcame this by using a lot of midsize wireless battery-powered speakers placed strategically close within the sets for playback scenes. 

The truth was, we needed everything we had for WSS. The dance scene in the gym is a great example. We ringed the gym with our large PB speakers, hid powerful subwoofers in the bleachers for a thump track, earwigged all the primary actors, and wired everyone who vocalized anything. On a film this large, focusing on the singularity of a voice is overshadowed by the need to capture everything! 

  • Jerry Yuen ready for live vocal recording TTB October 2020
  • Jerry Yuen with two effects mics – 416s with Shure Axient transmitters on armature wire and sandbags for West Side Story.
  • Tod Maitland’s preproduction Lav testing booth for Spirited. A 416 and 6 lavs recorded on separate tracks for comparison alignment

The immensity of WSS repeated itself over and over. For “Gee, Officer Krupke,” we wired and earwigged and boomed everyone. The entire song floats in and out of live and PB. Filming the song, we followed Steven’s pattern—a few takes with PB, then a few takes live. When there is so much sound happening in a scene—vocals, natural effects, ambiences, music—sometimes it’s easier for an audience to accept vocals as reality than in an intimate piece. Sometimes not.

Almost everything in WSS is choreographed: from the opening scene where they pass paint cans in time with the music to fight scenes and even dialog. What looks like a simple sound scene becomes another moment for earwigs, effects mics, and soundscape building. 

However, a few songs were traditional two-character live vocal records (none without their obstacles, of course). “A Boy Like That” is a classic wireless/boom mix of Maria and Anita as they move through the entire apartment. Their volumes ranged from 0 to 100; I rode the pre-fade volume from -20 to +20 throughout. For “One Hand One Heart,” we acoustically treated the concave ceilings of the church with Sonex to control some of the ambient bounce. “Somewhere” was all about Rita Moreno and her singular voice. In the scene, she started deep in the set, forcing me to start with a wireless (and an ambient mic to open up the wireless sound), then switch to a boom when she was close enough.

While WSS needed every bit of my 450-lb cart, tick, tick…BOOM! required far fewer inputs and outputs. Instead, this film demanded a hyper-attention to the singularity of a voice (similar to “Somewhere”), being inside someone’s head, and staying sonically consistent throughout the entire film. In other words, if the quality of the vocal or ambient sound shifts every time prerecorded vocals are used, you lose a piece of your audience. 

TTB is filled with examples of how we worked to keep it real. The song “Boho Days” was filmed entirely live without music or click track (which doesn’t happen often). We wired everyone, had three booms covering everything possible, and just let it fly. On the other hand, the scenes inside the theatre that tie the film together were filmed live using the practical SM58 mics. Anytime there are practical mics, I always try to use them. For these scenes, we kept getting ‘popping’ on the SM58s. We stuffed as much windscreen as possible under the mic’s grille and worked hard with the actors on positioning the mics to avoid breath pops.

Terence McCormack Maitland prepping headsets and a couple booms for a dancer wild track on West Side Story

There were also scenes where we had to pivot our plan and shoot from the hip. One example is when Andrew Garfield sings “Why” at the Delacorte Theatre. In prerecords, he sang with emotion, but when it came time to shoot, Andrew’s emotional state was far beyond where it had been in preproduction. It was obvious we needed to record live. Everyone worked together to make it happen, and Paul in post stitched it together.

Regardless of the size of the musical, the whole process begins in pre-production. This is where some of the most important sound decisions are made. Generally, I start a month before filming and use this time to develop a relationship with everyone who will be integral to what we are doing: Director, Actors, Music, AD’s, Wardrobe, Production Designer, etc. Being the first ‘sound boots’ on the ground, I’m there laying the foundation for sound from prerecords through post. 

The first task in creating reality is eliminating the abrupt difference in sound quality from on-set dialog to singing. So, before vocal prerecords begin, I run a series of tests to match every actor to the particular lav mic that best matches the boom mic (different actors sound vastly different on different lavs, so we test seven brands with each actor). The chosen lav then accompanies that actor from vocal prerecords through post.

In prerecords, the music mixer adds our boom and lavs to the big fat studio mic, giving post the option to start the song using the same mic I used on set for dialog. I also attend the vocal prerecords, placing mics, and helping actors maintain the scene’s energy—if they’re dancing or emotional in the scene, the sound of that dancing or that emotion needs to carry through.

Acoustic treatment to the concave ceiling for the live song “One Hand One Heart” inside the basement of a church.

We approached live singing for WSS and TTB as if we were cutting an album: Microphone placement is always our top priority. It’s everything. We battle hard to place the mics where they need to be. Film sets need to be prepped acoustically. Non-period and extraneous sounds need to be eliminated. Every actor/dancer who needs to hear music for whatever reason gets an earwig, and background dancers get a Thumper.

Creating individual mixes for the music department enabled them to listen to live vocals in one ear and prerecorded vocals in the other. This also works very well to keep singers in sync for lip-syncing PB songs. 

For big loudspeaker playback scenes where the speakers have obliterated the possibility of recording anything useful: After the company finishes filming the music scene, my team hands out earwigs and IFB’s to all the actors/dancers. They then repeat the entire musical piece as a wild track without singing. Instead, they make all the other sounds they did while filming—dance steps, prop sounds, non-scripted vocalization, and ambiences. We plant effects mics for specific sounds and swing booms at various perspectives to capture a true, in-sync Foley/ambience track.

This wild track is another opportunity to record good FX or ambience. The production sound team are the eyes and ears on set, always scoping for anything in front of or near the camera to record. Another trick we like to slip in for vocals is to record the first line of each song live on set with the same mic used for dialog to help the transition in post.

My crew is everything! Without them, I would be dead in the water. For the last fifteen years, my team has consisted primarily of Jerry Yuen, Mike Scott, and Terence McCormack Maitland. Each of them is amazing and genuinely committed to advancing our level of sound on every film. I’ve been very fortunate to work with this talented team.

In the end, whether we’re recording dialog/vocals on a lav and a boom simultaneously, capturing multi-mic sound effects, wild tracks, musical Foley tracks, or ambiences, my goal is to give post as much variety and as many elements as possible for the mix. It’s no different from any form of art-making; you need a full set of tools to realize your vision.

I believe musicals have become the most complex, challenging, and rewarding films to record. There are so many elements to deal with (and so many personalities to navigate!). In addition, musicals bring out the best collaboration between the production and post-production sound departments. Most films I’ve worked on don’t lock in their post sound team until after production, so on musicals, it’s refreshing (and incredibly helpful) to get to talk before filming begins.

I couldn’t be happier with the TTB and WSS final mixes. They are rich, complex, subtle, smooth, and beautiful. But, most of all, they are real (or as real as you can get without filming one hundred percent live and painting out booms). In addition, both films’ music is stunning, adding excellent quality and depth to the overall mix—without overpowering the detail.

Ric Rambles

by Ric Teller

2022 KTLA Rose Parade A2 crew L-R: Craig Rovello, Greg Ferrara, Ric Teller, Ross Deane. Collectively, these four have worked on more than 120 Rose Parades.

In my initial article, “Ric Rambles and Reflects,” a few months ago, I belabored the story of my start in television at KTLA, closing with the possibility that I might be working on their Rose Parade broadcast for the fortieth time. Wow! What a cliffhanger.

Well, I did it. Forty. More than half the parades televised by that historic seventy-five-year-old station. Not a record, but pretty good. In 1982, when I did my first, it was in mono and SD (Standard Definition). Actually, it was OD (Only Definition). Soon, the show was broadcast in stereo, and eventually in high definition. My first parade was the initial one hosted by the iconic pair, Bob Eubanks and Stephanie Edwards. Thirty-five years later, when Bob and Stephanie retired, Mark Steines and Leeza Gibbons stepped in and have done a terrific job ever since. As you can imagine, many things have changed over that amount of time; of course, there have been some constants. The parade still goes from left to right and the food at the Elks Club in Pasadena is … well, let’s just say that the New Year’s breakfast of powdered eggs and filet of Adidas is legendary.

There are a couple of certainties about my career. One: It is coming to an end, “on the back nine” as my fellow ’52 baby, Dennis Mays, reminds me, and Two: I’ve been very fortunate. David Velte, the iconic, hilarious, unique, beloved mixer, would have reminded me that we knew each other long before I changed my name (from Lucky Bastard). Believe me, I’m grateful for all I have seen and done, and the longevity has given me the gift of working with many wonderful people. The friends and mentors who are no longer with us are missed, but we continue to appreciate them with stories and memories.

I was a little late getting started in television, nearly thirty years old when I began that run of consecutive parades. Over the years, I’ve worked on more than twenty Oscars, Emmys, Grammys, Jerry Lewis Telethons, Kennedy Center Honors, and the Dick Clark alphabet soup of ACMs and AMAs. Gotta love live television. My main excuse for the gaudy numbers is that I believe production departments often look at the previous crew list and go with that. Hey, it worked last year. Annual events such as award shows lend themselves to longevity in entertainment television.

American Music Awards 1995

Over the years, many things have changed. WTTW, in Chicago, a leader in simulcasting (a stereo broadcast carried on an FM radio station that was in sync with the picture on the air), began transmitting music programs in stereo in 1984. That same year, The Tonight Show Starring Johnny Carson, on NBC, began broadcasting in stereo, although at first, that new format was only available in New York City. By the time I began working on award shows, in 1987, all the networks had joined the stereo party, although not all the programs. How many remember “In Stereo (where available)”? Eventually, more audio channels were added: the somewhat problematic 5.1, the more confusing 7.1, and I believe Ed Greene once did a show in 3.9 … maybe. I’ll have to ask Hugh. One of the other especially helpful changes for the aging A2 population has been the transition from copper to fiber. At the Grammys a few years ago, the connections at all seven mixing consoles were fiber. Yes, seven. One on-air console in the broadcast truck, two music mix consoles in the two music trucks, two FOH (front of house) consoles, one for production and one for music, and two monitor mix consoles. Today, fiber is the norm; copper connections, except for the runs from the stages, are rare and used mostly for backup. It has made split world larger, more complicated, and more flexible. A welcome consequence of the change from copper to fiber has been a serious reduction in ground issues. And backaches.

Initially, all my jobs were programs made by or for networks. Anyone who paid attention to the Emmy Awards last year knows that numerous non-network companies have become successful content providers, and are having a major effect on how people view programs. Even stalwart awards shows have been touched. The Golden Globes, after a long and interesting relationship with network television, was not televised this year. The Academy of Country Music Awards, initially broadcast by ABC, then NBC, and more recently CBS, will only be available on Amazon Prime Video in 2022, and last year, longtime CBS presentation The Tony Awards streamed for two hours on Paramount+, then continued for two more hours on CBS. That was a long day.

The Four Questions

  1. Are those 416’s T or P?
  2. Did you overbias 3 dB for the 456?
  3. Can I borrow Yibbox?
  4. Ready for a rock on the AT?

Over the years, once common terms have become archaic, gone the way of Wrong Way Corrigan. If you know the answer to all these questions, you too might be on the back nine. If I remember, I’ll reveal the answers at the end of the column.

At this fortieth parade, my last one, I took a trip down memory lane and tried to make a list of all the mixers. I came up with a baker’s dozen. I’m grateful for the opportunity to have worked with so many terrific people. If I left anyone out, I am truly sorry.

• Jerry Pattison, a longtime KTLA sound engineer. Mixer on my first one.
• John Kennamer, KTLA staff sound mixer, before that, John worked for WTTW in Chicago at the beginning of stereo television
• Ken Becker, sound mixer, began his career at KTLA in 1952 and acted in four Elvis Presley films
• Monte Lee, KTLA staff mixer, and fairly famous midwestern musician
• Shawn Murphy … yes, that Shawn Murphy
• Ron Estes, worked at KTLA after a long career at NBC, he put The Tonight Show on the air in stereo
• Tom Ancell, audio director at KCET
• Leamon “Lee” Gamel, versatile mixer of entertainment and sports
• Carolyn Bowden, one-time ABC staff sound engineer, now has the job I aspire to have
• Russ Gary, had a long, successful career as a record mixer and an equally successful career mixing for television
• Sam Mollaun, current KTLA mixer, with many television credits
• Pete Damski, very busy, versatile sound mixer, then a second career as a valued educator
• Ish Garcia, mixed it this year, he’s almost as old as me with a better attitude

My sincere thanks to each of you. To those who have passed: you are missed, and we raised a glass to your memory at the post-parade meeting.

This column began as a look at longevity in entertainment television. Through forty Rose Parades and more than ten times that many award shows and specials, I can tell you without question that it isn’t about numbers, it isn’t about what people think of as “prestigious” shows, it isn’t even about working in wonderful locations (that said, I recommend Town Park in Telluride if it is offered). It is entirely about the people.

That’s my story and I’m sticking to it.

Oh yeah, the four answers.

  1. P
  2. Yes
  3. Mine is broken
  4. Ready

CAS AWARD NOMINEES

The Cinema Audio Society announced the nominees for the 58th Annual CAS Awards for Outstanding Achievement in Sound Mixing for 2021 in seven categories.

The 58th Annual Cinema Audio Society Awards returns as a live event on Saturday, March 19, 2022, in the Wilshire Grand Ballroom at the InterContinental Los Angeles Downtown.


Motion Picture – Live-Action

Dune


Mac Ruth CAS–Production Mixer
Ron Bartlett CAS–Re-recording Mixer
Douglas Hemphill CAS–Re-recording Mixer
Alan Meyerson CAS–Scoring Mixer
Tommy O’Connell–ADR Mixer
Don White–Foley Mixer
Production Sound Team: György Mihályi
Senior 1st AS, Áron Havasi 1st AS, Eliza Zolnai 2nd AS

No Time to Die


Simon Hayes CAS–Production Mixer
Paul Massey CAS–Re-recording Mixer
Mark Taylor–Re-recording Mixer
Al Clay–Scoring Mixer
Stephen Lipson–Scoring Mixer
Mark Appleby–ADR Mixer
Adam Mendez CAS–Foley Mixer
Production Sound Team: Arthur Fenn Key 1st AS, Robin Johnson 1st AS, Ben Jeffes 2nd AS/Sound Coordinator, Millie-Ackerman Blankley Sound Trainee, 2nd Unit: Tom Barrow–Sound Mixer, Loveday Harding 1st AS, Frankie Renda Sound Trainee

Spider-Man: No Way Home

Spider-Man from Columbia Pictures’ SPIDER-MAN: NO WAY HOME.


Willie Burton CAS–Production Mixer
Kevin O’Connell CAS–Re-recording Mixer
Tony Lamberti CAS–Re-recording Mixer
Warren Brown–Scoring Mixer
Howard London CAS–ADR Mixer
Randy K. Singer CAS–Foley Mixer
Production Sound Team: Adam Mohundro Boom, Tyler Blythe Utility Sound, Thomas Doolittle

The Power of the Dog


Richard Flynn–Production Mixer
Robert Mackenzie–Re-recording Mixer
Tara Webb–Re-recording Mixer
Graeme Stewart–Scoring Mixer
Steve Burgess–Foley Mixer
Production Sound Team: Sandy Wakefield 1st AS, Jessica McNamara 2nd AS, Lisa Leota 2nd AS

West Side Story

Ariana DeBose as Anita and David Alvarez as Bernardo in 20th Century Studios’ WEST SIDE STORY. Photo by Niko Tavernise. © 2021 20th Century Studios. All Rights Reserved.


Tod Maitland CAS–Production Mixer
Andy Nelson CAS–Re-recording Mixer
Gary Rydstrom CAS–Re-recording Mixer
Shawn Murphy–Scoring Mixer
Doc Kane CAS–ADR Mixer
Frank Rinella–Foley Mixer
Production Sound Team: Jerry Yuen, Mike Scott, Terence McCormack Maitland


Motion Picture – Animated

Encanto


Paul McGrath CAS–Original Dialogue Mixer
David E. Fluhr CAS–Re-recording Mixer
Gabriel Guy CAS–Re-recording Mixer
David Boucher CAS–Song Mixer
Alvin Wee–Scoring Mixer
Doc Kane CAS–ADR Mixer
Scott Curtis–Foley Mixer

Luca


Vince Caro CAS–Original Dialogue Mixer
Christopher Scarabosio CAS–Re-recording Mixer
Tony Villaflor–Re-recording Mixer
Greg Hayes–Scoring Mixer
Jason Butler–Foley Mixer
Richard Duarte–Foley Mixer

Raya and the Last Dragon


Paul McGrath CAS–Original Dialogue Mixer
David E. Fluhr CAS–Re-recording Mixer
Gabriel Guy CAS–Re-recording Mixer
Alan Meyerson CAS–Scoring Mixer
Doc Kane CAS–ADR Mixer
Scott Curtis–Foley Mixer

Sing 2


Edward Sutton–Original Dialogue Mixer
Gary A. Rizzo CAS–Re-recording Mixer
Juan Peralta–Re-recording Mixer
Alan Meyerson CAS–Scoring Mixer
Robert Edwards–ADR Mixer
Frank Rinella–Foley Mixer


The Mitchells vs. The Machines


Brian Smith–Original Dialogue Mixer
Aaron Hasson–Original Dialogue Mixer
Tony Lamberti CAS–Re-recording Mixer
Michael Semanick CAS–Re-recording Mixer
Brad Haehnel–Scoring Mixer
John Sanacore CAS–Foley Mixer


Motion Picture – Documentary

Becoming Cousteau


Tony Volante CAS–Re-recording Mixer
Phil McGowan CAS–Scoring Mixer

Summer of Soul (…Or, When the Revolution Could Not Be Televised)

Nina Simone performs at the Harlem Cultural Festival in 1969, featured in the documentary SUMMER OF SOUL. Photo Courtesy of Searchlight Pictures. © 2021 20th Century Studios All Rights Reserved


Emily Strong–Production Mixer
Paul Hsu–Re-recording Mixer
Roberto Fernandez CAS–Re-recording Mixer
Paul Massey CAS–Re-recording Mixer
Jimmy Douglas–Music Mixer
Production Sound Team: Alan Chow, Aisha Hallgren, Rich Mach, Mike Stahr, Emily Strong

The Velvet Underground


Leslie Shatz–Re-recording Mixer

Tina


Caleb A. Mose–Production Mixer
Lawrence Everson CAS–Re-recording Mixer
Phil McGowan CAS–Scoring Mixer
Production Sound Team: Patrick Becker, Charles Mead, Paddy Boland, Sam Kashefi, Raymond Anderegg, Adrienne Wade

Val


Michael Haldin–Production Mixer
John Bolen–Re-recording Mixer
Garth Stevenson–Scoring Mixer
Mitch Dorf–ADR Mixer


Non-Theatrical Motion Pictures or Limited Series

Hawkeye
Ep. 3 “Echoes”


Pud Cusack CAS–Production Mixer
Thomas Myers CAS–Re-recording Mixer
Danielle Dupre–Re-recording Mixer
Casey Stone CAS–Scoring Mixer
Doc Kane CAS–ADR Mixer
Kevin Schultz–Foley Mixer
Production Sound Team: Max Osadchenko, Matt Derber, Patrick Anderson, Paul Katzman, Josh Tamburo, Ken Strain, Robert Maxfield, Alana Knutson, Michael P. Clark, James B. Appleton

Mare of Easttown
Ep. 6 “Sore Must Be the Storm”


Richard Bullock–Production Mixer
Joseph DeAngelis CAS–Re-recording Mixer
Chris Carpenter–Re-recording Mixer
Production Sound Team: Tanya Peele, Kelly Lewis

The Underground Railroad
Chapter 10 “Mabel”


Joseph White Jr. CAS–Production Mixer
Onnalee Blank CAS–Re-recording Mixer
Mathew Waters CAS–Re-recording Mixer
Geoff Foster–Scoring Mixer
Kari Vahakuopus–Foley Mixer
Production Sound Team: Alfredo Viteri, Tyler Blythe, Timothy R. Boyce, Alexander Lowe

WandaVision Ep. 8
“Previously On”


Christopher Giles CAS–Production Mixer
Danielle Dupre–Re-recording Mixer
Casey Stone CAS–Scoring Mixer
Doc Kane CAS–ADR Mixer
Frank Rinella–Foley Mixer
Production Sound Team: Kurt Petersen, John Harton

WandaVision Ep. 9
“The Series Finale”


Christopher Giles CAS–Production Mixer
Michael Piotrowski CAS–Production Mixer
Danielle Dupre–Re-recording Mixer
Casey Stone CAS–Scoring Mixer
Doc Kane CAS–ADR Mixer
Malcolm Fife–Foley Mixer
Production Sound Team: Kurt Petersen, John Harton


Television Series: One Hour

Squid Game
S1 Ep. 7 “VIPS”


Park Hyeon-Soo–Production Mixer
Kang Hye-young–Re-recording Mixer
Serge Perron–Re-recording Mixer
Cameron Sloan–ADR Mixer

Succession
S3 Ep. 1 “Secession”


Ken Ishii CAS–Production Mixer
Andy Kris–Re-recording Mixer
Nicholas Renbeck–Re-recording Mixer
Tommy Vicari CAS–Scoring Mixer
Mark DeSimone CAS–ADR Mixer
Micah Blaichman–Foley Mixer
Production Sound Team: Peter Deutscher, Michael McFadden, Luigi Pini

The Morning Show
S2 Ep. 1 “My Least Favorite Year”


William B. Kaplan CAS–Production Mixer
Elmo Ponsdomenech CAS–Re-recording Mixer
Jason “Frenchie” Gaya–Re-recording Mixer
Carter Burwell–Scoring Mixer
Brian Smith–ADR Mixer
James Howe–Foley Mixer
Production Sound Team: Alexander Burstein Boom Operator, Tommy Giordano 2nd Boom, Sound Tech, Krysten Kabzenell Utility Sound

The White Lotus
S1 Ep. 5 “The Lotus Eaters”


Walter Anderson CAS–Production Mixer
Christian Minkler CAS–Re-recording Mixer
Ryan Collins–Re-recording Mixer
Jeffrey Roy CAS–ADR Mixer
Randy Wilson–Foley Mixer
Production Sound Team: Sabi Tulok Boom, Nohealani Nihipali Day Sound Utility

Yellowstone
S4 Ep. 1 “Half the Money”


Andrejs Prokopenko–Production Mixer
Diego Gat CAS–Re-recording Mixer
Samuel Ejnes CAS–Re-recording Mixer
Michael Miller CAS–ADR Mixer
Chris Navarro CAS–ADR Mixer
Production Sound Team: Andrew Chavez, Danny Gray


Television Series: Half Hour

Cobra Kai
S3 Ep. 10 “December 19”


Michael Filosa CAS–Production Mixer
Joseph DeAngelis CAS–Re-recording Mixer
Chris Carpenter–Re-recording Mixer
Phil McGowan CAS–Scoring Mixer
Marilyn Morris–ADR Mixer
Michael S. Head–Foley Mixer
Production Sound Team: Matt Robinson, Daniel Pruitt, Tiffany Mack

Only Murders in the Building
S1 Ep. 3 “How Well Do You Know Your Neighbors?”


Joseph White Jr. CAS–Production Mixer
Mathew Waters CAS–Re-recording Mixer
Lindsey Alvarez CAS–Re-recording Mixer
Alan DeMoss–Scoring Mixer
Stiv Schneider–ADR Mixer
Karina Rezhevska–Foley Mixer
Production Sound Team: Jason Benjamin, Timothy R. Boyce Jr.

Ted Lasso
S2 Ep. 5 “Rainbow”


David Lascelles AMPS–Production Mixer
Ryan Kennedy–Re-recording Mixer
Sean Byrne CAS–Re-recording Mixer
Brent Findley CAS MPSE–ADR Mixer
Jamison Rabbe–ADR Mixer
Arno Stephanian CAS MPSE–Foley Mixer
Production Sound Team: Emma Chilton, Andrew Mawson, Michael Fearon

The Book of Boba Fett
S1 Ep. 1 “Chapter 1: Stranger in a Strange Land”


Shawn Holden CAS–Production Mixer
Bonnie Wild–Re-recording Mixer
Scott R. Lewis–Re-recording Mixer
Alan Meyerson CAS–Scoring Mixer
Richard Duarte–Foley Mixer
Production Sound Team: Patrick Martens, Veronica Kahn, Moe Chamberlain, Kraig Kishi, Cole Chamberlain

What We Do in the Shadows
S3 Ep. 4 “The Casino”


Rob Beal–Production Mixer
Diego Gat CAS–Re-recording Mixer
Samuel Ejnes CAS–Re-recording Mixer
Mike Tehrani–ADR Mixer
Stacey Michaels CAS–Foley Mixer
Production Sound Team: Ryan Longo, Camille Kennedy


Television Non-Fiction, Variety or Music – Series or Specials

Billie Eilish: The World’s a Little Blurry


Jae Kim–Production Mixer
Elmo Ponsdomenech CAS–Re-recording Mixer
Jason “Frenchie” Gaya–Re-recording Mixer
Aron Forbes–Scoring Mixer
Jeffrey Roy CAS–ADR Mixer
Shawn Kennelly–Foley Mixer

Bo Burnham: Inside
Bo Burnham–Production Mixer
Joel Dougherty–Re-recording Mixer

Formula 1:
Drive to Survive
S3 Ep. 9 “Man on Fire”


Doug Dreger–Production Mixer
Nick Fry–Re-recording Mixer
Steve Speed–Re-recording Mixer

McCartney 3, 2, 1
Ep. 1


Laura Cunningham–Production Mixer
Gary A. Rizzo CAS–Re-recording Mixer

The Beatles: Get Back Part 3


Peter Sutton (dec.)–Production Mixer
Michael Hedges CAS–Re-recording Mixer
Brent Burge–Re-recording Mixer
Alexis Feodoroff–Re-recording Mixer
Sam Okell–Music Mixer
Michael Donaldson–Foley Mixer


Oscar Sound Nominees

Belfast


Denise Yarde
Simon Chase
James Mather
Niv Adiri
Production Sound Team: Lawrence Meads, Jennifer Annor, Kate Morath, Jamie Nicholls

Dune


Mac Ruth
Mark Mangini
Theo Green
Doug Hemphill
Ron Bartlett
Production Sound Team: György Mihályi Senior 1st AS, Áron Havasi 1st AS, Eliza Zolnai 2nd AS

No Time to Die


Simon Hayes
Oliver Tarney
James Harrison
Paul Massey
Mark Taylor
Production Sound Team: Arthur Fenn Key 1st AS, Robin Johnson 1st AS, Ben Jeffes 2nd AS/Sound Coordinator, Millie-Ackerman Blankley Sound Trainee, 2nd Unit: Tom Barrow Sound Mixer, Loveday Harding 1st AS, Frankie Renda Sound Trainee

The Power of the Dog

THE POWER OF THE DOG (L to R): BENEDICT CUMBERBATCH as PHIL BURBANK, JESSE PLEMONS as GEORGE BURBANK in THE POWER OF THE DOG. Cr. KIRSTY GRIFFIN/NETFLIX © 2021


Richard Flynn
Robert Mackenzie
Tara Webb
Production Sound Team: Sandy Wakefield 1st AS, Jessica McNamara 2nd AS, Lisa Leota 2nd AS

West Side Story


Tod A. Maitland
Gary Rydstrom
Brian Chumney
Andy Nelson
Shawn Murphy
Production Sound Team: Jerry Yuen, Mike Scott, Terence McCormack Maitland


AMPS FILM AWARDS NOMINEES
Excellence in Sound for a Feature Film

Belfast


Denise Yarde
Simon Chase
James Mather
Niv Adiri
Production Sound Team: Lawrence Meads, Jennifer Annor, Kate Morath, Jamie Nicholls

Dune


Mac Ruth AMPS
György Mihályi
Doug Hemphill
Mark Mangini
Theo Green
Ron Bartlett
Production Sound Team: György Mihályi Senior 1st AS, Áron Havasi 1st AS, Eliza Zolnai 2nd AS

Last Night in Soho


Colin Nicolson AMPS
Colin Gregory AMPS
Julian Slater
Tim Cavagin AMPS
Production Sound Team: Colin Gregory Key 1st AS, Thayna Mclaughlin 1st AS, Pete Blaxill 2nd AS/Playback

No Time to Die


Simon Hayes AMPS
Arthur Fenn
Oliver Tarney
Paul Massey
Production Sound Team: Arthur Fenn Key 1st AS, Robin Johnson 1st AS,
Ben Jeffes 2nd AS/Sound Coordinator, Millie-Ackerman Blankley Sound Trainee, 2nd Unit: Tom Barrow Sound Mixer, Loveday Harding 1st AS, Frankie Renda Sound Trainee

West Side Story


Tod A. Maitland
Michael Scott
Brian Chumney
Gary Rydstrom
Andy Nelson
Production Sound Team: Jerry Yuen, Mike Scott, Terence McCormack Maitland


BAFTA Sound Nominees

Dune


Mac Ruth
Mark Mangini
Douglas Hemphill
Theo Green
Ron Bartlett
Production Sound Team: György Mihályi Senior 1st AS, Áron Havasi 1st AS, Eliza Zolnai 2nd AS

Last Night in Soho


Colin Nicolson
Julian Slater
Tim Cavagin
Dan Morgan
Production Sound Team: Colin Gregory Key 1st AS, Thayna Mclaughlin 1st AS, Pete Blaxill 2nd AS/Playback

No Time to Die


James Harrison
Simon Hayes
Paul Massey
Oliver Tarney
Mark Taylor
Production Sound Team: Arthur Fenn Key 1st AS, Robin Johnson 1st AS, Ben Jeffes 2nd AS/Sound Coordinator, Millie-Ackerman Blankley Sound Trainee, 2nd Unit: Tom Barrow Sound Mixer, Loveday Harding 1st AS, Frankie Renda Sound Trainee

A Quiet Place Part II


Erik Aadahl
Michael Barosky
Brandon Proctor
Ethan Van Der Ryn
Production Sound Team: Gillian Arthur Boom Operator, Michael McFadden Utility Sound Technician

West Side Story


Brian Chumney
Tod Maitland
Andy Nelson
Gary Rydstrom
Production Sound Team: Jerry Yuen, Mike Scott, Terence McCormack Maitland

Names in Bold are Local 695 members

The M1 Pro & Max MacBook Pros from Apple

by James Delhauer

The digital revolution has certainly lived up to its name over the last few years. It’s odd to think that most of us working in film and television today still remember a time when a digital workflow was a novelty, something that aspiring artists experimented with to learn the craft. The first entirely digital film, Star Wars: Episode II – Attack of the Clones, was released nearly twenty years ago using the earliest high-definition processes. The files recorded on that film (a term that doesn’t even technically apply) were revolutionary for the time, but progress marches ever forward. Today, hardware capable of producing higher-quality images is available at your local Best Buy, and professional sets utilize devices capable of recording uncompressed 4K, 8K, and now even 12K images. All of that data needs to be processed. This year’s new MacBook Pros might be the tool that Local 695 technicians need to do it.

Whether you love or hate the Apple ecosystem of products, there is no denying that they’ve garnered respect in creative industries and become one of the leading industry standards for artists around the world. Last year, Apple surprised the world when it announced that it would be abandoning its 15-year relationship with Intel and producing its own line of processors for all products across the Mac line of personal computers. The new M1 processor debuted in the fall of 2020 in the 13-inch MacBook laptops and Mac Mini desktop, boasting impressive specifications at an even more impressive price point. Months later, a line of all-in-one iMacs with identical specs was added to the M1 family. This first-generation M1 processor yielded promising results, but its hardware specifications were well below what many considered necessary for a “professional” workstation. This caused users to turn toward the future, wondering when Apple would unveil new processors for its Pro series of products. After months of speculation and delays attributed to the ongoing global chip shortage, the company announced its new 14-inch and 16-inch MacBook Pros, powered by M1 Pro and M1 Max processors.

The units offer a substantial jump in performance over the initial M1 lineup from last year, boasting 10-core central processing units (CPUs) and 16-core Neural Engines with up to 32-core graphics processing units (GPUs). One of the biggest critiques of the original M1 series was that Apple’s MacBooks, Mac Minis, and iMac models were all restricted to just 16GB of unified memory, a configuration in which memory is shared between the CPU and GPU rather than each unit having its own dedicated pool of RAM. The new units raise this cap to 64GB, removing one of the largest performance bottlenecks facing power users. Apple has also introduced a brand-new Media Engine into these laptops, a hardware-accelerated feature that decodes and encodes ProRes, H.264, and HEVC video files. This frees up resources for the rest of the machine, significantly increasing productivity when working with these types of files. These improvements make the M1 Pro and Max laptops ideal for high-resolution transcodes, 3D rendering, and high-track-count audio work. In fact, Apple has gone so far as to boast that at max specifications, its $3,899 16-inch M1 Max units offer better 8K ProRes performance than the 28-core CPU version of the 2019 Mac Pro desktop, which retails for a minimum of $12,999 and up to $54,199 when fully upgraded.

Many users will also be delighted to learn that Apple has restored some of the previously discontinued input ports. The new MacBook Pros come equipped with three Thunderbolt 4 ports, an HDMI port for an external monitor, an SDXC card reader, and a dedicated charging port compatible with Apple MagSafe 3 chargers. This removes some of the need for the expensive dongles and adapters that many have criticized since the 2016 refresh of the Apple notebook line. However, the company remains adamantly opposed to supporting aging interfaces like USB 3.0, CAT 5 Ethernet, and its own Thunderbolt 2 ports, meaning users will still require adapter docks for fairly standard external devices like hard drives, non-Apple mice and keyboards, and any type of networking equipment.

But bold claims from a trillion-dollar company can’t compare to hands-on experience, so I got my hands on a 16-inch MacBook Pro with an M1 Max processor and 64GB of RAM and put it head-to-head against my 2021 Alienware m15 R4 gaming laptop, which runs on an i7-10870H CPU, an RTX 3080 GPU, 32GB of RAM, and Windows 10 Pro. The results were interesting, to say the least.

For my tests, I downloaded or acquired samples of a variety of file formats, including 8K ProRes 422 HQ, 4K DNxHQ, 6K BRAW, 4K H.264, 4K H.265, and 4K R3D files. All files were exported as UHD ProRes 422 HQ and H.264 at 40Mbps, as these are the most commonly accepted industry delivery formats for broadcast and web content. All content was captured at 23.976 and was processed in both Adobe Premiere Pro and DaVinci Resolve.
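For reference, the two delivery targets above can also be expressed as command-line exports. This is a hypothetical ffmpeg sketch of the same formats (the tests themselves used Premiere Pro and DaVinci Resolve, and the file names here are placeholders):

```python
# Hypothetical ffmpeg equivalents of the two delivery exports described
# above. These only illustrate the target formats; the actual benchmarks
# were run in Premiere Pro and DaVinci Resolve.

def prores_hq_cmd(src: str, dst: str) -> list[str]:
    # UHD ProRes 422 HQ (in ffmpeg's prores_ks encoder, profile 3 is HQ)
    return ["ffmpeg", "-i", src, "-vf", "scale=3840:2160",
            "-c:v", "prores_ks", "-profile:v", "3",
            "-c:a", "copy", dst]

def h264_40mbps_cmd(src: str, dst: str) -> list[str]:
    # UHD H.264 at a 40 Mbps target bitrate
    return ["ffmpeg", "-i", src, "-vf", "scale=3840:2160",
            "-c:v", "libx264", "-b:v", "40M",
            "-c:a", "aac", dst]

print(" ".join(prores_hq_cmd("master_8k.mov", "delivery_uhd.mov")))
print(" ".join(h264_40mbps_cmd("master_8k.mov", "delivery_uhd.mp4")))
```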

The accompanying chart gives a full overview of the results, but the upshot is that Apple has put a real contender on the market, one that dominates the Alienware’s scores across the board, with an overwhelming advantage when it comes to ProRes content. Premiere Pro, long known for stability issues, crashed far less frequently on the Apple computer than on the Windows one. DaVinci Resolve, though reasonably stable on both operating systems, seems to process content more efficiently in macOS. The Alienware struggled with 8K content across the board, quickly flagging VRAM errors if a sequence grew longer than a couple of minutes. The same error appeared when adding an included LUT and seven adjustment nodes to ProRes and H.264 footage; it took only four adjustment nodes to generate these errors when working with DNxHQ or H.264 footage. Meanwhile, the M1 Max laptop kept chugging along, only throwing up a VRAM error when I hunted down a piece of 29.97 12K BRAW footage and added seventeen adjustment nodes just for the purpose of trying to break the system. Interestingly, however, the Alienware did have a slight edge when it came to processing 4K R3D files captured on a camera from Red. This would seem to corroborate other online reports that Apple’s new flagship notebook struggles with Red video, suggesting that some further optimization is required. Whether that optimization will ultimately come from Red or Apple (if at all) remains to be seen.

My biggest question leading up to release was thermal throttling, a safety feature included in most modern systems to prevent overheating. If the processor reaches a certain temperature, the system reduces the amount of power supplied. While this protects the computer from melting its own hardware, it comes with a noticeable reduction in performance. The issue has made its way into the mainstream in recent years, as developers have struggled to pack ever more power into ever smaller gizmos and gadgets, with Apple having particular difficulty circumventing it. In 2019, MacBook Pros equipped with the more expensive i9 processors had such a problem with thermal throttling that they often performed worse than units outfitted with cheaper i7 processors. The original M1 MacBooks suffered from throttling issues as well, leading many to question whether it was possible to put this level of performance into a 13-inch laptop.

While I cannot comment on the 14-inch model, the 16-inch M1 Max seems to have very few throttling issues. To test, I transcoded thirty seconds of 8K ProRes 422 HQ to DNxHQ, which completed in twenty-seven seconds. I then created a two-hour timeline and filled it with files of various lengths, framerates, codecs, and resolutions, just to make the processor as angry as possible. After completing that render three hours later, I reran the original thirty-second ProRes-to-DNxHQ test, which completed in twenty-nine seconds. A longer series of tests would be necessary to know for sure, but this suggests that thermal throttling has been reduced, if not eliminated. That can likely be attributed to the unit’s low power draw, with high-intensity tasks pulling only about forty watts in sustained use. This trumps my Alienware, which achieves less impressive results on a 110-watt draw. In this instance, Apple really is able to do more with less, making the M1 Pro and Max MacBooks two of the more eco-friendly workstations on the market.
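As a quick sanity check on those numbers, the before-and-after transcode times work out to only a modest slowdown. A minimal sketch of the arithmetic, using the timings reported above:

```python
# Back-of-the-envelope throttling check using the timings reported above:
# thirty seconds of 8K ProRes 422 HQ transcoded to DNxHQ in 27 s when the
# machine was cold, and in 29 s after a three-hour sustained render.
clip_seconds = 30.0
cold_seconds = 27.0
hot_seconds = 29.0

# Realtime speed factor: values above 1.0 mean faster than realtime.
cold_speed = clip_seconds / cold_seconds
hot_speed = clip_seconds / hot_seconds

slowdown_pct = (hot_seconds - cold_seconds) / cold_seconds * 100

print(f"cold: {cold_speed:.2f}x realtime, hot: {hot_speed:.2f}x realtime")
print(f"slowdown after sustained load: {slowdown_pct:.1f}%")
```

Roughly a 7% penalty after three hours of sustained load, compared with the double-digit drops the throttling-prone 2019 Intel models were criticized for.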

Performance notwithstanding, the computer features some nice quality-of-life improvements. The divisive Touch Bar, first introduced in the 2016 edition of the MacBook Pro, has been removed, and the fragile tap-style keyboard has been replaced with a more traditional mechanical key setup. Typing on this computer feels nicer and, speaking as someone with joint problems in my hands, the softer impact when typing is much appreciated. The display is lovely, with rich, contrasted colors that seem consistent with the iPhone XR and iPhone 13 Pro I was able to compare it against. Out of a sense of nostalgic curiosity, I watched the aforementioned Attack of the Clones on this overpowered display, and while the film definitely shows its age, it looks and sounds spectacular on the MacBook Pro. The image is crisp, clear, and sharp and, more impressively for a laptop, the same can be said of the sound. It’s amazing how far we have come in just twenty years of digital filmmaking.

The M1 Pro retails starting at $1,999 for the 14-inch model and $2,499 for the 16-inch model, while the M1 Max will set you back $2,399 and $3,499 in those sizes respectively.

RF Over Fiber

by Delroy Leon Cornick Jr.

As working sound professionals, we all know the challenges and complexities of both cart placement and RF/signal management. Finding the perfect spot for a sound cart is a delicate balance between being close enough to properly record the scene and remain an integral part of the filmmaking process, yet far enough away that you’re not constantly moving each time a new angle is filmed. Adding distance between the cart and set brings new challenges, thanks to ever-shrinking available frequencies. While remoting antennas over CAT 5/Dante has gained in popularity, and wireless video transmission remains popular, those approaches require purchasing Dante-enabled gear or competing with other wireless equipment. RF over Fiber (RFoF) can work with any existing setup, and I’m a firm believer that adopting it could be a game changer for our profession. It’s been used in the military and broadcasting for years, and as the technology matures, the price has come down enough to be adopted by individual owner-operators.

RF over Fiber is the conversion of standard radio frequency signals into pulses of light transmitted through fiber optic cables, which have none of the loss of traditional copper. Where RG8 and LMR cables top out at around one hundred feet before needing amplification (especially at 2.4GHz), fiber optic cables can carry these same signals for miles with negligible loss. Now we can park the cart anywhere at a location with our antennas mere feet from set, down one cable, picture included. That could mean filming on the roof of a ten-story building with no elevator while mixing from the parking lot, or shooting in the middle of a dense forest while comfortably back at basecamp, since cable length is a non-issue. Utilizing fiber can also mean shooting a quick scene three blocks away without having to leave your original location. Since we can simply place our antennas just out of frame, signal dropouts are nearly a thing of the past. My favorite fact about fiber? It doesn’t pass electricity, so it’s waterproof. Theoretically, one could film a scene on a boat … on a beach.
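To put rough numbers on that loss difference: using typical published attenuation figures (assumed here, not measurements from this article) of about 6.8 dB per 100 ft for LMR-400-class coax at 2.4 GHz and about 0.35 dB per kilometer for single-mode fiber, a short sketch comparing run lengths:

```python
# Rough cable-loss comparison at 2.4 GHz. The attenuation constants are
# typical published figures, assumed for illustration:
#   LMR-400-class coax: ~6.8 dB per 100 ft at 2.4 GHz
#   single-mode fiber:  ~0.35 dB per km
COAX_DB_PER_100FT = 6.8
FIBER_DB_PER_KM = 0.35
FT_PER_KM = 3280.84

def coax_loss_db(feet: float) -> float:
    return feet / 100.0 * COAX_DB_PER_100FT

def fiber_loss_db(feet: float) -> float:
    return feet / FT_PER_KM * FIBER_DB_PER_KM

for run_ft in (100, 300, 1000):
    print(f"{run_ft:>5} ft: coax {coax_loss_db(run_ft):5.1f} dB, "
          f"fiber {fiber_loss_db(run_ft):.3f} dB")
```

At 1,000 ft the coax loses roughly 68 dB, unusable without amplification, while the fiber run itself loses about a tenth of a dB; in practice the fixed electrical-to-optical conversion loss of the RFoF modules dominates, not the cable length.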

  • 1000’ spool of tactical fiber.
  • Fiber system on the set of an AFI short film
  • Mixing from the back of a truck, 300’ from set

What ties the whole system together are two sets of connectors: T-FOCA, or Tactical Fiber Optic Cable Assembly. Designed for harsh environments, including oil and gas pipelines and military installations, these connectors can house four to twelve individual lines of fiber in a cable thinner than an XLR. Standard fiber optic connectors are far too delicate for the realities of filmmaking. One of the most common reasons a signal fails to reach the other end is simply dirt; considering an individual fiber strand is thinner than a human hair, having a fiber optic cleaning tool is often the difference between getting a signal and not.

Being that far from set absolutely does come with challenges. It’s impossible to run in between every take to give notes or adjust a lav. There is no opportunity to give a visual cue that a take was no good or that we’re OK to move on. At this distance, it is imperative to have competent, qualified boom operators and utilities who can relay information quickly and clearly. It also helps when a director wears their IFB.

Delroy Cornick at his sound cart
  • Receivers and IFB modules at the rear of the cart
  • T-Foca II connector with RF inputs
  • SDI video input

Speaking of directors, it’s important to make sure they understand that you may be slightly farther away during some scenes. I’ve found that remaining close by and visible for any setups where there is room for the cart goes a long way when it comes time to compromise and remain at a distance. Once they see how much time can be saved without any loss in sound quality, they (often) quickly understand.

With any new technology, there are going to be challenges. Before I had the custom Pelican case housing made by DataPro, walkie-talkies would cause interference and, on very rare occasions, a slight signal hit. The remote gain and onboard recording features of the Zaxcom transmitters were invaluable as a backup against this. Thankfully, the hits went away once the housing was created and better-shielded cables were installed. Also, one thousand feet of cable, no matter how thin, gets to be heavy. Most of the cost is in the connectors, so the difference between ten feet of fiber and one thousand feet was only a few hundred dollars. I recently purchased a 300’ reel, which is far easier to handle. What’s great about T-FOCA is that the connectors are hermaphroditic, so adding extra fiber only means attaching another set. If one thousand feet of fiber is accidentally cut at two hundred feet, for example, all one has to do is add T-FOCA connectors to both cut ends, and all one thousand feet of fiber is still usable.

And of course, price. All in, this system cost me about $8,500: three RF modules at $1,500 each, one thousand feet of T-FOCA fiber cable at $1,500, T-FOCA Break-Out/Break-In Assemblies at $800, the DataPro Pelican case at $900, the Decimator quad splitter at $400, and another $500 or so in various connectors and cables, plus a copy of Patrushkha Mierzwa’s book, Behind the Sound Cart, which is excellent.

Here are photos of the system designed with assistance from RFOptic, FISBlue, and DataPro:

  • Inside the case: A, B & C fiber modules; D: SDI-over-fiber with AC battery
  • AC power
  • Pelican case insert designed by the author and made by DataPro
  • T-FOCA II and video cables connected

Modules A & B take in the RF signal via RF Venue’s Diversity Fin antenna. Module C is the 2.4GHz IFB return from the cart out of an RF Venue CP Beam. All three RFOptic modules can convert from 0.01MHz to 2.5GHz, easily covering 500MHz and 2.4GHz. Module D is the SDI-over-fiber converter, fed from the BNC output of a 4×1 Decimator Quad Split. These four lines are then fed into the Break-In end of the T-FOCA connector mounted on the side of a custom-designed Pelican 1600 case. On the outside of the case are the T-FOCA connector; three BNC connectors for dual UHF and 2.4GHz Zaxnet on the left panel; and, on the right, four BNC inputs for SDI video and one HDMI output from the Decimator Quad View. On the opposite side of the Pelican is an AC input. The entire case can run on either AC power or a basic laptop charging battery with AC outputs.

At the cart end, a Break-Out cable assembly feeds the opposite modules (A & B), receiving the signal and sending RF out via SMA into the Zaxcom Mic-Plexer. The Nomad’s IFB output feeds the third module (C) via SMA. Each module, including SDI, can run on both AC and DC power.

Willie Burton on Spider-Man: No Way Home

by Richard Lightstone CAS AMPS

Marvel’s live-action Spider-Man films have grossed more than $6.3 billion to date at the global box office. The latest iteration, Spider-Man: No Way Home, is set for release on December 17, and fan anticipation is as strong as ever. The teaser trailer alone had a record 355.5 million views in its first day!

Spider-Man with Doctor Strange (Benedict Cumberbatch)

Willie Burton CAS and Local 695 Boom Operator Adam Mohundro traveled to Fayetteville, Georgia, and the Trilith Studios to begin the film in November of 2020, wrapping at the end of March this year.

The first thing Willie encountered was that he and his crew did not get a script, as Marvel is extremely sensitive about any plot leaks.

“You don’t know what to expect, and so in prepping for the show, you take everything. You can’t leave anything out because you have to be ready for whatever they want.” Willie continues, “You don’t even know what’s going to happen until you get a call sheet. But you can’t read the script and prepare. They did give me sides, because they know that I have to follow the cues. They gave sides to my boom and utility person, but they blacked out most of the words. Anything that they thought might be plot information in the dialog was blacked out, so they had to look at my sides.”

Willie upgraded his package to a Zaxcom Deva 24 and the Mix 16 to be prepared for the additional tracks he would need. The majority of the show was shot on the sound stages and the backlot of Trilith Studios, where they had New York street sets. Willie’s key microphones are the Schoeps CMIT 5U and the Sennheiser MKH50.

MJ (Zendaya) with Spider-Man (Tom Holland)

“I love the MKH50 when the shots are not real wide,” says Willie. “In our ‘New York City’ portion, we’re able to boom a lot. Picture Editorial requested that we use the boom whenever we could because they wanted a good mix track. We had the wireless tracks, but they liked the mix track better with the boom. They said, ‘The more you can, use the boom.’ So I decided to just go with the boom. I’m old school-new school, so I always like to use the boom first.”

This was the first time Willie worked with Director Jon Watts, whom he liked a lot and found easy to work with. “My team worked hard and did what we needed to do. Jon wanted playback all the time of different sound effects, so we had to be ready. But he’s great, and we really had a great time with him.

“One day, Jon calls me over and says, ‘You want to be an actor today? Do you want to play a part?’ I said sure. ‘Okay, dress Willie up.’ So, I went to makeup and wardrobe and a little hair touch-up. I played the nosy next-door neighbor. They gave me these grocery bags, you know, just like in New York. I’m coming in, walking down the hallway, and I hear this noise, and I’m just trying to see what’s going on. The door was half-cracked open, and so I’m peeping in the door and say, ‘What’s going on in there?’ If it’s not cut out, I’m the nosy next-door neighbor!”

MJ with Spider-Man

Willie’s walk-on became even bigger when the Post Supervisor called him to do some additional lines in ADR. He went to Disney Studios and was very pleased to have ADR legend Doc Kane recording his five additional lines.

When Tom Holland, who plays Spider-Man, was in his suit, Willie wired him with a Lectrosonics SSM. The costumers found the perfect place to put the microphone and transmitter.

Willie explains, “The Costume Department was so incredible. They were really good. We liked them so much, and they assisted us with putting the mics on the costumes. At the end of the show, we gave all of the on-set costumers gift certificates. We really appreciated the work they did. They saved us because they wanted some of the actors to be wired before they came to the set. Some of the other actors would be wired on set, but between the two, the Costume Department really worked with us. I can’t thank them enough.”

There were quite a lot of green screen and blue screen scenes, especially for Spider-Man flying through the air, as well as a great deal of stunt work and lots of camera cranes. Many of the setups had the actors running on scaffolding, and Willie and crew faced a problem with the decking resonating against the scaffold rails. He ingeniously had the tops of the rails layered with gaffer’s tape and voila, the distracting noise was eliminated.

The film was under strict COVID protocols. Willie continues, “The COVID compliance people were very professional, and of course, we had to wear masks all day. We tried to keep our distance from each other. Myself and the Video Playback person kept four or five feet apart, and we did everything that was required. Adam, the Boom Operator, and the local Atlanta utility also wore face shields when wiring the actors or when working on set. They tested three times a week, and I only needed to be tested twice weekly. Fortunately for us, it wasn’t too hot most of the time. The weather was pretty nice.”

Willie wraps up his experience on the show. “It was a great crew. We had a good time, we worked hard, and I think it’s going to be an incredible movie. I really like what we did.”

The Harder They Fall

THE HARDER THEY FALL (L-R): REGINA KING as TRUDY SMITH, ZAZIE BEETZ as MARY FIELDS. CR: DAVID LEE/NETFLIX © 2021

by Anthony Ortiz CAS

While working on location in Hawaii on a sunny Sunday afternoon, I received an email from the New Mexico production office for Netflix’s The Harder They Fall, confirming the script, written and directed by Jeymes Samuel, was on its way. Ten pages into my read, it was very clear that the music references in the script were going to be as essential and important as any performer or character in the film. I put the script down for a few minutes, opened a music app, and started my read again from scene one, headphones on, pulling up the music cues that Jeymes had noted in the script.

THE HARDER THEY FALL (L-R): JONATHAN MAJORS as NAT LOVE, DELROY LINDO as BASS REEVES, RJ CYLER as JIM BECKWOURTH. CR: DAVID LEE/NETFLIX © 2021

It is rare that I read a script all at once, but I found myself unable to pull away from the pages. The Old West was illustrated in a way never represented before, a truthful exploration of mid/late nineteenth-century pioneers, lawmen, women, and outlaws.

Next on the agenda, my first virtual meeting with Jeymes.

I grew up in Bayamón, Puerto Rico, on the outskirts of San Juan, where Afro-Caribbean rhythms were a big part of my upbringing, and music is what led me to Production Sound. My conversation with Jeymes in our first meeting drifted immediately into the music aspect of the film and the influence music had on both our careers. That first conversation, and the ones that followed, put Jeymes’ vision into perspective, and I began putting together a plan for our preproduction phase on The Harder They Fall.

THE HARDER THEY FALL (L-R): ZAZIE BEETZ as MARY FIELDS, JONATHAN MAJORS as NAT LOVE. CR: DAVID LEE/NETFLIX © 2021

Boom Operator Douglas Shamburger and Sound Utility Nick Ronzio joined the sound team. Doug is an outstanding Boom Operator with decades of feature films on his résumé, and Nick’s years in the field and vast Pro Tools knowledge were instrumental to our production soundtrack success. Joining us locally from our sister New Mexico Local 480 was Phillip Blahd (2020 Academy Award winner), and David Sickles, who was able to help with additional units and personnel.

The Harder They Fall was shot entirely on location, the majority of it about forty-five minutes outside of Santa Fe, New Mexico, at the Tom Ford Ranch, where Production Designer Martin Whist and his team built a four-block-long, mid-1800s Western town. Most of the film was exteriors, which came with challenges for our team and the entire shooting crew.

THE HARDER THEY FALL (L-R): REGINA KING as TRUDY SMITH, IDRIS ELBA as RUFUS BUCK, LAKEITH STANFIELD as CHEROKEE BILL. CR: DAVID LEE/NETFLIX © 2021
Boom Operator Douglas Shamburger overlooking “Douglastown” and Stagecoach Mary’s Saloon.

One of our biggest challenges on location in New Mexico was the harsh environment: not only the physical daily grind of location access, weather, and equipment transportation, but the RF noise floor, which to my surprise was extremely crowded. The U.S. military has a big presence in the region, so the first order of business every day, even when at the same location, was the coordination of all needed frequencies. Each day, I stood at the foot of our tailgate, looking as far as the eye could see into miles and miles of vast, empty, beautiful New Mexico landscape, where it was hard to comprehend how crowded the bandwidth was. It was as crowded as any major city, like downtown Los Angeles or NYC.

The next day could bring a completely different scenario on the airwaves. Lectrosonics Wireless Designer made it much easier to scan, manage, and coordinate frequencies efficiently.

Boom Operator Doug Shamburger on the roof and Anthony Ortiz CAS below

There is a great opportunity to do some preliminary RF scanning on location scouts and start understanding what is coming your way with the coordination and allocation of frequencies during production. The RF Explorer, in conjunction with Touchstone RF spectrum analyzer software, is a great tool that lets us see in real time what we will be faced with.

One important step is to reach out to the AD or Production Department to find out who the vendor is for the walkie rentals. The rental company might have an in-house frequency coordinator who can provide a list of the walkie frequencies intended to be in use (sometimes called a conventional personality list). The 400MHz range seems to be popular for walkie frequencies, so it is very important that we take it into consideration.
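Once you have the vendor’s personality list, a simple sanity check is to flag any walkie carrier that lands within a chosen guard band of your own wireless plan. The frequencies and 0.4 MHz spacing below are hypothetical examples, not values from this production:

```python
# Flag wireless-mic and walkie carriers that sit too close together.
# All frequencies (in MHz) and the guard band are hypothetical examples.
GUARD_MHZ = 0.4  # minimum spacing we want between any two carriers

mic_freqs = [470.300, 472.125, 486.850, 512.400]       # our wireless plan
walkie_freqs = [470.450, 461.0375, 464.5500, 512.350]  # vendor's list

conflicts = [
    (m, w)
    for m in mic_freqs
    for w in walkie_freqs
    if abs(m - w) < GUARD_MHZ
]

for mic, walkie in conflicts:
    print(f"CONFLICT: mic {mic:.4f} MHz vs walkie {walkie:.4f} MHz "
          f"(spacing {abs(mic - walkie) * 1000:.0f} kHz)")
```

A real coordination pass would also account for intermodulation products, but even this crude spacing check catches the obvious collisions before the first walkie is keyed on set.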

Anthony Ortiz, Production Sound Mixer, mixing from inside a set structure in the town of Redwood.

After I had a better understanding of director Jeymes Samuel’s expectations, I decided that more tracks were going to be essential. I had a Sound Devices 688 as my main recorder, with the CL-12 as my control surface. Moving to a recorder that provided more than twelve tracks was a must: The Harder They Fall had a large ensemble cast, live instrument recordings in our saloon scenes, and period wagons, horses, weapons, and stagecoaches whose sound effects needed capturing. My experience with Sound Devices had been more than positive, so transitioning to the 32-track Sound Devices Scorpio was the obvious choice for me.

We were originally scheduled to begin principal photography in March 2020. In the last few days of our preproduction work, COVID-19 hit and the project was halted. With cast and crew safety as the priority, filming was postponed until further notice. After a few months at home, our Line Producer, G. Mac Brown, got in touch with great news: a new start date for The Harder They Fall was set, and we would head back to New Mexico in early September.

Last day of photography (L-R): David Lee, Still Photographer, Sound Utility Nick Ronzio, Anthony Ortiz, Douglas Shamburger, and our Honorary Sound Department member, Teamsters brother Rob Elliott-Barry

I had sent my production sound package back to Los Angeles until the film resumed, and fortunately, the new Sound Devices CL-16 control surface was released in the meantime. I had been eagerly awaiting its arrival, as I knew the CL-16 was going to make our efforts on The Harder They Fall much more efficient with its new features and capabilities.

While at home, what better time to focus on incorporating the CL-16 into my main cart: going back to the bench to make a few new cables, trips to a machine shop for some custom parts, and gaining CL-16 flight hours to build confidence and speed navigating the new features and menus. Having worked with the CL-12 until then, the transition came with ease.

We returned to New Mexico in September with the new COVID-19 health and safety protocols in place. It was a fresh start, with a new schedule, revised script notes, and a check that our needs had not changed during the long break. Pretty much all of our plan of action was the same, with the sole exception that our saloon live music recordings were now changed to prerecorded tracks for playback, due to COVID-19 safety protocols.

Mood music during setups and rehearsals was an essential part of most days for Jeymes. Nick and I put together a system that allowed Jeymes to play music tracks from his cellphone, which we could independently feed to his headset, giving him the ability to play any track during a setup or rehearsal to assist with the tone, rhythm, movement, or choreography of a scene.

Anthony Ortiz at his cart on location outside Santa Fe, NM.

A long-range stereo Bluetooth receiver (an S.M.S.L. B1 stereo CSR 4.2 receiver) was our choice for this setup, connected to a 2-channel portable mixer. The left output fed the on-set battery-powered speaker, which usually stayed on Jeymes’ video cart and which our Video Playback Operator, Scott Wetzel, very kindly helped to integrate. The right output went to a Lectrosonics SMV transmitter, with that signal coming to my cart and also feeding a Lectrosonics T1 IFB transmitter, giving Jeymes control of the volume on both the speaker and his Lectrosonics R1 IFB receiver and headset.

When scripted music playback was needed, we took over the task, with Pro Tools as our DAW of choice. The setup got more complex when speakers with wireless feeds were needed. For example, in a scene where Trudy Smith (Regina King) and Cherokee Bill (LaKeith Stanfield) walk across town to meet with Wiley Escoe (Deon Cole), we had speakers set up all around town, plus a wireless feed to a battery-operated speaker (Behringer Europort MPA40BT) on a Grip Trix camera cart a few feet away from our actors. Jeymes wanted the actors’ steps to be in sync with the tempo of the music, keeping it consistent across all camera setups, angles, and takes.

We also put together an additional public address sound cart system that served as a “VOG” that could also double as an external playback system allowing us to, as Jeymes liked to say, “blast the town.” The idea of having this cart was that we could roll it off the truck and be ready to go, with at least one functioning battery-powered speaker until AC power was available.

Ortiz with David Bach, Dialog Editor

The cart was made up of an Alto TX208 powered speaker, a Behringer BN1200D-Pro Eurolive active subwoofer, an EV ZLX12P powered speaker, a battery-operated Behringer Europort MPA40BT portable speaker, and another EV ZLX12P speaker as a satellite deployment unit. A Sound Devices 442 gave us input control over the music source, and the two “VOG” mics were Sennheiser e835 dynamic handhelds with thumb switches, feeding HMa transmitters and Lectrosonics LR and 411 receivers.

Some days, in the name of safety, our horse wranglers asked us to keep the music volume down, as the horses were very sensitive to loud noises and sudden movements.

One key aspect of our success as the production sound team on The Harder They Fall was communication within the department. A dedicated private comms system allowed Doug, Nick, and me to speak freely at any point during setups, takes, or rehearsals, while keeping the conversations private.

Doug and Nick wore Lectrosonics LT transmitters that I had assigned to channels 15 and 16 on the Scorpio, pre-fade, routed to a custom headphone mix and their IEM bus feed, with push-to-talk surveillance mics converted to TA5 connectors. Their in-ear monitors were Wisycom MPR50’s, fed by a Wisycom MTP40S transmitter on my main cart. I chose the portable transmitter unit so I could quickly remove it from the main cart and use it on my portable “bag” rig.

Wind, wind, and more wind. Prior to arriving in New Mexico, I was very fortunate to be advised by fellow mixers that the wind could sometimes be very aggressive. That’s an understatement! Nick Ronzio, our Sound Utility, did his homework, prepared himself for the battle with the wind, and very successfully kept it from contaminating our production tracks.

Boom Operator Doug Shamburger also had his work cut out for him with the open boom mics. The Cinela PIA-Piano and Cinela COSI came in handy to mitigate wind on whichever mic ended up at the end of the boom pole.

The wind was so strong on a handful of days that we found refuge inside our truck with the main recording and utility carts. The wind shook the truck from side to side while the cast and shooting crew stayed protected inside. The powered Betso antennas took a dive at one point. In one scene, where Trudy Smith tells Stagecoach Mary (Zazie Beetz) her sister’s story, you can probably see evidence of the wind gusts on the main street out the window.

Collaboration was the key to our team efforts. Weeks before the start of production, I flew out to New Mexico for our technical scout. I always find this process extremely helpful, as you get the opportunity to meet most department heads and their keys, start conversations on potential challenges, and together find solutions to any potential issues.

The magnificent costumes were designed by Costume Designer Antoinette Messam. I began a conversation with Antoinette, and she and her team immediately made themselves available, sharing their ideas on what the costume designs were going to be. Even with last-minute changes, we were able to work together to accommodate our lavalier and radio mic transmitter placements.

In a scene where Cuffy (Danielle Deadwyler) and Nat Love (Jonathan Majors) rob a bank, Cuffy is wearing a form-fitting red dress, and as she gets off the horse, her movement made it impossible for the lav to be in her dress, but her bonnet hat saved the day! Working with one of our on-set costumers, Nick was able to install both the mic and transmitter in the hat, and the costumer looked after it during the few days of filming. A battery change, a quick check to make sure all was in place, and we were ready to go!

Generator placement was also an essential part of our technical scouts, as we were filming in the expansive ranches of Santa Fe; empty land as far as the eye can see. Generator noise really travels, especially when the wind picks up. The art and construction departments built structures that could hide generators and keep the low rumble to a minimum. If that wasn’t possible, then the rigging grips and the Construction Department had portable baffles made that could be deployed when needed.

Once we left our Western town sets, some of our locations were accessible only by ATV. Our Transportation Department provided an ATV for our “all-wheel drive sound cart.” Our Teamster Driver and honorary sound team member, Rob Elliott-Barry, took such good care of us, with smooth driving and even some tire repairs.

The daily cooperation and teamwork on set was superb, with tremendous help from Grip, Electric, Camera, AD’s, Locations, Art, and other departments in achieving our goal of providing Jeymes with the best production tracks possible.

Once the first day of filming is behind us, we often do not get the opportunity to have direct contact with the Post Production Sound Editing team that will be working with our production tracks. As on all my projects, I establish communication with the Picture Editorial team, and I had the opportunity to visit the cutting room that was set up in our production office. Having conversations with Picture Editor Tom Eagle, and First Assistant Editor John Sosnovsky, gave me the chance to better understand and address their needs.

The Post Sound team usually comes on board after the film has wrapped; in most cases, we are already working on another project, perhaps not even in the same city. Keeping in touch with our Post Production Supervisor, Jason Miller, gave me the opportunity to catch up with the Sound Post team a few months after wrap. I had the chance to spend a few hours at the Los Angeles post facility with David Bach, Dialog Editor and ADR Supervisor, who was working on some scenes. Multiple Academy Award and BAFTA recipient Richard King served as the Supervising Sound Editor and Re-recording Mixer with his team.

Having the opportunity to hear and view how our production tracks and efforts during filming were falling into place was the most rewarding learning experience I’ve had as a Production Sound Mixer.

What really caught my attention was the use of lavs and boom mics and how they were mixed to maximize the overall sonic quality of the voice. Having the choices between boom, plants, and lavs gave them the ability to enhance our work in the most positive ways, which was the ultimate goal.

Working with Writer-Director Jeymes Samuel on Netflix’s The Harder They Fall has been the highlight of my production mixing career. Our fearless leader Jeymes brought the most positive energy to set every single day without fail. His tremendous cinematic vision was an honor to watch, but most importantly, his understanding, consideration, and support of the craft of sound was unprecedented. He allowed and inspired us to come to set every day, bringing our best game forward. Thank you to my crew of Doug Shamburger and Nick Ronzio for their hard work under some difficult conditions, masks, goggles, shields, and all!

Ric Rambles and Reflects

by Ric Teller

Straight, a Nebraska horn rock band, taping a television special in 1975

As I write this, it is mid-October, and a tentative agreement has been made in our labor negotiations. By the time this is published, I truly hope a satisfactory contract settlement has been approved and we are working. In 1985, while I was on staff at KTLA in Los Angeles, we went out on strike. While we picketed the gates on Bronson and Van Ness Avenues, some of the older fellows recounted their early days working in television. In the spirit of their memory:

Why did you choose this line of work? How did you start? What piqued your interest? Who inspired you to pursue this career? Why didn’t you go to college and get a real job?

OK, that last one might have been written by my mother.

Everyone has a unique story. In conversations, I’ve discovered that a few of us dreamed of this life from an early age. Many fell into it from being a musician, some by working as a touring sound engineer, others are following a family legacy, and a few grew up with the idea that carrying a Nagra on the beach was glamorous.

Here’s mine.

Some of you know me. I’m an A2 and sometimes a game show PA mixer. My first IATSE job in television began forty years ago at KTLA. But how did I get here?

(Cue the wavy lines and harp gliss)

First, the earth cooled. That event was followed by a remarkable series of coincidences.

In the fifth grade, my neighbor, Chuck Bauer, and I decided to play the trombone. His cousin was a trombone player, and that was reason enough for us.

While I was in high school, Al Kooper placed a horn section at the forefront of his band, Blood, Sweat & Tears. Other horn bands followed, creating a need for trombone players.
In college, a band named Straight that had plans to record an album in California asked me to play trombone and sometimes drive their bus.

When the band days ended, I happened to be on a whale-watch cruise off Dana Point, wearing a T-shirt from Sound 80, a popular Minneapolis recording studio. The logo caught the eye of a recording engineering student at Golden West College. We chatted and a couple of months later, I enrolled and attended as a part-time student.

When school ended, after three months of job searching, I happened to spot an ad in the Los Angeles Times for an entry-level position in the KTLA engineering department.

John DeMuth, KTLA Chief Engineer, at his retirement party.

John DeMuth was the chief engineer at KTLA. He was a television pioneer and genius. I walked into his office, a modest, cluttered room piled high with manuals and equipment in various stages of disassembly. John moved a pile of papers from the chair near his desk. I sat down and handed him my résumé, which made its way to one of the piles of papers without ever crossing his field of vision. He looked around on his desk for a moment and picked up a small electronic part.

John: “Do you know what this is?”
Me: “Yes, it’s a 22K quarter-watt resistor.”
John: “Can you start today?”
And I knew at that moment that the interview was over. I started working in television.

At retirement party. DeMuth, third from left.
At retirement party. Brian Weisbrod, Engineering Clerk; John Cook, Engineer; Craig Debban, Engineer; Ric Teller, Engineering Clerk; Bob Henke, Engineer; Bob Spears, Engineer; Dick Browning, Engineer. Cook, Debban, Spears, and Browning were longtime KTLA Engineers.

My first two years at that iconic station were spent as a clerk working for John and the rest of the talented staff in the amazing engineering maintenance department. I’m sure I learned more than I realized at the time and was fascinated by the abilities of the maintenance engineers and the crews that worked on the wide variety of shows that were taping on that historic lot. Never in my wildest dreams did I think I’d be asked to join them, and yet somehow with a résumé consisting of, “I can lift heavy things and drive a forklift,” I graduated to the position of schlepper. The differences between clerk and schlepper were substantial. The clerk job was more clerical (duh), no hands on the gear. The schlepper position had major benefits. First and foremost, it was a union job. I became a member of Local 695! I could now be asked to work on shows in addition to setting up gear and moving equipment from stage to stage (forklift).

That first year as a schlepper, when I checked the work schedule for the week between Christmas and New Year’s Day, I was surprised to find myself on the list to work the Rose Parade. Since I was new, and at the bottom of the seniority list, I hadn’t done many shows and none away from the studio. For many of the engineering staff, the Rose Parade wasn’t great duty. The weather was occasionally uncooperative, and the hours, especially the extremely early call on parade morning, weren’t big selling points. I was elated. My first Rose Parade. It was 1982! That year, we used the Sun Television Truck. The EIC was Max Kirkland, a terrific engineer that I knew from his days at KTLA.

In Pasadena, the mixer, Jerry Pattison, instructed me to line up the mic-cable port-a-reels and run the lines out to the street for band mics, then run some manner of cables to the KTLA booth “high above Colorado Boulevard” for mics and IFB’s to be used by our hosts, Bob Eubanks and Stephanie Edwards (it was their first year hosting together). You must understand that I was completely new to all of this. In retrospect, I’m sure the audio and maintenance men on the call that year kept close watch so I wouldn’t mess things up. I’m pretty sure I didn’t; I got asked back the next year. And the next. After a while, I became a regular on the audio crews, and my education continued courtesy of mixers Ken Becker, Dick Sartor, and other terrific engineers. I am grateful for my time there and all I learned from John DeMuth and so many at KTLA. All these years later, I still use the lessons afforded me by those men.

In 1985, after my fourth parade, KTLA was talking about changing the nature of their business. They had been a bustling production facility, providing stages, equipment, crew, and more for a widening number of content providers. Now they planned to get out of that part of the business. Sometime that spring, Ken Becker and I were having a chat (possibly at Denny’s Bar on Sunset), and this part I remember clearly: he told me to leave my job. It was one of those rare moments when, even as you are in the middle of it, you know your decision will affect the rest of your life. To walk away from a steady staff position that paid more than I thought I would ever make and go test the waters of the freelance market seemed … chancy. This wasn’t a dip-your-toes kind of move; it was all or nothing. But Ken had been particularly kind to me, so I took his advice to heart and became a freelance member of Local 695.

Ken Becker, KTLA Audio Engineer

As for the Rose Parade, if all goes well, January 1, 2022, will be parade number forty, not a record but pretty good.

So, if Chuck Bauer’s cousin had played the cello.
If Al Kooper had stayed with The Blues Project.
If Straight had kept their other trombone player.
If I had worn any other shirt on the whale-watch cruise.
If I had not read the Times on the day KTLA advertised for an engineering clerk.

I might not be writing this column; I might never have gotten here.

I hope some of you will take the time to share your own personal stories.

2021 Creative Arts Emmy SOUND MIXING Winners

Compiled by Scott Marshall

Outstanding Sound Mixing for a Variety Series or Special

David Byrne’s
American Utopia

Paul Hsu–Re-recording Mixer
Michael Lonsdale–Production Mixer
Dane Lonsdale–Additional Sound Mixer
Pete Keppler–Music Mixer
Production Sound Team:
Betsy Nagler

Outstanding Sound Mixing for a Comedy or Drama Series (Half-Hour) and Animation

Ted Lasso
“The Hope That Kills You”

Ryan Kennedy–Re-recording Mixer
Sean Byrne–Re-recording Mixer
David Lascelles–Production Mixer
Production Sound Team:
Emma Chilton, Andrew Mawson, Michael Fearon

Outstanding Sound Mixing for a Limited or Anthology Series or Movie

The Queen’s Gambit
“End Game”

Eric Hoehn–Re-recording Mixer
Eric Hirsch–Re-recording Mixer
Roland Winke–Production Mixer
Lawrence Manchester–Scoring Mixer
Production Sound Team: Thomas Wallis, Andre Schick, Bill McMillan

Outstanding Sound Mixing for a Comedy or Drama Series (One Hour)

The Mandalorian
“Chapter 13: The Jedi”

Bonnie Wild–Re-recording Mixer
Stephen Urata–Re-recording Mixer
Shawn Holden CAS–Production Mixer
Christopher Fogel–Scoring Mixer
Production Sound Team: Patrick H. Martens, Randy Johnson, Veronica Kahn, Patrick “Moe” Chamberlain, Kraig Kishi, Cole Chamberlain

Outstanding Sound Mixing for a Nonfiction or Reality Program (Single or Multi-Camera)

David Attenborough:
A Life on Our Planet

Graham Wild–Re-recording Mixer

Names in bold are Local 695 members



Copyright © 2023 · IATSE Local 695 · All Rights Reserved