IATSE Local 695

Production Sound, Video Engineers & Studio Projectionists

Features

Mixing Live Singing Vocals on CATS Part 2

by Simon Hayes AMPS CAS

As I mentioned in Part 1, the Sound/Music team required fifty-plus frequencies. During prep, due to the complexity of the shoot, a lot of other departments would need frequency real estate, too. A Google document was set up and each department was asked via email to chart what radio equipment and frequencies they wanted to use, so we could make sure nothing would step on or interfere with the cast radio mics and IEMs; any interference there would obviously have been a disaster.

It was eye-opening watching that document grow during prep as each department added its requirements: focus controls for the Camera Department, iris controls for the DIT, crane comms for the grips, lighting triggers for the electricians, and the myriad pieces of equipment the VFX Department were using. Remarkably, none of us had to compromise; luckily, most pieces of equipment were in the 2 GHz and 4 GHz bands, away from our wireless frequencies. This process was extremely valuable and something I would recommend becoming commonplace on all movie sets.
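As an illustration only (the production's coordination was done in the shared Google document, not in code), the basic clash check that such a chart enables can be sketched in a few lines of Python. Every department name and frequency below is invented, and a real coordination pass would also account for intermodulation products, which this toy version ignores.

# Minimal sketch of a frequency-clash check, in the spirit of the shared
# coordination document described above. All entries here are invented
# examples, not the production's actual frequency plan.

GUARD_BAND_MHZ = 0.4  # assumed minimum spacing between any two carriers

requests = [
    ("Sound - cast radio mics", 606.500),
    ("Sound - IEM transmit",    614.200),
    ("Camera - focus control",  608.300),
    ("Grips - crane comms",     606.700),
]

def find_clashes(entries, guard=GUARD_BAND_MHZ):
    """Return every pair of requests spaced closer than the guard band."""
    clashes = []
    for i, (dept_a, freq_a) in enumerate(entries):
        for dept_b, freq_b in entries[i + 1:]:
            if abs(freq_a - freq_b) < guard:
                clashes.append((dept_a, dept_b, abs(freq_a - freq_b)))
    return clashes

for a, b, gap in find_clashes(requests):
    print(f"CLASH: {a} vs {b} are only {gap:.3f} MHz apart")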

Two weeks before we started shooting, Tom Hooper came to me with a new request. As planned, he was going to be shooting on three or four cameras at all times and those cameras would be handheld or on Techno cranes. He wanted to be able to communicate with the camera teams and grips, but also be able to hear the program sound of singing and music in the same set of headphones.

When wearing their IEMs, the actors were in ‘Cats world,’ meaning they could only hear the other performers on radio mics. This meant that Tom and Ben Howarth, 1st AD/Producer, could only talk to the cast if the actors removed their IEMs. We needed to give Tom and Ben the ability to talk to the cast, Camera, and Grips on separate channels directly into their IEMs/radio headphone packs. The cast didn’t need to hear Tom giving the Camera Operators and Grips instructions, and the camera crew and grips didn’t need to hear Tom’s direction to the cast.

The push-to-talk comm microphones used to communicate with the cast, Keyboard booths, and VOG.

We devised an effective and simple system for Tom Hooper and Ben Howarth to use. Ben Jeffes, my brilliant 2nd Assistant Sound, rigged a stand next to Tom’s monitor with different ‘push-to-talk’ Sennheiser handheld mics. Each mic had the name of the department it was tuned to communicate with and different-colored tape around the handle so it could be recognized in an instant when Tom went to grab it. Tom Barrow, as part of his IEM mixing duties, handled this process adeptly. He was actually mixing Director and Camera crew comms, as well as the cast IEMs. It was a full workload and the show depended on him getting it right one hundred percent of the time.

The next part of my prep focused on a discussion with production regarding wireless headphones. In the UK, we use Sennheiser EW G3s instead of Comteks. I explained to production that we would need to give each member of the crew a set, which was going to be a very large rental. This hadn’t been considered and I had to make the case carefully as to why the entire crew needed headphones. I explained that there would not be any music coming out of speakers at any point during the shoot. We used this protocol on Les Misérables; Tom wanted to use it again on Cats and I wholeheartedly agreed. The issue with using speakers is that they are like a comfort blanket for performers; in rehearsals, they get used to singing with music playing out of speakers and then suddenly feel extremely exposed when they have to sing without that acoustic support while shooting/recording in front of two hundred crewmembers.

I made the point that the cast would feel more comfortable if everyone around them was wearing headphones. This didn’t hold much weight with production once they looked at the large rental quote. I told them that unless the crew could hear the music, all they would hear would be the live singing and that wouldn’t allow most of them to do their job properly.

I gave them several examples: “Why do the grips need to hear the music?” If the DP asks them to start a crane move or dolly at a certain point in the song intro, before the singing has started, how will they know what the DP is talking about? “Why would the clapper loaders need to hear the music?” Because they operate the dolly jibs. What happens if they have to jib up at a certain point in a dolly move at a certain musical point? “OK, why would the sparks (electricians) need to hear the music?” Well, they are going to be operating spotlights and programming moving lights that are cued by musical references. At this point, I was starting to get somewhere. Finally, “OK, but stunts, why would they need to hear the music?” They are using wires to fly the cast in some scenes and the moves will be cued by music.

The language of this film IS the music. Each and every person on the crew needs to become completely immersed in it, and the only way to hear the music is through wireless headphones. “OK, how many do we really need? We need a list that only has people absolutely necessary on it.” I replied, “I’ve got the list, here, it’s 130 crewmembers.” That was how we ended up with 130 Sennheiser G3’s with HD25 headphones.

We had an extreme workload and in prep I was always looking for ways of streamlining our tasks. I knew if we took on too many responsibilities as a team, the creative side of our Sound/Music process could potentially be compromised. I constantly try and think about this issue on all the films I mix. I ask myself the question, “Will this help me deliver higher sound quality?” If the answer is no, I either delegate the issue or move it down my priority list.

“We had a really incredible discovery as shooting began, which was how perfect the mic position on the forehead was and how fortunate we were that Tom Hooper allowed me to use that placement.”
–SIMON HAYES

3rd Assistant Sound Oscar Ginn repositioning a cast DPA mic after it slipped up an inch too high on the forehead.

Handing out 130 G3s and keeping them supplied with batteries daily would have required a crewmember dedicated to that job alone, so it got delegated. I decided to treat them the same way Motorola walkie-talkies are treated on ‘normal’ films. The rule was, if you wanted a set of headphones, they needed to stay with you for the entire length of the shoot. If you lost them, you needed to go and see production and not come to my team or me. We signed out each set and used a label maker to put the crewmembers’ names on both the packs and headphones. We also asked each Head of Department to assign a member of their crew to be the battery-change person in their department. We got production to buy rechargeable batteries and a charger for each department. This was one of the best decisions I made to save my crew time. It worked perfectly and another huge, time-consuming workload was delegated.

By the time we started shooting, we were one hundred percent ready for the challenge that lay ahead. We had collaborated extensively with each department during our fourteen-week prep period, with twelve of those weeks having our full core Sound/Music team of eighteen!

Simon Hayes, Production Sound Mixer
John Warhurst, Supervising Music and Sound Editor
Marius De Vries, Executive Music Producer
David Wilson, Music Producer
Becky Bentham, Music Supervisor
Arthur Fenn, Key 1st Assistant Sound
Robin Johnson, 1st Assistant Sound & Sound Maintenance Engineer
Tom Barrow, I.E.M. Mixer
Victor Chaga, Pro Tools Music Editor
Mark Aspinall, Music Associate/Keyboard Player
James Taylor, Music Associate/Keyboard Player
Fiona McDougal, Vocal Coach
Ben Jeffes, 2nd Assistant Sound
Taz Fairbanks, 3rd Assistant Sound
Oscar Ginn, 3rd Assistant Sound
Francesca Renda, 3rd Assistant Sound
Ashley Sinani, 3rd Assistant Sound
Kirsty Wright, 3rd Assistant Sound

With a team of this size all working together, we were able to have a huge creative impact on the filmmaking process. I felt fully supported by each and every person on our team. They gave me the freedom to focus on the creativity needed to do justice to the live performances by our cast. There was a very palpable feeling on the set, between the cast and every crew member, that we were all creating the world of Cats together, and very strong bonds were forged.

Every department on the film relied on the Sound/Music Department in order to do their own job effectively, and that was our responsibility. It was not just recording the vocals live, but also providing the comms for the whole cast to be able to communicate with each other as they performed. The sets were so large and expansive that we would often have cast members singing solos many tens of meters apart from each other. They needed to be able to hear the other performers for their own cue points, even if they couldn’t see each other. The entire show was such a huge collaboration between actors and filmmakers. We were all working together to make something extraordinary for the cinema audience. Each of us felt a huge sense of responsibility to Tom Hooper and Sir Andrew Lloyd Webber, and wanted to prove that we could create something sensational as a team.

I built a great relationship with the entire cast as I worked with them throughout the shoot to provide them exactly what they needed to sing and dance perfectly take after take. I got to understand their characters and how they would emote, which was paramount for me mixing twenty-four mics, live on every take.

I needed to be precise with my gain structures, which required an enormous amount of concentration. I was working without compressor/limiters in the chain and without any EQ, apart from a 70 Hz low-frequency roll-off to protect the mics from wind during the dance movements. This is how I like to work when recording live singing. It was my aim to provide Sound Post with the purest, most perfectly recorded tracks possible. I had to keep my headphones on at all times unless I had to leave my cart to speak to the actors, Tom Hooper, or Chris Ross, our wonderful DP. I needed to be constantly aware if any mics had ridden up the forehead by an inch and let Arthur know so he could reset the headband. I had to listen for hits on frequencies or anything out of the ordinary between takes, as it would be impossible to troubleshoot once we were filming. If I missed an issue before a take, I would be jeopardizing that actor’s vocals while shooting.
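For readers who want to picture what that single filter does, here is a minimal digital sketch of a 70 Hz low-frequency roll-off. The second-order filter and 48 kHz sample rate are assumptions, and the actual roll-off on set was applied within Simon's recording chain, not in code.

# Illustrative 70 Hz high-pass ("low-frequency roll-off") applied to a vocal
# track. This is not the on-set signal chain, just a digital sketch.
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 48000          # assumed production sample rate
ROLLOFF_HZ = 70              # corner frequency mentioned in the article

# 2nd-order Butterworth high-pass, expressed as second-order sections
sos = butter(2, ROLLOFF_HZ, btype="highpass", fs=SAMPLE_RATE, output="sos")

def rolloff(vocal: np.ndarray) -> np.ndarray:
    """Attenuate wind rumble below ~70 Hz while leaving the voice intact."""
    return sosfilt(sos, vocal)

# Example: one second of synthetic 'wind rumble' (20 Hz) plus 'voice' (220 Hz)
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
track = 0.5 * np.sin(2 * np.pi * 20 * t) + 0.5 * np.sin(2 * np.pi * 220 * t)
filtered = rolloff(track)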

We had a really incredible discovery as shooting began, which was how perfect the mic position on the forehead was and how fortunate we were that Tom Hooper allowed me to use that placement. On the second day, one of our cast had to do a forward roll as part of his choreography while he sang. There was no break in the vocal as he rolled. I was expecting some kind of impact or mic rustle on the first take and potentially the need to do a Wild Track, but the vocal was perfect. There was no change in perspective at all. I put that first take down to luck and expected to hear an impact on take two but the vocal was perfect. The mic never got hit and it was like that again and again.

It didn’t matter whether someone was spinning on their head during a break-dance routine or tumbling like a gymnast; their forehead never touched the ground. We found throughout the shoot that whatever the performers’ choreography required, their mics would never get impacted in that position. It was a fortunate discovery that meant we could record high-quality vocals even when the choreography was frenetic and explosive.

We did have to use a different mic placement strategy for both Laurie Davidson, who plays Mr. Mistoffelees, and Idris Elba, who plays Macavity. They wore DPA cheek mics most of the time. This was because they were wearing hats, which we found in rehearsals would touch the forehead mics, causing rustle on their tracks. The cheek mic position was a little more intricate for VFX to remove in post, and required sign-off from Tom Hooper and our VFX Department. I then sent tests of the actors singing to Nina Hartstone, our Dialog Editor, to see if she could EQ the cheek mics to match the forehead mics. Nina confirmed she could. We bounced a lot of tracks to her while we were shooting to get her opinion about specific issues or concerns. Nina is an absolutely first-class creative Dialog Editor and a straight talker, so I knew if she was happy, we were doing well.

The feeling I had every day when wrap was called and I could take my headphones off was extraordinary. Never in my career have I had to concentrate so hard for such extended periods of time. I had been in a different world trying to record perfect vocals under enormous pressure of the live moment. I would like to pay tribute to the cast and their unbelievable vocal and physical stamina. Each and every one of our performers put in a super-human and extraordinary effort. They were on camera for twelve hours per day with only one hour of rest at lunch. Witnessing their cheeriness and good humor really humbled the whole crew. They were putting in the same kind of physicality you would expect from Olympic athletes, while performing in character with all of the emotion that entails, and singing live. The amount of breath control they were able to exhibit was unbelievable. All of them loved the live process unconditionally and gave us wonderful, collaborative support in the process.

I must mention two performers who were literally singing solo and dancing in almost every scene, Francesca Hayward, who plays Victoria, and Robbie Fairchild, who plays Munkustrap. Their effort was herculean and I was amazed at how perfectly they could perform day after day, hour after hour with so little downtime.

As I write this article, I am in daily contact with John Warhurst and Nina Hartstone, who are approaching the final mix. I can’t wait to hear the finished project I put my heart and soul into. I feel I grow a little with every film I make as a Production Sound Mixer, but Cats, thanks to Tom Hooper and Working Title, gave me the opportunity to grow a lot. I learned so much about what is achievable when a Sound Department is supported so mightily, and what a tremendous creative impact our craft has when collaborating with cast, crew, and director to produce a live musical.

Victor Chaga, Pro Tools Music Editor
Usually with on-screen musicals, the music and the vocals are recorded in advance. It is common to have the actors mime on set to the playback of the pre-recorded material. My conversation with Simon about the project demonstrated that Cats was going to be anything but conventional.

As you know by now, all of the vocals for this film were to be recorded live on set. The Music and Sound departments put an extensive amount of thought and planning into being able to support the actors musically, while making absolutely sure they were never time-constrained by the music. As conversations continued, we developed a system that allowed us to seamlessly switch between live musicians, pre-recorded multitrack stems, and material recorded live on set during previous takes. This allowed us to instantly adapt to the performance in front of the camera and facilitate requests from actors and production.

Recording a vocal rehearsal. An excellent opportunity to gather information regarding levels and gain structures before filming.

“It is common to have the actors mime on set to the playback of the pre-recorded material. My conversation with Simon about the project demonstrated that Cats was going to be anything but conventional.”
–VICTOR CHAGA

My side of the setup consisted of three distinct elements: live piano, a Pro Tools playback system, and a separate record system that was parallel to Simon’s. We really wanted the actors to lead the musical performance and not the other way around. Having live musicians behind the camera, alongside the playback setup, allowed us to provide our cast with the musical support they needed. By using an auxiliary record rig, we could quickly take Simon’s mix of the previous/best take, import it into the playback session and rebuild the pre-recorded material around the favored performance to be ready for playback by the time cameras rolled again.

For any given song, we would come up with a strategy between the Music, Sound, and Choreography departments to accommodate our director and cast. On a scale from completely rigid to completely free, we came up with the following options:

1.     Playback of pre-recorded material
2.     Playback of pre-recorded material with live piano playing on top
3.     Live piano playing along to a click track (metronome) based on a tempo map of the pre-recorded material
4.     Live piano playing freely with the ability to trigger “reference clicks” (four or eight clicks at a decided tempo). This allowed the musicians to follow the actors freely, yet still have a reference for the desired tempo of the music. For example: the pianist could trigger a reference click at the start of a musical phrase to guide the cast to the given tempo, yet quickly let the actors take over and allow them to feel the rest of the phrase, speeding it up or down as they saw fit. It should be noted that most of the time the click track was only heard by the musicians and not the cast.
5.     And finally, the musicians could play completely free, with no interference from myself, following the actors and allowing them complete creative freedom.

At any given point in the track, we could employ and seamlessly switch between any of the above approaches: the intro of the song would be played live, going into playback for the chorus, then back to live piano with reference clicks, and back to stems again … or vice versa!
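One way to picture how those five approaches coexist within a single song is as a cue list that maps bar numbers to modes. The sketch below is purely illustrative; the bar numbers and song structure are invented, and this is not the production's actual software.

# Hypothetical cue list encoding the five playback approaches described above.
# Bar numbers and the song structure are invented for illustration only.
from enum import Enum, auto

class Mode(Enum):
    PLAYBACK = auto()             # 1. pre-recorded stems only
    PLAYBACK_PLUS_PIANO = auto()  # 2. stems with live piano on top
    PIANO_TO_CLICK = auto()       # 3. live piano to a click/tempo map
    PIANO_REF_CLICKS = auto()     # 4. free piano with triggerable reference clicks
    PIANO_FREE = auto()           # 5. completely free live piano

# One entry per section: (starting bar, mode)
cue_list = [
    (1,  Mode.PIANO_FREE),        # intro: actors lead completely
    (17, Mode.PLAYBACK),          # chorus: rigid tempo for the choreography
    (33, Mode.PIANO_REF_CLICKS),  # verse: free, with tempo references on demand
    (49, Mode.PLAYBACK),          # final chorus: back to stems
]

def mode_at(bar: int) -> Mode:
    """Return the playback mode in force at a given bar."""
    current = cue_list[0][1]
    for start, mode in cue_list:
        if bar >= start:
            current = mode
    return current

print(mode_at(20))  # Mode.PLAYBACK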

The more we planned ahead, the more it became apparent that three things were of utmost importance: simplicity, flexibility, and reliability. I needed a rig simple enough to navigate and troubleshoot quickly. It needed to be highly flexible in terms of I/O, mixing, and routing options. And we needed a certain level of redundancy, in case anything did, indeed, go wrong. With a film like Cats, which is completely reliant on music, the last thing you want is to find yourself in a situation where you’re not able to play any … especially if your job description is “Music Playback” … and ESPECIALLY if you’re sitting next to Simon Hayes!

Having a background in designing and building customized writing systems for various Hollywood composers, I’ve had the privilege to test out various interfaces and computer systems in different capacities. I’ve found RME interfaces to be very flexible and reliable in the past, and for this very reason decided to go with the Fireface UFX+ for the box’s versatility and its ability to handle most audio and I/O formats. (Sorry, Tom Barrow! We’ll do Dante on the next one, I promise!)

The UFX+ was one of the integral pieces of the puzzle. Having the standalone features of the interface available outside of the DAW meant I could rely on the Fireface as a safety net. If the DAW or the computer crashed mid-take, the UFX+ would still pass the live piano signal to our IEM mixer, Tom Barrow. In certain situations, that could be the difference between saving and ruining a take.

The routing and mixing flexibility of the interface meant I could keep the Pro Tools sessions clean and simple, with the bare minimum set of controls and I/O options, preventing anything from being nudged, reset, or misrouted, and making it quick to tell if something was amiss. No ambiguity. Using the TotalMix FX software, I could then take these feeds from the keyboard booth and Pro Tools, and distribute them at set levels to Simon, Tom, and my second record rig.

I found the ability to lock the user interface extremely useful. If there were any changes that needed to be made with regards to routing or levels, I had to go through an extra step to make them, and once the user interface was locked, nothing could be nudged or changed accidentally. Having the ability to save and recall snapshots/workspaces meant I could trace our workflow all the way back to day one, if needed. Again, no ambiguity.

Simon Hayes in the midst of mixing 24 lavaliers across two sound carts. It was rare for Simon to remove his headphones for the length of the shoot.

The Pro Tools sessions consisted of the following:

1.    Playback LTC (used for synchronization of Simon’s recordings and pre-recorded material in Editorial)
2.    Click (Metronome) track based on the tempo maps of the pre-recorded material
3.    Tracks for imported material (Click, Simon’s Vocal Mix, Live Piano, Playback Mx (Music))
4.    A and B rolls for the pre-recorded stems. Having two allowed me to have the flexibility to edit and build transitions between incoming and outgoing songs

On the record side, I had a small laptop rig receiving ADAT streams from the mothership that is the UFX+:

1.    Simon’s Vocal Feed
2.    Live Piano Feed
3.    Playback LTC
4.    Playback Click
5.    Playback Mx (Music)
6.    Live Piano Midi

Each playlist matched the scene and take number of the film, so we could quickly find the recordings of the performances Tom Hooper preferred. Having the two computers networked meant that I could export these playlists directly to the playback machine and save valuable time that would have been spent on copying data to and from transfer drives. This allowed me to quickly import and synchronize the material recorded on set to the playback sessions, or edit said playback sessions to the material recorded on set, all without nagging Simon for the files.

Having this flexibility required a lot of coordination between the Music and Sound departments. “What are we doing next, John?” “Are we listening to live or pre-recorded piano?” “Do we need to feed the lead vocal recorded on the previous take to the rest of the cast on this turnaround so they could find their tone and their pace?” “WHICH VERSION, Marius?!?” “Do we need to add or remove four extra bars to accommodate the new camera move?” “Simon, sir! Would you like to take down both the old pre-recorded backing vocals, and the new live vocals (previously recorded on set) on your ISO’s, or would you just like the ones we’re recording on the next take?” “Tom, could we send the click track to all the cast, please?”

You could see how things could go very wrong very quickly.

The interior of the sound-proofed Keyboard booth. The Keyboard players had monitors for camera feeds and Pro Tools. The two mics on the stand were necessary as one was routed direct to the cast and the other to the Music Department comms system.

“Cats was a very rewarding challenge. I was very impressed with Simon’s output to Editorial and what this level of collaboration, forward planning, and organization can achieve. I ended up mixing up to forty channels into sixteen mix busses, over two separate mixers on the fly, for most of the time.”
–TOM BARROW

We also found that we needed a lot of coordination between the musicians and myself. Having the actors on set and the musicians in the keyboard booth meant we were relying on Simon’s mixes, Tom Barrow’s comms, and video screens for feedback from each other and the cast. Try putting the musicians from the most rehearsed and cohesive band into separate rooms and asking them to play together. You could imagine that it would be a very challenging task at best and could easily descend into pure chaos at worst. Our pianists and I had barely three weeks to rehearse.

We quickly realized that the transitions between playback material and live piano could be a lot smoother if the musicians had control of the playback material in certain situations. Indeed, after much deliberation and research, we settled on custom-programmed pedals that would control the transport of my Pro Tools sessions from the piano in the keyboard booth. We would agree on what bar they would want to start the playback material on, I would line up the playhead and they would trigger the playback. Sometimes that would happen multiple times during the same take. This was also our go-to procedure for the reference clicks I mentioned earlier. Having worked out this system meant our musicians could check tempos only when and if they needed to, without any unwanted meddling in their performance from the likes of myself.
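The article doesn't describe how the pedals themselves were programmed. One common way to implement this kind of remote transport control is MIDI Machine Control, which Pro Tools can be configured to respond to; the sketch below, using the mido library, shows that approach. The MIDI port names and the controller number sent by the pedal are assumptions.

# Hypothetical sketch of a foot pedal driving DAW transport over MIDI Machine
# Control (MMC). This shows one common approach, not the production's actual
# pedal programming. Port names and the pedal's CC number are assumptions.
import mido

MMC_PLAY = mido.Message('sysex', data=[0x7F, 0x7F, 0x06, 0x02])
MMC_STOP = mido.Message('sysex', data=[0x7F, 0x7F, 0x06, 0x01])

PEDAL_CC = 64              # assumed: pedal sends a sustain-style CC 64
PEDAL_IN = 'Footswitch'    # assumed MIDI port names
DAW_OUT = 'To Pro Tools'

with mido.open_input(PEDAL_IN) as pedal, mido.open_output(DAW_OUT) as daw:
    playing = False
    for msg in pedal:
        # Non-latching pedal: each press toggles between play and stop.
        if msg.type == 'control_change' and msg.control == PEDAL_CC and msg.value > 0:
            daw.send(MMC_STOP if playing else MMC_PLAY)
            playing = not playing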

It was a daunting task when we first started, but it quickly became apparent that Simon, John, and Becky are as incredible at picking the right people for the team as they are at what they do. By the end of the first week of the shoot, our teams had become as tightly woven as the fabric of MADI, Coax, MIDI, and XLR cables that littered the floor of the set. We worked out all the kinks. It was time to hang on for the ride.

This having been my first on-set job, I’m incredibly thankful to Becky for putting me up for it, and Simon for taking me under his wing. However, I’m convinced that none of this would have happened if it wasn’t for our trainee, Jake Elliott, whose vigor, determination, and selfless commitment have kept us fed, warm, full of coffee, nonviolent, and focused on the momentous task that was Tom Hooper’s Cats! Plenty of tea bags for us! Shout out to Jake Elliott!

Tom Barrow, I.E.M. Mixer
When Simon Hayes asked me to be part of the Sound/Music team on Tom Hooper’s Cats, I knew it was going to be uniquely challenging. I also knew from previous experience that Simon’s fastidious and collaborative approach to negotiating would allow me to concentrate my efforts on the job at hand, so I jumped at it.

We had a large number of radio mics, live musicians, multiple channels of Pro Tools playback, twelve mono IEM feeds, plus director-to-artist talkback. Not just that, but it had to be portable and relatively quick to rig. That was for starters. It was going to grow, and it did.

Before thinking about specific pieces of equipment, my first thought was about the general infrastructure of my system, and the first thing that sprang to mind was Dante. An unfussy setup with incredibly fast and intuitive routing. Sixty-four bidirectional channels on one CAT6 cable, all run through unmanaged gigabit switches. Perfect. I’ve been a great advocate of Dante and have been using Focusrite RedNet equipment for several years now, so I was comfortable with Dante’s reliability and confident that it wouldn’t fail at a critical time. Although I knew I’d have to invest in some chunkier CAT6. Amazon would not do!

Luckily, I already had a small-footprint mixer with a Dante option: the Midas M32R and a thirty-two-input stage box. I also had the Focusrite RedNet PCIeR card for interfacing with my computer, so I felt, at the time, that I had routing and mixing pretty much covered.

The next step was to select a good IEM system. In the end, we settled for the Lectrosonics Duet M2T and M2R system. It’s brilliant and as luck would have it, it’s Dante-ready and works in a frequency band we weren’t already using. Between Arthur Fenn, Simon’s Key 1st Assistant Sound, and Roger Patel of Everything Audio, we tried to find the slimmest body-worn receiver we could, due to the very acrobatic cast already having to wear a lot of other gadgetry. Not the cheapest by a long way, but the audio quality was superb and the flexibility and features in both the transmitters and receivers were excellent. The only problem was that they were brand-new products and there were simply not enough receivers in the UK. However, we have a great relationship with both Roger at Everything Audio and Lectrosonics. They understood what we were doing and got us the equipment we needed. Now with the added use of a TX antenna combiner, a large directional antenna, and a small cart, I could run the transmission system to anywhere on the stage and cover all of Eve’s enormous sets.

Next up was a conversation with our Pro Tools operator, Victor Chaga. I was keen to continue the Dante theme throughout as much as possible, but that was wishful thinking! Victor’s RME interface employed MADI. For a day or two, I thought this was going to be a problem, but it actually led me to the most useful piece of equipment, the one which ended up being the key centerpiece of the whole routing system: the Ferrofish A32 Dante. This solved two problems. The first: how do I convert Dante to MADI and back again? There were other units on the market, but they were basic and this unit allowed me to do a lot more.

The Ferrofish A32 Dante not only has sixty-four channels each of Dante and MADI, but also thirty-two channels of ADAT and thirty-two analog inputs and outputs, all in one 12V-powered 1RU unit. What occurred to me was that if I took all of Simon’s radio mic analog direct outputs and put them through the A32, along with any other flavor of audio from Pro Tools and various other sources, I’d have a much better routing matrix than if I ran everything into the Midas directly. So anything musical that went into the cast’s ears went through that unit first. I could put the audio anywhere I wanted, and in as many duplicate places as necessary, with just a click on a trackpad.
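Conceptually, what the A32 provided was a single routing matrix: any source feed could be sent to any number of destinations, duplicates included. The toy model below is only meant to picture that idea; the feed and bus names are invented.

# Toy model of the 'everything through one routing matrix' idea described
# above. Feed and destination names are invented for illustration.
routing = {
    # source feed            -> destinations (duplicates allowed)
    "radio_mic_01":            ["iem_bus", "music_dept_mix", "record_rig"],
    "pro_tools_playback_L":    ["iem_bus", "simon_foldback", "record_rig"],
    "live_piano":              ["iem_bus", "simon_foldback", "music_dept_mix"],
    "click_track":             ["keyboard_booth"],          # musicians only
}

def feeds_to(destination):
    """List every source currently routed to a given destination."""
    return [src for src, dests in routing.items() if destination in dests]

print(feeds_to("iem_bus"))  # ['radio_mic_01', 'pro_tools_playback_L', 'live_piano']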

My next consideration was adding the live-music element alongside the Pro Tools element. I like this part of the system and the general concept. Coming from a music recording studio background, in my opinion there’s an “atmosphere” you record when you have musicians and vocalists bouncing off each other in a live environment. Pro Tools (obviously) cannot feel moods and react to emotion. For those moments, the live musical element was crucial, and Pro Tools was great for giving us rigidity in tempo when that was required. When we needed both, I could send a click track from Victor to the musicians.

With all of this, I was running out of channels on the Midas. Although we were the Sound/Music team, the music part of the team still needed to work independently of the Sound Department; they all wanted to hear different things and each other, and have private talkback to the cast. They needed their own mixer, one that I could control remotely, with its own mix system that wouldn’t affect anything but their own mix. We ended up using the Behringer X Air 18, a consumer mixer but a Wi-Fi-controlled one. They could have all the musical instruments, a feed from Pro Tools, a mix of the cast, and their talkback, completely independent of everything else, and it had enough outputs to feed multiple channels to me. I also added a Behringer P16M, which is a personal mixer system, so the keyboard player could create his own mix and save it for each musical number. We ended up duplicating the musical part of my system over three main stages once we had a basic mix set up, so we could just turn up, plug in, test, and away.

I had the overall shape of my rig and now it was on to the rehearsals. Simon had negotiated extensive rehearsal time with the cast. This was necessary for several reasons, but one important factor was that ear monitors take a bit (a lot) of getting used to for artists. Even in live-music events where these are used regularly, if a musician has never used IEMs before, they can feel isolating, alien, and distracting. They had to get used to the IEMs and feel comfortable with them before shoot day one. Most of our cast was even further removed from that world, having come from musical theatre and dance. Making them comfortable and giving them an understanding of what was possible and what wasn’t was crucial. We needed time to iron out any kinks in our ever-expanding audio system, and to create a basic useable mix for the cast and Music Department to monitor.

Onto the shoot. For me, this was an ever-changing and very organic process. In some areas it would change daily, but in others, it had to remain rigid throughout the entire process. My mix of music to Simon had to remain at exactly the correct level so it wouldn’t leap around wildly in amplitude when cut together. I was also, unusually, tasked with creating a mix for Tom Hooper to monitor whilst we were shooting. Ordinarily, this would be an output from Simon’s mixer; however, on Cats, Simon’s mix, although perfect for Editorial, was not necessarily what Tom needed to hear. Simon wanted me to EQ, compress, gate, and generally polish and tighten up a live mix for Tom’s headphones, which obviously wouldn’t affect Simon’s recording for Editorial and Sound Post, and give Tom something during the shoot that sounded a little more like the final mix.

Personally speaking, Cats was a very rewarding challenge. I was very impressed with Simon’s output to Editorial and what this level of collaboration, forward planning, and organization can achieve. I ended up mixing up to forty channels into sixteen mix busses, over two separate mixers on the fly, for most of the time. I’ve always enjoyed pushing myself technically and problem solving, and there was certainly plenty of that for me on ‘Cats’. If I’ve learned anything, it’s that however big your mixer is, it could always do with being bigger!

John Warhurst, Supervising Music and Sound Editor
I worked on Les Misérables alongside Simon and Becky, so I was very pleased to be invited to meet Tom Hooper and Film Editor Melanie Oliver at Working Title in the summer of 2018, with a view to working on Cats. The film had been green-lit and was going to shoot later in the year. I remember wondering before the meeting how it was going to be shot, as it’s a musical that has a lot of choreography. I couldn’t imagine Tom making a musical film that didn’t involve live singing, but I also wondered how it would be done with so many large set pieces, involving many cast members singing and dancing simultaneously.

I came out of the meeting with the definite feeling that the final soundtrack of this film would be all about the sound recorded on set. I knew it would be important to go through the schedule with Simon and try to find moments when wild tracks could be recorded, or any sung parts that might need to be recorded separately, to make sure we would be covered in post production. After discussing the track layout with Simon and how many mics he would be recording, I knew it would be an editorial challenge with forty tracks per take. We needed to be organized and test workflows right from the start of the shoot to be absolutely certain there weren’t any technical hurdles in post production.

There has been a lot written about Les Misérables, and how it empowered the actors to lead the performance with the music following. Each performance could be changed or adapted from take to take, with a live pianist following every turn or pause the actor wanted to make. Cats was going to be all of that and more. There were some songs, and some sections of songs, that would need to be music playback so the tempo could be rigid for the choreography involving up to forty-two dancers and twenty-four singers. It became further complicated because there were multiple instances within a song where the music would need to be live, then playback, then live, and then playback again.

We were very fortunate in finding Victor Chaga, our Music Playback Operator, whose technical abilities are first class. He fitted straight into the team, which is so important on a stressful film set, especially this film set, where nothing would operate without music. It was important that everyone on our team had the right temperament to be able to deal with what can often seem like unreasonable requests within the timeframe expected.

When we started rehearsing, the tricky part was going from live to playback seamlessly. There were two main pianists, James Taylor and Mark Aspinall. Something we learned from Les Misérables was the importance of having whoever rehearsed with the actor then transfer to the set. The actor would develop the part with their pianist and then know that the same person was on set, accompanying them in exactly the same way as it had been rehearsed. Rehearsals were also taking place at the same time as shooting, so there was always a need for at least two pianists.

James and Mark were playing in an acoustically treated booth, without windows, so communication was always going to be an issue. Our first thought was whether it would be better for Victor to go in the booth with the pianist, like a small theatre pit band, so they could have good communication with each other. But there were many more reasons why Victor should stay next to Simon, reasons that might make the difference between the set running smoothly or not. The first attempts going from live to playback, and back to live again, felt glitchy, too random in execution, and not musical because of the disconnect. We were concerned; this had to feel musically perfect and seamless one hundred percent of the time. This led to many discussions about how these two things could interface better.

Memories came back of playing in a band as a teenager, when our drummer used a foot switch to trigger a MIDI sequencer. There’s something about the player locking in with the sequencer when they are the one who triggers it. We discussed with James and Mark the idea of using a non-latching foot switch, which would start and stop the Pro Tools music playback. They agreed to give it a go and we bought a foot switch with USB extender cables. After the first rehearsal, it became clear this was the way forward; it was instantly more seamless. Victor was able to keep the playback cued from the right bar number, making sure it had stopped when it should and re-cueing for the next playback. We now had a system we felt could be taken onto the Cats film set. This was improved further by having a second monitor in the keyboard booth that had the Pro Tools bar count window visible to the pianist, so they could have the confidence of knowing that if music playback started at ‘bar 53,’ they could see Pro Tools was cued up and ready to go whilst playing up to it.

Once we got up and running with the excellent communication system Tom Barrow put together, which was so crucial to the smooth operation of the department, we all felt ready to go on set!

My role was to foresee any issues that might come back at us in post and to stay close to our Director, Choreographer, and 1st AD, making sure we always knew where in the music we were going from. Were we going to play back the vocal of someone who wasn’t on set that day? Did we need a vocal or dialect coach? I was employed to work on the film through post production, until final delivery.

I knew I needed to work with my Co-supervising Sound Editor, Nina Hartstone. Her experience and work with dialog are second to none. Although Cats is a musical film and everyone is singing, the way we deal with the audio in a live-sung musical is identical to the way we deal with a dialog film. It’s like dialog editing in soprano, alto, tenor, and bass!

Nina Hartstone, Dialog Editor
When John Warhurst first spoke to me about working on Cats, I knew it would be great to work on a musical film with him again, employing some of the same techniques we brought to Bohemian Rhapsody in order to achieve the best possible finished soundtrack. One of the first steps of working on the vocals and dialog of Cats was to assemble the sound to the current picture edit and listen through Simon’s on-set recordings. For the Avid, Simon supplied two mix tracks (12 + 12), and a track of the music (playback and live).

For sound editorial, we had the fantastic resource of all the ISO tracks. The use of multiple forehead-mounted lav mics, across two sound recorders, locked to the same timecode, provided a wealth of material for Sound Editorial. It allowed for a great deal of flexibility in the decisions of which tracks to favor for any one shot. It was a complex task de-multiplexing the source audio and conforming to the picture edit, but well worth it to have separate recordings, which maintain proximity throughout all the dance moves for every character. It was no small feat for Simon and his team to capture such good recordings for the multiple mics and avoid any kind of distortion, even across extremely dynamic performances. All their work is being used in the final vocal/dialog edit. Many hours have been spent listening through every mic recording across all takes, to compile libraries of breaths and movement from the dancers, clean of main vocal, allowing us to keep the ensemble presence alive during every scene.
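Conforming ISO recordings against the picture edit rests on the shared timecode mentioned above. As a minimal illustration of the arithmetic involved (not the editorial team's actual tooling), the sketch below converts a timecode into a sample offset so a recording can be lined up against a cut point; the 24 fps frame rate and 48 kHz sample rate are assumptions.

# Minimal sketch of the timecode arithmetic behind conforming ISO tracks to a
# picture edit. Frame rate and sample rate are assumptions (24 fps, 48 kHz).
FPS = 24
SAMPLE_RATE = 48000

def timecode_to_samples(tc: str) -> int:
    """Convert 'HH:MM:SS:FF' timecode to an absolute offset in audio samples."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    total_frames = ((hh * 60 + mm) * 60 + ss) * FPS + ff
    return round(total_frames * SAMPLE_RATE / FPS)

# Offset of an ISO take relative to where the edited shot starts:
shot_start = timecode_to_samples("01:02:03:12")
iso_start  = timecode_to_samples("01:01:58:00")
offset_into_iso = shot_start - iso_start   # samples to skip into the ISO file
print(offset_into_iso)                     # 264000 samples = 5.5 seconds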

In addition to the forehead placement, mics were mounted on the shoes of the lead tap dancer for one of the songs, capturing close and detailed sound for every tap, scrape, and scuff, which really complements the close camerawork, and has been a fantastic resource throughout the scene. In Editorial, we have created a tap library from the original recordings, utilizing all tap moves clean of singing and enabling us to build a continuous tap-dancing track, perfectly timed to the music, throughout the scene.

With all the vocals, John and I worked very closely, safeguarding authenticity and musicality in the performances and ensuring that sync to both the lips and the music is always very precise, constantly asking Picture Editorial to roll shot sync for us to achieve this. This movie has amazing dancers and singers performing live, and our aim is to create a soundtrack that sounds hyper-real and intense, using all the great recordings from set.

In areas of ensemble singing in Cats, it has been a time-consuming but rewarding task to meticulously edit the tracks for each character to make certain no undesirable noises are magnified across the bank of tracks and to ensure they all play together without any phasing issues. For these chorus sections, we also had the benefit of additional Wild Track recordings that Simon and John acquired on set, giving us both great-sounding singing and the authenticity of the sync recordings, which provide vocals with the movement of the dance.

There will be very little ADR required for this film, but for storytelling reasons, lines need to be added in post production in addition to what was shot. We plan to employ the same techniques to record our ADR as those used on set. The same mics will be mounted on the forehead. Where it would be beneficial for the actor to move around to replicate their performance, they will be wireless and free to mimic their on-set dancing or movements. The most important element of recording lines or vocals in post is to allow as much freedom as possible for the artist to recreate their original performance effectively.

There is still plenty of work to be done as we work toward our final mix. All the work by the Sound and Music team has been outstanding, employing innovative techniques and pushing the boundaries of what can be achieved in a musical film to support Tom Hooper in achieving his incredible directorial vision. It will be exceptionally rewarding to see and hear it all come together in these final stages!

Ford v Ferrari

The Sound of Speed

by Steve Morrow CAS

Carroll Shelby (Matt Damon) and Ken Miles (Christian Bale), center, confer at Willow Springs International Raceway. All photos by Merrick Morton, courtesy 20th Century Fox Film Corp.

I was excited to re-live this historical drama about Carroll Shelby (Matt Damon) and Ken Miles (Christian Bale), who built a revolutionary race car for the Ford Motor Co. to compete against Enzo Ferrari at the 24 Hours of Le Mans in 1966.

We loved working with all those wonderful cars; the hard part being that these vehicles were more than fifty years old. We also had a very tight schedule; it wasn’t one of those films where they had the luxury of time. The script had a large amount of story to tell that they just had to shoot no matter what.

Ken Miles (Christian Bale) and his son Peter Miles (Noah Jupe) at the Shelby LAX runway.
Carroll Shelby (Matt Damon) at Willow Springs.

There were a lot of days where we were at the mercy of the schedule, at the race track in the morning racing the cars around when the light was nice and low, and the image looked pretty. Then in the afternoon, we would shoot all the dialog around Pit Row.

That location was at the Agua Dulce Airfield in the desert. In the morning, it’s nice and calm, and then by the afternoon, there are 40- to 50-mile-an-hour winds. Visually, you can’t really tell the wind is blowing, and it was also 110 degrees, so it was just hot and sweaty.

Four cameras are rigged around Christian Bale (portraying Ken Miles) for a racing scene shot at Willow Springs International Motorsports Park. The action continues with the assistance of what the filmmakers call the “Pod-Car.”

We had to be very careful in the way we radio-miked the actors, making sure they were wind-protected and good sounding, but not buried, as they were just wearing T-shirts. Craig Dollinger is my Boom Operator and is terrific at radio-miking the talent. We shot with two, three, and four cameras, so the radio mics were the rule for the majority of the film. They were Lectrosonics SSMs with Sanken COS-11s.

Steve Morrow at work on the Le Mans set.

For any of the sequences in the race cars with Christian, Matt, and Tracy Letts, we were on the Biscuit Rig from Allan Padelford Camera Cars. This is a custom-built drivable process trailer with a very powerful (and noisy) GM 32-valve Northstar engine. We used DPA 60 series lavaliers here, miking the actors and also planting one in the vehicle, allowing us to capture the loud performance dialog and yelling as we traveled the track.

The camera was in the car, so I dropped a bag rig with a Sound Devices 633 and a couple of Lectrosonics 411 receivers on the seat beside the actor, to be as close as possible and avoid any RF spray from the process trailer engine. We’d hit record and let it go as they drove. I piggybacked the audio into the microwave video transmitters so Director James Mangold and everyone else could hear the dialog in Pit Row.

In any scenes where Christian talks to a driver in another car next to him, we had another receiver into the 633, and once the driver was in the frame, maybe ten feet away from the camera, we had reception on his mic for his line. Early on in the schedule, filming the Shelby Cobras, we hard-lined a couple of lav mics for the engine and exhaust into the 633 so we could record the sound of the engine, the car racing around the track, and the actors’ dialog. That was our go-to standard. The picture race cars, the GT40, and the Ferrari did not have their original engines, so we knew that postproduction would later record the authentic cars.

The Ford team finishing together at Le Mans.

My main cart has two Sound Devices 970s as master and backup, with the Midas 32R and two Lectrosonics Venue 2 receivers. There were several scenes with nine to ten cast members, all wired, plus two boom microphones with the Sennheiser MKH 50 or the 416, and a VOG setup for the Director and 1st AD.

The Le Mans Pit Row set in Agua Dulce was built on the airstrip. It was probably four hundred feet long and three stories tall. The race cars would go through the straightaway at about 110 to 120 miles an hour, which made it a challenge, sound-wise, to record the dialog scenes. You just do your best and you don’t sweat what you can’t control.

There was no stage work at all; every set was a location. The Ford factory was a set in downtown Los Angeles, built at a warehouse where we had the factory line. Bale’s house was in Altadena. The first race track at the beginning of the film was at Willow Springs, a flat race track in the desert, where Ken Miles (Bale) throws a wrench at his car. The Ferrari factory was the power station up in Pasadena.

The Ford GT40s get a fast start at the 24 Hours of Le Mans.

The LAX hangar was actually at Ontario Airport, with all the planes coming in and taking off. There’s a great scene where Christian is talking to his son about the perfect lap as they sit on the tarmac. “Do you see the perfect lap out there, do you feel it?” The sounds of the planes taking off and landing add to the story, and we were able to actually get very useable tracks. They were not going to hold for sound for planes taking off or landing, because the sun had to be perfect.

The unveiling of the Ford Mustang where Shelby (Damon) makes the big presentation speech was also filmed at the Ontario Airport. It was extremely windy and the sun was in a hard spot for us to boom, so the wires on the cast saved the day.

Ken Miles (Bale) wins at Willow Springs.

Matt Damon and Christian Bale were having a good time, they got along great. Bale was the easiest and one of the nicest people I’ve ever worked with actor-wise. You could do anything you needed. On this one, everybody was having a fun time. James Mangold was happy telling a story that he really wanted to tell. It was just one of those fun movie experiences because it had such an epic feel while you were making it. Phedon Papamichael’s photography was incredible. My team and I enjoyed every dusty minute making it.

1917

by Stuart Wilson, Production Sound Mixer

Sam Mendes sent me a script in June 2018 with the plan to shoot in April 2019. It’s unusual to be asked ten months in advance for a film shoot, but it is a measure of Sam’s meticulous planning, which was key to the methodology of such an ambitious project.

The starting point was Sam’s vision for the film. He felt the drama could be best served by playing in one continuous shot.

Hugh Sherlock (left) and Tom Fennell (right)—Boom Operators (1st Assistant Sound) wielding MS stereo booms (Schoeps CMIT and CCM8) to cover part of a 600-yard-long trench as the soldiers get their orders.

The film is all about movement—a journey. It’s given us an unusual challenge because the actors travel so far in a single continuous shot. They could be moving over half a mile, talking all the way. In deep trenches or in and out of buildings. The camera sees 360 degrees, while moving around the actors, so equipment and crew have to be hidden away.

Production Sound Mixer Stuart Wilson with Spectrum Analyser

I set about working out technically how to achieve Sam’s vision in terms of capturing the sound, the voices, the actors’ performance, without getting in the way of the process.

My first thought was to carry my recorder, documentary style, and follow in the blind spot behind the camera, but in planning it out shot by shot, it emerged that:

1. I’d be adding another set of unwanted footsteps to the sound
2. It may not always be physically possible so I’d need a Plan B anyway
3. For the whole choreography to work, it was essential that a number of key crew could hear the dialog live, wherever they were stationed and I would need to be able to broadcast the live mix to the Director, Camera Crew, Special FX, Script Supervisor, Video Assist, etc.—who could be half a mile away over a hill.

A documentary approach wasn’t going to work for this one!

The essence of Sam Mendes’ film is the performances. The writing, design, rehearsals, and choreography of the cast and the camera are all geared toward those magic moments when the actors perform in front of the camera. There need to be as few distractions for them as possible and, within the limits of the process, the actors—their characters—have to be given the space to inhabit the drama of their situation. If the technical process might get in the way or limit the actors’ freedom to be in the moment of the drama, then it wouldn’t work for the film.

The sound coverage for the most complex shots became like site-specific installations. We installed antenna networks so we could receive the actors’ microphones continuously over the large areas. We had antennas hidden in sandbags, in trees, in piles of mud on top of the trenches, on munitions boxes, etc. I got the Drapes Department to make us some bags from the same material as the army sandbags and used these, as well as leaves and artificial grass smeared in mud to disguise equipment (speakers, receivers, antennas, etc.). Hundreds of feet of fibre-optic cable were used, which was new to me. It’s great what can be achieved with it, but it’s expensive, fragile, and temperamental. The cable can fail if the connectors are not absolutely clean (not easy when it’s raining and everything is covered in mud!).

Director Sam Mendes discusses the scene with Script Supervisor Nicoletta Mani and Co-writer Krysty Wilson-Cairns


We would all be relying on wireless links so I had to establish from the start with the Camera, Video, and RF departments that we would all use the lowest power possible for our transmitters. We concentrated any amplification on the receiver end.

I managed to see most of the locations four months before filming so I could examine what could be beneficial or detrimental to the sound before any construction took place.

This was an exceptionally collaborative production and I was fortunate enough to have previous experience working with key crew; Production Designer Dennis Gassner, DOP Roger Deakins, Camera Operator Peter Cavaciutti, Location Manager Emma Pill, and the Costume Designers Jacqueline Durran & David Crossman, so that all helped enormously. (Trinity camera rig op was Charlie Rizek, who was new to the team.)

Trinity camera rig in action with Grip Gary Hymns and Operator Charlie Rizek

In pre-production with the camera team and their rigs, we made some useful reductions in the noise of their gyroscopes, and a company called Cobham built us a special fanless version of their high-powered miniature video transmitter, which really made a difference.

I lobbied all departments to prepare to be able to work without electricity so we wouldn’t require the noise of a generator on location.

In the end, we did need one generator for some equipment, but there would not be spare power for nonessentials, as I wanted it to be as quiet as possible and that meant keeping it small. We found one which looked promising: it was well silenced and newly built on the back of a Land Rover 4×4. It was in use on another film when we were in prep, so I went to that set to have a listen for myself and chat with the sound mixer there. I concluded that it would be workable as long as we kept it at least one hundred yards from the action, and we got it reserved for our dates.
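The hundred-yard figure is consistent with how quickly sound level falls away from a small source. As a rough back-of-the-envelope check only (treating the generator as a point source in a free field and ignoring ground effects, wind, and the bodywork around it), the standard inverse-square calculation looks like this:

# Back-of-the-envelope check on the hundred-yard rule: for a point source in
# free field, level drops 6 dB per doubling of distance. Real conditions
# (ground, wind, the Land Rover bodywork) will differ; this is only a sketch.
import math

def attenuation_db(distance_m: float, ref_m: float = 1.0) -> float:
    """Level drop relative to the reference distance (inverse-square law)."""
    return 20 * math.log10(distance_m / ref_m)

one_hundred_yards = 91.44  # metres
print(f"{attenuation_db(one_hundred_yards):.1f} dB quieter than at 1 m")
# ~39.2 dB quieter than at 1 m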

There was a period of rehearsals and “proof of concept” work with camera, sound, and cast which gave us a good dummy-run at achieving the distances we would face for the shoot. We were able to try things out and develop a way of working to suit Sam’s process. Any testing or rehearsal is useful. Even putting lav mics on actors when their costumes are not finalised, you always learn something.

I have to thank Sound Mixer Tim White and Boom Op Peter Davis for stepping in on my behalf for this early rehearsal period and solving a lot of the issues.

Colin Firth, General Erinmore

We had the luxury of being able to plan. We knew where the camera would be, where the actors would be and could plan where to install and hide the infrastructure to be able to capture the sound and relay the mix to everyone else involved in the elaborate choreography of the piece.

Planning made it all possible, but once the sequence starts, it’s like a theatre show and you can’t stop. If something is not as expected, that’s where the jazz comes in, and it’s a buzz to improvise.

It was important that the cast could feel like they were in the drama as much as possible so crew around the camera had to be minimal and agile. Some of the sound crew wore army uniforms so they could blend into the background when the camera moved around in their direction.

An antenna carefully disguised as vegetation.
Stuart and 1st Assistant Sound Hugh Sherlock recording a 100-year-old army truck.

For the drama, we had to feel ‘locked on’ to the lead characters with a continuous connection. This is principally the dialog and breathing of the actors. The next dimension was to extend beyond the frame into the supporting cast and crowd, who had all been given authentic roles within the story. We placed additional microphones on and around the set to capture the sound of the other soldiers’ activity, and recorded in stereo along the axis of the camera to expand the soundscape out beyond the frame.
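The MS pairs mentioned earlier (Schoeps CMIT for the mid and the CCM8 figure-of-eight for the side) decode to conventional left/right by simple sum and difference. A minimal sketch, assuming the mid and side signals are already aligned mono tracks:

# Minimal mid-side (MS) decode: the mid (shotgun) and side (figure-of-eight)
# signals decode to left/right by sum and difference. Assumes the two signals
# are already aligned, equal-length mono arrays.
import numpy as np

def ms_decode(mid: np.ndarray, side: np.ndarray, side_gain: float = 1.0):
    """Return (left, right) from a mid-side pair; side_gain sets stereo width."""
    left = mid + side_gain * side
    right = mid - side_gain * side
    return left, right

# Example with short dummy signals
mid = np.array([0.5, 0.4, 0.3])
side = np.array([0.1, -0.2, 0.0])
left, right = ms_decode(mid, side)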

George MacKay, Lance Corporal Schofield, behind enemy lines.

There was a section where the camera was rigged on a wire cam. These are often used in sports stadiums with four massive lifting cranes at the corners, computer-controlled winches and generators by each one, to control the movement of the wires. In sports, equipment noise is not much of an issue and this setup was too noisy for us to get clean audio. The providers pointed out that there was no dialog in this five-minute section, so maybe it wasn’t a problem. I had to explain to them that, even though there was no dialog, there was still breathing and there were footsteps over many different surfaces. The breathing was as important as dialog because it conveyed the state of the terrified characters as they ventured into no man’s land.

The breathing was subtle and full of detail, conveying a lot about what our heroes were going through as they inched forward out of the relative safety of the trenches into the exposed landscape of no man's land, diving into shell holes, stumbling past fallen comrades and on toward the enemy lines. The sound of their breath conveys so much and keeps us connected to the characters' experience.

Breathing is very difficult to recreate in a dubbing studio because the actor is trying to consciously do something which was unconscious at the time. It’s never as convincing as the original performance. We swapped out their generators for the quietest ones they could get and we hired in acoustic barrier sheeting for the winches.

The troops charge as Lance Corporal Schofield tries to warn them. Photo by Francois Duhamel/Universal & DreamWorks

The result was one of my favourite shots of the film: barely a word is spoken, yet it is gripping and the connection with the characters is completely held.

I have to pay tribute to lead actor George MacKay. As a result of his great collaborative spirit, he didn't have to replace any of his dialog; all the recordings of his live performance were usable. On one shot he wore four radio mics at the same time, plus two body-worn recorders when he went down the river and underwater. It's very difficult to get wireless transmission through water, so the body-worn recorders were there in case of any wireless dropouts.

Effects recording

We filmed a lot on a location called Salisbury Plain. It's a huge area of land owned by the Ministry of Defence (MoD) where we'd been given permission to film. It felt like a special opportunity. Hardly anyone lives there, and the beautiful rolling countryside looks the same as it did one hundred years ago. It was quite surreal, as if we, a film crew, had been dropped into the middle of a wilderness, free to use it to stage our story.

For sound, it was fantastic and because it’s controlled by the MoD, we were even able to put a ‘no-fly zone’ in place. If there was an aircraft, they’d find out what it was and get it diverted. Come on—for a Sound Mixer—that’s the stuff of dreams!

The only downside was when we could hear live shelling going on in the “impact zone” where the real army was training. We were lucky most of the time and it didn't impact our sound recording too much.

I had the best crew, who made it all run smoothly: six of us full time and eight on the biggest days.

Hugh Sherlock, a former gymnast, equally adept in a choreographed dance with the camera as in using a sewing machine to make transmitter pouches.

Tom Fennell, long-term collaborator and expert in radio mic concealment and costume negotiations.

David Giles, a sound mixer in his own right, ready to back me up and take on the challenge of sending and receiving any audio anywhere.

Tom Wilkin making sure the key crew could hear what they needed to at any point.

Michael Fearon, all-round flexible support assistant.

Rob Piller, Fibre-optic Specialist, and running repairs.

Thomas Dornan, Sound Trainee, ready to have a go at anything with a bright future ahead.

It’s the first time I’ve managed to work with the brilliant Sound Editing team of Oliver Tarney, Rachael Tate, and their crew. They’ve really managed to make the best of the location sound and bring it onto another level with the sound design work. It was a real gift to work with a director that understands the power of sound in performance. In this way, we are party to something one-off and intimate. We tell a precise emotional story and not a general one. Sam pushes everyone to do their best work and that can be hard but when you get there, it’s all worth it. It was a big challenge technically but incredibly rewarding to be part of such a truly collaborative experience.

I used Danish and German microphones, Italian wireless and fibre-optic equipment, a Swiss mixing board, French recorders, British boom poles, and German headphones—all in all—a very European kit!

Aaton Cantar X3 recorders
Sonosax SX-ST mixing board
Wisycom wireless equipment (plus two Lectrosonics)
DPA & Schoeps microphones
Panamic boom poles
Ultrasone headphones
Clark antenna mast

The nerve centre of the operation


The Wisycom gear was fantastic: very well designed and built, with fibre-optic links, high-gain antennas, and true diversity on every receiver.

An aluminium antenna mast bolted onto the van, from a company called Clark Masts, was an essential piece of kit; an extra yard of telescopic mast was worth more than adding 250mW to the power output and messing up everyone else's signals.

Sonosax SX-ST
Stuart’s bag rig, Cantar X3, Cantarem, wireless receivers, both Wisycom and Lectrosonics.
Main recording setup

Once Upon a Time…in Hollywood

An interview with Mark Ulano CAS AMPS

Margot Robbie, followed by Tom Hartig on Boom and DP Rob Richardson operating the camera on the Chapman crane.

How many films have you done with Quentin Tarantino?
MARK ULANO: Well, it's a little bit of a hybrid: the first film I did with him was Desperado, a film Robert Rodriguez directed. Quentin had an acting cameo and that's where we met. The next thing we did together was From Dusk Till Dawn, which he wrote, produced, and acted in. And, again, Robert was directing. And that's where Quentin and I actually really bonded.

It’s a great collaboration.
It’s a cherished partnership. I feel very, very fortunate to have found a good conductor for the orchestra I love playing in, and we enjoy each other’s process, personality, content, and contributions.

How much prep did you have for ‘Once Upon a Time … in Hollywood’?
I consider non-technical inclusion to be part of the prep; I was asked to participate fifteen months before filming began. I had intermittent prep for about two months, and the physical, practical prep/tech scouts and all the rest of that took probably about three weeks, maybe four. There are meetings, and that's prep, and then there's actual interaction with all the other departments; Quentin expects his crew to be fully participating and informed about what he intends to create. This time, one aspect of the film was kept under wraps from everyone: the ending. It didn't matter what your status was, head of the studio or an electrician. That was super, super secret.

That was a surprise to the audience as well.
That was the idea. He was very interested in keeping that unadulterated for people to see and experience the surprise at the end versus having that revealed and out there. Everyone respected that; it was a really nice thing.

In terms of the logistics of the movie, there are a lot of moving parts and, of course, a considerable number of driving shots. What scenes were the most challenging?
Well, I’ll admit to a bias here: Quentin has an ethic to use the performance that happens on the day in front of the camera, recorded at that moment, in the movie. And he has an absolute devotion to that as a director. If that’s the take he chooses in post production, he has an unblemished history of not doing replacement dialog for reasons that may be intrusive to the scene.

That’s a foundational aspect of working with Quentin. You have to be prepared for that psychologically, that you’re not doing something that’s potentially replaceable or temporary; it just isn’t okay. But over the years, you understand that comes with the caveat that you’re in collaboration and you’re not a non-entity in participation of making the movie.

When there are things that are competing elements, say, in a scene or a shot, something we need for image and something we need for sound and something we need for wardrobe or something we need for sets or whatever and they bump up against each other—it’s not territorial on his set. We say, okay, let’s figure out what’s the best solution to have the most beneficial aspects of those elements survive the collision.

The stereotypical response of “screw sound” or some department being territorially dominant doesn’t apply here. It’s all about serving the project and realizing the director’s intent. So, when you have issues that invade the threshold of connection with the characters, there’s collaboration to work it out.

In this movie, the biggest challenge was the enormous amount of driving work, very often at high speeds on freeways with windows open and actors in unusual postures and positions. I would probably say that was the most technically challenging component of the film. And you always have this caveat in the back of your head: what we do here today, if it's not good enough to be in the movie, we haven't gotten it.

My threshold is: Has the audience been interrupted enough by some element that breaks the connection they have believing the character on screen? My guidelines are about connection with the character, the character’s arc within the film, the story arc. These are all driving elements for me. And when I’m engaged with Quentin, I’m in those kinds of conversations.
Everybody in that orchestra is a very accomplished, polished filmmaker; filmmaker is really the key word. We all think in a directorial context: not preemptively directorial but supportively directorial. To exist on Quentin’s set, everyone has to have that kind of intuitive sense of where we are and if there’s a problem, we discuss it: this is an issue, these are possible solutions, we can do this or that.

The solution is creating certain processes and protocols on the set, and that was a collaboration between my great colleagues in post production, Michael Minkler and Wylie Stateman, and myself. We are a triangle producing the sound for Quentin's movies.

OUATIH has challenges that are relatively contemporary. It’s set in the late ’60s, not present day, but you’re still dealing with twentieth-century elements in the environment. The solution was always having multiple options in play at any given time so that, on the day, the actors were not invaded by external issues. They have the freedom of the set being a period environment in which their characters have to exist. So that’s the idea and on this show the challenges were relatively contemporary.
 
How did you mike the many scenes where Brad Pitt drives the Cadillac?
The permutations of improv are the dominant factor when you're dealing with performance. But there's a finite roadmap that's established by Quentin in his blocking process. He's profoundly traditional in approach as far as making sure his department heads have the information they need to achieve the scene. There's a blocking rehearsal between Quentin and the actors. Then there's a blocking rehearsal for everyone.

Quentin Tarantino lines up the shot with Brad Pitt and Leonardo DiCaprio.

After this many years, I can say there's a good percentage of reliability that that blocking is solid information. All the departments strategize around that, and I use something similar to zone coverage in still photography. Where is the sphere of dialog going to occur?

I treat all of the sound that's happening in the frame as each needing its own respect as an element, and I'll know that there's a certain peripheral range of head turns and head tilts. The harder issues are if you're doing a combination of free run and tow, and also if the actors are actually driving. If you're going uphill with windows open on a tow vehicle in its lowest gears, close in and going slow or even speeding, suddenly you have these other competing elements invading the scene inside the car.

For me it’s a matter of ducking and mixing to the beats in the scene and riding the wave of the dialog’s flow. I don’t subscribe to the theory or the concept that actually what production sound people do now is just go out and collect discreet elements and let somebody else sort it out later. I do a mix. I do a mix on every shot every day for this director and any director, frankly. My mix is something I am creating as the way the performances are going to get to the audience.

The work that we’re creating is the architecture, or the blueprint of how we feel the scene should sound.
I call it the bed, but the same idea; we're laying the bed of this. Now I also know, particularly if I'm working with Quentin, I'm in a long-established, very involved collaboration with the supervising sound editor and the re-recording mixer about these issues. And I know they are going to look very, very strongly toward my mix on the set. I will capture other components and elements discretely. I think that's prudent and I think it also allows for creative adjustment after the fact. There's a strong history of them trusting what we do on set and us trusting what they do in a way that really comes out on top for the director and for the movie project. So we're really not under siege, you know, or a lonely department; we're in collaboration with varied and experienced filmmakers who want the sound to be as good as the lighting, as good as the cinematography.

Everyone takes care to respect, regard, and include the other elements of Quentin's scene. Quentin is not someone who's comfortable with singular departments taking a position that what they do is more important than what they're integrating with. He wants the whole piece of cloth to come out of that. And every shot's handmade when you are making it. You've got all these artisans doing their special part, like an orchestra—this guy plays oboe, I play drums, you know, but the point is that it's a piece of music at the end of the day.

Unfortunately, it’s too rare a thing to have that kind of established environment by the director; the director sets the tone. If you don’t get it, you don’t belong there. I don’t mean that meanly, but you know, this is not something that there’s a lot of discussion about, by the way. There’s an enormous amount of evolved non-verbal communication about things. It’s like that. It’s like playing with other musicians. It really is. There’s no other way to describe it, and with a bandleader like Quentin—he gets it, too. Everyone gets it. That’s the miracle of it.

OK, how did you mike the car?
Let me preface by saying, like in music, I don't care if the drummer plays left-handed. I care if he knows, or she knows, what two and four is, how to swing the band, and how to make music. So musicians are often very focused on technique, they obsess, as do filmmakers. And long ago, I started to change my earlier philosophy: to not be seduced by the fascination with technique and to be more taken with the overall outcome. There are times I'm mixing something with many elements, you know, ten, fifteen, twenty elements, and if you ask me what I did during the mix, I would have to come back and say, I'm in the flow, I'm in a fluent moment between all those elements, mixing what makes sense, like when playing an instrument.

If I have to stop and think about how I’m going to blend those microphones when I’m doing this scene, it’s too late. I’ve lost the timing connection with the scene. I have to be in flow the way the actors are and where the camera is. The outcome: okay, so in car situations, there’s a blend of different things. It depends on who the actors are and the tonal quality of their voices and their rhythms. Not just in terms of the actors specifically but the characters the actors are creating, because actors will do very different things with different characters. Early on, I try really hard to plug into the framework around the characters that the actors have built in terms of their performance, and mike to that.

In car planting situations, I lean more toward planting omnidirectional lavaliers in different places, sometimes on the character, sometimes overhead. And I started doing this a very long time ago with Sonotrims and Trams, which were revolutionary microphones because they were front-element mikes that allowed you to place them flatly and they also allowed you to get some sort of PZM capacity, giving them a kind of directionality. I use those depending on the acoustics of the vehicle and actors.

My dominant planting mikes these days are: DPA 4061's and 4071's, as they have a great capacity for a natural sound, and Sonotrims. Occasionally, I will plant a Schoeps or Sanken hypercardioid, even Countryman mini-cardioids at times. There's a wide palette of choices in my lavalier kit.

I’ll use the analogy of a lens kit: What’s the right lens for this shot? You could use a 25mm, 50mm, 100mm, or 300mm, and each will have the same frame scale, a waist-up single, but each one of them will be saying something different. I’m like that with microphones. A lot of it’s intuitive and I build from there. I try not to have some pre-set notion about how to approach the scene until I’ve gotten there on the day. You know, no call sheet nor script page is going to tell you what the shot is until you’re in the blocking and you’ve got the actual elements all together at one moment. The idea that we can predetermine often exactly where the lens is going to be, where the actors … you know, it’s just not true. It happens occasionally, but it’s the rarity, not the fundamental way movies are made now.

It comes down to what's the best solution for this particular moment? This shot, this scene, this character. Where are we in the story, you know? People look at sound and often just see the tools and assess us in a diminished way as “technicians,” just acknowledging the technical side. It implies a lesser creative engagement. My actual life experience is that we have a broad spectrum of inadvertent autonomy in our creativity as production mixers and production sound teams. I don't mean to in any way exclude the incredible contributions of our boom operators and utility people. I've been with Tom Hartig for twenty years; he's my colleague, friend, and boom operator. My other really great partner for over forty years is Patrushkha Mierzwa, my other boom operator and utility person. These people are profoundly polished in this specialized work and I depend enormously on that.
To sum it up, I really try to understand what's going on, and there will be micro adjustments. I trained with the relentless pressure of decades of one track, no redundancy, on the set; that's what was going to dailies. But one of the enormous benefits that we have gained now with nonlinear, file-based multitrack tools in the field is that we can be diverse in our approach. I can have more than one game plan working simultaneously.

Which is not exactly the same thing as just capturing discrete elements and solving it later. It's having more than one approach happening simultaneously. That's a different thing; it may be a subtle difference but I think it's an important one. What it does for me is that it (figuratively) lets me float in between the raindrops during an actual take. I try not to paint myself into a corner; it's partially defensive but it's mostly aesthetic. I'm here to really get an audience to believe in this character at this moment in the story.

That’s my mission, to be supporting the story, to tell the story along with the other crafts in the movies. The magic of movies is that we get to believe in something that’s completely artificial in its fundamental creation, but transcends into something really substantive. Sometimes it can be something very significant socially, culturally, and emotionally. There’s magic to be done. For me, if you have the Chinese definition of luck (preparation and opportunity coming together), your percentages really go up in terms of succeeding with any particular shot or scene.
 
In any given scene, your mix is between an open boom mike, a radio mike on the actor, a plant, etcetera, using whatever choices are available to make that mix.
Correct. One of the more frequent questions I probably get asked is, “Is it a boom scene or is it a radio mike scene?” Well, it's one or all of those elements as needed in the execution of the scene. I don't want you to be able to tell. I'm not dedicated to the classical notion of perspective sound being the single and most significant approach. I had the incredibly good fortune early in my career to do a Robert Altman movie and that style (multitrack) certainly kicked my butt. It turns your head upside-down, but in such a good way because you're free from some predetermined outcome and approach. For me, it's about connection. I use those elements but it's like paints, oils and watercolor: which makes the most sense? That's the idea, what makes sense. People look at us as technicians, but, in reality, everyone on the movie is technical. Actors are maybe the most technical people on a movie. Day 2 of the movie, they have to do the final scene of the show; day 17, it's the very first shot of the first scene of the movie. And then on wrap day, it's the midpoint.

An actor’s got to calibrate the character arc so when the chronology of the movie is actually cut together, it works. I think that’s as technical as anything we have to do with our tools. Yes, our basic tools are plant mikes and hypercardioids and radio mikes and lavaliers and all those, but it’s like a box of lenses. It’s what you do with them that matters.
 
Let’s get into some of the tools. What do you record to?
I’ve been using Zaxcom for a long time, I know their people. I believe it’s important to be connected personally with all the people who make the specialized tools that we use. I just came back from a tour in Germany and France, visiting Cinela and Schoeps. Each tool that we use is excellent in some particular way but not necessarily in every way. For me, Schoeps is a primary key player in my miking. The Sanken CS3E shotgun mikes have been transformative for me as a boom tool. I journeyed from Schoeps to Neumanns, which I love and respect enormously, to the Sanken’s for the practicality that they afford us in the generally acoustically hostile environments that real movies are made in.

They allow us to have zero proximity. They allow us to have seventy percent to eighty percent control in terms of pattern. When someone’s walking on a noisy surface, you flatten out the mike; your boom operator is really painting with sound at that point. The off-axis is profoundly smooth, and the reach of the Sanken CS3E is also very significant, particularly with the proliferation of multiple-camera work over the last couple of decades becoming a dominant versus occasional form. You have a much greater potential with that microphone to be able to make the compromise between the two frames, or more frames than you would with some of the other microphones that are out there.

It’s really conditional to how the scene’s been blocked and what is in the frame. I’m old school. I don’t need to look through a monitor or a viewfinder to know what a 50-millimeter lens does at eight feet. I know what it looks like. I know where the key light is, I know where the fill light is. I know how it’s going to interact dynamically when the camera’s moving through those and I design the sound appropriately with my mix.

Leonardo DiCaprio and Quentin Tarantino

Were there a lot of stage sets or was it mostly locations?
There were over one hundred locations and over one hundred speaking parts in the film. The first weeks we were on the Western streets of the Universal Studios back lot. It was rebuilt and we shot the ’60s period Western episodic TV stuff, Lancer. We shot the Bounty Law scenes at Melody Ranch. Spahn Ranch was at Santa Susana Park and was completely created by the Art Department. The martial arts scene, where Bruce Lee is duking it out with Cliff, looks like a studio lot, but was a high school parking lot in Compton. Many of the locations were the actual places when we could get them. They locked up Hollywood Boulevard for a week a few times. That was a fundamental component of Quentin's environments for his actors. He wants to give them as much of the flavor of the place that they're supposed to be in at the time. It's old school and, you know, they love it, the actors just love it.

DiCaprio’s trailer is where he was rehearsing his scene with his tape recorder. What kind of a set was that?
It was a trailer, no breakaway walls, no breakaway ceilings, nothing. It was small, difficult and crowded with everybody not on camera. You deal with it, you make it happen, you make it work. We know Leo’s rhythm. This is my third movie with Leo and Tom’s second movie with him. You know, you get a sense of who you’re working with and how they telegraph a little bit with very, very, very subtle aspects. What’s he going to do next and where is he going to do it?

Tom used a Schoeps on a GVC angle, so it’s a thumb’s width from the top of the frame and he worked closely with DOP Bob Richardson; we’ve been collaborating on movies since 2002, 2003. Everyone is struggling in a creative but challenging environment. Even though it’s so collaborative and inclusive, it doesn’t mean that it’s easy and you slide by on compromises that are damaging. No, it’s the opposite. If it’s hard, it’s hard. Everybody’s working through the hard together. There’s no blame-seeking missions and there’s no “Sorry, you can’t be there” stuff or “You’re being there creates a problem.”

Regarding that scene he rehearses with his tape recorder, how was that done? Was that all added in post?
No, not at all. We went to Leo’s trailer—Quentin, myself, and Leo—and recorded him both on my main gear for the show and on that recorder, that very recorder. Leo recorded that and he knew when to not speak and to leave the holes. That was all predetermined before we recorded. Then he would play that back on set. He was using it as a practical prop for himself, which is what Quentin needed him to be doing because he’s acting with himself in that moment.

Like the action with the tequila margaritas and all that stuff and he’s rehearsing. He really used the props. So, it was actually kind of great fun. Likewise, the dancing where he’s doing the Hullabaloo show: he’s singing live. We brought in Gary Raymond, the creme de la creme of on-set Pro Tools and playback and Gary did a fantastic job of creating a safe space for Leo to do something he doesn’t usually do, which is sing and dance on camera.

Cliff Booth (Pitt), Rick Dalton (DiCaprio), and Marvin Schwarz (Al Pacino) meet at Musso & Frank

All of the driving sequences have an enormous amount of background radio playing throughout, which is so emblematic of that era. How was that done?
Every movie I’ve done with Quentin, I’ve had the joy and privilege of getting a brown manila envelope packet with his hand-written script. It means—OK, this is the movie that’s coming up in a couple of months, so get ready. This time, because of their intense interest in keeping it under wraps, I received a jump drive with fifty hours of log tapes from LA radio station KHJ.
All music and commercials, all true, live tapes. What does that mean in your knowledge of background stuff? Well, it was a whole separate creative endeavor on the part of Jim Schulz. He had to prepare for Mike Minkler five levels of all of the music that would be in the show; the level that would be what you’d hear over a two-inch cheesy speaker in a 1969 car with the windows open, all the way leveling up to direct remasters, the kind that you’d hear in your head; the psychology of third-person or first-person in that music. Multiple layers of that depending on what was going to be needed in the scene. All of that music was prepared by Jim so that at a moment’s notice in the mix they could draw from that library for what was really the right thing for a particular shot in the scene. Complicated.
 
Did you have to play back any of those tracks in the driving scenes?
I always had the music with me in some form if needed, always. You can do a lot with a Bluetooth speaker and iPod, you know, an iPhone or iPad to do playback from the backseat or from the tow vehicle. We would do things that were more atmospheric. Like Cliff driving back to his trailer from Rick’s house, that type of thing.

I would play music, particularly the Jose Feliciano song, for emotional purposes for the actors. Which goes back to the silent era, you know. There were never really silent movies, all those films had some kind of sound—dialog or music—in some form. The idea that music on the set is a tool for actors, plugging into their characters emotionally, is a very common element on Quentin’s set. I’m charged with a lot of sourcing for music when needed … both for crew morale and for scenes with actors.
 
In the car-to-car shooting, how difficult was that?
Occasionally, in those cases where I was using a primary wireless transmission but felt there was a potential risk of damaging a scene, I'd put hard recorders inside the vehicle as well, taking the same sources, and I had some recording transmitters too. Or a special situation for times when somebody is coming from very far to near, like Tex charging back to the ranch from a horse lesson; we put a normal radio mike on Austin and a recording transmitter. I have a range of possibilities and it becomes more focused as we get to the reality of the shot. That's a better way to describe it.

I have a fairly good remembrance of the approach to the scene; what I don't intellectualize is the minutiae of the performance itself, the timing in the moment. I will remember the architecture of the scene in terms of our approach. That's really what matters to me, because I'm giving an expression of my approach to something and someone else is going to get value out of it. Stay a student of the tools always. Go to seminars about iZotope to learn what is a viable answer when someone asks you on set, “Is that okay? Did we get it? God, there was a giant truck or airplane, that can't be good!” and you can come back with an educated assessment. It's your best defense for challenging situations on a film set. Trust your team; surround yourself with people who know more than you, are smarter than you, and better at it than you are, so that they can help you support that director and those actors.
 
Anything else you want to add?
Yes. I’ll say this about Once Upon a Time … in Hollywood and it’s one of the reasons I have great admiration and affection for this film: it’s a significant expression about gratitude, acceptance, and love. People don’t necessarily associate those three things when they’re talking about Quentin Tarantino films, but if you dig deeper, you’ll see that they’re actually very deeply woven into all of his films. You know, kabuki violence aside and all the rest of that, a lot of that is humor and his deep, responsible obligation to himself to entertain. He really thinks that if you’re not entertaining, you’re not serving the audience, you’ve blown it.

The idea in this film about the love affair between the two men, the idea of the decline in their personal and professional status as a backdrop to revealing who they are as people … it’s very three-dimensional and not everybody gets that. But for me, this one’s really his love poem and all of our love poems to the making of movies. It’s his Day for Night, if you will. It’s a deep exploration of the love of making movies that infused everything, every day, every shot; we were all very explicitly aware that we’re doing something we love to do. We’re doing it with each other because we love to do it with each other. And we’re doing it for the love of it.

If he got a print and that was it, that was a great one, he would say, “We're gonna do another, and why are we going to do one more?” And the entire crew in unison would come back, “Because we love making movies.” It's really the truth. It's not an easy thing to find in our work: to be in a place where the director's comfortable in his own skin, where the content is emotionally significant for many who are engaged in its creation, where it connects with people at the other end, and the audience gets it.

What I admired and appreciated enormously, behind the scenes, without press or paparazzi around, was the purity of the process. He achieves an autonomy that's rare for directors. The studio was supportive and hands-off in a very respectful way. We had what we needed to do the thing we were doing, and we enjoyed doing that every day of that production.
Once Upon a Time … in Hollywood doesn’t fulfill your expectation of a “Quentin Tarantino movie” and yet it is ultimately the epitome of a Quentin Tarantino movie because of that very aspect.

All photos by Andrew Cooper. Courtesy of Sony Pictures

CAS Awards Nominees

LOCAL 695 OUTSTANDING ACHIEVEMENT IN SOUND MIXING FOR 2019

On December 10, 2019, the Cinema Audio Society announced the nominees for the 56th Annual CAS Awards for Outstanding Achievement in Sound Mixing for 2019 in seven categories. The winners will be revealed at the 56th Annual CAS Awards on Saturday, January 25, 2020, at the InterContinental Los Angeles Downtown Hotel – Wilshire Grand Ballroom, Los Angeles, California.

CAS Award Nominees

Motion Pictures – Live Action

Ford v Ferrari

Nominees: Steven A. Morrow CAS, Paul Massey CAS, David Giammarco CAS, Tyson Lozensky, David Betancourt, Richard Duarte
Production Sound Team: Craig Dollinger, Bryan Mendoza, Richard Bullock Jr.

Joker

Nominees: Tod Maitland CAS, Dean A. Zupancic, Tom Ozanich, Daniel Kresco, Thomas J. O’Connell, Richard Duarte
Production Sound Team: Michael Scott, Jason Stasium, Jerry Yuen

Once Upon a Time … in Hollywood

Brad Pitt and Leonardo DiCaprio star in ONCE UPON A TIME … IN HOLLYWOOD.

Nominees: Mark Ulano CAS, Michael Minkler CAS, Christian Minkler CAS, Kyle Rochlin
Production Sound Team: Tom Hartig, Patrushkha Mierzwa, Jay Golden, Chris Howland CAS, Marlyn Lopez, Veronica Lopez, Gary Raymond and Mauricio Rivas

Rocketman

Nominees: John Hayes, Mike Prestwood Smith, Mathew Collinge, Mark Appleby, Glen Gathard
Production Sound Team: Peter Allen, Emiliyan Appleby, Max Lipscombe, Morris Concas, Andrew Rowe

The Irishman

Nominees: Tod Maitland CAS, Tom Fleischman CAS, Eugene Gearty, Mark DeSimone CAS, George A. Lara CAS
Production Sound Team: Jason Friedman-Mendez, Terence C. McCormack Maitland, Jason Stasium, Jerry Yuen, John D’Aquino

Motion Pictures – Animated

Abominable

Nominees: Tighe Sheldon, Myron Nettinga, Nick Wollage, David Jobe

Frozen II

Nominees: Paul McGrath CAS, David E. Fluhr CAS, Gabriel Guy CAS, David Boucher, Greg Hayes, Doc Kane CAS, Scott Curtis

How to Train Your Dragon: The Hidden World

Nominees: Tighe Sheldon, Gary A. Rizzo CAS, Scott R. Lewis, Shawn Murphy, Blake Collins CAS

The Lion King

Nominees: Ronald Judkins CAS, Lora Hirschberg, Christopher Boyes, Alan Meyerson CAS, Blake Collins CAS

Toy Story 4

Nominees: Doc Kane CAS, Michael Semanick CAS, Nathan Nance, David Boucher, Vince Caro CAS, Scott Curtis

Motion Pictures – Documentary

Apollo 11

Nominees: Eric Milano, Brian Eimer

Echo in the Canyon

Nominees: Chris Jenkins, Paul Karpinski
Production Sound Team: John W. Rampey, Robert Reider

Making Waves: The Art of Cinematic Sound

Nominees: David J. Turner, Tom Myers, Dan Blanck, Frank Rinella

Miles Davis: Birth of the Cool

Nominees: Gautam Choudhury, Benny Mouthon CAS
Production Sound Team: Adriano Bravo, Jean-Paul Guirado, Joe McGill, Caleb Mose, Redd Reynolds, Bill Streeter, Brian Walker

Woodstock: 3 Days That Changed Everything

Nominee: Kevin Peters

Television Series – One Hour

Game of Thrones “The Bells”

Nominees: Ronan Hill CAS, Simon Kerr, Daniel Crowley, Onnalee Blank CAS, Mathew Waters CAS, Brett Voss CAS
Production Sound Team: Guillaume Beauron, Andrew McArthur, Paul McGuire, Andrew McNeill, Sean O’Toole, Jonathan Riddell, Joe Furness

Peaky Blinders “Mr. Jones”

Nominees: Stu Wright, Nigel Heath, Brad Rees, Jimmy Robertson, Oliver Brierley, Ciaran Smith
Production Sound Team: Alessandro Pascale, Ben Hossle, Joshua Tot Carr, Laura Clough

Stranger Things Chapter Eight: “The Battle of Starcourt”

Stranger Things

Nominees: Michael Rayle, Mark Paterson, William Files, Hector Carlos Ramirez, Bill Higley CAS, Peter Persuad CAS
Production Sound Team: Dan Giannattasio, Jenny Elsinger, James Peterson, Julio Allen, Nikki Dengel, John Maskew, Patrick Miceli, Jesse Parker

The Handmaid’s Tale “Heroic”

Nominees: Sylvain Arseneault CAS, Lou Solakofski, Joe Morrow, Adam Taylor, Andrea Rusch, Kevin Schultz
Production Sound Team: Michael Kearns, Erik Southey, Joseph Siracusa, Rob Beal

Tom Clancy’s Jack Ryan “Persona Non Grata”

Nominees: Michael Barosky, Steve Pederson, Daniel Leahy, Benjamin Darier, Brett Voss CAS
Production Sound Team: Jorge Adrados, Frank Graziadei, Jamie Llanos, Cesar Salazar, Dean Thomas, Karl Wasserman

Television Series – Half-Hour

Barry “ronny/lily”

Nominees: Benjamin A. Patrick CAS, Elmo Ponsdomenech CAS, Jason “Frenchie” Gaya, Aaron Hasson, John Sanacore CAS
Production Sound Team: Jacques Pienaar, Corey Woods, Kraig Kishi, Scott Harber, Christopher Walmer, Erik Altstadt, Srdjan Popovic, Dan Lipe

Fleabag Episode #2.6

Nominees: Christian Bourne, David Drake, James Gregory
Production Sound Team: Tom Pallant, Guido Lerner, Josh Ward

Modern Family “A Year of Birthdays”

Nominees: Stephen A. Tibbo CAS, Dean Okrand CAS, Brian R. Harman CAS, Matt Hovland, David Torres CAS
Production Sound Team: Srdjan Popovic, William Munroe, Daniel Lipe

Russian Doll “The Way Out”

Nominees: Phil Rosati, Lewis Goldstein, Thomas Ryan, Jerrell Suelto, Wen Hsuan-Tseng
Production Sound Team: Chris Fondulas, Bret Scheinfeld, Patricia Brolsma

Veep Episode 707

Veep Season 7, episode 7 photo: Colleen Hayes/HBO

Nominees: William MacPherson CAS, John W. Cook II CAS, Bill Freesh CAS, Scott Sheppard, Jesse Dodd CAS, Mike Marino
Production Sound Team: Doug Shamburger, Michael Nicastro, Rob Cunningham, Glenn Berkovitz, Matt Taylor

Television Movie or Limited Series

Apollo: Missions to the Moon

Nominee: John Warrin

Chernobyl “1:23:45”

Nominees: Vincent Piponnier, Stuart Hilliker, Gibran Farrah, Philip Clements
Production Sound Team: Nicolas Fejoz, Margaux Peyre

Deadwood: The Movie

Nominees: Geoffrey Patterson CAS, John W. Cook II CAS, Bill Freesh CAS
Production Sound Team: Jeffrey A. Humphreys, Chris Cooper

El Camino: A Breaking Bad Movie

Nominees: Phillip W. Palmer CAS, Larry B. Benjamin CAS, Kevin Valentine, Chris Navarro CAS, Stacey Michaels CAS
Production Sound Team: Mitchell Gebhard, Andrew T. Chavez

True Detective “The Great War and Modern Memory”

Nominees: Geoffrey Patterson CAS, Greg Orloff CAS, Tateum Kohut CAS, Biff Dawes CAS, Chris Navarro CAS, Nerses Gezalyan
Production Sound Team: Jeffrey A. Humphreys, Chris Cooper

Television Non-Fiction, Variety, Music Series or Specials

Country Music: Will the Circle Be Unbroken? (1968-1972)

Nominees: Mark Roy, Dominick Tavella, Chris Chae

David Bowie: Finding Fame

Nominees: Sean O’Neil, Greg Gettens
Production Sound Team: Ricky Barber, Alberto Battocchi, Nigel Chatters, Martin Evanson, Rob Thomas

Deadliest Catch “Sixty Foot Monster” Episode 1512

Nominee: Bob Bronow CAS
Production Sound Team: Tom Pieczkolon, Mike Morrell

Formula 1: Drive to Survive: The Next Generation

Nominees: Nick Fry, Steve Speed, James Evans

Hitsville: The Making of Motown

Nominees: Pete Orlanski, Richard Kondal, Victor Shcheglov
Production Sound Team: Ben Sortino

CAS Outstanding Product Award Nominees

Production

Lectrosonics, Inc.: D Squared Digital Wireless Mic System
Schoeps Mikrofone: CMC 1 U Miniature Colette Series Amplifier Body
Shure Incorporated: Shure Axient Digital
Sound Devices, LLC: Scorpio
Zaxcom: Nova

Post Production

FabFilter: Pro Q3 Equalizer
iZotope, Inc.: Dialogue Match
Leapwing Audio: DynOne 3
Sound Radix Ltd.: Auto-Align Post
Todd-AO: Absentia DX v2.2.3

Names in Bold are Local 695 members

Remembering Don Coufal

Don Coufal was undoubtedly a first-class Boom Operator, in high demand and liked by everyone who had the opportunity of working with him. Don's skill and the perfection of his work were renowned. He was a Sound Mixer's dream, because when Don ran the floor, he would tell you everything you needed to mix great tracks and, above all, his microphone positioning was perfect.

One of the longest working relationships Don Coufal had was with Jeff Wexler. They worked together on just about every feature film in Jeff’s long résumé, over forty years together. They were a team, ‘Jeff and Don’ or more like ‘Don and Jeff.’

Don lost his two-year battle with cancer in November. Don bravely chronicled his thoughts, treatment, and prognosis regularly on Facebook. We all followed his posts and replied with incredible support. When we read the awful news, the outpouring of love, grief and condolences was emblematic of how much Don meant to all of us.

On Friday, December 6, there was a memorial service at the Valley Oaks-Griffin Memorial Park in Westlake Village, followed by a graveside service. An overflow crowd was in attendance, not just those of us from Local 695, but Camera Operators, 1st AD's, and many, many more.

Heartfelt remembrances were delivered by Ronnie Coufal, Don's younger brother, and Jeff Wexler, followed by Forrest Williams, poignantly playing his guitar and singing Willie Nelson's “Blue Eyes Crying in the Rain.”

In the twilight glow I see them
Blue eyes cryin’ in the rain
When we kissed goodbye and parted
I knew we’d never meet again
Love is like a dyin’ ember
Only memories remain
Through the ages I’ll remember
Blue eyes cryin’ in the rain
Some day when we meet up yonder
We’ll stroll hand in hand again
In a land that knows no partin’
Blue eyes cryin’ in the rain
Now my hair has turned to silver
All my life I’ve loved in vain
I can see her star in heaven
Blue eyes cryin’ in the rain

Songwriter: Fred Rose

Cameron Crowe
December 5, 2019
“Don is one of the rare people you meet in this world who leads with heart and soul. He'll lean in and talk with you, eyes always focused on yours, and get right to the epicenter of things. With quiet razor-sharp powers of observation, he'll tell you what he's seen or what he's feeling … and it's always through the prism of brilliance and empathy. He roots for everybody and everything wonderful in this life, and reflects it back to us with his special smile and that twinkle that lives forever. Miss you and love you brother. Your love and loyalty means the world, and will always be present with us, his friends, his co-workers, and his wonderful family. No need for past tense, Don is with us always. Right now and every day after. We love you brother!”

Don is survived by his ex-wife Lenore Alexander, his two daughters Emma and Libby, as well as his brothers and sisters from Texas.

Adobe Creative Cloud

Non-Linear Editing Platforms: Adobe Creative Cloud

by James Delhauer

As the world entered the computer era and the first non-linear editing systems were developed in the late 1980s, one of the biggest technological revolutions within the industries of film and television began. The previous standards of manually cutting filmstrips and using flatbed editors quickly became obsolete. Today, digital non-linear editing software forms the basis of post-production workflows across the globe. New tools to edit video, design sound, color grade, and create visual effects are being created every day. Computers, tablets, phones, and even video game consoles all have built-in applications designed to take audio and video files and edit them together into a finalized product. Where editing was originally a skill only seen among those who worked in Hollywood or broadcast television, the technology and knowledge have now been so widely disseminated that even children are learning to edit videos in school. It is hard to argue that anyone has created more tools for this purpose than Adobe Systems. While film editing remains the jurisdiction of our brothers and sisters in Local 700, these tools are also of the utmost importance to Local 695 video engineers, whose responsibilities include media playback, on-set Chroma keying, off-camera recording, copying files from camera media to external storage devices, backup and redundancy creation, transcoding with or without previously created LUTs, quality control, and syncing and recording copies for the purpose of dailies creation.

Adobe’s software portfolio currently includes more than twenty individual apps and services that perform a wide range of tasks—many of which are recognizable as industry standards in their specific fields. Adobe After Effects is considered to be the most popular visual effects and compositing application for video in the world. Adobe Photoshop has consistently been one of the most pirated pieces of software for more than a decade. Although most of their software can be used as standalone products, many of these applications were designed in order to integrate with one another, allowing users to start off using one program and pick up right where they left off in another.

While all of the Adobe programs are valuable in their own right, three of their applications are of particular significance to Local 695 engineers.
The first is Adobe Prelude, a tool used to ingest, index, and log media in a tapeless environment. Speaking broadly, it is most useful for copying files from recording media to external storage devices. Users are able to select a source device, such as a memory card or camera mag, and copy the contents of that device onto multiple redundant media drives simultaneously. During this process, it can also perform a hash check based on one of several verification methods to confirm that the media that lands on the destination drive is identical to that which was found on the source device. These methods include file size comparison, bit-by-bit comparison, and MD5 message-digest checksum comparison. This ensures that the media manager will be notified of any digital copy errors and can address them on set, so that these sorts of errors will not come back to haunt the production down the road. Once ingested, clips can be played back for review or roughly edited together to check continuity. Metadata tags or notes can be generated and attached to individual clips, allowing the post-production team to review embedded notes and information from the production team. For productions that require the generation of proxy files, Adobe Prelude can integrate with Adobe Media Encoder to do the job.
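To make the idea concrete, here is a minimal sketch in Python of a checksum-verified offload of the kind an ingest tool like Prelude automates; the card and shuttle-drive paths are hypothetical, and this illustrates the general technique rather than Adobe's own implementation.

import hashlib
import shutil
from pathlib import Path

def md5_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large camera files never have to fit in RAM."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def offload(source_dir: Path, destinations: list[Path]) -> None:
    """Copy every file to each destination drive, then verify each copy."""
    for src in sorted(p for p in source_dir.rglob("*") if p.is_file()):
        source_hash = md5_of(src)
        for dest_root in destinations:
            dst = dest_root / src.relative_to(source_dir)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)          # copy file data and timestamps
            if md5_of(dst) != source_hash:  # catch silent copy errors on set,
                raise IOError(f"Checksum mismatch: {dst}")  # not in post

if __name__ == "__main__":
    # Hypothetical mount points: one camera card, two shuttle drives.
    offload(Path("/Volumes/A001_CARD"),
            [Path("/Volumes/SHUTTLE_1"), Path("/Volumes/SHUTTLE_2")])

Hashing the source once and comparing it against a fresh hash of each copy is what lets the operator prove, before a card is wiped, that every backup is bit-for-bit identical to the original.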

Adobe Media Encoder, as its name implies, is a versatile tool used to transcode video and audio files from one format to another. When productions are utilizing high-density file formats that are difficult for computers to play back and edit in real time, they often take advantage of what is called a “proxy” file. These are lower quality copies of the files that are more easily digestible for computer hardware and which make real-time playback and editing achievable even on lower end machines. The project is edited with these proxies in the non-linear editing program of the production's choice, and the proxies are then swapped back out for the high-quality master files so that the final master export can be created. Media Encoder is also capable of applying either custom or pre-packaged LUTs to footage during the transcode process, allowing for color accuracy and consistency throughout the post-production workflow. This is also quite useful in the creation of dailies, generating copies of files in a specific format for specific use, exporting computer-generated content such as visual effects for editorial use, and any other task that involves rendering a piece of source content into another file format.
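As a rough illustration of that proxy-plus-LUT step: Media Encoder itself is driven from its own interface and watch folders, so the sketch below uses the open-source ffmpeg command line to show the same idea from Python; the clip name, LUT file, and encode settings are placeholder assumptions, not a prescribed recipe.

import subprocess
from pathlib import Path

def make_proxy(source: Path, lut: Path, out_dir: Path) -> Path:
    """Transcode one camera file to a 1080p H.264 proxy with a LUT baked in."""
    out_dir.mkdir(parents=True, exist_ok=True)
    proxy = out_dir / f"{source.stem}_proxy.mp4"
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", str(source),
            # Scale to 1080p and apply the show LUT during the transcode.
            "-vf", f"scale=-2:1080,lut3d={lut}",
            "-c:v", "libx264", "-crf", "23", "-preset", "fast",
            "-c:a", "aac", "-b:a", "192k",
            str(proxy),
        ],
        check=True,  # raise if the transcode fails
    )
    return proxy

if __name__ == "__main__":
    # Hypothetical file names for a single clip and a show LUT.
    make_proxy(Path("A001C003.mov"), Path("show_lut.cube"), Path("proxies"))

Whatever tool performs it, the essential point is the same: the proxy carries the look of the show via the LUT while staying light enough to cut with, and the original master files remain untouched for the final conform.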

Once media has been ingested and (if necessary) transcoded, it can be forwarded directly into Adobe Premiere Pro—Adobe's flagship non-linear editing program. On set, this is a useful tool that allows 695 engineers to verify the viability of files, play back previously shot content for review, sync sound files to video files, and organize bins and stringouts. Although Premiere comes with a plethora of plugins for visual effects, sound effects, color grading, and transitions that allow users to start and finish a project without ever leaving the application, the software also includes a variety of export options that allow for integration with third-party software. Users can export XML, OMF, AAF, and EDL files so that final color and sound can be completed in dedicated applications such as DaVinci Resolve or Avid's Pro Tools.

Both Adobe Premiere Pro and Adobe Media Encoder are exclusively 64-bit applications built on Adobe's Mercury Playback Engine—a playback and rendering engine that takes advantage of multithreaded processors (such as Intel's Core or AMD's Ryzen lines) and of GPU acceleration on approved graphics processing units (such as Nvidia's GTX/RTX line or AMD's Radeon line of graphics cards). This allows both applications to use various pieces of computer hardware for specific tasks, effectively speeding up the process of rendering files. While multithreaded processing has become fairly standard, GPU acceleration remains a gray area where only some companies have disclosed the extent to which it has been integrated into their software. Adobe's explicit support for it is therefore a genuine advantage over some of its competitors.

At this time, Adobe offers its products in two distinct subscription-based packages. The first is a “Single App” plan in which users pay $20.99 per month in exchange for unlimited access, upgrades, and support for one piece of Adobe software of their choice. The second and more popular plan—“The All Apps Plan”—includes all of the company's applications, both for desktop computers and mobile devices, for $52.99 per month. Users who are interested can further pair either of these subscriptions with an Adobe Stock subscription in order to download photos and videos from the company's stock media library.

That being said, Adobe is not without its drawbacks for users. The company's 2012 decision to switch from selling one-time purchase licenses to requiring a permanent subscription remains controversial. While this business model may be profitable for the company itself, it has inarguably limited customers in some ways. A relatively low ongoing cost may have opened access to the software for some users, but the ongoing nature of a subscription inevitably priced out others. Under the old model, prospective buyers were able to analyze the upcoming features and products in new Creative Suite releases and determine whether or not they merited spending the money to upgrade. For many, the list of improvements from generation to generation was of little consequence and there was no need to upgrade. Under the current Creative Cloud model, these users have no choice but to continue paying Adobe in perpetuity or lose access to their software, their projects, and potentially their livelihoods.

Moreover, certain events have left Adobe customers concerned about whether the software they rely upon will remain on the market or accessible. In May of 2014, a twenty-four-hour Creative Cloud service outage left millions of creative professionals worldwide unable to access the Adobe products already installed on their own machines. When service was restored, Adobe did not offer compensation to customers for loss of paid service time, loss of productivity, or loss of revenue. This has been followed by periodic decisions to discontinue certain pieces of software. Since Creative Cloud's launch, Adobe SpeedGrade (a color-grading application for professional video users), Adobe Story (a collaborative script development and screenwriting tool with non-linear editing integration), and Adobe Muse (a website building application for non-code-based developers) have all been discontinued—sparking outrage from customers and businesses that had worked these applications into their workflows. Furthermore, Adobe did initially make all versions of its products from 2014 onward available to Creative Cloud users to download, meaning that customers would not be forced to upgrade against their will. This policy was reversed in May of 2019, however, when Adobe removed all pre-2018 software from the Creative Cloud downloader and sent emails to customers stating that pre-2018 software licenses had been revoked. Users were required to upgrade or face possible legal action.

At the risk of editorializing, I feel it is worth pointing out the obvious: threatening your customers for using software that you licensed to them is bad business.

While Adobe has emerged at or near the forefront of the creative software development world, their business practices do invite a degree of risk for their customers. For users who are concerned by these issues, there are alternatives. Avid Technology, Apple Inc., and Blackmagic Design have all developed competing products that currently see wide use within the industry. In the coming issues of Production Sound & Video, we will provide spotlights on these products as well so that members can make the most informed decisions when determining which non-linear editing family is the best investment for them.

The DemerBox

James Demer & The DemerBox

James Demer and DemerBox

by Richard Lightstone CAS AMPS

You are probably asking yourself, “What is a DemerBox?” I didn't know either until someone (I forgot who) told me about this outstanding portable speaker built into a small Pelican case. I bought one in May of 2012 for a feature I was about to start. It sounded great, had long battery life, and you could send audio to it directly via a cable or Bluetooth; if an actor stomped on it in the wheel well of a picture vehicle, it was virtually indestructible. Oh, and once you put the bass-port plug in—it's waterproof too.

From left: James Demer and Richard Lightstone CAS AMPS at the Maine Media Workshops in Rockport, Maine

My DemerBox is still going strong almost eight years later. I have been following its creator, James Demer, and we finally met in September 2014 at the Maine Media Workshops in Rockport, Maine. At that time, James was living in Portland and I asked him to speak to my class on Sound Mixing. James made his living as a busy Reality, Documentary, News, and Feature mixer.

James was always a tinkerer. “I built my first speaker out of a shoebox when I was twelve, and never lost the passion. I was buying audio amplifier boards and putting them together with wood enclosures for speakers. One day, I took a Pelican 1300 case, cut some holes in it, slapped in a pair of Fostex three-inch drivers, tuned it up with a port, and it sounded really good. A lot of my crewmember friends on the feature film I was on, Winter's Bone, asked, ‘Oh, that's awesome, can you make one for me?’ Being a bit of an entrepreneur, I thought what a great side hustle.”

James worked eleven seasons on Survivor and the crewmembers would buy DemerBoxes. “They’d use them hard and break them, and then tell me what to fix. Then I’d come back with a new and improved model.”

James explains, “We launched a Kickstarter in the fall of 2014, and fulfilled the orders in the spring of 2015. Kickstarter gave us the ability to open some injection molds, which we had made in the US. We went from thermoforming our own cards with a homemade vacuum machine and a heat lamp to having real injection molded parts to hold the speakers, the circuit board, and the battery inside the box.”

Manufacturing the DemerBox

James teamed up with Jayson Lobozzo, a cameraman, and together they figured out the port plug addition that allows a user to put a plug in the port hole; you lose a little bass, but it's fully waterproof. The Kickstarter also allowed a new design where everything is incorporated in the lid and the electronics are all protected. From an assembly standpoint, it's a lot easier to manufacture by attaching the lid to the case.

James continues, “We started manufacturing in Portland, Maine, which is where I’m from. It started in my basement and then I rented a workshop in downtown Portland. Zac Brown of the Zac Brown Band had purchased a DemerBox late in 2016, and he sent me an email and asked, ‘Hey, this thing’s awesome. What are you guys doing? Why don’t you come down to Georgia, I’ll put some money into the operation and we’ll see what we can do.’”

Zac Brown of the Zac Brown Band and partner in DemerBox.

James moved in July 2017 to Peachtree City, Georgia, outside of Atlanta, and set up shop. “Sales are very good, we grew sixteen hundred percent in 2018; it was pretty crazy. We had one order alone for four thousand units. Because we manufacture everything in the United States, we’re very hands on with our product, we can customize for corporate clients. That four thousand unit order was for Jimmy John’s Sandwich Shop’s annual convention where they gave a DemerBox to each one of their store owners and managers.”

James posits, “We’re partners with Zac Brown and Troy Link, an investor in DemerBox, who owns Jack Link’s, a beef jerky company. Jack Link’s is the largest packaged meats company in the world. They do $1.2 billion in revenue annually. So, between Zac’s celebrity and his connections and Troy Link’s guidance, I feel like we’ve hit the jackpot down here in Georgia.”

When it came time to find a new injection molding company, they were able to find one a little closer to home than the one they had used in Minnesota. There were four or five to choose from in Georgia.

James explains, “We found one that’s just up the road and get exceptional service. You just can’t find that in Maine. The population’s too small.”

DemerBox just did a three-hundred-unit giveaway with the USO, with the organization’s logo branded on each box. The USO is America’s oldest military nonprofit serving servicemen and women. The DemerBoxes went to those deployed overseas who need a speaker they can’t break.

The product uses a Class D amplifier chip made by Maxim, rated at twenty watts per channel but limited to roughly ten watts per channel so as not to blow the speakers. Output has been measured at 96 dB at three feet.

James continues, “You can fill a small room. We have updated the DemerBox, calling it the DB2, with two speakers and we’ve revamped the circuit board and the switches with a battery indicator, so now you know when your battery’s getting low. You can pair multiple boxes via Bluetooth. We’ve added digital signal processing (DSP) so we have better high frequencies.”

Working on the DemerBox circuit board

They are about to launch a redesign of the original mono DemerBox, the DB1. It’s rated at eighty percent of the volume of the larger DB2 even though it’s half the size. The DemerBox has multiple uses; at Video Village, for example, anyone who doesn’t want to be on headsets or in the client van can listen to the audio being transmitted from the picture vehicle.

James on Survivor

James says, “The easy way is to make our product in China. There’s a reason that no other portable speaker manufacturer is making their product in the United States. It’s a commitment, but it’s something that both Zac and I share, and we’re employing Americans who are working hard, and are skilled, and really care about our brand.”

The company is so successful that James Demer has given up sound mixing to do this full time and is home with his wife and older daughter.

James concludes, “It’s definitely nice to not live out of a suitcase.”

My Godlike Reputation Part 3

(A tutorial for those without half a century in the business, and a few with)

by Jim Tanenbaum CAS

IT’S OKAY TO LIE TO THE DIRECTOR … REALLY


In my UCLA classes, “Set Politics & Protocol” occupies more than 20% of the class time. No matter how good a mixer you are, if there is an adversarial relationship between Sound and Camera, it may be difficult or impossible to get good sound. Ditto for all the other departments.

Unfortunately, some directors make bad calls when asked to judge sound on the set. In the first place, most directors want to listen to the production track most of the time, but they don’t want to be encumbered more than absolutely necessary. Thus, the ubiquity of lightweight foam-padded open-air headphones. Getting most directors to wear even moderately isolating closed-cup headphones is a losing battle. (On the rare occasion when they do, they often leave their dominant ear free.) Even worse, some young “mixers” have never learned how to mix anything, and simply leave all the pots cracked open halfway for the entire take as the “scratch mix.” This results in many directors accepting an excessive amount of reverb and/or ambience mixed in with the (hopefully) clean dialog iso track as “normal,” so when there is a real problem of too much background or other undesirable noise in the production dialog, they are almost never aware of it.

Therefore, when I know there is a problem that makes the production track unusable, I have learned not to let the director (or producer) simply listen to it over my highly-isolating headphones because the lack of ambient bleed makes the track sound “better” to them. Invariably they will say: “That’s okay—we’ll fix it in post.” Some examples:

1. Excessive generator (or other localized-source) noise. Instead of letting the director hear it as it really is, I will have my boom op surreptitiously cue the mike toward the genny or other source, or open my slate mike, or otherwise degrade the mike’s signal so even a deaf director will know it’s no good.

2. Excessive room reverb. This is more difficult for the director to perceive because the human brain has dedicated echo-reducing circuits that operate subconsciously whenever stereo input from both ears is available. Listening normally doesn’t reveal the hollowness, so when the director does hear it in your headphones, their first reaction is: “There’s something wrong with your mike—look, it’s much closer to the actors than I am right now.” Teaching him or her Acoustics 101 before the first take is not an option. And even if it was, there won’t be time to pad all the off-camera walls, floor, and ceiling.

This is where your pre-production homework comes in. If you found out that this particular director always covers every line of dialog in closer shots, simply don’t bring up the issue at all. If Murphy’s Law intervenes and they don’t, ask for wild lines of the uncovered dialog. If the director objects, just say: “It’s needed to match the covered dialog.” IMPORTANT: If you say anything with enough authority and certainty, most people will accept it without question (even if it’s wrong). Yes, there’s a slight risk that this procedure might fail, but risk-taking comes with the job. Remember: The most important thing experience teaches you is what you can get away with.

On the other hand, your research may reveal that this director doesn’t cover everything, and/or lets the actors improvise every take (in which case you may have much more serious problems to deal with). Now is when your location scouting pays off. Being aware of the parallel hard walls, no carpets on the floor, etc., you will have brought along a pile of sound blankets and your own mounting rigs (an extendable painter’s pole with a 6’ 1”x2” wood batten clamped in it at a right angle will allow you to quickly jam one edge of the blanket against the ceiling, several inches to a foot away from the wall for maximum effectiveness). Also, you will have scoped out places to hide close-in plant mikes (remember that mounting them on a large hard surface eliminates any sound reflections from it—the Boundary-Layer Technique). And you will have short cardioid and super/hyper-cardioid mikes for low ceilings if they are present (remember that interference-tube shotgun mikes are not a good choice for reverberant spaces).

3. Excessive shoe or prop noises. Some directors, particularly older ones, understand the real reason for quieting these sounds: the regular cadence of heel clicks will be disrupted by intercutting between angles and takes, and they need to be replaced with uniformly-spaced Foley. The timing of noisy prop action may not match exactly from wide shot to close-up. Explaining all this takes time, and often exceeds the director’s attention span. This is another example of something you take care of ahead of time, without bothering the director. Should they ask, just give them the shortest, simplest, believable answer—whether it is accurate, or even remotely true, is irrelevant.

4. There are esthetic considerations that are beyond the pay grade of some directors. Here is a chance for the production mixer to make a significant contribution to the picture, but often in spite of, rather than because of, the director. However, this should never be done to the detriment of the integrity of the project, as can occur if the production mixer’s artistic judgment is faulty. To protect against this, it is necessary to preserve the original characteristics of the sound by recording it on a separate iso track. Of course, if the director is knowledgeable and approachable, then she or he should be consulted in advance and advised of the proposed approach; if they’re not, then don’t mention it.

Do we hear dialog through a closed window? Whether or not we will in the final mix, I always record it as well as possible from inside, not just from the outside.

Perspective or neutral sound? This question depends on whether or not a boom mike is in use. And with today’s ubiquitous multi-track recorders, the on-set decisions required to mix two actors at different distances from the camera down to a mono Nagra are thankfully no longer needed. Whenever I can, I try to get the boom in as close as possible when using radios, and iso it. In post, the boom track can be pulled up (advanced in time) and added to the radios to give them the appropriate perspective. (Unfortunately, this would be overkill for most TV productions and documentaries, which also don’t usually have the “luxury” of a second boom op.)
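
A rough sketch of the arithmetic behind that pull-up (the speed-of-sound figure is standard physics; the distances are my own illustrative assumptions, not numbers from the text): sound travels at roughly 343 meters per second, so a boom a few meters farther from the actor than the lav arrives a few milliseconds late, and that extra time of flight is about how far the boom iso needs to be advanced before it is summed with the radios.

```python
# Minimal sketch, assuming ~343 m/s for the speed of sound.
# Distances below are illustrative assumptions, not figures from the article.
SPEED_OF_SOUND_M_S = 343.0

def boom_advance_ms(boom_distance_m: float, lav_distance_m: float = 0.02) -> float:
    """Extra time of flight (ms) from the actor's mouth to the boom vs. the lav."""
    extra_path_m = boom_distance_m - lav_distance_m
    return 1000.0 * extra_path_m / SPEED_OF_SOUND_M_S

# Example: a boom 2.5 m from the actor lags a chest lav by roughly 7 ms,
# so advancing the boom iso about 7 ms lines it up with the radio tracks.
print(round(boom_advance_ms(2.5), 1))  # ~7.2
```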

In spite of all the advances in DSP (Digital Signal Processing), it is still important to fight for room tone and wild lines, recorded with the mike in the position used for the master. Resist the 1st AD saying: “We’ll get it later,” because later never comes, or if it does, the birds have stopped or the frogs have started. Room tone and wild lines need to be recorded as soon as possible, with all the set lighting on. If you have a good rapport with the director (and the producer), they can override a pushy AD. If not, saying: “It will save lots of time and money in post,” usually works. On one show, we had a flock of very noisy birds (common grackles) with unusual, harsh, grating, distracting, and distinctive vocalizations. There was no way to keep them out of the dialog. This time, it was a sharp director who initiated the request for “bird tone,” but the AD and the need to change locations before losing the light prevented it. Since the director insisted, I told the producer I would work Sunday (our only day off on location) to record it. The editor said that the bird ambience was the only thing that prevented a major scene from being looped. (Then it was cut.)

No matter how talented and conscientious you are, ADR will happen at some point. You can help to improve the results by making a note of the mike(s) used in each shot, and their position. This, combined with your room tone, will be almost as good as wild lines recorded on the set.

IT’S OKAY TO LIE TO THE ACTORS … REALLY

1. Actors are justifiably annoyed by having to wear a radio mike, but there are ways to minimize this. Some examples:
a. With a director who stages shots that are impossible to boom, or makes last-minute changes in blocking so that the original lighting for the boom now will throw shadows all over the set, the best procedure is to simply wire everyone every day. If you can establish this at the beginning of the shoot, as new actors come to the set you can just tell them that this is the way the production company does things. (Of course, I try to get the boom in if I possibly can, no matter how far away it has to be.)
b. Sometimes, after a shot, an actor seeing the boom complains about having to wear a radio mike. Rather than explaining about the director’s changing his mind, and the intricacies of booming, I simply lie: “We couldn’t reach your first few lines coming through the door with the boom—I had to use your radio mike for them.”

2. Getting some actors to “speak up” is one of the most difficult jobs I have. For starters, I need to have a conversation about this with the director as soon as I meet him/her. The worst is a director who likes “method” actors—they SHOUT without warning, or whisper. I have found that there is little I can do to disabuse a director of this notion, so I have to look to hardware solutions (or clairvoyance). I did a show, Moving, with Richard Pryor, who never said the same thing twice, or in the same place. I used a Nagra IV-STC with a 20dB offset between channels for boomed shots, and when I had to, I mounted two radio mikes on him with their gains also offset 20dB. I did develop a sort of faux-clairvoyance: when he would take a deep breath I knew he was about to shout (most times), and a short quiet pause before delivering a line meant a lower level (sometimes).

Regardless of how the director wants to handle getting the actors to “speak up,” I have found that it is usually best to let her or him handle it, because if the actor is having a bad day, I won’t be yelled at by them on the set. However, I sometimes manage to establish a rapport with an actor, and can unobtrusively communicate directly with them. But if the director finds out…

IT’S NOT OKAY TO LIE ABOUT YOURSELF

Time and time again, I’ve seen someone who “padded” their credits get caught out on a shoot—you never know who else may have worked on that show.

In the ’80s, my girlfriend was a script supervisor, and she was working on another feature and swapping stories when their mixer said he mixed Blow Out. She knew I did that film, not least because I had flown her to Philly to stay with me. When she told him I mixed it, he explained, “Oh, I boomed that show.” “No,” she replied, “Rimas Tumasonis boomed it.” Embarrassed, he quickly said, “Yeah, Rimas is a great guy, I use him all the time,” and avoided her for the rest of the shoot.

The mixer involved wasn’t a newbie, and had about as many credits as I did, though lacking the two “bigger” shows I had done, and didn’t really need to lie about his resumé. This story got spread around, and I’m sure it didn’t help his reputation.

NEWSFLASH: As if lying about working on shows you didn’t isn’t bad enough: I was just researching IMDb and happened to check the crew listing on a movie I recorded about 35 years ago. I found the cable person listed as the boom operator, and the real boom op listed as “sound assistant (uncredited).” I’m sure this was not the original IMDb crew listing. I was flabbergasted—the nerve of that a…hole. I got out my DVD of the show and watched the credits—neither of them had credits as anything. Since IMDb still also lists the original cable person as “sound assistant (uncredited),” I assume he edited the webpage and replaced the boom op’s name with his own, and sloppily forgot to remove his correct credit as assistant. After finding out about this change, I checked the crew listing for another show I had used him on and saw that he had altered his credit to be a “Boom Operator” too. I immediately submitted corrections to IMDb—who promptly declined them because they were “unable to verify my contribution.” I am credited as the mixer on both shows, and stated in my submissions that I had proof (call sheets), so I will pursue this matter further.

Just because you tell the truth doesn’t mean you always have to tell the whole truth. If you’ve done only a couple of features, you can talk about how the challenges of working on documentaries (or whatever else you’ve recorded) help you when mixing features, commercials, and music videos (or whatever else you’ve worked on). But if confronted directly, admit the true number. There are other tricks, like not starting your invoice numbers with “00001.”

Be careful about accepting jobs you don’t know how to do, figuring that you’ll learn before production starts. Suppose you’re told that there will be a lot of telephone conversations involving hard-wired phones, and you know that prop houses rent a gadget that two phones can plug into: when one handset is lifted, the other phone rings, and when it is picked up, both phones function normally. Don’t think you’re home free. I got such a show, and already had my own CO (Central Office) simulator. I even rigged a lav on the off-camera phone so I could get a clean track of the off-camera actor.

But as soon as the director heard the first rehearsal, he wanted to hear it “like it sounds on the telephone.” Now what would you do? If you tried to connect a line-level input to the phone line (assuming you happened to have an RJ-11 modular phone plug splitter in your kit), you would find that the two sides of the conversation were greatly unbalanced in level. Furthermore, the telephone DC loop-voltage component can cause distortion in both transformer and active-input electronics.

Since I was experienced with telephone circuitry, I also had a battery-operated hybrid phone patch (and the necessary adapter cables) so I immediately plugged it in and had control of the send and receive levels, and a happy director.

THE SECRET TO PERFECT PRODUCTION SOUND

The secret is… there is no secret. There is no one thing that will always produce a perfect sound recording. But there are lots of individual things that will produce big improvements in the quality of the sound you record.

1. 90% of recording good production sound is having the right microphone in the right place. However, learning which microphone to use, and how to boom correctly, or hide (“plant”) microphones on the set, or install radio mikes on the talent, takes many hundreds of hours of actually listening to various mikes in various situations, so I can’t explain that here. Especially when starting out, a good, experienced boom operator is an absolute necessity. I would give them part, most, or even all of my salary to augment their pay on some of the low-budget exploitation films I cut my ears on.

2. 9% of recording good sound is the control of extraneous/unwanted noises. There are three main problem areas: 1. ambient sounds; 2. noise made by equipment (e.g., cameras, lights, and dollies); and 3. noise made by sets, props, and wardrobe (e.g., creaky floors, footsteps, dish and glassware rattles, and clothing rustle). I can help you with that, but in another article.

3. 1% of recording good sound is twiddling the knobs on your gear. Again, fodder for another article. (Well, not necessarily these exact three percentages, but you get the idea.)

NO JOB IS TOO SMALL FOR ME

She has such a lovely… er, voice

Some feature-movie mixers consider it “beneath them” to work on TV movies, commercials, documentaries, reality shows, etc. I work on any kind of show I can get, because there are various advantages, including a certain “cross-fertilization” that occurs, which improves my abilities on all the other production types.

1. I learn certain “quick ’n’ dirty” procedures that are applicable to any shoot, e.g., a gel rattling in the wind can be quickly stopped by stretching a bungee cord (or two in series) across it and wedging a plastic drinking cup between them. (But check for any shadows it might throw on the set.)

2. Less-critical projects allow for “experimenting.” If I encounter a problem when I’m on a major feature, and I know a method to solve it that will produce “good-enough” sound, I can’t afford to take a chance trying something new that might provide better sound, but also might fail and require ADR. On a documentary, even such a “failure” will still be acceptable—and if the method succeeds, I know I can use it on features the next time.

3. More than once, someone I worked for on a non-feature job, or an el-cheapo exploitation film, has gotten me on a “big” show.

4. Some of these shoots turn out to be much more interesting and/or fun than I anticipated. On the reality show, The Jim Henson Creature Shop Challenge, I acquired a new girlfriend.

5. Most of them require much more physical activity than just sitting at my cart hour after hour, and help me keep in shape. Although I’m a senior citizen, I’m not an invalid, and can still run around with a 35-pound kit bag strapped to my chest. (Also from The Jim Henson shoot.)

6. Even internet movies and shorts produced with extremely low budgets can now be completely “professional,” with camera cranes and full lighting/grip packages. They often provide the opportunity to work from a bag throughout, although I do bring a chair for the long dialog scenes that aren’t walk-and-talk.
 

I don’t need no stinkin’ gym
From Breathe (2019), Boom Operator Joe Bourdet mikes male lead Jeremy Guskin, who is also wearing a radio mike.
Just because I mix from a kit doesn’t mean I can’t sit!

I’M ALMOST OUT OF TAPE, I MEAN MEMORY CARD SPACE

Finally, returning to my initial comment, what is required of sound is not accuracy, but believability. “Verisimilitude trumps veracity” as I am fond of telling my students. And what is “believable” changes from generation to generation, or even more rapidly. When “moving pictures” were first shown in public, a black-and-white film of an approaching locomotive caused a panic in the audience. Now, they know better.
 
When it comes to sound, there are changes in believability, too. Until fairly recently, most movies were boomed, and perspective sound was the norm. The proximity effect of directional mikes helped to heighten this effect.

Then radio mikes were developed, and soon evolved to be practical—and then ubiquitous. Now, close-miked sound is gradually becoming the new normal. The Laws of Physics have been superseded, too—at least for movies—and now thunder is simultaneous with lightning, the sound of aerial fireworks with the flash, etc.

There is another new development—several related ones, actually. Movie theaters have changed, and not for the better. In the “good old days,” they showed only one picture at a time, and were reasonably quiet. Today, multiplexes abound, and you can hear the audio from theaters on either side of yours, not to mention the sound of most of the audience eating popcorn and talking. In some theaters, outside traffic noise intrudes, and even the HVAC systems are not silenced properly. This increase in the noise floor has forced the projectionists to raise the volume of the sound track, either making the loudest portions too loud, or compressing the dynamic range excessively, or both.

As a result, the re-recording mixers have to adjust the release tracks to have the softest sounds above the noise floor of the theater, and then compress them so the loudest levels won’t be above whatever maximum the local authorities have ordained. The new HDR (High Dynamic Range) standards can work for the picture, but not the sound in these venues. As mentioned earlier, you can ride the gain manually during production and produce a track that will have an acceptable dynamic range, but what that range is to be is continually changing.
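
To put rough numbers on this (my own illustrative figures, not measurements from any particular theater): if an auditorium’s noise floor sits around 45 dB SPL and local rules cap playback at roughly 85 dB SPL, the usable dynamic range is only about 40 dB, far narrower than what a modern digital soundtrack can carry.

```python
# Illustrative arithmetic only; both SPL figures below are assumptions.
noise_floor_db = 45.0    # assumed noise floor of a busy multiplex auditorium
max_playback_db = 85.0   # assumed ceiling imposed by local volume limits

usable_dynamic_range_db = max_playback_db - noise_floor_db
print(f"Usable dynamic range: {usable_dynamic_range_db:.0f} dB")  # 40 dB
```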

PLAY IT FORWARD

Helping to reduce our trade deficit with China

I was very lucky to have my very first day’s recorded ¼-inch tape sent to Lee Strosnider’s transfer house. He realized that the problems were not caused by my incompetence, but rather by defective rental equipment and incorrect instructions in its use. He kindly rented me his own gear, and taught me enough to get started, so I could bootstrap myself up from there. That was the beginning of a 45-year mentorship, which soon became mutual. Many others also helped me along, and eventually I was able to return the favor (though in many cases only karmically, because by then, those people were mixing in more heavenly locations than I was).

Beginning in 1988, I taught sound part-time at UCLA for 25 years, until the department suffered a mission drift and became more interested in making money than having the students learn anything. I’ve also taught at USC, Art Center of Design, LACC, and many other venues.

But my mentoring is not completely altruistic—I learn from my students as well. Many years ago, I got a panicked call from a newbie on a ShotMaker camera car who was having a mysterious static problem. After half an hour of remote investigating and experimenting over the phone, we found the trouble was caused by the truck’s electronics package located behind the passenger seat, where the excess mike cable had been coiled on its way to the towed picture car. The very next week, I found myself in exactly the same situation, thanks to my crew who had rigged the cables via the same route. It took me only ten seconds to correct the problem.

There are many other advantages to disseminating knowledge. In 1984, when Nagra released their first timecode recorder, the IV-STC, the manual that came with it was primitive and not particularly useful, especially to a user who had no familiarity with timecode. When I complained, I was told: “If you don’t like our manual, write one yourself.” With the help of the late Manfred Klemme, the Hollywood Nagra rep, we produced a thin tome that I titled Using Timecode in the Reel World. It was an instant hit (not least because it was free), but the IV-STC customers wanted even more information, particularly about timecode in general. I did most of the work on the Second Edition of UTCitRW, with Manfred providing the printing and distribution. At that point, he had no more time to spare for this enterprise, so I took over completely with the vastly enlarged Third Edition, and sold it for $29.95. The cost didn’t discourage the purchasers, and I netted over $5,000 in the following years, until timecode was so well-known that the book was no longer needed. I also wrote articles for Mix magazine.

Now that I was a “published author,” as well as a “university professor,” new doors opened for me, and in 1995 I was invited to Japan to present a technical paper at an international symposium on production recording (InterBEE95).

More recently, in 2010 and 2011, I was sent to China to sell American-made sound gear to Chinese mixers, and since I had worked on the last three years of Avatar, I was a celebrity, as well as an authority. Everywhere I went, women wanted to have their picture taken with me. (And other things, but I’m monogamous.)

Ha Noi audio booth (I took my shoes off inside, but kept my black socks on)

In 2012, I went to Viet Nam to train audio engineers for VTV, the state-run TV network, and was paid $17,075 plus expenses for the 6-week trip. (I would have gone for free, but I didn’t tell them that.)

I’m not being sexist—there are large numbers of women in scientific, technical, medical, and managerial positions in Viet Nam.

At present, I continue to mentor by taking phone calls from other mixers, giving advice and feedback to equipment manufacturers, and writing articles for Production Sound & Video magazine.

THAT’S A WRAP

To wrap up, in the early ’80s, if I remember correctly, I got a call for an interview for a low-budget feature. Thanks to a production office screw-up, another production mixer arrived at the same time. The secretary was embarrassed, but there was no problem—it was Bruce Bisenz, with whom I was on friendly terms.

I got in to see the director first. Soon after the interview started, he told me that he didn’t want “good sound,” and explained: “I’m tired of the re-recording mixers messing up my sound. I want the background sounds, birds, traffic, stuff like that, mixed in with the dialog so they can’t change it from the way I want it.”

This was against everything I had learned so far, because not only could unexpectedly low-level dialog not be pulled up without increasing the ambient sounds, and thus calling attention to the change, but many backgrounds, like traffic, were unpredictable, and could suddenly drown out an otherwise wonderful performance. I had recently bought a new Nagra IV-S recorder (2-track/stereo, but before timecode) and politely suggested that I could record the ambient sounds separately from the dialog, so he could adjust the relative levels to his liking later, and then mix them together before giving the mono track to the editor.

“Thanks, but no thanks. Don’t call us—we’ll call you,” was his reply, though more judiciously put.

I think Bruce overheard all this, and told the director exactly what he wanted to hear, because when I ran across him somewhat later, he told me he got the job. Then the project folded.

Text and pictures ©2019 by James Tanenbaum CAS, all rights reserved.

The Comeback: Recovering From Rotator Cuff Surgery

by Bryan Cahill

Act 1:

“Mr. Cahill,
Unfortunately, your MRI shows you have a full tear of your supraspinatus tendon, along with a partial tear of your biceps tendon. Please schedule an appointment with an orthopedic surgeon as soon as possible.”

Crap! I had a pretty good notion even before the diagnosis, given the way my shoulder was feeling, but I was hoping for the best. The diagnosis clearly doesn’t sound good. But what does it mean? Time for a little exposition.

The supraspinatus is part of the rotator cuff of the shoulder. A supraspinatus tear is a tear or rupture of the tendon of the supraspinatus muscle. I get opinions from two surgeons. Both tell me the only option for a full tear is surgery as the humerus bone will start pushing upward past the shoulder joint which is not very funny (Dad joke). Like it or not, I’m heading to surgery.

So, this is my story and everyone else’s will be a little or even a lot different. I’m hoping that sharing my experiences here might be helpful to others facing similar situations to my own.

Cut To: The surgeon refers me to a pain management physician. This is new to me but it seems quite common now to have someone on the team who only deals with post-op pain. My surgeon confidently states that with the pain management physician, my recovery process should be relatively painless. How wrong he is! Even so, you’ll want to make sure you have all your prescriptions at home, ready to use once you’ve had your surgery.  

The surgeon’s office sells me a cold therapy system to ease pain and swelling. Later, I find that I could have paid a lot less by shopping online. They also sell me a sling that I will be wearing most of the time, including while I sleep for the first four weeks.

My surgeon also recommends a Regeneten implant made from sterilized bovine tendon. Smith & Nephew, maker of the implant, claims that it accelerates the healing process over traditional surgery by six weeks, allowing patients to begin physical therapy almost immediately. The implant costs $3,000 and is not currently covered by our insurance. I feel the benefit of being able to boom six weeks earlier outweighs the cost and opt for the implant.

I am advised not to push, pull, or lift anything weighing over a pound for the first six weeks following surgery! I’ll also need loose-fitting shirts that open in the front as it will be impossible to pull a shirt over my head for quite a while.  

The simplest tasks can become incredibly complicated, especially if the affected shoulder is on your dominant side (I was fortunate to have torn my non-dominant cuff). I’ll need a lot of help from my wife and kids after surgery. Post-surgery sleeping will take place either propped up in bed or in a recliner. I buy the ugliest recliner in the world off of Craigslist and put it out for bulky item pickup when I no longer need it.

Most sources will tell you that driving isn’t recommended for the first few weeks after surgery. My surgeon told me driving was fine, but I won’t be driving with my hands at ten and two as the repaired shoulder won’t allow my left hand to get to “ten” for many weeks.

Act 2:

I go under the knife on March 1, 2019. My surgeon finds not only the torn supraspinatus and biceps tendon, but also a torn subscapularis, along with tendinosis and bursitis in the joint. Two anchors are screwed into my shoulder to hold the repaired tendons in place. The surgeon finishes the job on my biceps tendon, cutting it loose and reattaching it to one of the anchors with sutures that look like thin, braided nylon rope. The other two tendons are also reattached in a similar fashion.

How could my shoulder have gotten so messed up? Rotator cuff tears often occur from wear and tear of the tendon over time. The likelihood of such injuries increases with age and from performing work overhead. Does this remind you of any boom ops you know?

After only a few hours in an outpatient surgical facility, I am sent stumbling to my car so that my wife can drive me home. Now, I’m no greenhorn when it comes to pain. I have been riding and falling off of motorcycles for more than forty years. When I decide to do something, I go all in.  I’ve had more broken bones, stitches, sprains, compressed vertebrae, etc., than I can count. And, this surgery is an arthroscopy. Little incisions should equal little pain, right?  

The pain management doctor prescribes a cornucopia of medications, including Lyrica, tizanidine, tramadol, hydrocodone, and 600 mg ibuprofen. Even armed with my own little pharmacy of meds, I constantly fantasize about cutting my arm off for the first six weeks. I’m also icing for hours every day. I add topical, over-the-counter remedies like Aspercreme, arnica, and then CBD oil to the mix, but still have intense, stabbing, omnipresent pain from my neck to my elbow.

Sleeping propped up, with my arm on a pillow, in the recliner, or on the couch is almost impossible. I pace the house at all hours of the night moving from one spot to another and getting about two hours of sleep on average. After the first week, I lay off the hydrocodone because I return to driving the morning school carpool. I steer by holding the wheel with my repaired arm resting on my legs at six and my other arm at noon. After dropping off the kids, I work an eight-hour day at Loyola Marymount University, go to physical therapy twice a week, and exercise twice daily at home. I quite frankly don’t know how long I can keep it up.  

According to one surgeon, the recovery goal with rotator cuff surgery is to get ten percent better every month. Even when surgery and rehabilitation are successful, the repair is prone to re-injury up to twelve months following the procedure.

My physical therapy (PT) begins less than a week after surgery. This is where the hard work begins. It seems that once I feel comfortable and capable of properly doing any of the exercises, I’m given something new, harder, and more painful to do.  

The worst part of PT though is the manipulation by the therapist. My surgeon and my physical therapist both refer to the repair as “tight.” Knowing how I use my shoulders, my surgeon took extra care to make sure everything was very firmly held in place. This is apparently better for the long-term prognosis but it makes regaining mobility that much more difficult.

Every time I’m about to have a post-op exam with my surgeon, my therapist worries that we’re not making enough progress and really goes to town on my shoulder. She is constantly pushing the limits of my range of motion in all directions and trying to get the joint to move a little further with every appointment. I feel like a roasted chicken having a wing torn off.  

There are many days when I leave PT in greater pain than when I came in, and I wonder if the surgical repair has somehow been damaged. On top of the appointments, it now takes four hours to get through the exercises I do at home twice a day. So, I split them, doing half in the morning and half at night. The gains are often incremental and the pain so great that it is hard to stay positive.

Finally, six weeks after surgery and quite suddenly, the pain lets up enough so that I’m sleeping four hours a night and I want to keep my arm. Although it may not seem like it, this is a huge step forward. I can burn the sling and start thinking about things other than how badly I hurt.

Act 3:

A montage spanning the next four and a half months. I continue to do PT, exercise at home, and trade lots of low fives with the PT staff; pain decreases, and strength and range of motion increase to the point where I sprint up the steps of the Philadelphia Museum of Art with a Bill Conti theme blaring. Okay, it is not that dramatic but, on July 29, less than five months after surgery, my surgeon removes all limitations and tells me I can start booming again.

My first opportunity comes on August 5; five months and four days after surgery. I take the Airframe loaned to me by Levitate Technologies to day play on the BET series Twenties, with Von Varga mixing and Yervant Hagopian on utility duties.

Although there aren’t any particularly hard setups on this day, the Airframe gives me a little more confidence in returning to boom work. It provides lift assist and doesn’t restrict my motion in any way. Comfortable, light, and slim, it lets me work all day with ease.

Epilogue:

On Thursday, Sept. 12, 2019, I bring the loaner Airframe out to Chris Walmer on the set of Schooled at Sony Studios, along with some prototype “cassettes” that offer even more lift. He tells me he used it for some ten-minute take the following day and that it “worked great.”  

I believe it is possible that exoskeletons like this might not only help prevent injury but also get people like me back to work sooner after an injury or even save the careers of experienced boom operators who previously might have gone on permanent disability.

Every day, I am still icing, stretching, and exercising. Getting to where I am now has been an arduous journey but it feels great to be back on set!

Mixing Live Singing Vocals on Cats Part 1

by Simon Hayes AMPS CAS

As I drove into Leavesden Studios in London, UK, to meet Tom Hooper to discuss his next project, Cats, I knew that the film was going to be a huge challenge, having seen the stage show. I hoped the conversation we were about to have would present possibilities to deliver a first-class soundtrack. I was aware that Tom wanted to record the vocals live, but I’d also had conversations with various people involved in the project who thought it just wouldn’t be possible to go completely live due to the frenetic choreography and the difficulty our performers would have singing and dancing for the length of a film industry day.

The first question I asked Tom was, “Is it your intention to shoot the film with completely live sound?” He looked at me quizzically and with a wry smile he replied, “Of course, that’s why you’re here.” He moved swiftly into explaining his vision and how we were going to shoot the film, and the details of what he wanted to achieve visually and sonically. It was early on in the proceedings, and at this point the DP had not been chosen. As I listened intently, Tom told me about the VFX tests he had been doing which involved a new process that hadn’t been used before. We would shoot completely live action on the set, with actors who could sing and dance, and then in post, the actors would have fur added to them and become cats, whilst completely retaining their body movements and, most importantly, their facial expressions.

Tom and I spoke about the audience and how cinemagoers judge performance and the leap of faith they must have to trust what they are seeing and hearing on screen. Tom’s position in filmmaking has always been that audiences instinctively believe performances if the dialog is original and recorded on set. We spoke about how this is deeply rooted in the human subconscious and is instinctively part of our fight-or-flight mechanism, which has been one of the keys to our species’ survival. When we were hunter-gatherers, human beings had to assess every interaction by reading facial expressions and listening to the tone of voices; did a stranger want to steal from us, kill us, or collaborate in a helpful manner?

We do not switch off this subconscious assessment when we walk into a movie theatre to watch a film. We look at the actor’s facial expressions and listen to the tone of dialog and we instinctively wonder whether we trust what we are experiencing. Is the lip sync perfect? Does the dialog inflection match the facial expression? Is there an acoustic in the vocal that does not match the location of the actor we are watching on screen? All of these questions are being asked at a deep base level by every moviegoer. With this in mind, Tom continued to explain that the VFX process would be presenting audiences with something they had not seen before, human “cats.” Having the performers sing live throughout the film was one of the main strategies Tom had for helping the audience immerse themselves in the narrative.

Sound/Music team equipment row


As our conversation continued, I was getting more important information from Tom that would affect my pitch on how to capture the live singing. Tom explained that he wanted to showcase modern and exciting choreography that would involve break dancing, street dance, and parkour (free running, a dynamic and explosive style based around gymnastic movements), as well as more classical styles like ballet. Up to this point, I had thought that due to the fur being painted onto the bodies in post, and the actors wearing mo-cap suits, it would be an ideal opportunity to place lavaliers on the body in vision, and that they would not have the usual clothing rustle. This workflow would not need to be signed off by the VFX team as they were adding a layer of fur anyway. Tom described the dancing tests he was excited about, and the type of rolling, tumbling, and break dancing the choreography would contain. I quickly realized I had to come up with a better strategy. Mics on the chest/solar plexus would be prone to impact from the dance moves, and the excessive head movements of the style of choreography Tom favored would mean the actors’ vocals would be going off axis regularly.

The first idea I pitched was using DPA ‘cheek’ mics on miniature booms that attach to the performers’ ears. Looking back now, I was extremely fortunate that Tom didn’t want to use this process, citing the fact that even though the VFX would paint fur onto the actors’ faces, the fur was going to be translucent so facial expressions could be retained underneath. Tom felt the cheek mics would need to be digitally removed before the fur could be added. He wasn’t concerned by the financial impact; Tom’s concern was potentially losing valuable facial expressions from around the mouth during the mic-removal process.

I had to think fast and come up with another idea. Fortuitously, my idea was actually going to provide our film with higher quality, richer vocals than the cheek mics. I asked whether I could attach a DPA lavalier to the forehead of each actor. I have always known this is a better position than the cheek for capturing high-quality vocals, but I had never done an A/B comparison. This workflow was not my first pitch because it is notoriously difficult to stick lavs to the forehead for a whole day without them losing adhesion, especially in a place where humans perspire, coupled with the fact that the actors would be dancing all day in very high-energy routines. In my mind, this would result in re-sticks, lost takes, and actors with sore skin on their foreheads within a week.

Tom said, “Simon, there is so much expression in the forehead that I really don’t want to jeopardize.” I explained that this really was the only option if he wanted a guarantee under these very difficult conditions that I would deliver vocals that would not require any ADR.

Now we were negotiating. There are few things that make Tom Hooper happier than a negotiation, especially one that will creatively benefit his film. “How good will the quality of sound be if we put the mics on the forehead?” he asked. “Absolutely brilliant, Tom, it will be perfect and the greatest benefit will be that the singing will never go off mic, and most importantly for music vocals, the perspective won’t change when you cut from wide to mid to tight and back again.” This resonated with Tom, as we had found this out while testing on Les Miserables.

Spoken dialog in films benefits from the perspective of the mic matching the camera angle, but with singing, any change in perspective, rather than helping the audience believe the performance, creates the opposite effect, drawing attention to the picture-editing process in an extremely uncomfortable, jolting manner.

This makes sense; when was the last time you heard any type of singing accompanied by music that has anything but a close perspective, whether it be pop, rock, jazz, country, or blues?

Tom fixed me with a serious stare and asked, “If we put the mics on the forehead, will they sound as good as a close boom?” I replied, “Yes, I believe they will, because they will actually be closer than a boom could get and we will NEVER get caught unprepared by a sudden head turn. More importantly, I’m assuming we are going to cover this film with multiple cameras shooting wide, mid and tight at the same time to capture a perfect performance from all angles?”

“Yes, of course,” Tom replied, “and the sets are important to me. We are not shooting on green screens, and the cameras will often be handheld and frenetic so I’m not sure painting booms out is a viable option.” I responded, “I’m assuming from what you’ve described about the look of the film, your DP will probably use a lot of hard spotlights to achieve that.” Tom smiled, “Yes, that’s for sure. It seems we are certainly going to need to use the radio mics to achieve this; how can we make them work?”

Great, we were at a stage of the negotiation where I knew I had presented my case convincingly enough for Tom to help me find an answer to mic placement. “How about this compromise, Simon,” Tom said, “you keep the mics clear of the lower fifty percent of the forehead directly above the eyebrows, where most of the facial expression lives, but you can have the upper fifty percent, below the hairline.” I answered, “We’ll get perfect vocals!” A big grin broke across Tom Hooper’s face; he reached out to shake my hand and said, “We’ve got a deal.”

Now, all I had to do was work out how to attach the mics and make sure the actors could perform and hear the music, keeping the vocals clean while collaborating with the Music Department. Most importantly, I had Tom Hooper’s support for my workflow; the first big hurdle had been crossed.

My next port of call was a meeting with my old friends, Supervising Sound & Music Editor John Warhurst and Music Supervisor Becky Bentham. John had worked on Les Miserables and was booked for Cats, along with Becky, with whom I have done many musicals, starting with Mamma Mia! more than ten years ago. They are both strong allies and we have a shorthand with each other. We completely trust one another and have a very strong relationship. The first part of our agenda was the team. We agreed that there would be no demarcation between the Sound and Music departments on Cats. This collaborative methodology had been extremely successful on Les Miserables, and we agreed our team would be called Sound/Music. John and I would head up that team on set every day, with all the team members taking instruction from us, with support from Becky on the set and in an office at Leavesden Studios, where we were shooting the entire movie.

Tom’s vision, along with Eve Stewart, his Production Designer, was that all the sets would be three times bigger than reality (doors, tables, chairs—everything!) because cats are about three times smaller than humans. We were fortunate that the entire film would be shot on soundstages with zero location work.

My next discussion with John and Becky was about Pro Tools operators. We had been very fortunate to work with a brilliant operator on Les Miz who has since retired and moved into another career. Becky has helped me find absolutely outstanding Pro Tools operators throughout my musical career and I really trust her judgment. She has her finger on the pulse of what is happening in the music industry and knows who the best technicians are, as she is based at Abbey Road Studios, where she works closely with their staff and freelancers. Becky is also very aware of an important point: not every good music editor is the correct fit for Pro Tools work on films. Music editors in the music industry can work at a speed that is somewhat less pressurized than a film set. Every decision is based around music, whereas on a film set, the priorities are the camera and the visual image. To expand on that, a Pro Tools operator working on a film musical has to be extremely fast and quick-thinking, and realize that no one is going to wait. With this in mind, we discussed her recommendations and I quizzed her on their personalities, technical expertise, and whether they would be able to work in a highly pressurized environment for more than three months, twelve hours per day. We kept coming back ’round to the same name—Victor Chaga.

I met Victor the next day. He is a Russian who moved to LA as a young composer looking for his big break. He instantly struck me as being made of the right stuff as he had the emotionless confidence I associate with many Russians, coupled with the desire to provide excellent service I associate with American culture. When I interview music editors for a position on my team, I always ask how they would react to a number of precise extreme scenarios on set. I was confident we had found our guy!

The hiring of a first-class music editor/Pro Tools operator was incredibly important on Cats, because the whole film is driven by tempo—unlike Les Miserables, where tempo could be manipulated and sometimes ignored in favor of performance. The rhythmic dance routines of Cats meant that we needed to be able to drift in and out of pre-recorded playback tracks and live piano constantly. Tom still wanted to use the live keyboard whenever possible to allow the actors the opportunity to play with timing and let their emotional performance take priority. This was mainly between the songs or in small breaks in the tempo. Victor would be playing his Pro Tools rig as if it was a musical instrument, accompanying its own highly creative ‘dance’ with the live keyboard player, so the two instruments could immerse the actors and support them in their performance.

Everyone involved in Cats, from Tom Hooper and Producers Debra Hayward and Ben Howarth, and Executive Producers Eric Fellner and Tim Bevan from Working Title, to Andy Blankenbuehler our Choreographer, kept expressing how dynamic the choreography was, and how loud the music needed to be for the actors to dance to. I needed to find a way to get more volume to the actors than would be possible using the ‘earwigs’ and induction loops we had used on Les Miserables.

It became obvious to me that for the extreme SPL’s that some of our performers would require, we needed to use personal ‘in-ear’ monitors that would be custom-molded to the performer’s ear canals. I already had a great relationship with Puretone, who supply all of my musicals with custom-molded earwigs and also supply many rock performers and musicians with custom personal in-ear monitors to use on stage. They’d actually just supplied the same product on my last film, Danny Boyle’s Yesterday, for the star, Himesh Patel.

I decided that Cats needed a completely new strategy for IEM’s and that my production sound mix for Tom Hooper and his editor, Melanie Oliver, would not always be suitable for the performers to listen to as we were shooting. I really wanted them to feel supported. I also knew that many of the cast came from a music industry background and would expect a personal mix in their IEM’s, something a front-of-house mixer would usually provide for them on stage. I have been working with my 2nd Unit Sound Mixer, Tom Barrow, on a number of films now. Tom’s former career was working in a music recording studio as an engineer, so it seemed to make complete sense that he should join our team as the Cats IEM mixer.

I explained to production that hiring Tom Barrow would be an additional expense that they were not anticipating, but that without him, meeting Tom Hooper’s expectations would mean parking a huge Outside Broadcast (OB) truck at each stage. I explained to Working Title that if they wanted me to stay ‘small’ enough to not require the expense of OB trucks, they had to accept that Cats was completely unique and would require a large Sound/Music team, potentially larger than any previous film ever shot. Luckily, my relationship with Eric Fellner and Tim Bevan at Working Title and their Head of Production, Sarah-Jane Wright, goes back fifteen years over multiple films. They told me they trusted me and would facilitate whatever I needed to support Tom Hooper’s vision. Of course, they did let me know they were not expecting any ADR, so the pressure was on! Their Unit Production Managers, Jo Burn and Nick Fulton, were also incredibly supportive throughout.

I decided to hand the IEM workflow and process over to Tom Barrow as I prioritized the vocal recordings and the rest of the music workflow.

We were actually creating a world for our performers to inhabit that would sonically block them off from outside voices and sounds with their IEM’s. Our cast was given the choice of wearing a single IEM or a pair. Many of our cast came from a music industry background, so most preferred to use a pair. This meant that the IEM mixer had to become a conduit for all the information and effectively provide a comms mix alongside the dedicated performer mixes, so the performers could communicate with each other, and so the director, 1st AD, keyboard player, music producers, and Pro Tools op could communicate with the cast.

At this point, it was fourteen weeks out from principal photography and it was becoming clear that I was spending many hours of the day on telephone calls or in meetings. I needed to start splitting the workload if we were to achieve this monumental task and be ready to shoot. To provide support and creative input, I needed my longtime key 1st assistant sound (the UK term for boom operator) to start work. Arthur Fenn has worked on every single one of my fifty-eight movies. We have a collaborative approach and I consider Arthur to be one of the best boom op’s in the world. The industry has changed over the last fifteen years, with large, multi-camera VFX movies becoming the norm. Arthur has, in turn, become a radio mic expert. His skill is collaborating with the actors. He has an engaging personality, an easy rapport, and a fearlessness that means the cast quickly come to regard him in much the same way they would a personal costumier. He also collaborates very closely with the Costume Department from the beginning and creates a strong working relationship with that team. I approached production and explained that it was time for both Arthur and myself to be hired full time, fourteen weeks out. That would be the requirement if they wanted us to be ready on time. They agreed, and I sat down and brainstormed with Arthur regarding how we could attach the mics to the cast’s foreheads.

Arthur strongly advised against trying to stick the mics on their foreheads for the reasons mentioned previously. When told about the IEM’s the cast would be using, he expressed his concern about the practicalities of so many cables being unattached and pulling on the mics and IEM’s during explosive choreography, so we came up with a plan. It was born out of the knowledge that some puppeteers have used tennis players’ ‘sweat bands,’ so we started discussing that strategy, and that led to ‘lav straps,’ which are a relatively new product that actors wear around their chests. Arthur said, “What we need is a lav strap for the head which keeps the cables tidy, can be washed overnight if necessary, and can also double as a sweat band, which could be helpful to the performers.” This is where our strong relationship with Simon Bysshe of URSA straps came in. We have known Simon since he was a young trainee who did a film with us, and when his company URSA was in its embryonic stages during Guardians of the Galaxy, Arthur and I did a lot of work with him on product development. We also tested his prototype straps on set. I sent Arthur off to meet Simon and together, they came up with a design for the bespoke Cats mic and IEM head strap system.

Arthur and I went to watch a choreography rehearsal to get a feel for the movements and to start to consider where we would rig the mic transmitter and IEM receiver packs. The choreography team had continually expressed concern to production that the performers may get injured if they landed on a pack. Tom Hooper had called me into a meeting to ask that we use the very smallest packs available.


We arrived at the rehearsal with a bunch of different generic URSA straps (thigh, ankle, waist, etc.) and two wireless packs. We had a couple of hours with one of the dance performers who tried different positions on his body, and then showed us lots of different dance movements from ballet, tap, street, gymnastics, and break dancing to represent the diversity of our cast’s routines. The choreography team tried to assess which positions were best. It quickly became clear that there wasn’t “a best position,”  and that our workflow for each performer throughout the film would depend on the unique movement they would be doing in the scene or shot. This meant that every performer would need to let us know how they would like to be rigged each day, and sometimes on a shot-to-shot basis, to avoid the packs restricting their movement or causing the risk of injury. We had the usual options of ankle, thigh, and waist but we also developed a shoulder/chest strap with URSA that was used for a large percentage of the film. This kind of ‘holster’ strap, with choices of where the packs were placed—whether that be pec, armpit, or shoulder area—very rarely made contact with the ground. At the end of this test, Tom Hooper arrived and asked the performer, “Can you dance unrestricted wearing these packs?” His reply was “Yes,” and Tom turned to Arthur and myself and said, “Great work, let’s keep moving forward. We have some news from VFX that I need to discuss, so let’s meet tomorrow.”

The next morning, Arthur and I attended a VFX meeting set up by Tom Hooper’s producer, Debra Hayward; Producer/1st AD Ben Howarth; Executive Producer Jo Burn; and UPM Nick Fulton. Tom Hooper’s Costume Designer, Paco Delgado, and members of his team were present as well.

The VFX team explained that they would be recording each performer’s body movements using a box the size of a small hard drive attached to the performer’s body, linked to tracking markers (tiny buttons) in key points on the body. The big question production wanted answered was, “How big is this box, and who is going to rig the performers?”

The VFX Department explained that they needed the hard drive capture boxes attached because it would not be technically possible to capture that many different performances by radio link as that technology simply didn’t exist. The movements of each performer would be recorded on the hard drives, which would all be jammed to timecode, and at the end of each day, the information from each hard drive would be downloaded.

The VFX team needed help rigging the actors; generally, VFX teams don’t rig and de-rig actors. The whole process was becoming utterly unique, as there would be elements of costume on some cast, but many of the chorus would simply wear a motion-capture suit. Rigging them with sound and VFX equipment would be the most time-consuming task. Another factor that had become clear in the last few days was the number of cast: about fifty in EVERY shot!

I suggested that we needed to forget about standard film industry workflow and protocol, and treat the film as a unique entity. I explained that it would not be possible for fifty performers to go through Costume, then Sound, then VFX before coming to set every morning; Tom Hooper wouldn’t get his cast until lunchtime! I said that it would be counterproductive having the Costume Department dress the cast in mo-cap suits, then having the cast undress to have sound equipment strapped to them. I suggested that my sound team could rig the actors with the VFX and sound equipment, with VFX present during the process to technically assist and troubleshoot their equipment.

We started talking about the amount of time to do the rigging, and the number of people that we would require. It became clear that we needed to throw a large team at the issue to enable the fifty cast to be on the set within an hour of arrival at the studio. After much discussion, we decided that Arthur Fenn, 1st AS, would head up a team of sound “suit techs,” which became our team on Cats.

Five 3rd assistant sound people would work each morning and evening rigging and de-rigging the cast under Arthur’s instruction, each with a costume dresser and a VFX technician as a three-person team. They would dress a single cast member, then move onto the next. We would try wherever possible to keep the same three-person team working with the same actors each day so they could develop a relationship with the actor to help them feel comfortable and supported. When the cast were dressed, the five suit techs would come to the set and join the rest of the Sound Department. They would stand by to troubleshoot ‘their cast,’ moving packs around depending on the dance routine or movement. They were on call to make sure their actors were comfortable, in the same way a personal costume dresser would be on a ‘normal’ film set.

I cannot stress enough how much time and management coordination went into this process with Arthur and his team of suit techs, using Excel spreadsheets to work out the daily timings and strategy based on the next day’s advance call sheet. It really was a tour de force in organization. The other really helpful factor was that our suit techs had experience working on ‘normal’ films and had radio mic’d actors before, so they were arriving on Cats with a skill set already in place.

Arthur also coordinated with the VFX team and URSA’s Simon Bysshe. Simon was working full time on unique URSA strap solutions for Cats, which included custom-sized straps to attach the VFX hard drive packs to the bodies of the performers. Everything was coming together, but we still had a lot of preparation to do.

I sat down with John Warhurst, Supervising Sound/Music Editor, Melanie Oliver, Picture Editor, and her 1st Assistant, Alex Anstey, and discussed the fifty cast members. Tom Hooper and Working Title did not want to go down the route of using an OB truck outside the stages, as Tom wanted to be able to move fast from stage to stage. I was also concerned that if I was based off the set in a truck, I would be creatively divorced from proceedings on the set. I feel that the closer I can be in proximity to the director, the more supportive I can be and positively influence the recording of the performance on the set. I can’t stress this point enough.

The custom URSA headband for the microphone, comms, and VFX sensor


We continued by talking about how many tracks the Avid could ingest, and how many tracks I felt I could actually mix on the set successfully. I invited my other 1st Assistant Sound, Robin Johnson, into the conversation. Robin has another diverse skill set that complements Arthur and me, and is a reason he has been with us on our journey since 1998, joining us on our second movie. Robin is our technical genius, qualified as a scientist and an absolutely first-class audio engineer, computer expert (Mac & PC) and, of course, boom operator. I asked Robin to come up with some frequency plans to try and determine how many radio frequencies we could get reliably working together on one stage, bearing in mind the additional frequencies that would be necessary for IEM’s and comm’s, as well as the radio mics.
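Robin’s actual planning tools aren’t described here, but the heart of any plan like this is checking that third-order intermodulation products (2 × f1 − f2) don’t land on top of an active carrier. A minimal sketch of that check, using made-up example frequencies and an arbitrary guard band, might look like this:

```python
# Minimal sketch of a third-order intermod check for a wireless frequency plan.
# The frequencies and guard band below are illustrative placeholders, not the actual Cats plan.
from itertools import permutations

plan_mhz = [606.300, 607.150, 608.425, 610.900, 612.775]  # hypothetical carriers
GUARD_MHZ = 0.3  # keep intermod products at least this far from any carrier

def third_order_products(freqs):
    """Classic 2*f1 - f2 third-order intermod products for every ordered pair."""
    return {round(2 * f1 - f2, 3) for f1, f2 in permutations(freqs, 2)}

def clashes(freqs, guard=GUARD_MHZ):
    """List (product, carrier) pairs where an intermod product lands too close."""
    hits = []
    for p in third_order_products(freqs):
        for f in freqs:
            if abs(p - f) < guard:
                hits.append((p, f))
    return hits

print(clashes(plan_mhz) or "No third-order clashes within the guard band")
```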

Headband microphone with Bubblebee fur to stop wind noise in extreme dance moves


When we started looking at our planned workflow, we needed a little more than fifty frequencies to work in close proximity on one stage. The most radio mics we could assign to actor vocals was twenty-four. After breaking down the script in conversations with Tom Hooper and the music team, we knew that eighteen soloists would need mic’ing every day as they were key cast members that would sing solos. The rest of the cast would sing the chorus. When I spoke to Tom Hooper about my plan to only mic twenty-four of the cast, he was unconcerned. He explained that the chorus had been cast primarily for their extraordinary and diverse dancing ability. Tom did not feel we needed all of them mic’d because of the time it would take to get fifty performers wired, as well as the fact that many of them were not singers but dancers.

We worked out a plan. Eighteen soloists would always be mic’d and the other six radio mics would be assigned on a day-by-day basis to the strongest singers in the chorus. This was based on which song we were shooting and how to support the soloists on those tracks with the exact mix of baritones, altos, tenors, etc. The decision on whom to mic was made by the Vocal Coach, Fiona McDougal, and our two Keyboard Players, James Taylor and Mark Aspinall. They were very adept at working out exactly where to place the six ‘floating’ radio mics on a scene-to-scene basis, as they knew the cast’s voices, their strengths, weaknesses, and how they could be played to enhance the backing vocals. Phew—we only had to run twenty-four radio mics EVERY scene, EVERY day!

Our next step was working out how to deliver those twenty-four tracks to Editorial daily.

I had been waiting for the Zaxcom Deva 24 to arrive specifically for Cats. Glenn Sanders and Roger Patel at Everything Audio, the UK Zaxcom distributor, managed to get me a very early machine to use on the additional photography we were doing on Aladdin, right before we filmed Cats. Being able to use the Deva 24 for a few weeks before bringing it onto Cats was very important to me. I found it very similar to my Deva 16 in the usual Zaxcom style. I am a firm believer in making Production Sound Mixing as intuitive as possible. I believe the equipment is only there to help and support the creative process of the director and sound mixer. The equipment I am using is no different from the instrument a musician plays, and it would be unthinkable for a musician to switch to a new instrument right before the most creatively complex and demanding concert of their career. I decided that track count alone would not dictate which recorder and mixer panel I would use. I was going to need a second recorder because of the high track count; this would be the highest track count I had recorded and mixed in my career, with an extremely diverse group of vocalists, all with different styles and levels.

I wanted to give myself the best chance of capturing every subtlety and nuance of the performances, with the most appropriate gain structure. In order to give myself the best chance of delivering this, I decided to stick to equipment I had been using for years, but make it more expansive. I was already using a custom-built twelve-channel Audio Developments AD149 desk; their previous largest was ten channels. The company’s MD, Tom Cryan, had built my twelve-channel desk specifically for me. Of course, like all reliable production sound mixers, I had a spare built at the same time, just in case I was in some far-flung location and had a failure. It soon became apparent to me, however, that if I linked the two mixers together to deliver twenty-four tracks, I would not have a spare if there were a failure. My mixers were unique and we would be completely reliant on them for Cats. I made the decision to task Audio Developments with building another two from scratch, so I had complete peace of mind that if the unthinkable happened on set and both mixers died or developed faults, I could swap them out and the shoot could continue.

The next decision was recorders. I decided, with Editorial’s permission, to link the Zaxcom Deva 24 with my Zaxcom Deva 16. This would give me access to forty tracks. I would then create two mixes for Picture Editorial, allowing them to expand on choruses if they wanted to by using both mixes in the Avid. Mix Track 1 contained a mix of the first twelve ISO tracks, and Mix Track 2 contained a mix of ISO tracks thirteen through twenty-four. We assigned the first twelve ISO tracks to our key cast members who would be in every scene. When we had a guest star in the cast for a specific song or two, we would decide the most appropriate workflow. I would also try to keep the actors with the highest amount of solo singing on the first twelve ISO tracks and Mix 1, with Mix 2 being a ‘supporting’ mix, which could be added or subtracted by Melanie Oliver, the Picture Editor.
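As a rough illustration of that routing (the ISO groupings follow the scheme just described; the labels themselves are mine), the two editorial mixes were simple splits of the twenty-four vocal ISOs:

```python
# Sketch of the editorial mix-bus layout described above.
# ISO ranges mirror the article (1-12 -> Mix 1, 13-24 -> Mix 2); the labels are illustrative.
MIX_1_ISOS = range(1, 13)    # key cast, most solo singing
MIX_2_ISOS = range(13, 25)   # supporting/chorus radio mics

def mix_assignment(iso_track: int) -> str:
    """Report which editorial mix a given vocal ISO track feeds."""
    if iso_track in MIX_1_ISOS:
        return "Mix 1 (primary vocals)"
    if iso_track in MIX_2_ISOS:
        return "Mix 2 (supporting vocals)"
    return "not a vocal ISO"

for iso in (1, 12, 13, 24):
    print(f"ISO {iso:2d} -> {mix_assignment(iso)}")
```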

Suit Techs Taz and Oscar early morning, prepping the equipment


This suited Melanie because it gave her creative control over the ‘size’ of the sound in the Avid, without giving her and her team the additional workload of having to dig into the ISO tracks to rebuild the mix. Of course, all the ISO tracks were ingested into the Avid so Picture Editorial had access to them, but the plan was for Editorial to only work off the two mixes whenever possible.

We then took the conversation into the musical side of the production sound recording. As discussed earlier, we had two elements: the electronic keyboard, used wherever possible in the links between songs to give the actors more freedom to explore tempo and play with their performances; and Pro Tools playback of the digitally orchestrated temp mixes.

The keyboard and Pro Tools would need to weave in and out of each other so the actors did not hear a shift in tone between the two accompaniments in their IEM’s. This would be achieved by careful gain matching. We also learned early in rehearsals that the best way to achieve this seamless blend was to have the keyboard player play along with the Pro Tools playback. It worked: when we went from free-form keyboard to playback, the transition was not obvious to the cast because they could still hear the keyboard.
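The article doesn’t detail how the gain matching was done on the day; the underlying principle, though, is straightforward loudness matching, trimming the playback so its level sits where the live keyboard sits before the handover. A purely illustrative sketch:

```python
# Hedged sketch of RMS gain matching between two audio sources, e.g. live keyboard
# vs. Pro Tools playback, so a handover is not audible. This illustrates the
# principle only; it is not the production workflow used on Cats.
import numpy as np

def rms(signal: np.ndarray) -> float:
    """Root-mean-square level of a mono signal."""
    return float(np.sqrt(np.mean(np.square(signal))))

def match_gain(playback: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Scale `playback` so its RMS level matches `reference` (the live keyboard)."""
    gain = rms(reference) / max(rms(playback), 1e-12)
    return playback * gain

# Toy example: two sine tones at different levels
sr = 48000
t = np.linspace(0, 1.0, sr, endpoint=False)
keyboard = 0.30 * np.sin(2 * np.pi * 440 * t)
playback = 0.10 * np.sin(2 * np.pi * 440 * t)
matched = match_gain(playback, keyboard)
print(f"before: {rms(playback):.3f}  after: {rms(matched):.3f}  target: {rms(keyboard):.3f}")
```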

Prepping the transmitter frequencies
Array of Comm microphones for Director Tom Hooper


The keyboard players, James and Mark, were actually ‘performers’ in Cats, just a different kind. They were not on screen; instead, such was their creative input on the set, they spent twelve hours a day in a soundproofed booth we had built, with large LCD screens showing all the camera angles. The irony was that both the expressive keyboard playing and the Pro Tools playback were only temporary: it would all be discarded when Cats was fully orchestrated in post production. What remained was the emotion they helped the cast achieve.

I decided adding the musical element to the two vocal mix tracks would be counterproductive. Melanie would need two tracks of vocal mixes that would give her control over the size of the sound, and a mono keyboard/Pro Tools track.

At this point, there were two vocal mix tracks, twenty-four vocal ISO tracks, and the mono music on track twenty-seven. Melanie would have the ability to manipulate the volume of the keyboard/Pro Tools playback against the vocal mixes in the Avid, as per her own and Tom Hooper’s instincts. She would also have the stereo ‘bounce backs’ of the temp Pro Tools orchestrated tracks, timecode-locked so she could utilize them as she finished cutting a scene.

The ‘mono mix’ of keyboard and pre-recorded temp track was on Victor Chaga’s Pro Tools cart, and it was his responsibility to keep it gain matched and send it directly to my Deva 16. Victor also sent it to Tom Barrow, the IEM mixer, at the same time so he could use it to create his individual IEM mixes for the cast, along with the vocals.
I sent Tom all of my ISO tracks as well, so he could creatively provide each actor with what they wanted and work each character’s gain in the IEM mixes. This gave Tom far more ability to support the cast than if he had simply been working with my production sound vocal mixes. It was invaluable to have Tom Barrow giving the cast what they needed to hear on a shot-by-shot basis so I could concentrate on what Post Production needed, and ultimately provide the highest quality vocals for the finished film.

Now onto timecode.

We were using an Ambient Master Clock on the sound cart, generating time-of-day TC that locked together sound recording, digital cameras, and VFX. Each Pro Tools temp song had its own unique timecode, so Picture Editorial could quickly drop the on-set music ISO and sync up the stereo temp version of the song in the Avid. The playback timecode fed directly from Pro Tools was recorded on audio track twenty-eight of my Deva 16. The Pro Tools playback timecode was also fed to the Lighting Desk so that lighting cues and moves could be triggered, ensuring that the complex lighting signature of each beat in the song would be frame accurate on every take we filmed.
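As a hedged sketch of the bookkeeping this enables (the frame rate and the example values are assumptions for illustration, not production data), converting between timecode strings and frame counts is all that is needed to relate a take’s time-of-day TC to the song’s playback TC recorded on track twenty-eight:

```python
# Sketch of the timecode bookkeeping described above: convert between HH:MM:SS:FF
# strings and frame counts so a take's time-of-day TC can be related to a song's
# unique Pro Tools playback TC. Frame rate and example values are assumptions.
FPS = 24  # assumed project frame rate

def tc_to_frames(tc: str, fps: int = FPS) -> int:
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames: int, fps: int = FPS) -> str:
    ff = frames % fps
    ss = (frames // fps) % 60
    mm = (frames // (fps * 60)) % 60
    hh = frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

take_start_tod   = "14:32:10:05"   # time-of-day TC at the top of a take (example)
playback_tc_seen = "01:00:12:17"   # song's playback TC as recorded on track 28 (example)

offset = tc_to_frames(take_start_tod) - tc_to_frames(playback_tc_seen)
print("Offset to conform the stereo temp track:", frames_to_tc(abs(offset)))
```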

We were now twelve weeks out from principal photography and I decided that the workload was growing too quickly for Arthur and myself to deal with on our own. After another meeting with production, I made a case for more of our Sound Department to start work full time at Leavesden Studios. Working Title and Tom Hooper were receptive to the idea after I explained the technical complexity of what we were all embarking on. This was the point where the performers were moving from their rehearsal studios to begin rehearsing on the actual sets we would be shooting on. This was very important for the choreography because the sets were three times larger than reality. My main point was that making the performers feel comfortable and supported by the process was absolutely paramount to them being able to give their best performances when we were shooting.

Up until this point, they had been rehearsing with keyboard and playback coming out of a small PA system operated by the choreography team, and vocals had not been a priority. Our cast was a mixture of film actors, theatre actors, musical theatre performers, ballerinas, and rock/pop/R&B vocalists. I felt it was incredibly important to introduce them to the workflow of singing and dancing with the TX/RX packs and hearing themselves in their IEM’s, to help them achieve the perfect balance of vocal and music for their own particular needs. We did not give everyone a completely unique custom mix, as this would have meant twenty-four separate IEM mixes. Instead, we tried to group performers together based on their taste and the range in which they sing. Some of our main cast were able to have a completely personal mix, especially if that was what they were used to from performing on stage, but most of the cast were given a choice of a set few mixes.

We also decided that giving all fifty cast personal IEM molds with receivers was cost-prohibitive, so chorus members twenty-five through fifty were fitted with personal molded ‘earwigs’ fed audio from an induction loop. This was the way we did Les Miserables, so we knew it worked well, although at a lower SPL than the IEM’s. We didn’t put vocals into the loop, only the music playback, to help us get more volume for the beat of the music out of the earwigs. This section of the chorus was not being radio mic’d and its members were primarily incredible dancers rather than vocalists. We felt it would not hamper their performance in any way, and they generally worked with only one earwig so they could hear the music to keep them in time; their other ear was open so they could hear the live singing on the set.

In rehearsals, this whole process had been auditioned and fine-tuned. As I said to the producers, if we did not spend twelve weeks rehearsing the workflow, we would waste time on a very tight schedule due to cast availability once we started shooting. I explained that if a performer has a sound problem, it could easily suck up a half-hour working out exactly what blend they want in their IEM. It would be terrible if that happened with a whole crew standing by, waiting to shoot on a very exacting daily schedule.

On our first couple of rehearsal days, that was exactly what happened. The producers present were shocked at how much time it took to sort out the performers’ IEM mixes instead of rehearsing the choreography. They realized just how important their decision to bring the Sound/Music team in twelve weeks out had been.

I also expressed that creatively I really needed to get a feel for the way the songs were being performed, and the dynamic range each vocalist used. I think because our visual colleagues don’t “see” sound, it is often assumed that dialog and vocals are simply captured by a mic and that there is no need for a gain structure.

One of my biggest responsibilities, and the core of my job, was ensuring the vocals were captured as richly and perfectly as possible, and that meant understanding the volumes our vocalists would use. I started to get familiar with which performers had huge dynamic range and which were the softer singers. I started taking notes and feeding that information to Arthur on where I wanted the gain settings to be on the Lectrosonics SMb transmitter packs.

I am a great believer in recording full dynamic range and I don’t like hitting the internal limiters in TX packs, even on loud pieces. The limiters in the transmitters cannot be completely turned off, so I set the gain so that even on a vocalist’s loudest passage, the limiter is not engaged. Each performer was given their own unique gain setting on their TX, which would change from song to song depending on the kind of performance they were giving. When I first heard Jennifer Hudson and Taylor Swift sing, I was shocked at their dynamic range and the way they used their voices. In order to capture their vocals in the purest way possible, I asked them to wear two radio packs with two DPA 4061 CORE mics. The mics were rigged next to each other in their headbands, but the TX packs were given two different gain settings that varied by about 6 dB. This gave me the best chance of capturing the lowest-level pieces of their performances with a perfect signal-to-noise ratio, and the very highest SPL parts without hitting the TX limiters. Taylor and Jennifer were both assigned two tracks on the Deva, with notes made for the Music Department regarding the track designation. They were very supportive of this workflow and understood exactly why we were doing it this way after Arthur and I explained the process to them. Tom Hooper asked, “Why are they wearing two mics?” and when I explained it to him he exclaimed, “That’s genius!” That is how excited Tom gets about capturing live performances. He is incredibly enthused by the process.
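The arithmetic behind the two-pack trick is worth spelling out. With the second transmitter set roughly 6 dB lower, the quieter pack still has clean headroom at the moment the hotter pack would start limiting. The threshold below is an illustrative assumption, not a Lectrosonics specification:

```python
# Arithmetic sketch of the dual-transmitter trick described above: the second pack
# is set roughly 6 dB lower, so when the hot pack approaches its limiter the
# quieter pack still has clean headroom. Threshold values are illustrative only.
GAIN_OFFSET_DB = 6.0          # second TX set ~6 dB lower (as in the article)
LIMITER_CEILING_DBFS = -1.0   # illustrative point where the hot TX would limit

def usable_track(hot_peak_dbfs: float) -> str:
    """Choose which recording to favour for a given vocal peak on the hot pack."""
    if hot_peak_dbfs >= LIMITER_CEILING_DBFS:
        low_peak = hot_peak_dbfs - GAIN_OFFSET_DB
        return f"use low-gain track (peaks ~{low_peak:.1f} dBFS, limiter untouched)"
    return f"use high-gain track (peaks {hot_peak_dbfs:.1f} dBFS, best signal-to-noise)"

for peak in (-20.0, -8.0, -0.5):
    print(f"vocal peak {peak:5.1f} dBFS on hot pack -> {usable_track(peak)}")
```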

Another thing that came about in the rehearsal period was that our Keyboard Players, James and Mark, asked for a click track into their ears. Even though they were allowing the performers the freedom to express themselves when they were not locked to a playback track, they wanted a tempo guide for the next song when we moved into playback. John Warhurst, Supervising Sound/Music Editor, told me it would be incredibly helpful for him while he was editing to have access to that click. It was generated in Pro Tools and fed directly to my Deva ISO track twenty-nine. Phew! We were up to twenty-nine of my forty available tracks.


The opportunity to use booms to capture the performances would be rare, due to the hard lighting style DP Chris Ross was using, alongside the huge sets and multiple cameras. However, we needed to be ready for any opportunities that arose. We decided to use Schoeps SuperCMITs, which are a very good match for the DPA lavs. The SuperCMITs provide two AES42 tracks from each mic, processed and unprocessed, so Nina, the Dialog Editor, could choose whichever track was more effective in post for any vocals where we used the booms.

There were no inputs for the boom mics on the AD149 mixers; they were already full with the twenty-four radio mics. However, the Zaxcom Deva 24 could take their signal on its AES42 inputs. I would rarely use the booms in the mix, so it made sense to keep them completely in the digital domain, which was a huge step forward in sound quality. This added another four tracks on the Deva 24; our total track count was now up to thirty-three.
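For anyone keeping score, the running track budget described over the last few pages tallies like this (the grouping is mine; the numbers are taken from the text):

```python
# Running tally of the recorder track budget described in the article.
track_budget = {
    "vocal mix tracks (Mix 1 & 2)": 2,
    "vocal ISO tracks (radio mics)": 24,
    "mono keyboard/Pro Tools music": 1,   # track 27
    "Pro Tools playback timecode":   1,   # track 28
    "click track":                   1,   # track 29
    "boom mics (SuperCMIT AES42)":   4,
}
used = sum(track_budget.values())
print(f"{used} of 40 linked Deva tracks used, {40 - used} spare")
```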

In the end, we rarely used the booms apart from capturing the clapperboards. The radio mics sounded great, and the spotlights didn’t allow booms to get close, so we prioritized the mics that sounded best for the project and they were the head worn DPA’s. There are a few sporadic boomed pieces of dialog in the film and a few spot FX, but ninety-five percent of Cats was recorded on the DPA lavaliers.

One question that kept coming up, from my first meeting with Tom Hooper and in multiple meetings with the producers and Working Title Films, was how I was going to deal with the footfall from such dynamic choreography. This had not been resolved when we started rehearsing. I recorded a camera test and used that to start negotiating with the performers and with Tom. Up until this point, the cast had been rehearsing in whatever shoes they felt comfortable dancing in, knowing that their feet would generally become cats’ paws in VFX. After the test shoot, I played Tom the recording and told him we needed to rethink this protocol.

Tom was adamant that he did not want to do anything that limited his cast’s ability to dance in the most extreme way their skills allowed. Many of the cast were professional dancers, and Tom was concerned that if someone got injured due to wearing inappropriate footwear, it would jeopardize shooting and could damage that performer’s future career.

We had a frank and honest conversation as I explained my issue. Also present was the brilliant Paco Delgado, the Costume Designer. If any performer opted for heavier footwear than they actually needed, it would degrade the live vocals. It would be counterproductive to go through the fantastically difficult, energy-sapping process of recording the vocals live, only to find that footsteps had ruined the recordings and have to go into ADR to fix them.

High-heeled shoes treated to reduce footfall noise while remaining safe for the dance
It was decided that each performer would have three levels of footwear and they would choose whatever they needed that was appropriate for a particular scene or dance move. I was very lucky that many of the cast had classical ballet training even if they were now street-style dancers, and were generally happy to perform in bare feet or ballet shoes unless a particular move or song demanded more support on their feet.

The three choices for the performers were bare feet, ballet shoes, or a type of shoe Paco and his team found that was halfway between a ballet shoe and a trainer and was designed for parkour (free running). The footwear was always near the set and available at all times. If Arthur saw a performer wearing heavier footwear than he thought a particular piece of choreography required, we would ask them if they could change shoes or go barefoot. Most of the time, they would realize they had forgotten they were wearing the parkour shoes and change them immediately.

Construction of the keyboard booth
Soundproofing the keyboard booth
Inside the keyboard booth


This was a very productive process and it removed approximately seventy percent of the heavier footfalls we would otherwise have had in the live-recorded vocals. There were a couple of dancers whose moves were so extreme they had to wear trainers, and sometimes a character would be in high heels for a song; we took care of this the same way we would on any other film, and treated the soles as much as we could while still keeping them safe for the performer.

Our soundproofed keyboard booths were first developed on Les Miserables. We needed to put a keyboard into a soundproofed ply box on castor wheels, so we would not hear the noise of the electric keyboard being played on set. It worked successfully, but on Cats we wanted to go bigger and better. One of the discussions with production was about the complexity of our Sound/Music technical workflow and how quickly the equipment could be moved from stage to stage. By now the rig comprised two production sound carts, two IEM mixing carts, one Pro Tools cart, and a keyboard booth, all linked together by a myriad of cables in the analog, digital, Dante, and MADI domains.

Unfortunately, because of artist availability and time constraints, there were many days in the schedule where we had to move stages. We were asked to look at where we could save time within our workflow, and one of the ways was to have construction build three keyboard booths, all containing a keyboard, flat-screen monitors, Midas mixers and the cables pre-rigged. The booths would be leap-frogged to the next stage by a swing gang on a telehandler or forklift truck to enable us to move from stage to stage and have a fully rigged booth waiting for us.

Even so, each stage move still took Sound/Music three hours before we were ready to shoot. These moves were generally timed to coincide with our one-hour lunch break, which we would work through, and by the time that stage was lit and the cameras were ready, we were all in the same ballpark. The important factor was that there was no way for the performers to rehearse until we were ready. They could block a dance routine with the choreographers counting out a rhythm verbally, but until the keyboard, Pro Tools, IEM’s, and radio mics were rigged, our cast could not hear any music, or each other, so the whole show was extremely reliant on our team being ready.

The three booths we had built for Cats were made of double-skinned ply with an eight-inch cavity in the walls, ceiling, and floor. We had construction spray expanding foam inside the cavity, and the interiors and exteriors of the booths were lined with rubber-backed carpets. This meant no interior noise escaped and no exterior noise penetrated, and our Keyboard Players, James Taylor and Mark Aspinall, had complete privacy.

I have already mentioned that James and Mark were actually performers on Cats, such was the creative collaboration and support they gave the cast. Not only were they playing the keyboards, they could also give verbal counts for anyone who needed them, directly into the cast IEM feeds. On many songs, Fiona McDougal, the cast Vocal Coach, was also inside the keyboard booth using a Shure SM58 mic, harmonizing or singing along to support the cast in their IEM’s during a scene. Fiona had also done this when rehearsing with them for weeks in prep. It was an amazing sight to behold.

It was important to have two keyboard players, as we needed one to be present while filming a scene and the other rehearsing with a performer on a different stage due to artist availability and a very compressed shooting schedule. They would rehearse with Fiona, a keyboard player, and one of the choreography team. It was a very finely tuned machine and a tour de force of planning and scheduling.

John Warhurst and Victor Chaga devised the fantastic idea of using a foot pedal to trigger Pro Tools playback. This meant the keyboard players were ‘playing’ Pro Tools just like another musical instrument! Whichever keyboard player rehearsed a song with a key performer would be the same person who played keyboard on that number when we shot it. This was extremely important, as they had built up a rapport and understanding with that actor. We wanted to give the performers as much creative support as possible during the physically demanding experience of dancing and singing live. Victor made sure the playback cues were lined up so the keyboard player could trigger multiple cues and weave in and out of Pro Tools playback and live keyboard playing multiple times within a take. This was an extremely creative collaboration between the Sound/Music team and the cast.
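The article doesn’t say how the pedal was wired into Pro Tools; one plausible, purely hypothetical approach is a MIDI footswitch stepping through a prepared cue list, sketched below with the mido library (the controller number and cue names are invented for the example, and the print statement stands in for actually firing playback):

```python
# Hypothetical sketch: a MIDI footswitch advancing through a pre-built cue list.
# This illustrates the idea of a pedal "playing" cues; it is not the Cats rig.
import mido  # third-party MIDI library, used here purely for illustration

FOOTSWITCH_CC = 64  # sustain-pedal controller number, a common footswitch choice
cue_list = ["Jellicle Songs (verse 2)", "Memory (bar 17)", "Skimbleshanks (top)"]
next_cue = 0

with mido.open_input() as port:              # default MIDI input port
    for msg in port:                         # blocks, yielding messages as they arrive
        if msg.type == "control_change" and msg.control == FOOTSWITCH_CC and msg.value > 0:
            print("Trigger playback cue:", cue_list[next_cue])  # stand-in for starting Pro Tools
            next_cue += 1
            if next_cue == len(cue_list):
                break                        # all cues for this setup have been used
```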

Each one of us became part of a band ‘playing’ our own ‘instrument’ that would become an organic and intrinsic part of the piece. It was an incredible collaboration of which I am so proud to have been a part.

The Cats Sound Crew

Simon Hayes, Production Sound Mixer
John Warhurst, Supervising Music and Sound Editor
Becky Bentham, Music Supervisor
David Wilson, Music Producer
Arthur Fenn, Key 1st Assistant Sound
Robin Johnson, 1st Assistant Sound & Sound Maintenance Engineer
Tom Barrow, I.E.M. Mixer
Victor Chaga, Pro Tools Music Editor
Mark Aspinall, Music Associate/Keyboard Player
James A. Taylor, Music Associate/Keyboard Player
Fiona McDougal, Vocal Coach
Ben Jeffes, 2nd Assistant Sound
Taz Fairbanks, 3rd Assistant Sound
Oscar Ginn, 3rd Assistant Sound
Francesca Renda, 3rd Assistant Sound
Ashley Sinani, 3rd Assistant Sound
Kirsty Wright, 3rd Assistant Sound

Editor’s note: “Mixing Live Singing Vocals on Cats Part 2”
continues in the winter edition of Production Sound & Video.

Emmys

Creative Arts Emmy Sound Mixing Winners 2019

Outstanding Sound Mixing for a Comedy or Drama Series (One Hour)

Game of Thrones “The Long Night”
Winners:
Onnalee Blank CAS, Mathew Waters CAS,
Simon Kerr, Danny Crowley, Ronan Hill CAS

Production Sound Team:​ Guillaume Beauron, Andrew McNeill, Jonathan Riddell,
Sean O’Toole, Andrew McArthur, Joe Furness

Outstanding Sound Mixing for a Comedy or Drama Series (Half-Hour) and Animation

Barry “ronny/lily”
Winners:
Elmo Ponsdomenech CAS, Jason “Frenchie” Gaya,
Aaron Hasson, Benjamin Patrick CAS

Production Sound Team: Jacques Pienaar, Corey Woods, Kraig Kishi,
Scott Harber, Christopher Walmer, Erik Altstadt, Srdjan Popovic, Dan Lipe

Outstanding Sound Mixing for a Limited Series or a Movie

Chernobyl “1:23:45”
Winners:
Stuart Hilliker, Vincent Piponnier

Production Sound Team: Nicolas Fejoz, Margaux Peyre​

Outstanding Sound Mixing for Nonfiction Programming (Single- or Multi-Camera)

Free Solo
Winners:
Tom Fleischman CAS, Ric Schnupp, Tyson Lozensky, Jim Hurst

Outstanding Sound Mixing for a Variety Series or Special

Aretha! A Grammy Celebration for the Queen of Soul
Winners:
Paul Wittman, Josh Morton, Paul Sandweiss, Kristian Pedregon, Christian Schrader, Michael Parker, Patrick Baltzell

Production Sound Team: Juan Pablo Velasco, Ric Teller, Tom Banghart,
Michael Faustino, Ray Lindsey

Names in bold are Local 695 members

2019 Mac Pro

by James Delhauer

Though the evolution of technology is a staple of the 21st century, some breakthroughs are widely acknowledged as revolutionary. Within the world of film and television, one such moment occurred in 2006 when Apple unveiled the first Mac Pro. Featuring high-end Intel processors and sporting a versatile modular design that allowed for routine upgrades, the first-generation Mac Pro changed the landscape of the production world and gave creatives the kind of computing power they needed to enter the digital era. Though this particular line of computers has evolved in the years since its introduction, it remains a cornerstone of digital workflows across the globe. That is why it was so exciting when Apple announced a newly redesigned Mac Pro with some truly impressive specifications.
 

To understand the significance of the announcement, one must recognize the somewhat turbulent history of the Mac Pro. While the first-generation units, which were produced from 2006 to 2012, were revolutionary machines, they had begun to show their age by the end of their life cycle. Even with substantial improvements to the CPU, GPU, RAM, and hard disk drives every few years, the Mac Pro was fundamentally designed around the now obsolete FireWire, USB 2.0, PCIe 2.0, and SATA II connection interfaces. While these were industry standard and even ahead of the curve in 2006, they had been superseded by next-generation technology by 2012, making even the newest Mac Pro a problematic investment. Even the most enthusiastic of DIY users were having trouble upgrading their machines to compete with computers that took advantage of newer USB 3.0, SATA III, and PCIe 3.0 technology. The widespread availability of these superior technologies in non-Apple computers started to make Windows-based workflows very appealing even to loyal OS X customers. If Apple wanted to remain relevant in the high-end desktop market, they needed to overhaul the core hardware of their design.
    
In 2013, they did exactly that when they unveiled the second-generation Mac Pro. This unit utilized a significantly smaller and more compact design. The short, black, cylindrical shape gave this unit the unofficial nickname of “trashcan,” while the older, first-generation models were retroactively branded “cheese graters” due to their silver color and grate of ventilation holes on the front and back. These trashcan computers, like their predecessors, were a monumental leap forward for their day. While many expected the newly designed Mac Pro to rely on SATA III and USB 3.0 interfaces for hard drive and peripheral connection, the company chose to leap beyond these standards. SATA III was bypassed altogether in favor of PCIe solid-state technology, which had a theoretical maximum speed five times greater than that of SATA III. USB 3.0 integration was included but this was done alongside Apple’s semi-proprietary Thunderbolt 2 interface—a more versatile connection with up to four times the speed of USB 3.0 and the ability to connect a much larger range of peripherals. Combined with impressive processor speeds, processor core counts, and AMD GPU options, the second-generation Mac Pros were arguably the first widely available computers ready to tackle the then upcoming rigors of 4K production.
    
Unfortunately, this generation of computers was not without some rather monumental flaws. It was the first professional-grade machine produced by Apple not to feature any sort of optical disk drive; many users had hoped to see the SuperDrive DVD burners from the previous generation replaced by Blu-ray-capable equivalents and found themselves disappointed. In order to accommodate its small form factor, it required that all of the components be soldered directly into the Logic Board, making both maintenance and upgrades difficult for some components and impossible for others. This meant that users could no longer install third-party components into their machines in order to extend the product life cycle or customize them to suit their specific needs. After several years, this caused the trashcans to run into the same problem as the cheese graters: technology that was revolutionary in 2013 was becoming out-of-date just a few years later.


By 2017, users were desperately searching for ways to coax more computing power out of their systems in order to accommodate the newly emerging 6K, 8K, RAW, and HDR workflows that Hollywood productions were beginning to demand. Ironically, some turned back to the first-generation cheese graters. By developing and using a variety of adapters and peripherals, they were able to install more modern components into their aged machines. The process was not always (or even usually) perfect but it did allow some to surpass the capabilities of their 2013 computers by using their 2006-2012 ones. Others turned to more drastic solutions. By carefully selecting a variety of PC components that were technically supported by the OS X/MacOS code, many DIY users found success in illegally bypassing Apple’s software security and installing the Mac operating system on computers built using Windows-oriented components. These “Hackintosh” computers were (and still are) often unstable but they provided customers with a means of running their preferred operating system on workstations that were not out-of-date. Nonetheless, none of these solutions were ideal.

Apple provided a stopgap in 2017 when they introduced the iMac Pro—an all-in-one workstation that boasted bleeding-edge performance and a 5K Retina display up to the challenges of the high-resolution, processor-intensive tasks that production environments demand. While these units could replace the trashcan Mac Pros, many were quick to point out that the iMac Pro suffered from many of the same problems. The components were still soldered directly into the Logic Board, making maintenance and upgrades problematic. Within a few years, these computers would also begin to show their age, and customers would have no means of upgrading them to meet the challenges of tomorrow.
    

Enter the third-generation Mac Pro.

 
At the Worldwide Developers Conference in June, Apple unveiled a completely redesigned machine that was a large departure from either of its predecessors. The large aluminum chassis resembles the older bodies of the original cheese graters (leading many to dub the new machines Cheese Grater 2.0) but what’s under the hood bears little similarity. The new Mac Pro features an exclusive line of Intel Xeon W processors that can be configured in a range from an 8-core 3.5GHz chip to a 28-core 2.5GHz chip, making it well suited for an impressively wide variety of users. Moreover, while most motherboards for Windows computers only support up to 64GB or 128GB of RAM, the new Mac Pro can make use of an astonishing 1.5TB of 2933MHz memory. For applications that frequently use memory caching (such as Adobe’s video production suite, Avid’s Media Composer & Pro Tools, and Apple’s Final Cut Pro), substantially more video and audio previews can be temporarily cached before a full render is required for real-time playback. Apple also offers a variety of graphics card options, including AMD’s Radeon Pro Vega II Duo card—a GPU capable of performing up to 56.8 teraflops of calculations. For comparison, the 2013 Mac Pro running two AMD FirePro D700 GPUs (the best configuration Apple offered) could only manage seven teraflops.
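Taking only the figures quoted above at face value, the generational jump works out to roughly an eightfold increase in GPU throughput and about twelve times the 128GB ceiling cited for typical Windows motherboards:

```python
# Back-of-the-envelope comparison using only the figures quoted in this article.
vega_ii_duo_tflops = 56.8   # 2019 Radeon Pro Vega II Duo (as quoted)
dual_d700_tflops   = 7.0    # 2013 dual FirePro D700 (as quoted)
max_ram_2019_gb    = 1536   # 1.5TB maximum memory
typical_pc_ram_gb  = 128    # Windows motherboard ceiling cited above

print(f"GPU throughput: {vega_ii_duo_tflops / dual_d700_tflops:.1f}x")
print(f"Maximum RAM:    {max_ram_2019_gb / typical_pc_ram_gb:.0f}x")
```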

In addition to the impressive components inside the box, it is worth noting that the new Mac Pro features a versatile range of connectivity options. With two USB 3.0 ports for traditional peripherals, two Thunderbolt 3 ports for high-speed devices or expansion (Thunderbolt 3 is capable of being adapted to most other connection types such as USB, HDMI, and DisplayPort), and two 10Gb Ethernet ports for high-bandwidth online work, this new computer should be more than capable of integrating with just about any workflow currently on the market.
    
What’s most exciting to enthusiasts of the first-generation cheese graters is the potential for these devices to develop over time. Regardless of configuration, the device comes with two full MPX modules and three full-length PCIe 3.0 slots, allowing for hardware such as additional GPUs, PCIe-based hard drives, Apple’s Afterburner ProRes and ProRes RAW accelerator cards, Avid’s Pro Tools HDX cards, Red’s Red Rocket Pro cards, and many others to be seamlessly installed and integrated without the need for sprawling cables and expansion chassis around the primary computer. This versatility restores the ability for users to incorporate devices manufactured after the Mac Pro’s launch into their system—a feature that has been lacking since the 2013 revamp.
Despite all of the positives this new line of computers has to offer, it is not without its criticisms. Apple’s lack of support for Nvidia products means that users will likely be restricted to AMD’s line of GPUs for the foreseeable future. Certain components are still locked into the Logic Board, so performing maintenance on those components will likely be no easier than it was with the trashcan units. There is also no support for SATA connectivity, which may limit storage expansion: most hard disk drives and solid-state drives still use SATA III interfaces, so the only way to install those devices into the new Mac Pro is via a PCIe adapter. In cases where internal storage is a priority, this can be a definite drawback. It is also much heavier than the trashcan generation. Having reverted to the old cheese grater design, the new units weigh in at just under forty lbs. This will be a bit of an adjustment for those who have grown accustomed to taking their eleven-lb trashcan Mac Pro on the go.
    
The biggest problem for many prospective users, however, is the price point. While Apple has not yet released a comprehensive breakdown of the cost of each component option, the base configuration—which comes with a 256GB SSD, 32GB of RAM, an 8-core 3.5GHz processor, and an AMD Radeon Pro 580X GPU—will set you back $5,999 plus applicable tax. This is unfortunate because a similarly configured Windows machine costs substantially less. In their review of the new Mac Pro, the Linus Media Group assembled an itemized list of components needed to build a Windows-based equivalent to Apple’s introductory machine. Their shopping cart ran up a much more manageable total of $3,160. Assuming comparable components are indeed fairly matched in their assessment, that is almost a one hundred percent markup per unit. This has left many excited customers concerned that the highest-end configuration of Cheese Grater 2.0 may cost anywhere between $35,000 and $45,000. For individual users, that is a substantial investment and will need to be considered carefully before making any purchasing decisions.
    
All of that being said, the new Mac Pro will not be released until the fall of this year. Until then, it is impossible to know for certain what the exact price point will be and whether or not this computing powerhouse will perform as expected. When the new Mac Pro is released, Production Sound & Video will provide a comprehensive review featuring benchmarks, technical comparisons with older Mac Pros and contemporary Windows machines, and more. In the meantime, prospective buyers ought to start saving their pennies or apply for an extended line of credit if they are hoping to bring home one of these monster machines.

The Sound Cart Builders

by Richard Lightstone CAS AMPS

Sound carts have evolved over the last four decades in form and function.

Today, there are numerous variations of sound carts to meet just about every mixer’s needs. When I began mixing in Montreal back in the ’70s, there were no film sound equipment suppliers or professional sound carts sold in Canada. On a Walt Disney show in Alberta in 1974, the sound cart was a modified golf-bag trolley with shelves holding a Perfectone mixer and a Nagra IV, and hooks for microphone cables.

During the Montreal Olympics in 1976, I worked for ABC Television’s Wide World of Sports and they used a modified hand truck with welded cages for the Sony video deck, Sony camera, microphones, and cables. It inspired me to build my first cart in 1980 on the same principle. It was a large ‘Anvil-style’ shipping case with shelves and drawers, bolted to a hand truck, and it served me well for more than twenty years.

RL Disney cart 1974

Perfectone mixer

Jeff Wexler started building his own sound carts in 1970, first using a Sears TV stand and then moving on to more customization by modifying a produce cart. Jeff explains, “When I saw the cart that Michael Evje had built, an upright vertical cart with an aluminum frame and sliding shelves, I settled in to building that new style of sound cart. At first, I had to rely on fabricators with metal-arc equipment and the skills to make the frames. Later, I discovered the professional erector set of aluminum profiles from the company 80/20. I could assemble and bolt them together myself, not having to rely on outside fabricators. The last two carts I built before retiring used 80/20 materials.”

“The only commercially available cart was made by a company called Wheelit,” says Ron Meyer of Professional Sound Corporation (PSC). “These carts were designed to be used for AV equipment such as film projectors and other classroom and corporate equipment. The Wheelit carts had two folding shelves made out of white melamine-laminated particle board and were very heavy at about sixty-five pounds. Audio Services Corp. (ASC) had some custom shelves made from aircraft aluminum to lighten the cart by twenty pounds or more.”

1993 Maui, with ’80s-built sound cart

In 1984, ASC decided to produce its sound cart, the SC-4. Ron believes that the design was based on a similar smaller cart made by Wilcox Sound.

That style of cart served our needs as we were using Nagra recorders, Sela or Nagra mixers, cabled microphones and very few wireless. As equipment requirements grew, the next progression was to the Magliner with custom shelves and attachments, introduced by Backstage Equipment, Inc.

There are many mixers still using the PSC upright and Magliner sound carts more than four decades later.

Several sound mixers began designing their own customized carts and offering them for sale: Brett Grant Grierson, Matthew Bacon, Gene Martin, Rob Stalder, Devendra Cleary, Eric Ballew, and of course, the late Chinhda Khommarath.

Expert Audio Recording Services, Inc.

Brett Grant Grierson began almost two decades ago by modifying Magliners with custom-welded parts to hold SKB upright rack cases. Brett continues to evolve new designs, whether it is a custom 80/20 cart, special brackets, or cheese plates on a rolling stand. I must disclose that I have had four carts built by Brett over the last eight years.

BGG carts

Rastorder, PTY

Sound Mixer Rob Stalder of Australia took a break from mixing and worked in sales. Inspired by the Backstage modified Magliner, Rob made a completely new design he called the 2g, and his company Rastorder, PTY was born. Rob learned to weld steel in his early years; he remembers building a canopy for his Landcruiser and a tri-bike trailer for his dirt bikes. Rob’s design of choice was the upright cart; the SU was the first, built with welded aluminum. “The SU was big. It has split modules because not everyone doing drama had a van, but nearly everyone had a larger analog mixer,” says Rob. “It’s interesting to note that the SU has morphed along with everyone else, finding a following with those large digital mixers.” Rob continues, “The Foldup came next, perhaps my best seller, one hundred and forty to date. Yes, it’s similar to the PSC folding cart. But I wanted a different configuration, a different size, and some different assembly techniques.”

Rob has joined the move toward building smaller carts and has a new prototype, which will go into production later this year. It offers more flexibility and interchangeability. Rob concludes, “My challenge has always been freight cost; I am now located further inland in Australia. This is offset by the low Australian dollar; currently one Australian dollar is worth 77 cents US.”

VK cart
Mb cart

DC Audio & Music

Devendra Cleary’s interest in cart building began simply, with a cart he built for himself in 2013. Eventually, he took the leap into what he admits is a crowded market to start a side business, and thus DC Audio was born. Devendra posits, “The concept was to design an affordable sound cart made of ‘off-the-shelf’ items. I wanted to take industrial and consumer pieces and find a way to snap them together without the need for fabrication.” The DC Intro cart included an all-weather cover designed and manufactured by Susan Ottalini of MTO.

Devendra built two more versions until he arrived at the DC-TRM, enlisting the collaboration of Sig Guzman at Backstage Equipment.

“I gave Sig my pencil drawings of the cart with the idea that the base supported the rear axles with Trionic wheels and six-inch front casters, designed around a new extended size ‘Nagra Shelf.’ Simple 1×1 steel posts served as the spines and have baby pins on top. This design includes many Backstage accessories like handles and a crossbar with a baby pin in the center.”

Devendra field-tested the prototype loaded to the max over eighty days of production on Mayans M.C., a very heavy location episodic. Devendra concludes, “I’m a sound cart user and enthusiast, contributing my design to the market. In the end, the customer base wins with DC carts and many other choices.”

Backstage Equipment

Sig Guzman has assisted many sound mixers in finding accessory parts, as well as with the fabrication of sound carts. Backstage is a family-owned business run by Cary Griffith. In the ’70s, Mr. Griffith designed a grip truck, as well as grip, electric, and 4x4 carts. Backstage now manufactures more than one hundred and fifty specialized carts for virtually every craft, including lighting, grip, camera, video, sound, photography, Steadicam, DIT, basecamp, and props. Sig explains, “We continuously work on new products that we hope will become useful, and this is the best part of the job. You think of an idea, you draw it on a napkin, design it on a computer CAD system. Then you build a prototype, and have technicians test it and give you feedback. When you finish the product and see it being used around the world, that’s job satisfaction. We build everything in house, with a few exceptions such as laser cutting, water jet, CNC, and finishing (paint, chrome, nickel plating). The most common materials we use for manufacturing are steel and aluminum, depending on the product we are dealing with. All products are assembled, packed, and shipped from our location in North Hollywood.”
 

Magliner junior sound cart
PSC EuroCart

Professional Sound Corporation

Professional Sound Corporation and Ron Meyer have been building sound carts for more than three decades. When the owners of ASC formed Professional Sound Corp. in 1986, it took over production of the SC-4 sound cart. Since that time, several new features have been added; these include rack-mount rails and various other options such as sun umbrellas, boom pole mounts, and rack-mount drawers. Ron says, “Since I became the owner of PSC in 1996, Professional Sound Corporation has worked hard to make sure any new option we offered over the years is backward-compatible with all existing SC-4 sound carts. I never want to leave an existing customer behind.”

As equipment manufacturers further miniaturized sound equipment, PSC designed its EuroCart in 2014 for this new, smaller world. The PSC EuroCart was not only a smaller, more convenient-sized sound cart, but it also fit into the smaller vehicles now commonly used by sound mixers worldwide. The PSC EuroCart can be assembled without the use of tools, as it uses only thumb screws for all of its assembly. The EuroCart can also be split in half from top to bottom, which allows for easy storage and travel in the trunk of a car. The top half can also be used by itself for a small insert-car sound package.

Audio Department, LLC

At the Audio Department, Gene and Drew Martin started designing and building carts from the ground up in 2016. Before that, they mostly did customization and modifications on Magliners. Gene says, “Drew and I collaborate on every project. We wanted a vertical design for our main cart that was modular, allowing for different options based on the equipment being used. The majority of our carts have a 16-rack-space bottom unit that allows for a variety of different size (or custom) top sections to be bolted on. This gives the customer the opportunity to change the use of the cart if ever needed.”

They strive for the carts to be as light as possible but still strong, so the majority of their projects are made from one-eighth-inch aluminum, small steel tubing, or 80/20 aluminum if it’s better for the job. Gene and Drew do all of the design, assembly, and small machining. They have built more than thirty carts so far for a wide range of sound mixers.

SOUNDCART.audio

Matthew Bacon founded SOUNDCART.audio in the UK. The selection of sound carts was extremely limited in the UK twenty years ago; they were rarely seen on anything that wasn’t a scripted drama or a feature film. Just like Jeff Wexler, Matthew’s first cart was a homemade adaption of an existing industrial trolley, a BEKVÄM Kitchen trolley from IKEA. As his career progressed, his need for a solid sound cart became imperative so he set about designing one using the popular 80/20 material.

“Other mixers began expressing interest in either buying my cart or having me build one for them,” Matthew continues. “One thing led to another and, after some R&D time, I launched SOUNDCART.audio in March of 2016. Following the success of the production sound cart, demand for other models soon followed. In particular, there was interest in a smaller, more compact design; thus the MiniCart was born!” The approach to designing the MiniCart was similar to that taken with the production sound cart: to design a cart that was functional, flexible, and easily configurable, allowing mixers to work in the way they wanted. Matthew explains, “No two mixers work the same even with the same equipment. For example, some prefer to sit or stand, position their monitors high or low, fix their boom poles to the side or rear of the cart, use adjustable feet or small or large caster wheels, etc.”

Matthew offers design services to customize his sound carts to the needs of his fellow mixers. His newest entry is the Explorer sound cart, which is a self-assembly design.
 

Cannibal Industries

Eric Ballew’s Cannibal Industries is the newest entrant in the sound cart-building business. While doing early mixing projects in Los Angeles in 2010, he supported his career by working with stunt crews, designing rigging and emergency braking systems. It wasn’t until 2017 that Eric decided to collaborate with a machinist and partner with Sound Mixer Daniel S. McCoy.

Eric says, “Daniel asked me if modifying a Zuca cart for sound work was possible. In comparison to the complexities and mechanics of the cart I built for Michael Hoffman CAS, adapting a generic Zuca cart for sound was a simple and straightforward venture. I took the Zuca to my machinist, who has been my mentor. I completely disassembled it and began drawing pictures of brackets and support pieces. We milled the parts out on the CNC machine and Bridgeport mills. Each piece was made to very tight tolerances; the result was an incredibly rigid and lightweight rig. To add some flair, I machined some very specific quick-release hooks for a Sonosax/Orca bag Daniel owns.”

After Eric posted the build photos to social media and on several sound mixing forums, the cart gained enough orders to mass-produce an initial run of twenty-five Super Zucas. Eric continues, “The Super Zuca is the smallest and most lightweight professional sound cart ever designed for our industry. We use CNC-machined solid aluminum supports, stainless steel fasteners, and industrial-grade coatings, including both Cardinal Powder Coat and LineX. Because the footprint of our cart is so small, we did not compromise structural integrity with collapsibility. This results in a remarkably stout mobile platform.”

Eric is now retrofitting some Chinhda carts and also does custom fabrication and builds.

Coda

Ron Meyer, who has been building sound carts the longest, has a cautionary question for the many independent vendors: “Most of them build small batches of carts. Then they completely change their designs and build another small number of units. Are they stocking any extra replacement parts and standing by their earlier iterations?”

My Godlike Reputation Part 2

MY GODLIKE REPUTATION PART 2

(A tutorial for those without half a century in the business, and a few with)

by Jim Tanenbaum CAS

I consider my ultimate loyalty to be to the project, to make it the best movie possible, so my always getting the best sound possible is subordinated to working as quickly as I can with the minimum impingement on the other departments, so long as I get sound that is at least “good enough.” Of course, my idea of “good enough” is pretty damn good, but not perfect.

What is “good enough”? For starters, remember that production dialog will be put through a dialog EQ: rolling off the bottom end and cutting off the top at 8 kHz (or for some upscale mixes, at 10 kHz or even 12 kHz). And unfortunately, the dialog will often be buried under the music and effects. (I defy most production mixers to distinguish between a Sennheiser and a Neumann on the release track under these conditions.)

The director, editor, and producer have heard the actors’ lines hundreds or thousands of times in post production, and the dialog has become permanently embedded in their brains. They can still hear it even when the dialog track is completely muted. The re-recording mixers will try to push the dialog levels, but they are often overridden by the higher-ups, especially for dialog following the punchline of a joke. (I think it would help to have a laugh track to play on the dubbing stage at the appropriate times.)

I try to get my “good enough” production sound through to the release print in spite of this.

To avoid holding up production, I bought a second Nagra IV for a spare as soon as I could afford it, and also a Nagra QFC cross-feed coupler, which mated the two recorders so they could record identical tracks, and use all four mike inputs and the two line inputs.

Since the 7”-reel lid adapters weren’t available way back then, this helped me tremendously in dealing with the 15-minute runtime at 7½ IPS of the 5” tape reels. When I got near the end of the first reel, I just started the second machine to give some overlap, and then reloaded the first Nagra at my leisure. Unfortunately, calling for a tail slate caused too much confusion, so I had to note the overlap on sound reports and depend on the transfer house to handle the splicing. More unfortunately, the quarter-frame resolution of the magstripe sometimes caused a glitch at the splice, so I used the 2-recorder overlap only when absolutely necessary.

I don’t need a mix panel—just thin fingers

Now that “running out of tape” is no more, I still avoid production delays by having my bag rig ready to go for car shots between stage setups. And the disaster of losing a recorded ¼”-tape is also a thing of the past, but I always put the day’s flash-memory card in a DVD case, along with the sound report, and make sure not to reuse my primary CF cards until well after shooting is finished.

One of the “10 Holy Truths” I teach my UCLA students is: The most important thing experience teaches you is what you can get away with, and what you can’t. And you usually have to make this decision instantly.

Many years ago, I was on a commercial with the late Leonard Nimoy as spokesperson, and he was justifiably annoyed with the production company for their many screw-ups, such as the car with no air conditioning that picked him up for the 3-hour ride to the location. My boom operator had to work very hard to get a quiet mounting for the radio mike, since his wardrobe was 100% polyester, including the necktie. The rehearsals were fine, so Leonard went back to his motor home to await the “magic hour” for shooting.

I’m not uncertain—am I?

Magic hour arrived, but Leonard didn’t. As the light was fading, he finally showed up, not in the best of moods, as the A/C in his motor home wasn’t working very well either. He was hustled into position on the set and the director yelled “Roll!” As the dialog started, so did the clothes rustle, on every fourth or fifth word—definitely unusable. My experience told me that he had messed with his tie while away from the set, and that it would take a complete rip-out and re-rig to fix—at least two to three minutes of the at-most ten minutes of usable light remaining.

The director was one of my regulars, very talented, but stern and disdainful of incompetence. Fortunately, he trusted me. Unfortunately, he trusted me to the point that he wasn’t wearing headphones because he needed to listen to one of the clients during the shot. I immediately got up, during the take, and told him the sound was N.G. and I needed two minutes to fix it. He wasn’t pleased with the news, but called “Cut!”

I headed directly for Leonard, because I had already taken the necessary items with me before I left my cart. I re-rigged the lav in a minute-and-a-half, and put the Nagra into “Record” even before I sat back down in my chair. Because of the crucial nature of the situation, I knew it would not look good if I merely sent in my boom op, even though they could handle the problem as well as I (or perhaps even better). Seeing me dealing with the lav, my boom op had automatically brought a Comtek to the director, but he waved him away.

After the sun set, the director went to video village to review the takes, and I stood quietly behind him. The clients were happy, and thus so was he. A few minutes later, I approached him to offer an explanation. He smiled and said, “Nimoy f’d up your mike in his motor home, didn’t he.” It wasn’t a question.

Here is the flowchart I use to deal with problems on the set:

The other side of the ¼-inch tape is that sometimes (okay, rarely), the re-recording mixers will mess up my tracks. To help guard against this, I have found that if I ride the gain properly, they will tend to leave my tracks alone and pay more attention to the music and effects tracks instead.

The two areas to which I pay special attention are: (1) matching background ambience from take to take and between different angles of a scene, and (2) limiting the total dynamic range of the dialog.

Having the ambience not jump on picture cuts follows another one of my 10 Holy Truths: Sound that calls attention to itself has failed. Raising low-level dialog and reducing high-level dialog (manually—I don’t like the sound of limiters or compressors) means that the re-recording mixers won’t have to do it themselves. However, these two factors are interrelated—simply raising or lowering the recording-channel gain to adjust dialog levels will affect the ambience the same way.

Describing all the various methods to control dialog levels without changing the ambience is beyond the scope of this article, but a very common one is to simply move the mike closer or farther from the actor without changing its orientation. On a soundstage or non-reverberant exterior, the mike’s distance can usually be altered about 3:1 without noticeably affecting the perspective of the dialog while giving almost a 20-dB dialog gain change. Since the distance to the source(s) of the background noise will not be changed significantly, the ambience will not change either, as long as the mike’s orientation with respect to the source of the ambience is not changed while it is being moved closer or farther from the actor.

While I just said my ultimate loyalty is to the picture—in typical Hollywood fashion that’s a lie. My first loyalty is to myself, my life, and limb. During car stunts, I always mix standing up, with my chair out of the way, and have located all the escape routes in advance. However, on one show, this wasn’t enough.

Working on the TBS back lot, we had a scene where a tractor-trailer careens around a corner and takes out a curb mailbox. With my physics background, I set up the cart in a safe place should the truck skid off the street. Then the 1st AD told me that it was the worst place to be, and insisted I move to a location he selected. I knew better than to try teaching him Physics 101 about centrifugal “force,” and did as he said. On the first take, the truck lost control and headed directly for me, or rather where I had just been standing. Unfortunately, the escape path I had chosen was also picked by a lot of the crew, and we were all jammed together and stuck there. Fortunately, the driver got the vehicle stopped in time. Unfortunately, it also came up just short of the sound cart. (It was WB Studio equipment.)

On a night shoot, we had completely blocked off a street with barricades, flashing red lights, and police traffic control. A speeding drunk driver plowed through all the barriers, and only the fact that he then T-boned a police car parked crosswise in the street kept many of us from being killed or maimed.

You are never absolutely safe anywhere, but especially on a movie set. Fire can spread amazingly fast. Safety chains on effects shots can snap. Remember that when you are tempted to “get a good view” of something being blown up.

Even on an “ordinary” shoot, you need to maintain cautious work habits, but more importantly, be aware of your surroundings. Coming back from lunch, I was in a line of people entering the stage, but was I the only one to get a faint whiff of natural gas?  Apparently so. I immediately notified the 1st AD, who was not overjoyed at the news, but (properly) followed me outside to verify it, evacuated the building, and called the gas company. When they arrived, they found a cracked meter, and replaced it. But because I don’t trust anyone (including myself), the next morning I went straight to the meter and … still smelled gas, even more strongly. The AD didn’t even bother to check, just got everyone out and called the company back. The main gas line had split from the stress after it was re-attached and checked for leaks.

Even when the director or AD wants you or your crew to “hurry”—don’t run. Not only are you more likely to trip or have an accident, but they will eventually expect you to move that fast all the time. Save the running for a real need, like losing the light on a magic-hour shot.

Whatever you may have heard about grips’ “Sex, Drugs, and Rock & Roll” mentality, I have found most of them to be very knowledgeable about safety, and trust their judgement. (The pix shows me only part-way rigged: a pad for my left knee and more safety straps were added before leaving with the police escort.)

Traveling in style thanks to Local 80

Finally, there is a very real possibility that you may have to choose between your job and your life. I was working a nonunion feature in a small town in another state. The previous day had been 18 hours, and I had already worked 10 that day when the producer showed up and said that at wrap, some 2 or 3 hours hence, the company would have 3 hours to pack up and check out of the hotel, and then drive our own cars another 2 hours, at night on unlit mountain roads, to the next town on the schedule. Once we got there, some more shooting would be done before dawn. (God bless the I.A.T.S.E. and other unions for protecting workers’ lives.)

I went to the 1st AD and politely explained that my boom operator and I were too tired and sleepy to drive safely, and had to have a proper night’s rest before leaving. The AD replied that the company had to be out of the hotel by a certain time, and that there was nothing he could do.

“There is something you can do,” I told him, “hire a new sound crew, because after this scene, Jean and I are going back to the hotel to sleep, and we’re driving back to LA tomorrow.”

The producer left the set, but returned half an hour later to announce that we would be staying in the hotel there that night after all, and not leaving until the next day.

The UPM who had hired me was one of my regulars, and very happy he could make me the bad guy (and keep anyone from getting killed on his watch), and continued to hire me for other shows (read on).

While not quite as important as safety, comfort is a serious concern. If I’m sitting out in the hot sun, or shivering in the cold, I know I’m not doing the best work I could. Learning how to deal with temperature extremes is just as important as learning about audio. I have never regretted the many hundreds of dollars I’ve spent on specialized cold-weather gear. There’s no such thing as a really warm glove—the secret is down hunting mittens worn over thin glove liners. The mitten has a slit on the underside for extending your fingers to work the mixer pots (or fire the rifle, as originally intended), then pulling them back inside.

Hot weather requires light-colored 100% cotton clothing with long sleeves and pant legs to absorb perspiration and cool you as it evaporates. Sweat that drips on the ground only cools the ground. Note that I have clipped a space blanket to the top of my umbrella to completely block the sunlight.

There’s another kind of comfort that’s important, too—emotional comfort. Early in my career, I put up with a lot of sh*t because I needed lines on my resume, but as soon as I got “enough” of them, I decided that what I didn’t need was a heart attack or a perforated ulcer.

Shiny-side up makes a BIG difference

On a nonunion shoot, the first day of shooting was a disaster, and we didn’t finish the scheduled work. At the end of the day, the a** hole director called each crew member “on the carpet,” in front of everyone else. (God bless the I.A.T.S.E. and other unions for protecting workers from this sort of abuse.)

When my turn came, I got: “Tanenbaum, you’re completely unprofessional. We had to wait half an hour for you to put a radio mike on the actor.”

I calmly replied, “Kenny was standing outside the actor’s motor home, but the doctor wouldn’t let him in until he finished sewing up the actor’s hand. I’m sorry that’s not professional enough for you, Sir. Would you like me to leave now, or stay on until you can bring in a replacement professional mixer?”

The UPM I just mentioned was standing on the sidelines, frantically motioning me to shut up, as he didn’t want to lose me, or a day’s production while they were getting a replacement sound crew and equipment.

The director backed down, moved on to the next victim, and left me alone for the rest of the shoot, even when I didn’t notice that I had run out of tape halfway through the only take of a shot until we had moved to the next location. (I had to work that day with the flu and a 103° fever.)

Text and pictures ©2019 by James Tanenbaum, all rights reserved.
Editors’ note:  Part 3 of 3 continues in the fall edition.

As Productions Go Online

by James Delhauer

The evolution of communication technology since the turn of the century has revolutionized the way that filmmakers approach their craft. A short twenty years ago, productions made nightly phone calls and distributed paper call sheets each day to ensure that cast and crew were aware of the correct location and call time for each day’s work. Widespread access to personal email accounts rendered this manual process obsolete and saw it replaced with mass mailing lists and digital attachments. This is just one example that scratches the surface of how sending files over the internet can make production workflows simpler and more efficient. As we move toward a more globalized world of film production, the ability to communicate via the web has become an integral part of day-to-day life. More and more assets can be shared instantaneously, saving countless hours and the cost of constantly transporting assets back and forth. The most recent developments in file transfer technology are allowing entire productions to be uploaded to the internet and sent to multiple destinations across the globe in real time.

The File Transfer Protocol, or FTP, is a network protocol used to transfer files between a computer client and a server, and the term is often used loosely for any client-server file transfer service. On a small scale, every email attachment relies on the same basic idea: the attached file is copied from a device onto an email provider’s server and then sent on to the recipient’s device. These transfers have become a common, albeit nearly invisible, part of the daily routine in production. More and more offices are adopting browser-based FTP services like Google Drive, Dropbox, and Amazon Cloud Storage in order to make sharing and communication channels uniform across the team. In cases such as these, the user need only enter the address of the server into their web browser in order to access data that has been stored there by another member of the team. Username, password, and sharing credentials are often added as a measure of security.
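For readers who like to see the plumbing, here is a minimal sketch of a scripted transfer using Python’s standard ftplib module. The server address, login, and filename are hypothetical placeholders, not details from any real production workflow.

```python
# Minimal scripted FTP upload using Python's built-in ftplib.
# The server, credentials, and filename below are hypothetical placeholders.
from ftplib import FTP

with FTP("ftp.example.com") as ftp:                 # connect to the (hypothetical) server
    ftp.login(user="dailies", passwd="secret")      # credentials would come from production IT
    with open("A001_C002.mov", "rb") as clip:       # a local camera file to send
        ftp.storbinary("STOR A001_C002.mov", clip)  # copy it onto the server
```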

Unfortunately, commonly known platforms such as these have their drawbacks. Most web browsers, such as Google Chrome, Mozilla Firefox, Microsoft Edge, and Apple Safari, are not optimized for large or automated transfer tasks. Similarly, most consumer computers are not outfitted for transfer speeds beyond one gigabit per second. These services can also become quite costly, as the expense of cloud-based storage adds up and slow upload speeds make the time commitment impractical. Moreover, FTP clients that utilize third-party servers present a security risk. If a production were to place all of its assets on a non-private server, those files would be vulnerable to theft should anyone obtain the correct login credentials. There is also the remote but still present threat that the server’s provider (Amazon, Google, etc.) may suffer some sort of catastrophe and data loss could occur.

Recent developments are removing these limiting factors and large-scale digital delivery is becoming more commonplace. Ten-gigabit internet connections have become more prevalent and cost-effective over time, which has in turn made ten-gig connectivity on consumer machines, such as Apple’s Mac Mini and the recently announced Mac Pro, far more common. This allows larger amounts of data to be uploaded to the cloud and then sent to network servers. There are also more FTP workflows built around dedicated client software, eliminating the inherent flaws of browser-based FTPs. Private servers and network-attached storage devices are more prevalent as well, meaning that production companies can build or purchase their own server solutions, which eliminates the vulnerabilities of third-party storage options.

Fortunately, these improvements are occurring at an ideal time within our industry.

As productions have left the traditional filmmaking hubs of Los Angeles and New York in favor of exotic locations around the globe, production activities have become decentralized. Rich tax incentives draw productions to new hubs that may not be where the footage is ultimately processed and undergoes post production. A project may shoot in Atlanta, undergo general editing in New York, and farm out visual effects, color grading, and post-production sound to companies based anywhere else in the world. In complicated workflows such as these, every entity involved with the project needs access to the digital assets. Transporting physical drives can be costly and time-consuming, but simply uploading the entirety of the project’s assets to a server where any relevant party can download the information is an ideal solution. This allows post-production teams to begin processing footage almost immediately, regardless of whether the shoot is occurring a few blocks away or on another continent. The expediency of this process also allows for near real-time feedback from the editors and dailies, which can eliminate the need for costly reshoots later in the production.

“For Local 695 video engineers…
this emerging technology presents an opportunity.”

For Local 695 video engineers, whose responsibilities include media playback, off-camera recording, copying files from camera media to external storage devices, backup and redundancy creation, transcoding, quality control, syncing and recording copies for the purpose of dailies creation, this emerging technology presents an opportunity. Those who become well versed in its finer points will be ideally suited to jobs looking to take advantage of improved online distribution. While the need for traditional media managers who offload cards to hard drives is not going to disappear in the foreseeable future, a new breed of FTP media managers will become necessary going forward. Media managers who possess a basic understanding of network-attached storage (such as Avid’s Nexis or Pronology’s rNAS m.3) and common FTP client programs (such as Signiant’s Media Shuttle or Aspera’s Client) are well positioned to take on these new roles as they become more prevalent.

In an ideal setting, a media manager is able to ingest/offload multiple streams of 4K media, perform transcodes if necessary, and then drop completed footage into what is called a “watch folder.” From there, the FTP client can parse the folder for any file containing a specific string of characters, such as the “.r3d” file extension of a Red camera or the identifying label of a particular camera. When it finds files that meet its criteria, the FTP client picks up the file and begins duplicating a copy of it to its web-based server. When the file has finished being copied to the cloud, the client software moves the original copy to a “transferred” folder in order to avoid sending the file multiple times. Signiant’s Media Shuttle is even capable of replicating folder structures, meaning that all of the organization and offline to online directories created on set by the media manager are retained when they are sent to post production.
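To make that sequence concrete, here is a minimal watch-folder sketch in Python. It is not any vendor’s actual client: the folder paths are hypothetical, and upload() is a stand-in for whatever transfer tool (Media Shuttle, Aspera, plain FTP) the production actually uses.

```python
# Minimal watch-folder loop: find matching files, send them, then move them
# to a "transferred" folder so they are never sent twice.
import shutil
import time
from pathlib import Path

WATCH = Path("/media/dailies/watch")        # hypothetical drop point for the media manager
DONE = Path("/media/dailies/transferred")   # files land here after a successful send
PATTERN = "*.r3d"                           # match criterion, e.g. Red camera files

def upload(path: Path) -> None:
    """Stand-in for the real transfer call (FTP, Media Shuttle, Aspera, etc.)."""
    print(f"uploading {path.name} ...")

def scan_once() -> None:
    DONE.mkdir(parents=True, exist_ok=True)
    for clip in sorted(WATCH.glob(PATTERN)):
        upload(clip)
        shutil.move(str(clip), str(DONE / clip.name))  # prevents re-sending the same file

if __name__ == "__main__":
    while True:
        scan_once()
        time.sleep(10)   # poll the watch folder every ten seconds
```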

This sort of workflow has already been successfully carried out on shows such as MTV’s Wild ’N Out, MTV’s Video Music Awards, TED Conferences, and NBC red carpet shows. Wild ’N Out, in particular, is noteworthy for sending multiple episodes’ worth of content from Atlanta to New York on a daily basis during the production of its recent fourteenth season. In total, the show’s video engineers successfully transmitted fourteen cameras’ worth of video, amounting to more than twenty-five terabytes of data.

Nonetheless, this technology is not without its limits. In April of 2019, a team of astronomers successfully captured the first photograph of a black hole—a revolutionary feat that required more than five petabytes (or five thousand terabytes) of data to accomplish. Unfortunately, limitations in bandwidth and storage costs still meant that it was faster and more cost-effective to mail the physical drives back-and-forth across the globe than it would have been to upload the data to any online server. So for the time being, if a production has a few million gigabytes of data that they need to send out, it may be more efficient to physically transport the data back-and-forth.
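As a rough back-of-the-envelope check (my own numbers, not figures from the article), even a fully saturated ten-gigabit link would need well over a month to move that much data:

```python
# Rough transfer-time estimate for five petabytes over a ten-gigabit link.
data_bits = 5e15 * 8        # five petabytes expressed in bits
link_bps = 10e9             # ten gigabits per second, assuming the link stays saturated
seconds = data_bits / link_bps
print(f"{seconds / 86400:.0f} days")   # prints "46 days", before any protocol overhead
```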

Even limitations such as these are likely temporary. Just as the nightly phone call was replaced by a single mass email, today’s internet limitations will also disappear. Ten years ago, uploading even a terabyte of data to the cloud was a monumental task. As our industry continues to evolve and demand more efficient workflows for bulkier and higher resolution files, FTP clients will rise to the occasion. Ten years from now, it is likely that the threshold for transmitting a petabyte’s worth of data will be crossed. When that time comes, 695 video engineers should be prepared to jump at the opportunity for the new work created by this ever-evolving technology.

Using an Exoskeleton

Using an Exoskeleton in Real-World Settings

by Ken Strain and Bryan Cahill

As Bryan Cahill wrote in the 2019 spring edition of Production Sound & Video, Brandon Frees, Ekso Bionics VP of Sales, provided him with a loaner EksoVest, thus allowing working Microphone Boom Operators to give it a test drive in real-world conditions for a week or two at a time. As part of that trial, Ken Strain and Corey Woods were able to use the vest on the Apple TV+ series Mythic Quest. Bryan then brought the vest to Mike Anderson on Goliath.

In this article, Ken, Corey and Mike share their thoughts about using the vest on actual shoots.

Ken Strain: I’m six feet, five inches tall, so I tend to hold my arms over my head less than shorter boom ops, as I favor keeping the boom at shoulder level and resting my elbows on my hips for long takes. On this new comedy, however, there are many resets and pickups of alt lines, and we get into very long takes. I often find myself having to hold the boom overhead for 15 to 30 minutes in order to get around practical lights and reflections. So I jumped at the chance to give this vest a try to see if it would help.

This vest was not made for our industry. It is designed for the automotive industry where the line workers raise and lower their arms with tools thousands of times per day, and wear the vest all day long. It’s built for that type of intense daily usage, with very strong joints and bearings. The autoworkers work in their own stations, with clear space all around them, so the elbows (link assemblies) that stick out in the back are not an issue like they are for us on cramped film sets.

The vest can be set up for various body types, and I’m sharing it with my Second Boom, Corey Woods, who is around five feet, eight inches tall. He tends to hold his arms overhead more often than I do, and he has shoulder issues, which makes it more difficult for him to boom. I do my best to keep him off the stick, but we are an ensemble comedy, so he has to work on many slots.

The vest has four spring options, and we are using the strongest spring, labeled “4,” which gives about fifteen pounds of upward assistance once your elbow reaches horizontal. When at rest with arms by your side, the system is not engaged. The buckles and straps that attach around the biceps are comfortable and easy to use. The angle of assistance can be tweaked as well, to provide the maximum assistance at a higher engagement point if you happen to reach higher normally.

Ours is set up neutral, so its “shelf” of support is right at shoulder level. I only engage the spring in the leading arm and keep the trailing arm turned off, so that my arm and the vest itself act as the counterweight, and it works great this way. It is very easy to lean against the back of the vest, with your arms overhead holding and guiding the boom pole. It looks like work, but instead of the forces going into the shoulders, they are routed down to the hip belt. So all you do is guide the pole, and the nasty heaviness of it is painlessly transferred to your hips. It’s a great feeling when the situation is right.

I tried it on several different types of shots. The first was a crane shot that was just wildly shooting around the room. That was just to get the feel of it, as it wasn’t a major dialog scene. This is where I figured out the max length of boom pole extension before it overwhelms the spring. For my particular setup, that was an 18-foot K-Tek boom pole with internal coiled cable and a CMIT mic on a PSC mount, extended one half section short of full extension. My setup is already well balanced; if you are using a plug-on transmitter on the mic end, you’ll have less extension for sure. If you were to use this outside with a zeppelin, then you would only have assistance, not complete support, unless you’re on a relatively short pole.

“Walking backward while booming is a very dynamic situation, and having the assistance change while I was sort of bouncing around behind camera made for an awkward feel.” –Ken Strain

The next shot was a fairly straightforward Steadicam walk and talk through our bullpen, and that did not work so well for me. Walking backward while booming is a very dynamic situation, and having the assistance change while I was sort of bouncing around behind camera made for an awkward feel, with the spring engaging and disengaging unnaturally. Also, the way the vest fits snug around the hips makes it feel as if it is not designed for backpedaling. I didn’t feel free or comfortable. When I mentioned this to Bryan Cahill, he suggested I might try less assistance or lighter springs, which I’ll try if I get another opportunity. In general, Steadicam walk and talks don’t tend to be ridiculously long with endless resets, as everyone seems to be aware of the Steadicam Op’s fatigue. And I think you run a higher risk of hitting something with this vest on a walk and talk, which would be a total party foul.

My next opportunity was exactly what I envisioned the vest to be for: a three-page dialog scene among six actors in a conference room. I had to keep the boom working over a long LED practical that hangs over the table. There was no other way to pull this off without having my arms overhead. I also had to stay mobile because the three cameras were doing moves on the dance floor around me, and my actor had a back-and-forth move that went deep. Since I couldn’t use a ladder, this was the perfect scenario for the vest.

I had ample space behind me to allow for the link assemblies that stick out from the vest. It worked like a charm. I had total mobility like normal, yet my arm was completely supported at the elbow. I did notice over the duration of the take that the support would start to sag a little, meaning I was right at the limit of full support for this spring. I could have used less pole and experienced less sag over time. Basically, the spring needs to be stronger. Autoworkers don’t need a super-strong spring providing a continuous, solid shelf of support, so this is one limitation that needs to be overcome. EksoVest is prototyping a stronger, number “5” spring, which could solve the problem.

The other limitation is the most obvious one, and that is the space it takes up behind the back. You need to place yourself carefully on the set to avoid hitting anything or anyone. It alters the decision-making of where you place yourself, and even just walking around the set, you need to be conscious of your space. It definitely makes you feel like Robocop, and that’s exactly what people on set were calling the vest (it does draw an enormous amount of attention and curious questions). The articulating link assemblies limit you to working only in areas that have at least an extra eight inches free behind you. This is a deal killer if someone needed to use the vest all of the time.

One advantage of this design over the Shoulder X vest, which has a much trimmer back profile, is that this system of articulating link assemblies behind the back does not interfere with your arms and shoulders when you find yourself raising your arms straight up. There is nothing above your shoulders, unlike the Shoulder X, which has a frame that hinges a few inches above the shoulder. That is how they get around the problem of maintaining a tight profile. When I tried that vest on at our union meeting, it felt great until I raised my arms straight up and basically contacted the metal frame, and it felt like I was now fighting it. Other than THAT, it seems really well designed. As Boom Operators, we have some pretty specific and unique requirements, and exoskeleton manufacturers are going to have their hands full designing something that works with our range of motion and in the tight quarters that we find ourselves in. I’m sure it will be possible.

The EksoVest feels good and very high quality. The new stronger spring will be a welcome addition. The stronger the better, but not so strong that we can’t bring our arm back down!

Corey Woods: I concur with everything that Ken wrote. I would only add that the level of adjustment is quick and the ease of putting the suit on when needed can be as quick as thirty seconds. We don’t always need the vest, but when we do, it might be at the last minute. A consideration for Boom Operators who find it hard to leave the set for fear of missing a rehearsal, lighting change, etc.

Mike Anderson: Recently, I had the opportunity to try one of the exoskeletons being tested for use by microphone Boom Operators. The EksoVest exoskeleton system was loaned to me for a ten-day stretch during production of the third season of Goliath, the Amazon series. Once Bryan Cahill gave me a quick run-through of how the vest works and we got it fitted, I put it to the test.

I found it had a surprisingly comfortable fit coupled with a very good range of motion. The vest has interchangeable springs that increase or reduce the vest’s lifting power. I started with the heaviest load lift thinking, “Hey, why not?” Once I really started to get used to it, I found myself almost fighting it.

When I switched the springs to a lower lift tension, I found the lower tension made all the difference and the system was extremely helpful. I suggest everyone booming give it a try. It can’t hurt, literally! I want to thank Bryan for all of the work he has done trying to find ways for us Fishpole Boom Operators to minimize the abuse we do to our bodies on a daily basis. If you think I’m wrong, you are kidding yourself. Maybe we can make these devices a standard on set. After all, when was the last set you were on where the camera operators didn’t have an Easy Rig on standby?

71st Emmy Nominations

71st EMMY Award Nominations for Sound Production

Nominations for the 71st Primetime Emmy Awards were announced Tuesday, July 16, 2019. The awards show will be held on Sunday, September 22, at the Microsoft Theater in Los Angeles.

Local 695 congratulates all the nominees!

Outstanding Sound Mixing for a Comedy or Drama Series (Half-Hour) and Animation

Barry  
“ronny/lily”
Nominees:
Elmo Ponsdomenech CAS, Jason “Frenchie” Gaya, Aaron Hasson,
Benjamin Patrick CAS
Production Sound Team:
Jacques Pienaar, Corey Woods, Kraig Kishi, Scott Harber, Christopher Walmer, Erik Altstadt, Srdjan Popovic, Dan Lipe

Modern Family
“A Year of Birthdays”    
 
Nominees:
Dean Okrand CAS, Brian R. Harman CAS, Stephen Alan Tibbo CAS
Production Sound Team:
Srdjan Popovic, William Munroe, Daniel Lipe

Russian Doll
“The Way Out” 
  
Nominees:
Lewis Goldstein, Phil Rosati
Production Sound Team:
Chris Fondulas, Bret Scheinfeld,
Patricia Brolsma​

The Kominsky Method
“An Actor Avoids”

Nominees: Yuri Reese, Bill Smith,
Michael Hoffman CAS
Production Sound Team:
Rob Cunningham, Tim Song Jones, Jennifer Winslow, Sara Glaser

Veep
“Veep”  

Nominees: John W. Cook II, William Freesh, William MacPherson CAS
Production Sound Team:
Doug Shamburger, Michael Nicastro, Rob Cunningham, Glenn Berkovitz, Matt Taylor

Outstanding Sound Mixing for a Comedy or Drama Series (One Hour)

Better Call Saul
“Talk”

Nominees:
Larry Benjamin, Kevin Valentine, Phillip W. Palmer
Production Sound Team:
Mitchell Gebhard, Steven Willer

Game of Thrones
“The Long Night”

Nominees: Onnalee Blank CAS, Mathew Waters CAS, Simon Kerr, Danny Crowley, Ronan Hill
Production Sound Team: Guillaume Beauron, Andrew McNeill, Jonathan Riddell, Sean O’Toole, Andrew McArthur, Joe Furness

Ozark  
“The Badger” 
 
Nominees:
Larry Benjamin, Kevin Valentine, Felipe “Flip” Borrero, Dave Torres
Production Sound Team:
Jared Watt, Akira Fukasawa​

The Handmaid’s Tale  
“Holly”

Nominees: Lou Solakofski, Joe Morrow, Sylvain Arseneault
Production Sound Team:
Michael Kearns, Erik Southey,
Joseph Siracusa

The Marvelous Mrs. Maisel  
“Vote for Kennedy, Vote for Kennedy”

Nominees: Ron Bochar CAS, Mathew Price CAS, David Bolton, George A. Lara  
Production Sound Team:
Carmine Picarello, Spyros Poulos, Egor Pachenko, Tammy Douglas

Outstanding Sound Mixing for a Limited Series or Movie

Chernobyl
“1:23:45”

Nominees:
Stuart Hilliker, Vincent Piponnier
Production Sound Team:
Nicolas Fejoz, Margaux Peyre​

Deadwood
Nominees:
John W. Cook II, William Freesh, Geoffrey Patterson CAS
Production Sound Team:
Jeffrey A. Humphreys, Chris Cooper

Fosse/Verdon
“All I Care About Is Love”  

Nominees: Joseph White Jr. CAS, Tony Volante, Robert Johanson, Derik Lee
Production Sound Team:
Jason Benjamin, Timothy R. Boyce Jr., Derek Pacuk​

True Detective
“The Great War and Modern Memory”

Nominees:
Tateum Kohut, Greg Orloff, Geoffrey Patterson CAS, Biff Dawes
Production Sound Team:
Jeffrey A. Humphreys, Chris Cooper

When They See Us
“Part Four”

Nominees: Joe DeAngelis,
Chris Carpenter, Jan McLaughlin
Production Sound Team:
Brendan J. O’Brien, Joe Origlieri,
Matthew Manson

Outstanding Sound Mixing for Nonfiction Programming
(Single- or Multi-Camera)

Anthony Bourdain: Parts Unknown  
“Kenya”

Nominee:
Brian Bracken

Free Solo
Nominees:
Tom Fleischman CAS, Ric Schnupp, Tyson Lozensky, Jim Hurst

FYRE: The Greatest Party That Never Happened  
Nominee:
Tom Paul

Leaving Neverland
Nominees:
Matt Skilton, Marguerite Gaudin

Our Planet
“One Planet”

Nominee:
Graham Wild

Outstanding Sound Mixing for A Variety Series or Special

Aretha: A Grammy Celebration for the Queen of Soul
Nominees: Paul Wittman, Josh Morton, Paul Sandweiss, Kristian Pedregon, Christian Schrader, Michael Parker, Patrick Baltzell
Production Sound Team: Juan Pablo Velasco, Ric Teller, Tom Banghart, Michael Faustino, Ray Lindsey

Carpool Karaoke
“When Corden Met McCartney
Live From Liverpool”

Nominee: Conner Moore
Production Sound Team:
Renato Ferrari, Adam Fletcher, Mick Haydock​

Last Week Tonight With John Oliver
“Authoritarianism”

Nominees: Steve Watson, Charlie Jones,
Max Perez, Steve Lettie​

The 61st Grammy Awards
Nominees: Thomas Holmes, Mikael Stuart, John Harris, Erik Schilling, Ron Reaves, Thomas Pesa, Michael Parker, Eric Johnston, Pablo Mungia,
Juan Pablo Velasco, 
Josh Morton, Paul Sandweiss, Kristian Pedregon, Bob LaMasney
Production Sound Team: Michael Abbott, Rick Bramlette, Jeff Peterson, Andrew Fletcher, Barry Warwick, Andres Arango, Jason Sears, Billy McKarge, Peter Gary, Doug Mountain, Robert Wartenbee, Brian Vibberts, Brian T. Flanzbaum, Jimmy Goldsmith, Steven Anderson, Craig Rovello, Bill Kappelman, Kirk Donovan, Peter San Filipo, Ric Teller, Michael Faustino, Mike Cruz, Phil Valdivia, Damon Andres, Eddie McKarge, Paul Chapman, Alex Hoyo, Bruce Arledge, Hugh Healy, Steve Vaughn, Corey Dodd, Michael Hahn, Roderick Sigmon, Christopher Nakamura, John Arenas, Niles Buckner, Trevor Arenas, Bob Milligan

The Oscars
Nominees: Paul Sandweiss, Tommy Vicari, Pablo Mungia, Kristian Pedregon, Patrick Baltzell, Michael Parker,  Christian Schrader, Jonn Perez, Tom Pesa, Mark Repp, Biff Dawes


BAFTA Television 2019

Winners of the British Academy Television Craft Awards, May 12, 2019

Sound: Fiction

Winner: Killing Eve
Sound Mixer: Steve Phillips
First Assistant Sound: John Lewis Aschenbrenner
Second Assistant Sound: Jack Woods

A Very English Scandal
Sound Mixer: Alistair Crocker
Boom Operator: Jo Vale
Second Assistant Sound: Dave Thacker
Second Assistant Sound: Emma Chilton

The Little Drummer Girl
Sound Mixer: Martin Beresford
First Assistant Sound: Lee James
Second Assistant Sound: Julian Bale
Sound Trainee: Rob Scown

Bodyguard
Sound Mixer: Simon Farmer
Boom Operator: Andrew Jones
Assistant Sound: Craig Conybeare
Sound Trainee: Ross McGowan


Sound: Factual

Winner: Later … Live with Jools Holland
Sound Supervisor/Mixer: Mike Felton
Assistant Sound Supervisor/Dubbing Editor: Tudor Davies
Floor Crew: Ian Turner
Floor Crew: Chris Healey
Floor Crew: Carol Clay
FOH Mixer: Gafyn Owen

Michael Palin in North Korea
Sound Mixer: Doug Dreger
Dubbing Mixer: Rowan Jennings

Classic Albums: Amy Winehouse Back to Black
Sound Recordist: Steve Onopa
Dubbing Mixer: Kate Davis

Dynasties: Chimpanzee
V.O. Recordist: James O’Brien
Sound Editors: Tim Owens, Kate Hopkins
Dubbing Mixer: Graham Wild

Names in bold are Local 695 members

The Way We Were: Sound Mixing Equipment (Part 4)

Overview

Just as centuries past have served to mark advances in technology, the year 2000 has seen a decided shift in the approach taken toward film sound recording. Analog tape is gone, along with the traditional concept of mixers and recorders functioning as separate devices. Gone as well are mixers that actually have a signal path and control section carrying audio, replaced instead by DSP technology. Similar to larger consoles used in music recording, broadcast, and live sound, portable film mixers now function primarily as control surfaces, with the actual audio section being part of a separate I/O rack, or in the recorder itself. And with the proliferation of AES, TDIF, and DANTE interfaces, in many cases there is very little analog audio to be found at all.

The advantages of this approach for film sound recording are significant. No longer do consoles require a dedicated channel strip, with numerous controls and associated components for signal processing functions such as EQ, filtering, and limiting. All signal routing (buss assigns, solo, panning) is similarly handled by DSP. All that is required of each input is a chip set that allows these various functions to be controlled by an external signal that is tied to a primary data buss. And since it isn’t always necessary to have all of those controls individually accessible for every input channel, space requirements (as well as cost) can be reduced by using a common set of controls to address the individual channels via a selector switch. While some sound mixers prefer the dedicated controls that are the hallmark of analog mixers, it can’t be denied that the DSP approach provides for a range of features that would be difficult to implement in a compact footprint. Additionally, the ability to save primary settings is a huge plus when changing setups.
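To illustrate the idea in the plainest terms, here is a conceptual sketch (not any manufacturer’s actual protocol): the surface carries no audio at all; a fader move becomes a small parameter message on the data buss, and the DSP engine applies it to whichever input channel is being addressed.

```python
# Conceptual sketch of a control surface driving a DSP engine.
# The message format and parameter names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ControlMessage:
    channel: int        # which input the (possibly shared) control set is addressing
    parameter: str      # "fader", "trim", "hpf", "pan", ...
    value: float        # normalized 0.0-1.0 position reported by the surface

@dataclass
class DspChannel:
    params: dict = field(default_factory=lambda: {"fader": 0.75, "trim": 0.5, "hpf": 0.0, "pan": 0.5})

    def apply(self, msg: ControlMessage) -> None:
        self.params[msg.parameter] = msg.value   # DSP updates its processing coefficient

channels = [DspChannel() for _ in range(12)]     # twelve inputs, no per-channel hardware required

# Moving fader 3 on the surface becomes one small message on the data buss:
msg = ControlMessage(channel=3, parameter="fader", value=0.62)
channels[msg.channel].apply(msg)
print(channels[3].params["fader"])               # 0.62
```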

As noted in our previous installment, one of the first portable mixers for film use that adopted this approach was the Zaxcom Cameo, introduced in late 1999. Since that time, there has been a steady stream of developments by Zaxcom, Sound Devices, Sonosax, Allen & Heath, Zoom, and other manufacturers, all of which take a similar approach when it comes to treating the mixer as an adjunct control device to the recorder.

Here is a look at what is currently on the market, and some thoughts about where it might be headed:

Aaton Cantaress

Aaton Cantaress mixing surface. Note inline meters located above channel strips. (Courtesy Aaton)


Conceived by Jean Pierre Beauviala, the Aaton Cantaress is a 12-input mixer designed to work in conjunction with the Aaton Cantar X3 and Cantar Mini recorders. In a departure from the approach used on many other mixer surfaces, the Cantaress employs magnetic faders, which avoid the dirt and debris problems that plague the mechanism and resistive element of conventional faders. Additionally, the connection to the recorders is handled over an Ethernet connection as opposed to a USB port. Similar to the Zaxcom Mix-16, it also sports dedicated LED metering for each input channel, which provides the user with a ready display of levels. Power (12-17 VDC) is provided separately via an XLR-4 connector.

Sound Devices CL-9

Sound Devices CL-9. Now discontinued, this was the first mixing surface introduced by Sound Devices.


After the Zaxcom Cameo, one of the next entries in the realm of mixing surfaces was the (now discontinued) CL-9, introduced by Sound Devices and designed as a dedicated mixing surface for the 788T series recorders. Connected to the 788 via a USB cable, the CL-9 included many features found on traditional analog mixers, including 100mm linear faders, dedicated gain trim controls, two aux sends, parametric EQ, and two bi-directional boom comm channels. Power was provided via the USB port on the recorder, and the EQ functions mimicked the EQ features already included on the 788.

Zaxcom Mix-16

Zaxcom Mix-16, a 16-channel mixing surface intended to interface with the Deva 16 recorder.


Introduced in 2018, the Zaxcom Mix-16 is a 16-channel mixing surface intended to interface with the Deva 16 recorder. With linear faders, dedicated input channel metering, buss assignment, PFL and gain trim controls, the Mix-16 provides many features found on traditional analog mixers, but also has some capabilities that can’t be found on analog consoles, such as individual input channel delay, grouping functions, plus the ability to control the primary functions of the Deva 16 recorder. Additionally, provision is made to control the gain of Zaxcom radio mic systems via the Zaxnet control interface, enabling the mixer to instantaneously control transmitter gains for the corresponding inputs. Power for the mixer is supplied by a separate power source of 8-18 VDC.

Sound Devices CL-6

Sound Devices CL-6. While technically not a mixing surface, this add-on unit provided expansion capabilities for the 664 and 688 series recorders.


While not really intended as a fully featured mixing surface, the CL-6 functions as an expansion device for the 664 and 688 series recorders, providing additional input and control capabilities for inputs seven through twelve. Equipped with rotary faders, it allows the user to control the input gain with dedicated controls, as opposed to the small trim knobs on the recorder.

Sound Devices CL-12

Sound Devices CL-12. This full-function mixing surface provides additional capabilities for the 6 series recorders.


Conceptually similar to the Sound Devices CL-9, the CL-12 mixer is intended as an adjunct to the Sound Devices 6 series recorders, and likewise functions primarily as a control surface. The mixer provides independent fader control for twelve inputs, giving the user control of level, input gain, phase, HP filters, and (when used with the 688 recorder) 3-band equalization. It also provides control of recorder functions, along with monitoring and talkback communication functions for two boom operators. When used with a 688 recorder equipped with the SL-6 SuperSlot wireless receiver option, it is also possible to control the features of the wireless receivers. While the mixer interfaces with all the 6 series recorders via the USB port, power needs to be supplied externally for the 633 and 664 series recorders.

Sonosax SX-ST

Sonosax SX-ST mixer/recorder. A unique combination of an analog mixer with digital recorder.


Although the Sonosax SX-ST mixer is technically not a mixer surface, the inclusion of an onboard recorder as part of an analog mixer is a unique approach to integrated mixer/recorder products. The recording system can be provided as part of the standard SX-ST mixer line, which can be ordered with eight to twelve inputs. The SX-ST mixer utilizes eight program mix busses, which can be assigned to both the recorder and the analog or digital outputs. The input channels are equipped with separate limiters, 3-band equalization, and direct outs (channel inserts are optional). It can be powered via internal D batteries, or an external 10.5 VDC to 20 VDC power source. Prized by many production mixers for its sonic quality, the Sonosax is the only mixer/recorder combo to provide an analog mixer with a digital recorder.

Zoom LiveTrack L-12 Mixer/Recorder

Zoom LiveTrack L-12 mixer/recorder. Aimed at the music recording market, the L-12 is a 12-input mixer with 24-bit 96 kHz recording capability.


While the Zoom LiveTrack L-12 would not typically be found on a film set (due to its lack of timecode, Scene/Take/Note metadata functions, and other limitations), I am including it here as an example of how far this technology has come. Sporting twelve tracks, twelve inputs, and 24-bit 96 kHz recording capability, along with basic equalization, at a $600 street price, it certainly is something to be reckoned with. Thirty years ago, one would have needed at least a one-inch 8-track recorder and a separate mixer to duplicate the basic functions that this unit provides (of course, the mic preamps and other features may not be up to the same standards one would expect). Still, it’s hard to ignore what can be accomplished at this price point.

The Future

So, in the span of about six decades, we have gone from analog mixing consoles equipped with large rotary faders, requiring significant amounts of power (and with very few features), down to small digital control surfaces that have virtually no analog signal path, whose primary function is as an outboard control for the recorder. While the concept of a standalone mixer has not completely disappeared from the landscape (especially for larger channel counts), it is clear that when it comes to portable systems destined for location shooting of film and TV productions, integrated mixer/recorder systems will continue to be the primary path of development.

With the ability now to integrate wireless mic systems into the control path, we will undoubtedly see an expansion of the features related to RF systems (for example, the ability to load a full set of channel presets for both transmitters and receivers, and have real-time monitoring of the system as part of the control surface). As the trend toward miniaturization continues, the primary limitation will likely be that of the human interface, not technology itself. A sound mixer still has only ten fingers and two feet, so until the “direct brain interface” comes along, where one can control a mixer by thought, we are probably close to the limit as to how many channels can be packed into a given footprint. But I could be wrong…

©2019 by Scott D. Smith CAS, all rights reserved.

My Godlike Reputation Part 1

HOW I GOT MY GODLIKE REPUTATION PART 1

(A tutorial for those without half a century in the business, and a few with)

by Jim Tanenbaum CAS

There is more than one way to record good production sound, but there are millions of ways not to. Over the years, many fine production mixers have written articles about their guiding philosophies and recording methods.

After rereading mixer Bruce Bisenz’s story in the 695 Quarterly Winter 2015 Issue, now Production Sound & Video, I finally decided to add my 2 dB’s worth. Many good production mixers have elements of their modus operandi in common, and others that are unique to the particular individual. So do I.

Whatever I’m recording: dialog, effects, music, ambience, wild lines—I consider them all just noise. Different kinds of noise to be sure, but when all is said and done, just noise. When I started out back in the late ’60s, I thought my job was to record these noises as accurately as I could so that their playback would sound exactly like the original. Do you remember: “Is it real or is it Memorex?”

I soon learned that that wasn’t such a good idea.

WHAT I HAVE IN COMMON WITH OTHER GOOD MIXERS is (or should be) obvious:

EQUIPMENT

Think about what kind of equipment you’re going to buy when setting up your cart. First, rent one of each possibility and play with them for a week or so. Which unit feels “right”?

In the good old (analog) days, there was basically only one recorder (Nagra), just a few mix panels (Cooper, SELA, Sonosax, Stellavox), and a few radio mikes (Audio Limited, Micron, Swintek, Vega). They were simple, and similar enough that if I got a last-minute call to replace someone, I didn’t have to think twice about using their gear. (Though I did make up and carry adapter cables so I could use my favorite lavs with their transmitters for actors or plant-mike situations.) Now, I have to ask what’s in their package—I would be hopelessly lost with a Cantar.

Sound Devices and Zaxcom both make top-notch recorders. I prefer Zaxcom’s touch-screen Fusion-12 and Deva-24 to any of the scroll-menu CF/SD-card flash-memory recorders by Sound Devices or even Zaxcom’s Nomad and Maxx, but other first-rate mixers feel just the opposite. My brain’s wiring finds the touch screen’s layout more intuitive, and helpful if I suddenly need to do something I don’t do often (or ever).

After you’ve acquired all your gear, you need to spend a great deal of time familiarizing yourself with it. Your hands need to learn how to operate everything without your head having to think about it. Likewise, the connection between your ears and your fingers needs to work without conscious intervention (most of the time). You need to calibrate your ears so you don’t have to watch the level meters constantly because with the new digital or digital-hybrid radio mikes, you can’t tell just by listening when the transmitter battery is getting low, or an actor is getting almost out of range. You have to scan all the receivers’ displays instead, to see the transmitter-battery-life remaining or the RF signal strength. You also need to keep an eye on your video monitors to warn your boom op when he or she is getting too close to the frame line.

Speaking (or writing, in this case) of your ears, you need to protect them—you can’t do good work without them. For most of my career, I used the old Koss PRO-4A and then PRO-4AA headphones because of their superior isolation of outside noises, so I know if something is on the track or just bleeding through the cups, without having to raise the headphone volume. You should lock off that control at a fixed level, and only change it under very unusual circumstances. If you find yourself straining to hear toward the end of the day and turn up the level, that is a sign of aural fatigue, and an indication that your regular listening level is too high. “Ringing” in your ears is a far more serious warning sign—it may go away, but the damage has already been done. The insidious nature of the damage is that it won’t manifest itself for decades. The PRO-4AA’s air-filled pads fail after a time, and need to be replaced periodically—they either leak or become too stiff. I’ve gone through almost a gross of them and don’t know if I can get any more, but fortunately, there are new headphones available from Remote Audio, which are even more isolating, and I have recently started using them.

Carry a spare for everything, and for “mission critical” items, carry a spare spare. If the original unit fails from an external cause, you may not discover the problem until it happens again while you are investigating. While it is nice to have an exact duplicate for the spare, some very expensive items can be backed up with a lesser device that will do until a proper replacement can be obtained. (The Zoom F8n recorder is a prime example: ten tracks, eight  mike/line inputs, timecode, metadata entry … all for less than $1,000. The TASCAM HS-P82 at twice the price is even better if you can afford it.) IMPORTANT: You may need adapter cables to patch the different backup unit into your cart—always pack them with the unit! (And have a spare set of cables.)

Check out all your gear before shooting begins. Were batteries left in a seldom-used unit and have corroded the contacts? Or worse, the circuitry? Is a hard-to-get cable that is “always” stored in the case with a particular piece of equipment missing? This is even more important with rental items.

Have manuals for every unit available on the set at all times. Not only for problems that arise, but also if you need some arcane function you have never used before. PDFs on your laptop are extremely convenient, but a hard copy under the foam lining in the carrying case can be a lifesaver if a problem arises when you can’t get to your computer. (If not the original printed version, be sure that any copy is Xerox or laser-printed, not a water-soluble inkjet copy.)

Don’t forget to research sound carts as well, and at least look at all the different styles at the various dealers’ showrooms. There are vertical and horizontal layouts, enclosed and open construction, different wheel options, etc. Over the years, my preferences have changed several times, first, because of the larger productions I recorded, and later, because of shortcomings I discovered in new situations.

My first cart, a Sears & Roebuck tea cart, and no more room


I started with a folding Sears & Roebuck tea cart. It was light and folded flat, which made it easy to store and transport, and quick to set up and wrap. It also let me work in small spaces. Unfortunately, it wasn’t designed for the rigors of production, and the plastic casters broke early on, followed by failure of the spot-welded joints. I replaced the casters with industrial ones, and brazed all the joints. I still use it for some one-man-band shoots. You can see its major deficiency: lack of real estate.

Giant tea cart on The Stunt Man with director Richard Rush


But I liked the concept, so I had a custom cart of the same design manufactured by a company that made airline food-service carts. This solved the lack of space, but created the problem of needing a lot of room to set up shop. It was so big, it had to lie flat on top of all my other gear in the back of my 1976 International Harvester Scout II (an SUV before they were called that). It also shared the same problems as my first folding cart: stuff bounced off when it rolled over rough surfaces; rain was a threat, especially when it came on suddenly; and all the gear had to be unpacked and interconnected every morning, then packed away again at wrap.

My current shipping-case cart, built in 1979, and no more cables!


My final cart solved all these issues. In 1979, I designed a tall shipping case that had all the equipment built in and permanently connected. All I had to do to set up was remove the front cover and attach the antenna mast assembly. It was narrow enough to fit easily through a 24-inch-wide doorway. The height of the pull-out mix panel allows me to mix while standing (a good idea when car stunts are involved) or seated in a custom-made chair. And I can stand up on the chair’s specially-reinforced foot rest to see over anyone standing in front of me. The cart is completely self-contained, with 105 amp-hours of SLA batteries. The extra set of tires lets me travel horizontally over rough ground, and the dolly can be unbolted to use separately if needed. The only problem remaining is the 325-pound weight.

Here is a lightweight alternative vertical design belonging to Chinese mixer Cloud Wang. Whatever cart style you choose initially, be prepared to replace it after you have gained some experience, and perhaps again … and again … and…

Chinese mixer Cloud Wang’s vertical lightweight cart


If you’re going to work out of a bag, rent various rigs, fill the pouch with thirty pounds of exercise weights, and wear it for many hours. Experiment with changing the strap tension, bag mounting height, and all the other variables, including different-sized carabiners to attach the loads. A hip belt is a necessity to distribute the weight and reduce the pressure on your shoulders. Nothing available worked perfectly for me, so I wound up buying both Porta-Brace and Versa-Flex rigs and creating a Frankenstein from the top of one and the bottom of the other.

Wireless transmitters both numbered and color-coded


The organization of your stuff is Paramount (or Warner or Disney or…) for efficient operation and avoiding errors, especially in stressful situations. I use both colors and numbers, according to the RETMA Standard (which is used for the colored bands on electronic components): 0=Black, 1=Brown, 2=Red, 3=Orange, 4=Yellow, 5=Green, 6=Blue, 7=Violet, 8=Grey, 9=White. My radio mikes and mixer pots are all color-coded, as are all my same-sized equipment cases, a scheme that allows my brain’s right side to take some of the load off its left side. Note that I have permanently taped off the unused faders on the mixer’s right side, and temporarily taped off the Channel-4 fader of an actor who’s not in the shot, using the label from his radio mike receiver. I also taped over that receiver’s screen for good measure. I can tape the faders off at the top, full-open position, to keep them out of the way, because the Cooper mixer can power off individual channels. I also label the faders with the character names on the front edge of the mixer. I usually don’t have to highlight an individual character’s dialog on my script sides, but if I do, I use the same color as their channel. “Constant Consistency Continually” is my motto.
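
If you like the idea of color-coding but would rather let a script spit out the labels, the digit-to-color mapping above is trivial to put in a lookup table. Here is a minimal Python sketch, purely for illustration; the function name and the printed label are my own invention, not part of any standard workflow:

# RETMA/EIA color code, as listed above: digit 0-9 maps to a band color
RETMA_COLORS = ["Black", "Brown", "Red", "Orange", "Yellow",
                "Green", "Blue", "Violet", "Grey", "White"]

def channel_color(channel):
    """Return the RETMA color for a single-digit channel number."""
    if not 0 <= channel <= 9:
        raise ValueError("the RETMA color code only covers digits 0-9")
    return RETMA_COLORS[channel]

# Example: radio mike 4, its receiver, and its mixer pot all get the same tape
print(channel_color(4))   # -> Yellow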

In addition to numbering cases, you need to label their contents somehow, either by category (e.g., “CABLES”) or more detailed contents. Whatever system you decide on, it needs to be intuitive and quickly learned because you may be using different crews from job to job.

When brand-new gear doesn’t work out of the box


I do a lot of my business with one major equipment dealer and rental house, but I make it a point to buy a fair amount of stuff from another company as well. Not only does this keep both of them competitive on prices, but if I’m in the middle of the Sahara Desert and a camel kicks over my sound cart, I’m not limited to what one of them has in stock for immediate replacements. (Not an unrealistic example—in Morocco, my local third person accidentally dumped the sound cart over in the street. Fortunately, it was rental gear.) Have you ever tried to set up a high-limit credit account on the spot, over the phone, with a company you’ve never done business with before? (Besides, I get twice as many free T-shirts, hoodies, and baseball caps.)

Don’t neglect to visit smaller dealer/rental houses as well. They may be willing to take more time to help you learn the equipment, or open the store at 3 a.m. Sunday to handle a last-minute emergency.

If you’re just starting out and can’t afford to buy everything at once, rent the recorder and radio mikes. They rent for a small fraction of their purchase price, and they are the items that evolve most rapidly. WARNING: Don’t buy a new model as soon as it comes out—the first few production runs sometimes have problems that require a hardware fix that is expensive or impossible! This happened to me twice—I didn’t learn the first time. My first-run Vega diversity radios were in the shop more than on the set for several years. My first-run StellaDAT was so unreliable, I was happy when it got stolen. Also, if a well-established product is suddenly offered by the manufacturer at a discount, it may be about to be replaced by a new model—this happened to me recently after I bought $8,000-plus of name-brand lavs.

Equipment insurance is as important as the gear itself, but it would take an entire article to do the subject justice.

PRE-PRODUCTION

If you don’t know the director, research their shows on IMDb, and watch some of them to get a feel for the director’s style and technique. Talk to people who have worked with her or him.

Read the script as soon as possible, looking for scenes that might have challenges for the Sound Department, or an opportunity for you to make an esthetic contribution to the project.

Speak with the director at the earliest opportunity to discover what his or her feelings are regarding sound and its relation to the project. The director may have sound ideas that are impractical, if not outright impossible, but saying: “I can’t do this,” is always a bad idea. I prefer saying: “It would be even better if you did this instead.” If I can convince the director it was her or his own idea, so much the better, because they will be less likely to fight me later on.

The most important question to ask the director is: “What do you want me to do if there is a sound problem during a take?” (It probably won’t be “Jump up and yell ‘Cut!’”)

The second most important question is: “What do you want me to do if I need an actor to speak up?”

Believe it or not, the “Don’t bother me with sound problems—I’ll loop it” attitude is by far NOT the worst. If the director doesn’t care about the production sound, that leaves me free to do whatever I want, so long as I stay out of their way.

The worst type is the director who looks over my shoulder and tells me what to do. Or the producer—fortunately, I found out about him before I accepted the job (to replace their “bad” mixer) and turned it down. The show lasted five more miserable (for sound) episodes before it was cancelled—I talked to the replacement mixer afterward. If I find myself stuck with one of these shoots, I ask the director either: “Why did you hire me if you wanted to mix the show yourself?” or as many questions as I can think of about every shot, even when the director doesn’t come over to my cart first. This results in either: A, my being fired; or B, being left alone for the rest of the shoot. So far, I haven’t been fired.

If you are not familiar with the DP, gaffer, and/or key cast members, research their attitudes toward sound by interviewing other mixers who have worked with them. (Search IMDb for the info.)

If you can’t get any of your regular crew people, be careful about accepting recommendations from other mixers. Be particularly skeptical if they won’t or can’t tell you what the person’s weaknesses or shortcomings are—everyone has some. And personalities are important—a detail-oriented utility person may be perfectly compatible with one mixer but annoying to another. (Again, this isn’t a made-up situation. I did a TV pilot with a “highly-recommended” 3rd person who turned out not to know how to do anything “my way,” and who needed a very long learning curve to get up to speed.)

Even crew you have used before need to be vetted if they haven’t worked for you recently. They may have changed their styles from working for other mixers, or just been away from the business for several years.

Go on all the location scouts. (Of course, you should make every effort to convince the production company that your presence there will be worth far more than what they pay you.)
I know that when I see a practical location next to an automotive tire and brake shop and under the LAX flight pattern, the UPM will respond to my request for an alternate venue with: “The director likes the look, it’s easy to get the trucks in and out, and the rent is cheap.”


The reason I go anyway is to get a head start on solving the sound problems I find, so that on the day I will have what I need. For example, a courtyard with a dozen splashing fountains may need two hundred square feet of “hog’s hair” and a hundred bricks to support it just above the water’s surface, and that is not likely to be available at a moment’s notice from the Special Effects Department.

PRODUCTION

If they are being used on the show, go to the dailies (“rushes” for those of you on the East Coast) whenever possible. Besides the possibility of a free meal, you will have a chance to judge your work without the distraction of recording it at the same time. I find it usually sounds better than I remember it—if it sounds worse (though still “good enough”), I need to find out why. Also, if someone questions some aspect of the production track, I am there to explain it before the Sound Department gets blamed for something that wasn’t its fault—a bad transfer, for example.


Sadly, the pace of modern production often eliminates regular screenings of the footage in a proper theater for the director and keys; instead, the director and DP look at a DVD or flash drive of the shots on a video monitor between setups, when they can spare the time. Still, attend this if you can.

Make friends with the Teamsters early on. When they come around to get used batteries for their kids’ toys, give them a box of new ones instead. Then, if you need the genny moved farther away, they will be more cooperative, especially if you tell them as soon as you get to the set.

Make friends with the electricians early on. When they come around to get used batteries for their kids’ toys, give them a box of new ones instead. Then, if you need the genny moved farther away, they will be more cooperative in stringing the additional cable, especially if you tell them as soon as you get to the set.

Make friends with the grips early on. When they come around to get used batteries for their kids’ toys, give them a box of new ones instead. Then, when you need a ladder for your boom op, or a courtesy flag to shade your sound cart… Ditto for props, wardrobe, and all the other departments.

WHAT I DON’T HAVE IN COMMON WITH OTHER MIXERS isn’t obvious:

EQUIPMENT

Many mixers require their equipment to “earn its keep.” They won’t buy a piece of gear that they may never (or seldom) use. I have a different philosophy: if there is a gadget that will do something that nothing else I have will do, that is reason enough to acquire it. (And one element of my godlike reputation.) Some examples:

1. I have several bi-directional (Figure-8) boom mikes and lavaliers, even though they are not commonly used in production dialog recording (except for M-S, which itself is rarely needed). But their direction of minimum sensitivity (at 90° off-axis) has a much deeper notch than cardioids, super-/hyper-cardioids, or shotguns. On just two occasions in over half a century, they have allowed me to get “good enough” sound under seemingly impossible conditions. The US Postmaster General was standing on the loading dock of the Los Angeles Main Post Office while surrounded by swarming trucks and forklifts and shouting employees, which completely drowned out his voice on the omni lav. I replaced it with a Countryman Isomax bi-directional lavalier, oriented with the lobes pointing up and down. This aimed the null between them horizontally 360° and reduced the vehicle noise to the correct proportion to match the visuals. Since we didn’t see his feet, I was able to have him stand on a pile of sound blankets to help deaden the pickup from the rear, downward-facing lobe. (Of course, afterward, the director asked, “Why don’t you use that mike all the time?” Then I had to explain about all types of directional mikes’ sensitivity to clothing and handling noise and wind.)
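
To put rough numbers on that deeper notch, the idealized first-order polar equations from any textbook make the point. The short Python sketch below uses those textbook patterns (cardioid = 0.5 + 0.5·cos θ, figure-8 = cos θ), not the measured response of any particular microphone, so treat the figures as illustrative only:

import math

# Idealized first-order polar patterns; on-axis response = 1.0
PATTERNS = {
    "cardioid":      lambda t: 0.5  + 0.5  * math.cos(t),
    "supercardioid": lambda t: 0.37 + 0.63 * math.cos(t),
    "figure-8":      lambda t: abs(math.cos(t)),
}

def off_axis_db(pattern, degrees):
    """Response in dB at a given off-axis angle, relative to on-axis."""
    r = PATTERNS[pattern](math.radians(degrees))
    return float("-inf") if r < 1e-6 else 20 * math.log10(r)

for name in PATTERNS:
    print(f"{name:14s} at 90 degrees off-axis: {off_axis_db(name, 90):6.1f} dB")
# The cardioid is only about -6 dB at 90 degrees and the supercardioid about
# -8.6 dB, while the ideal figure-8 null is essentially bottomless.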

2. I also have a small, battery-operated noise gate. While not suitable for use during production recording because the adjustment of the multiple parameters requires repeated trials, it has enabled me to make “field-expedient” modifications to an already-recorded track. I cleaned up a Q-track so a foreign actor wouldn’t be distracted by boom-box music and birds in the background while he looped it on location before flying back to his home country. I removed some low-level traffic that was disturbing a “know-nothing, worrywart” client on a commercial shoot and earned the undying gratitude of the director, who knew that it wasn’t a problem but couldn’t convince the client. (I also was able to close-mike some birds in the back yard and add them to cover the “dead air” between the words.)

3. Every time I find bulk mike cable in a color I don’t already have, I buy 50 feet and make up a 3-pin XLR cable. This allows me to hide them “in plain sight” by snaking them through grass (various shades of green), or running them along the side of a house where the wall meets the ground (various shades of brown for dirt and fifty shades of gray for concrete or asphalt). Wireless links have all but eliminated the need for cables, but in the rare case where they are needed…

Cindy Gess booming with the Cuemaster on Babylon 5


I read many trade magazines, and investigate any new piece of gear to see what it will do. I offer to beta-test equipment, like the Zaxcom Deva I or the Lightwave Cuemaster. I soon bought production models of both of them, and still use the Deva I (upgraded to a Deva II) for playback of music and the prerecorded side of telephone conversations. I had Rabbit Audio upgrade my Cuemaster, too. Boom op Cindy Gess used the beta Cuemaster on Babylon 5: The Gathering, where a walk-and-talk in a narrow aisle reversed direction while she stayed behind the camera for the entire shot. The mike had to be almost horizontal to avoid footsteps on the plywood-supposed-to-be-metal floor, and swivel 180° to track the actors. I don’t need it often, but when I do…

I know that the mike doesn’t always have to be pointed directly at the actor’s mouth. Good cardioid (and super-/hyper-cardioid) mikes have an acceptance angle of about ±45° from the front, within which dialog won’t be audibly affected when played back in the theater. This lets my boom op orient the mike so that its minimum-sensitivity direction is aimed at a noise source while still picking up the actor’s voice acceptably. Have the boom op adjust the mike to minimize the noise first, then see if they can get the actor within its 90° acceptance cone.
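
The same textbook cardioid equation backs up that ±45° figure, at least for level (it says nothing about the off-axis coloration real mikes add). Here is a minimal sketch, again purely illustrative and not tied to any specific microphone:

import math

def cardioid_db(degrees):
    """Idealized cardioid (0.5 + 0.5*cos) response in dB, relative to on-axis."""
    r = 0.5 + 0.5 * math.cos(math.radians(degrees))
    return 20 * math.log10(r)

for angle in (0, 30, 45, 90):
    print(f"{angle:3d} degrees off-axis: {cardioid_db(angle):5.1f} dB")
# 0.0, roughly -0.6, -1.4, and -6.0 dB: within +/-45 degrees the level drop
# stays under about 1.5 dB, which is why dialog there still sounds acceptable.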

Text and pictures ©2019 by James Tanenbaum, all rights reserved.
Editors’ note: Article continues with Part 2 of 3 in our Summer edition.
