
Bruce Bisenz: His Personal Best

by David Waelder
Photos courtesy of Bruce Bisenz

Robert Towne: (recalling an interview prior to hiring him for Personal Best) What got me about Bruce—he did a movie about horse racing and I remember asking him about how he set up the sound on that movie. He told me there was nothing that he had seen or heard [in other horse racing films] that was unique in the way he figured it should be. He went out and recorded sound out on the track of the jockeys in the middle of a race and he said he’d never heard anything like it. It involved the way that the jockeys spoke and how significant [that] was and he described to me the ways in which it was different. I was fascinated because I felt that that’s the sort of thing that I wanted to do with track and field.

Jeff Wexler, CAS: I consider him somewhat of a mentor to me because anytime I was having any difficulty, or wanted to build something, or had to do a job that I didn’t really understand completely, I would always ask Bruce, well, how would you do this … and Bruce always had an answer. It often was not the answer that I would get from any other Sound Mixer …

Bruce Bisenz has a well-earned reputation as a technical wizard. He designed and built much of the equipment he used throughout his career and he personally performed bias and alignment calibration (not a simple task) for all his recorders. It is particularly remarkable that he is essentially self-taught with little or no formal training in electronics or sound recording practices.

Coming out of military service in 1967, he was unsure what to do next, but he had a good friend in David Ronne, who had already established a career in production sound. Bruce had an interest in hi-fi and had worked as an electronics technician, so David encouraged him to apply to FilmFair, where David had, until recently, been working as George Alch’s assistant.

He stayed with FilmFair about two years, replacing David as George Alch’s assistant. He learned everything he could about production sound recording from George and then moved up when George left. He was also involved in Post Production, making transfers and preparing tracks for mixing, an experience that helped develop a sense of what was needed and what worked on screen.

Although he was earning good money at FilmFair, Bruce only stayed another year and left to tour Europe for a network miniseries hosted by skier Jean-Claude Killy. Returning home, he found work on documentaries and corporate projects. His friend David Ronne was then heading the Sound Department for Wolper Productions and recommended him for assignments including a special with historians Will and Ariel Durant and documentaries with Jacques Cousteau.

David Ronne introduced him to the practice of working with a handheld Nagra and a shotgun microphone, starting with a Nagra III and an EV 642 and progressing quickly to a Nagra 4.2 and the Sennheiser 804. That combination was a game-changer at the time.

A recordist, working alone, could produce a quality track that had previously required several people and a truck full of equipment. It was also an excellent training ground; the immediacy of working directly with the recorder and a handheld microphone imparts a keen sense of how microphone position determines the sound.

During this period and his time at FilmFair, he read everything he could find about sound recording and the science behind it, making a vigorous effort to understand all of the factors that determined the characteristics of a recording. This practice of total-immersion investigation became a life habit. Portable radio transmitter/receiver sets were becoming more reliable, so Bruce wanted to make the lavalier microphones used with them sound more natural. Over time he determined the placement and EQ that would allow him to ‘Mix and Match’ with his fishpole microphones. This was especially important in the days of single-track dialog recording (no pre-fade backup tracks) when all microphones were mixed together.

Portable mixing panels with full parametric EQ were not available at that time, but David Ronne was building a device with potential. Ronne took out the guts of a Nagra and coupled a microphone preamp to a line preamp and bundled them together in a separate enclosure. Using this outboard preamp allowed feeding a third microphone to the two-input Nagra (or more if one daisy-chained the devices) and several Production Mixers built similar interfaces. (See the profile of Courtney Goodin in the Summer 2011 issue of the 695 Quarterly.) Bruce Bisenz took the design a bit further.

He recognized that the Nagra line output card was sufficiently hot to drive a passive Program or Graphic equalizer and still yield an output that could be recorded through the Nagra line input.

Bruce collaborated with his engineer friend, Paul Bennett, to custom-build a microphone mixer using Nagra cards with Bennett-modified Altec Program and Graphic EQs configured with the curve that Bruce specified to make radio microphones sound natural. They also fit custom 24 dB/octave high-pass and 18 dB/octave low-pass filters. They even hand-selected capacitors and other components. The capability of this mixer-equalizer coupled with his experiments in microphone placement gave him the tools to tackle nearly any recording challenge.
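
Technical note: slope figures like these map directly to filter order. As a rule of thumb, each pole of a filter contributes about 6 dB/octave of ultimate rolloff,

    \text{slope} \approx 6n\ \text{dB/octave}, \qquad n = \text{filter order},

so the 24 dB/octave high-pass is a fourth-order filter and the 18 dB/octave low-pass a third-order one: steep enough to cut rumble and hiss hard just outside the passband without eating into the dialog.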

Television commercials and documentaries were the sole beneficiaries of these skills for a long time, but a change in advertising practice nudged Bruce to change direction in his career. He had been happy working commercials, but the shift from sixty-second to thirty-second spots diminished the work days needed on commercials and encouraged him to seek out long-form work. He was hired to record Damnation Alley in 1977 after audaciously telling Jack Smight, who was directing the picture for Fox, that if Smight didn’t prefer his work to that of their regular Mixer (who was unavailable at the time), they should fire him. This was his first studio picture.

Glenn (Rusty) Roland: The Sound Department at Fox got the dailies in from the location recordings and were amazed at how totally perfect they were. [They] didn’t need any additional work ’cause Bruce, you know, was a perfectionist right on set.

Using all the tricks and specialty equipment he had developed for commercials, Bruce produced an excellent track that needed little adjustment in Post. For commercials, he had been a NABET mixer but this project gave him the IA Signatory days he needed to qualify for IATSE membership. After his acceptance into IATSE Local 695, Bruce was able to work on studio pictures.

He worked with Cinematographer John Alonzo on FM and it was Alonzo who recommended him to Director Martin Ritt for Norma Rae.

Norma Rae cemented his reputation as a Sound Mixer of remarkable ability. Much of the action took place in the din of a working textile mill and Ritt’s expectation was that Bruce would only be able to get a scratch track in that environment, but even that was not at all certain. On the location scout, he used a Radio Shack sound level meter and measured 103 dB on the machine room floor. That’s a deafening racket but not so loud that people couldn’t communicate. Mill workers wore custom-fitted ear protection in the machine room and he watched them as they would approach one another and speak directly into the other person’s ear. Even then, only the person listening could hear what was said; it was essentially private communication. He had principal actors fitted for ear protection by the mill and specified that the plugs should be molded around his miniature microphones. Rather than stringing the earplugs on a cord, he sourced especially thin microphone wire and used that both as a neck-loop and to carry signal to the transmitter. Ritt naturally staged the action to match normal behavior in the machine room and the actors would holler their dialog into a microphone only an inch or so from their lips. While the results didn’t have the quality needed for a production track, they were quite sufficient as a guide track.
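
Technical note: a level in dB SPL converts to sound pressure against the standard 20 µPa reference, so the reading Bruce took works out to

    p = p_0 \cdot 10^{L/20} = 20\,\mu\text{Pa} \times 10^{103/20} \approx 2.8\,\text{Pa},

roughly 140,000 times the nominal threshold of hearing. Shouted speech an inch from a listener’s ear can reach comparable levels, which is broadly why the mouth-to-ear technique carried while ordinary conversation could not.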

Bruce made another key contribution to Norma Rae. Near the end of the film, Sally Field as Norma Rae has a confrontation with the management of the mill and is carted off by police officers. It’s a climactic scene with dialog from several characters and would be chaotic if characters could only communicate by screaming in each other’s ears, one on one. Bruce reviewed this with the Director and encouraged him to find a way to shut down the machines for that scene. Nothing of the sort was scripted but Bruce’s suggestion came a few weeks prior to filming the scene so Ritt had some time to consider the advantages. He and his writers structured the scene so that, after Norma Rae displays her “union” sign, the workers, one by one, shut down the machinery. The scene played very much as Crystal Lee Sutton, the actual Norma Rae, recalled it but it hadn’t been part of the first draft of the script. This work stoppage is arguably the key moment of the movie and intensely powerful.

Each project in a career brings its own set of challenges. Bruce evaluated each circumstance individually and adjusted his approach for the best result. He used whatever tools or techniques would produce a good track.

Nick Allen, CAS: It was so [much] fun to work with Bruce because he would use lots of tools. With Bruce, you’d open the truck and, which of the forty-seven microphones would you like to use today, kid?

Glenn (Rusty) Roland: Bruce was always doing that on sets, he would always hide microphones everywhere … he was always placing those huge, I guess they were Neumann, those huge microphones …

Nick Allen, CAS: He was putting U-87s in the middle of a set and cranking it and getting real dialog they’d use in the movies. He did the wackiest, most obscure things but, like you said, his ears said, you know what, they’ll use that in the mix …

In some cases, the simplest method was the best choice, but Bruce was not afraid to swim against common practice if that yielded results. For 10, there was a scene with Julie Andrews singing and Dudley Moore accompanying her on piano. Although he experimented with a plant for the piano, he ended up recording it off Dudley’s radio microphone. Post Production didn’t believe, at first, that the piano was recorded on a wireless, but he was fearless if a scene sounded good to him. Conversely, hiding microphones the size of a Buick, if they sounded good, was, for him, entirely normal.

Nick Allen, CAS: And he had a “keep trying” attitude. He taught me that if take one was wrong, put something else in on take two. When you find something that’s getting close, tweak it, don’t change. There was this path of methodology.

Regrettably, as Nick went on to say, the pace of production is now so relentless that the first take is often it and there may be no opportunity for adjustment. Whenever possible, Bruce was a bold experimenter in the pursuit of excellence. It’s a dangerous business to be running EQ in a shot—and changing it on the fly, no less! Multi-tracking was not an option at the time and there was a risk of over-compensating and spoiling a track. Nobody gets it right 100% of the time, but Bruce had an enviable batting average. He worked to maintain that record both by doing his preparation carefully, to be sure he knew what to listen for, and by keeping his hearing in top form.

Douglas Schulman, CAS: Another thing Bruce does, you know, I don’t know if he’s still doing it, but he would always wear earplugs in his ears when he wasn’t wearing his headphones.

Glenn (Rusty) Roland: Bruce was very protective of his hearing. If we were in a loud place, he’d have earplugs in or something. He did not want to get his ears damaged by bad, loud noises. He had incredible ears for sound.

Bruce regarded his ears as his primary instrument and took pains throughout his career to protect them.

Douglas Schulman, CAS: He didn’t have a problem, I mean, with teaching you something but Bruce was always funny. If he was going to show you something new, he would say, “Now, this is a secret. Don’t tell anybody.”

Nick Allen, CAS: I went to Berklee College of Music, very briefly, only for a couple years. I was studying production engineering and jazz piano and I didn’t learn as much there as I did from being around Bruce.

In the course of his work, Bruce acted as a mentor to several of his boom operators. He recalls a time with Nick Allen when they spent half a day listening to windscreens and cataloging how each one slightly altered tone and ambiance. This kind of attention to detail might seem obsessive but it provides the foundation of understanding that permits responding rapidly to challenges.

Douglas Schulman, CAS: The thing that I learned from Bruce is actually how to listen to stuff … we tend to, with our minds, focus on things and take things out and what I learned from Bruce was to listen more like a microphone which hears everything.

The summation or direction of a 37-year career isn’t often represented in a list of credits. This is especially true with crew people who don’t usually initiate projects but must accept or decline offers as they are available. Bruce Bisenz’s career was more eclectic than most, ranging from Reds, a grand historical vision spanning continents (he did the scenes shot in California), to intimate portrait films like Without Limits, the Steve Prefontaine story. He did performance films like Purple Rain with Prince and he continued to do documentaries like The Making of a Legend: Gone With the Wind. (I’m pleased to have worked with him on a few of the smaller projects like Legend.) Other highlights included Captain Eo and Smooth Criminal/Moonwalker, where he engineered the off-speed playback, not a common thing at the time, so that Michael Jackson could dance in slow motion and still be in sync with the music. The one common element of all these projects is that they all received his focused attention and considerable thought. Bruce never “walked” through an assignment; he evaluated each one to consider what an audience should hear on the track and worked to accomplish that. It was just this intelligence that Robert Towne recognized in that first interview with Bruce for Personal Best.

Glenn (Rusty) Roland: Oh yeah. I always thought Bruce was, he was just the best, I mean when you worked with him. It is different than others, that’s for sure, but in a very good way.

Robert Towne: You know, I just said, this is what I need and he somehow delivered it. I honestly can’t say enough good about Bruce in terms of what he brought to his work.

The first thing that Bruce said to me when I interviewed him was that a “successful career implies a successful retirement. If you die in harness, that’s not a successful career.” He’s been retired for eleven years now but continues to be active. He records a live swing band weekly, and the Jerome Robbins Dance Archive at the New York Public Library has accepted the photos of performing flamenco dancers he has made over the ten years of his retirement.


Interview Contributors

I thank Bruce Bisenz for making himself freely available and for supplying the images that illustrate his profile. I’m also grateful to the following colleagues who made themselves available for interviews:

Nicholas Allen, CAS was a Boom Operator for Bruce from Crimes of the Heart through Gilmore Girls. He works today as a Production Mixer.

Glenn (Rusty) Roland, a Cameraman/Director, remembers working with Bruce on motorcycle documentaries like On Any Sunday. He worked with Bruce on commercials and brought him in to do The Making of a Legend: Gone With the Wind.

Douglas Schulman, CAS was Bruce’s Boom Operator on Personal Best, Heart Like a Wheel, Something Wicked This Way Comes, Captain EO and many others. He is a Production Mixer today.

Robert Towne is a Writer/Director. He hired Bruce for Personal Best, Tequila Sunrise and Without Limits.

Jeff Wexler, CAS considers Bruce a mentor; his first assignment in sound was booming for Bruce.

The American West

A remembrance of The American West of John Ford

by Bruce Bisenz

I hope you’ll believe me when I say this story’s so good that I didn’t have to embellish—even a little bit.

Seven years into my freelance career, I began to get occasional documentary work with my friends at Wolper Productions and became a regular with Group One Productions.

Those were heady times; newly available portable equipment like the 16mm self-blimped Éclair NPR, Georges Quellet’s Stellavox SP 7 (Sync/Stereo before the Nagra 4S) and VHF wireless from Vega made it possible for two or three people to do the work of a whole crew. And Group One had assembled a skilled group of technicians who could use these new tools to best advantage. Bob Collins, their regular Director of Photography, and Editor Keith Olson had already won Emmys for Peggy Fleming at Sun Valley.

David Vowell, a documentary writer recruited for the project, had to interview a bedridden old man to construct a script. He asked for my help with equipment but jealously controlled access to his invalid, the famous film director, John Ford.

The idea was to film this amazing wreck of a man (still nobody’s fool—scant months from his passing) and combine it with footage from his legendary Westerns and interviews with actors closely identified with him to make a TV special. The American West of John Ford was to be his last project.

I soon found myself at the “Four Corners” junction of Arizona, Utah, Colorado and New Mexico. It is home to Navajo, Hopi, Ute and Zuni reservations; it is also the location of Monument Valley and the Canyon de Chelly National Monument, the background for virtually all of the John Ford Westerns.

For “Pappy’s” last visit, there was no difficulty getting accommodations or enticing iconic actors like John Wayne, Henry Fonda or Jimmy Stewart to be available. Dennis Sanders, the Director/Editor, and Bob Collins, the DP, set up their first shot, an image of “The Duke” (I called him Mr. Wayne) against the desert west that had played such an important part in his own mythic career. Wayne didn’t seem to take to me (after all, I was a “Long Hair”) so I was relieved when a minor incident focused his attention elsewhere. Duke Wayne had his big hat set to shade his eyes from the blazing desert sun when Dennis asked him to tip his hat back a bit for the camera. We all heard him rumble, “The hat stays right where it is!”

I knew that after a hard day of exteriors in the sun, there wouldn’t be any fun in town; we were deep in the Navajo reservation and hours from any town, so I brought a deck of cards. I sashayed (this was the Old West) into the lodge’s dayroom and attracted a few players.

After a while, a 240-lb, 6-foot-4-in icon, moving with that unique gait, stood over our table and said to nobody in particular, “Is this a private game or can anybody play?” And a hush came over the room …

Even I knew the response to this straight line. I took a moment to catch his eye, way up there from my chair, and then, remembering to give him a thin smile, I said, “Your money’s as good as anyone’s.” And the Duke sat down!

John Wayne in his element

Now, for some reason, everyone in the room wanted to play poker at my table. Soon I found myself with five cronies in Duke’s “Home Game,” all of them millionaires except me. But I had a wad of $100 bills in my jeans and $3,000 was a lot of money back then.

“What’ll we play, Duke?”
“Oh, dealer’s choice” (5/7 card stud and 5-card NOTHING wild).
“What about the stakes, Duke?”
“Um, table stakes?”
“Sure, sure Duke.”

I had no idea what “table stakes” meant; I was green in this game in more ways than one.

High-stakes poker with John Wayne; it was cutthroat. His cronies kept on “Oh goshing” the Duke as he pulled on the neck of the bottle at his right hand.

“Ah, I can’t drink the way I used to.”

But I noticed the level go steadily down past half.

I started out lucky. I grinned, bet and won with some pots running $500–$700. That was three or four days’ pay. Finally, I had the best “up cards” for seven-card stud. But, for once, my first-to-bet up cards weren’t getting any better. I was surely behind but I kept up my silly grin and the betting as, one by one, the others dropped out.

Head-to-head with John Wayne! I reckoned he was holding two solid pairs. I was drawing to a “Two Outer” and “Dead to Trips” (I learned this lingo in a later century). The last down card was a “Brick” so I “Value Bet” $50 (and kept the dumb grin too).

His cronies kept calling out: “He’s bluffing, call ’em, Duke, call ’em, You gotta call ’em.” That’s when I realized: if they caught me bluffing, they wouldn’t kill me, but I might wish I was dead!

Well, Wayne thought for quite some time before he said, “Nah, he’s been getting some good cards lately” and threw in his hand. Desperately grateful that this was the kind of game where no one would think to upend my down cards, I said to myself: remember this one—you just “Bought One From the Duke!”

Editor’s note: A variation of this story was originally published in The Coffey Files.

Photos courtesy of Bruce Bisenz

Gone Girl

Production Sound for Gone Girl

by Steve Cantamessa

My first impulse when the editors of the 695 Quarterly called to ask if I would write a piece on doing production sound for Gone Girl was to pass and finish my round of golf. More thought, and my wonderful wife’s prodding, changed my mind but presented me with my next quandary: though I try to keep up with the tech end of my craft (with affectionate apologies to Mike Paul, my go-to guy for technical matters), production sound is not a terribly literary topic. A microphone with a skilled boom operator in control will render the best-sounding track. That’s not always a doable task these days, and it was a pleasure to work on a project that allowed us to use this technique. I spoke about penning this with a friend who reminded me that I have now become one of the new “old guys” and that I should just write what I know. So here we are.

Before my interview with David Fincher, I didn’t really know much about him, other than that I enjoyed his films and the work of his sound designer, Ren Klyce. That they sought me out was a compliment in itself. Thanks to my dad Gino’s lifelong involvement, cinema has been part of my life since before I was born. Talking with a respected director like Fincher promised to be interesting at the least. We sat down and I instantly saw that he knows what I do and has an informed opinion of how he thinks it should be done. Sadly, these days many may think they know what we do and how we do it, but they are usually mistaken. David Fincher knows. We discussed my approach to things on the set along with various advances in technology, and then he asked me to do the show.

It didn’t take me more than a couple of hours into the first day to realize that David Fincher was that rarest of birds these days—a director who does his homework and knows exactly what he wants when he gets on the set. And I mean exactly. Too often these days, many directors like to put off important decisions. Others look at the call sheet and check things off and still others never seem to want to go home. On a Fincher movie, it is clear that everyone on the set has the same goal: to do his or her best work in order to make the best movie possible. Fincher crews his shows with the best people based, not on their ZIP codes, but on their body of work. From DP Jeff Cronenweth to Costume Designer Trish Summerville, everyone was helpful and mindful of the big picture, not just his or her own tasks. He understands that such efficiency on set will save far more money than any state tax credit ever will. He shoots his movie, not a corporate schedule.

Gone Girl was, I believe, the first show to use the Red Epic Dragon—a 6K camera capable of stunning imagery. As with the “first” of anything, there were issues. Specifically, considerable fan noise emanated from the front of the camera, where the actors usually are. The temperature of the chip in the Red was crucial, and seventy-two degrees was the magic number; once the camera readout on my video assist monitor went to seventy-three, I knew things were going to get noisy. Happily, the people at Red were most helpful. Though we started shooting in Missouri on mostly exterior locations, I knew that once we returned to LA for stage work, that fan noise would pose a big problem. Having been around the block a few times, I assumed when I explained this to the folks at Red that they wouldn’t give a damn about some sound mixer’s problems, since they were probably up to their own back ends in image tech problems. I was wrong. They listened, they asked questions and they wanted to hear my ideas. I told them that the fan noise coming out the front was a big issue and that it needed to be re-routed in some way. By the time we had returned to California, they had designed and built baffles that mounted to the front of the camera, routing the “sonic exhaust” around the side and then out the back. I was genuinely thankful and impressed that they cared. Most importantly, it worked.

The movie is wall-to-wall dialog, and my Boom Operator, Scott LaRue, was a busy guy; personal relief and lunch were the extent of his downtime. Our utility man was Brad Ralston, always a big asset with the gear and frequently serving as Second Boom Operator. Scott and I have been together since about 1992, so any discussion of how a shot is to be done is usually quite brief. Scott tells me how he sees it and I say, “Whatever you think.” There are those few occasions where I ask him to wire someone, but such requests are sometimes unwelcome. I do recall that when I was booming, I hated it when a mixer would tell me to wire someone I felt I could easily get with the boom. It must be a boom operator thing, but the fact is that he’s been putting me over for years and I am truly grateful.

It seems like we were always either rolling or setting up. To repeat, Fincher does his homework. The sets in Hollywood were beautiful and well built, but without greenbeds. It’s just the way it is now: most sets have ceilings and lighting is done differently, but booming on a ladder over a wall through a slightly raised ceiling piece certainly isn’t the most elegant approach.

HARDWARE NOTE: My standard package consists of a Cooper 208 mixing panel, an Aaton Cantar X2, Lectrosonics 411 radios with SMA transmitters, Tram lavaliers and 416s on booms transmitted via Lectrosonics 400 transmitters. And, of course, plenty of IFBs for the Director, Script Supervisor, Camera and Producers.

Even with all the dialog we had to record, I can’t recall David ever telling our department how we should do things. Again, Fincher’s hiring philosophy: he made it clear that being streamlined was important. Just do the job to the best of your ability without making everything into a sound issue. Find problems during rehearsals and get them solved while the DP is lighting so that, when the actors step in, you are ready to go. I don’t know how others do things, but this is how we always work. I can only judge from what I saw when I was booming. There’s a lot to be taken from having seen the likes of Kirk Francis, Jeff Wexler, Bill Kaplan, Eddie Tise and my dad Gino. These guys got to the top because they are professional filmmakers, very good at a most unique craft.

Back to Fincher: ask him a question and he’ll give you an answer immediately without any doubt or hesitation. This makes for a far more rewarding way to spend the day than the currently popular “Oh, just wire them all.” On the few occasions where he ran a tight camera with a wide shot, he would roll ten seconds of a clear frame and then have us bring the microphone into the shot. The ten seconds of clear frame provided a plate he could use to remove the microphone from the wide shot so the sound would not be compromised by the shooting plan.

Everyone involved in Gone Girl respected one another’s department and worked as colleagues. The focus and intensity on Fincher’s set benefited our department immensely. The actors never complained about wires. The Electric Department never jammed a generator up our butt. We never had to ask the dolly grips to quiet a track. The effects people were preternaturally aware of any sound implications. Locations were pre-scouted by the Locations Department for any sound issues, and those problems were solved before we got there. The sets were always dead quiet. For instance, we had a practical police station set located on a busy street in Culver City. David had the Construction Department build vestibules at each of the doors to the outside so we might have significantly diminished traffic noise yet still see extras come and go.

The scene with Ben Affleck and Rosamund Pike (Nick Dunne and Amy Dunne) in the shower was the one place that was a bit challenging and was the only set where David couldn’t arrange things to our advantage. There was quite a bit of dialog at a low level with water running the whole time. The FX Department was helpful by painting “hogs hair” the same color as the tile. This allowed us to quiet the sound of the water hitting the tile floor and, being the same color, didn’t adversely affect any of the lighting. They also plumbed the water lines and packed them with sound-absorbing material to ease the sound of the water running through the pipes. The Construction Department removed out-of-shot glass panes as we worked so Scott could get the microphone into the shower and get the dialog. Again, everyone helped.

Some of the questions that come up when someone finds out that I mixed Gone Girl are fascinating to me. One is about the number of takes. Granted, he does a lot of takes but what’s the big deal? There is always a reason for them. Any direction I heard him give to an actor, or any department for that matter, was concise and thoughtful. The second question is regarding the sound style used on the dialog throughout the film. I wish I could answer this but I wasn’t there in Post; Ren Klyce, the Re-recording Mixer, would have the answer. What I do know is that David likes to use sound, even the dialog, to create certain moods and feelings that do not follow the normal rules. Everything you see and hear in a Fincher movie is intentional and controlled by David’s sensibilities. You may like the style, you may not, but I assure you that none of it is due to a mistake, laziness or lack of attention.

Those who know me know that I would make a poor courtier and seldom sugarcoat things. That said, I enjoyed the Gone Girl experience and got immense personal satisfaction from working on such a good film that people will enjoy watching. Working with good actors, talented technicians and a good director is all any sane person in this business could or should ever want. It is no mistake that nearly every review of Gone Girl includes mention of the high quality of the technical aspects of the film. David Fincher showed the good sense to hire from the best talent pool in the land to get what he wanted. Now, about my golf swing…


Glossary of highlighted words

Greenbeds – A series of catwalks above the sets in a studio.
Hogs hair – A woven filter material for heaters and air conditioners, used on sets to soften the sound of water droplets hitting a surface.

Jimmie Songer and the Development of Video Assist

by David Waelder

Video Village is a standard feature on the modern movie set. Producers, writers, clients and others can cluster around a monitor far enough away from the set to stay out of trouble and view the action. Their segregation in the video ghetto allows camera people and others to go about their tasks without the distraction of people jockeying for position at the viewfinder. It also helps makeup and wardrobe personnel see how their work appears on camera, and it has become an essential tool for the director and continuity person. Even the sound crew benefits by having extension monitors to see the frame and position the boom microphone. All this is made possible by a video assist system perfected by Jimmie Songer, a Local 695 technician.

The advantages of using a video camera as an aid to directing movies were apparent from the very beginning. Several directors began to set up TV cameras adjacent to the film camera so they could see an approximate frame. This became a common practice particularly on commercials where the placement of the product is crucially important. To match the view and perspective, assistants would carefully adjust the aim and image size to closely approximate the view of the film camera.

Of course, that isn’t really a video assist system. The image is useful for the simplest shots but not much help when the camera moves or the lens is adjusted. Every setup change or lens adjustment necessitates a recalibration of video camera position and exposure settings. To be a fully functional system, both the video and film cameras would have to view the scene through the same lens to avoid parallax errors and exposure sensitivities would have to track together. This presents a series of technical challenges.

It was a cowboy from East Texas with little formal education who took on the challenge and worked out all the engineering obstacles. Jimmie Songer grew up on a ranch in Burleson, south of Fort Worth, with a keen interest in how radio and television worked. He and his friend, Don Zuccaro, would purchase crystal radio kits, assemble them and string the antenna wire along his mother’s clothesline.

As a teenager, he took a road trip that would set the course of his life. He and his friends had traveled north as far as Bloomington, Indiana, when funds began to run out. Looking for a job to replenish assets, he applied to the RCA plant on Rogers Street. Ordinarily, his lack of formal training would have been an impediment, but RCA was just then experimenting with designs for color sets and there was no established technology to learn. By diagramming from memory the circuit design of a popular RCA model, he demonstrated familiarity with the major components and was hired on the spot to be a runner for the engineers developing the new color system.

His duties at RCA consisted largely of gathering components requested by the engineers and distributing them. Along the way, he asked questions about the function of each element and how it fit into the overall design. He stayed about a year, not long enough to see the model CTC4 they were developing go on sale. That didn’t happen until a couple of years later in 1955. But, when he did move back to Texas, he had a pretty good understanding of how video, and color video in particular, worked.

Graduating from crystal radio sets, he and Don Zuccaro made a mail-order purchase of plans for a black & white television. Components were not readily available at that time, but Jimmie and Don were ingenious: they purchased a war-surplus radar set with A&B scopes and cannibalized it for parts. The task of hand-winding the tuning coil was simplified because Fort Worth had only one TV station, so there was no need to tune anything other than Channel 5.

With skills honed from building his own set and working at the RCA plant in Indiana, Jimmie Songer quickly found work with appliance shops in the Fort Worth area that were beginning to sell television sets but had no one to set them up, connect antennas and service them when needed. This led to an offer, in 1953, to work setting up KMID, Channel 2, in the Midland-Odessa area. After a few years with KMID, he worked awhile in the Odessa area and then returned to Fort Worth, but he stayed only a year before setting out for Los Angeles in April 1963.

In Los Angeles, he worked at first for a TV repair shop in Burbank while he tinkered with his own experimental projects. Hearing that Dr. Richard Goldberg, the chief scientist at Technicolor, was looking for people with experience with color, he sought him out and secured a job calibrating the color printers. Dr. Goldberg was also developing a two-perforation pull-down camera for widescreen use. Songer became fascinated by the possibility of using that design at 48 fps to make alternate images, one atop the other, which might be used for 3D and built some experimental rigs to test the idea.

This work with Dr. Goldberg in the early ’60s brought him to the attention of Gordon Sawyer at Samuel Goldwyn Studios. Sawyer wanted him to help with an ongoing project for Stan Freberg involving simultaneous video and film recording. Freberg was using side-by-side cameras to create video records of film commercials. The side-by-side positioning produced parallax errors but his commercials were mostly static. Generally, the results were good enough for timing and performance checks. But issues of accurately tracking motion would arise whenever the camera did move and Stan Freberg wanted a better system.

Under general supervision from Gordon Sawyer, the team first addressed the issue by adjusting the position of the video camera. They attached a small Panasonic camera to the mount for an Obie light. This put the video lens exactly in line with the film camera lens and only a couple of inches above it. Left-right parallax was effectively eliminated and the vertical alignment could be adjusted to match the film camera with only minimal keystone effect. By affixing a mirror just above the lens mount at a 45-degree angle and mounting the video camera vertically to shoot into the mirror, they reduced vertical parallax to almost nothing. Jimmie Songer addressed the keystone problem by devising a circuit that slightly adjusted the horizontal scan, applying an opposite keystone effect to neutralize the optical distortion that was a consequence of slightly tilting the video camera to match the film camera image.

Most of the time, this system worked well but there were still limitations. The video system needed to be recalibrated with every lens change, and even with careful adjustment, the use of a separate lens for the video meant that depth of field would be different, so the video image would only approximate the film image. Blake Edwards knew Gordon Sawyer and approached the team to design a system suitable for movies with moving cameras and frequent lens changes.
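
Technical note: the keystone distortion and its electronic correction can be modeled to first order (a simplified sketch, not Songer’s actual circuit values). If the video camera is tilted by a small angle θ to match the film camera’s view of a subject at distance d, the lateral magnification varies with vertical image position y, squeezing one end of the raster:

    m(y) \approx m_0 \left( 1 - \frac{y}{d}\sin\theta \right)

Applying the reciprocal gain to the horizontal deflection, line by line,

    g(y) = \frac{m_0}{m(y)} \approx 1 + \frac{y}{d}\sin\theta

stretches each scan line by just the amount the optics squeezed it, which is what Songer’s scan-correction circuit accomplished electronically.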

The limitations could only be resolved if the video camera used the very same lens used by the film camera. Accomplishing that would require exact positioning of the video lens and adjusting sensitivity of the system both to obtain sufficient light for exposure and to track with the film exposure. Jimmie Songer set about developing a system that could be built into a Panavision Silent Reflex camera (PSR) that used a pellicle mirror to reflect the image to the viewfinder. They left the image path from the lens to the film completely untouched but introduced a second pellicle mirror to reflect the image from the ground glass to a video camera they built into the camera door. This one design change eliminated many of the limitations of previous systems in one stroke. Since the video used the film camera lens and picked up the exact image seen by the film and the camera operator, issues of parallax and matching depth of field were completely eliminated. There was no need to recalibrate the system with every lens change and the video camera was configured to use the same battery supply as the camera. The introduction of a second pellicle mirror did flip the image but Songer corrected this easily by reversing the wires on the deflection coil. But the issue of having sufficient light for the video image still remained.

In one way, a pellicle reflex system is ideal for video use. Unlike a mirror shutter, the pellicle system delivers an uninterrupted image to the viewfinder so there is no need to coordinate the 30-frame video system with a 24-frame film camera. While there would be more frames in a single second of video, the running times would match and that was all that was important. Furthermore, the video image would be free of the flicker seen in the viewfinder of a mirror shutter camera. However, the pellicle mirror used in the reflex path deflected only about one-third of the light to the viewfinder. That was no problem when filming outside in daylight but there was insufficient light when working interiors.

Jimmie Songer needed to make three refinements to the system to address the exposure issue. First, he replaced the vidicon tube that was normally fitted to the camera with a newly available saticon tube that was more sensitive and also provided 1,600 lines of resolution. That helped but wasn’t enough. He then adjusted the optics so that the image, rather than being spread over the full sensitive area of the tube, was delivered only to the center portion. By concentrating the image, he obtained more exposure, and adjusting the horizontal and vertical gain allowed him to spread out the smaller image to fill the monitor. But there are limits to how much can be gained by this approach. Even with a high-resolution saticon tube, the image will begin to degrade if magnified too far. There was still not enough light for an exposure but the video system had been pushed to its limits, so Songer turned his attention to the film camera.
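
Technical note: the exposure gained by concentrating the image is easy to quantify. Demagnifying the image by a linear factor m (with m < 1) lands the same light flux on an area m² times smaller, raising the illuminance on the faceplate by 1/m². In photographic terms,

    \Delta\,\text{stops} = \log_2 \frac{1}{m^2} = -2\log_2 m,

so projecting the image at 70% of its normal size buys roughly one full stop (1/0.7² ≈ 2). The price, as noted above, is that the scan must then magnify a smaller image, with a corresponding loss of resolution.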

Recognizing that the ground glass itself absorbed a considerable amount of light, Songer contacted Panavision and asked them to fabricate a replacement imaging glass using fiber optic material. Although the potential of using optical fibers for light transmission had been recognized since the 19th century, the availability of sheets of tightly bundled fiber suitable for optics was a recent development in the 1960s. The fiber optic ground “glass” was the trick that made the system work, allowing the video camera to function with the light diverted to the viewfinder.

Jimmie Songer and his assistant used the system, first called “instant replay” but later renamed “video assist” to avoid confusion with sports replay systems, on The Party in 1968 and then Darling Lili in 1970. It worked flawlessly, delivering the exact image of the main camera so Blake Edwards, the Director, could follow the action as it happened. It never held up production; to the contrary, Edwards said that it streamlined production because the certain knowledge of how the take looked freed him from making protection takes.

After Darling Lili, the key figures behind the project formed a company, Video West, to further develop the system. They met with representatives of the ASC to draw up a series of specifications for video assist systems. Don Howard was brought in to interface the camera with the playback system and operate it in the field. Harry Flagle, the inventor of Quad-Split viewing technology and one of the Ampex engineers who worked on the development of the Model VR-660 portable two-inch recorder, joined the team soon after.

They next used the system on Soldier Blue, directed by Ralph Nelson, and then Wild Rovers, again with Blake Edwards. It proved so popular with producers that Songer and Don Howard, his assistant who was primarily responsible for operating and cuing the video recorder, scheduled projects months in advance and went from film to film. The work was so tightly booked that they sometimes had to ship the camera directly from one project to the next without a return to the shop.

Jimmie Songer joined Local 695, sponsored by Gordon Sawyer, shortly after Darling Lili and continued as a member until his membership was transferred to Local 776 in 1997. In the course of his career, he obtained seventeen US patents for a variety of innovations in high-definition TV and 3D video imaging.

In 2002, he received a Technical Achievement Award from the Academy for his work developing video assist. He lives today on a ranch near Fort Worth but continues to refine the video engineering work that has been his life.


Video Assist

A quote attributed to Tacitus claims that success has many fathers while defeat is an orphan. It’s just so with the invention of video assist, which is claimed by several people. Jerry Lewis is often cited as the inventor and he certainly incorporated simultaneous video recording in his filming practices very early. He began development work in 1956 and first used a video record and playback system during the filming of The Bellboy in 1960. He used the system to view and evaluate his own performance immediately after each take. But the system he used on The Bellboy was the simplest version: a video camera was lashed just above the main lens and would be adjusted to approximately match the view of the film camera lens with each setup. Later, Jerry Lewis also worked to develop a system that would use a pellicle mirror to view the image through the primary lens.

The assertion that Jerry Lewis “invented” video assist is overstated. The original patent for a video assist system dates to 1947 and subsequent patents in 1954 and 1955 added the refinements of merging optical systems to eliminate parallax and adding a second beamsplitter to permit simultaneous use of video and film viewfinders. The integrated video systems that came into general use in films were the work of many individuals each building on the accomplishments of predecessors. Jimmie Songer’s contributions were many and essential as recognized in 2002 by the Academy of Motion Picture Arts and Sciences.


Glossary for highlighted words

Deflection coil – In a CRT (cathode ray tube), the beam of electrons is aimed by magnetic fields generated by coils of wire surrounding the tube. Adjusting the electrical energy sent to different coils directs the electron stream.

Obie light – A diffuse light mounted very near the camera lens, typically just above the matte box, to provide soft fill on faces in close-ups. Lucien Ballard, ASC developed the light to photograph Merle Oberon after her face was scarred in an auto accident.

Pellicle mirror – A semi-transparent mirror used in optical devices. A pellicle reflects a certain percentage of light and allows the remainder to pass through. In the Panavision PSR camera, a pellicle mirror deflected approximately 30% of light to the viewfinder and passed about 70% to the film plane.

Saticon tube – A saticon tube is a refinement of the vidicon design whose photosensitive surface uses a selenium compound (with arsenic and tellurium) to stabilize the signal.

Vidicon tube – A vidicon is one of the early image capture devices made for television cameras. An image focused on a photoconductive surface produces a charge-density pattern that may be scanned and read by an electron beam.

Jersey Boys Music

by Mark Agostino

When Production Sound Mixer Walt Martin called me to do playback on Jersey Boys, he and I assumed it was going to be just that, playback. Why would we think otherwise? This was how it had been done for decades. Well, we didn’t know it yet, but we were totally wrong.

There were weeks of emails and phone calls with the producers and Walt, and rumor after rumor about how we were going to shoot the musical scenes of the film. Finally, we had an answer. Clint Eastwood wanted to record everything live on set—all of the instruments and all of the vocals. He didn’t want to do pre-records. He wanted to get on set and actually shoot the guys performing in the moment. It sounded challenging. With my background in studio and live recording, I welcomed the challenge. Well, this challenge was going to be monstrous. It would be the most exciting and technically complex production work of my career.

It would have been less complicated if the actors sang and played their own instruments. If that were the case, we would simply set up some microphones for the vocals and drum kit, plug in a few direct boxes for the guitars and keyboard, put together a monitor mix so everyone could hear each other and start recording some takes. Unfortunately, the actors were only singers, not musicians, and one of them was neither. This was getting more complex. We were going to need off-camera musicians to play the instruments (bass, guitar, keyboard and drums) that the actors couldn’t play themselves. Most of the time we had an on-camera drummer and this added an additional challenge since we needed to mic the entire kit and none of the microphones could be visible. Let’s just say, it was rarely the same thing twice. Many times we were informed which musicians were being used the morning of. Fun!

After a few meetings with the Producers, Walt and the 1st AD, David M. Bernstein, determined that we were going to need three things:

First, in order to give Post-Production the most flexibility, we needed to individually rig microphones for all of the instruments and vocals and record them to discrete tracks.

Second, after the performances were shot from the front, we needed the ability to quickly play back a good-sounding mix of any of the previously recorded takes once the cameras had turned around to shoot the audience, allowing the actors to save their voices. The faster this could happen, the better. I had heard stories about the efficiency and speed of Clint’s shoots, and was told by the producers and 1st AD that he moved fast. They weren’t kidding!

Third, since we were going to have off-camera performers and on-camera performers, everybody needed to be able to hear themselves and each other. We would need a headphone mix for the off-camera performers and a separate foldback monitor mix for the on-camera performers.

One of the major obstacles in doing a live recording is controlling the amount of unavoidable leakage from the foldback monitors into the stage microphones. This would have to be minimized as much as possible. I made sure the producers were totally aware of this from the start. They completely understood and were prepared to replace things in Post if need be.

With all of this in mind, I devised a plan that basically (and luckily) held true to form throughout the shoot.

We were going to need two multitrack recording systems on set at all times. The first would be specifically used for recording the bands. The second (which I was going to operate) would record, on no more than eight tracks, submixes of the band from the first system and, at the same time, all of the vocal microphones. This would allow me, with a more consolidated session, to switch from recording mode to playback mode, do a quick mix and be ready for playback as soon as the cameras had turned around to shoot the audience. On top of that, since I then had all of the musical elements at my disposal, I could route whatever was needed to either the headphone or on-stage monitoring systems.
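
To make the routing concrete, here is a minimal sketch, as a Python snippet, of how a consolidated eight-track session along these lines might be laid out. The track names and counts are illustrative assumptions, not the actual Jersey Boys track sheet:

    # Illustrative sketch of the dual-system layout described above.
    # All names and counts are hypothetical, not the production's track sheet.

    # System 1 (the band rig): a full multitrack, one input per source.
    band_inputs = {
        "drums": ["kick", "snare", "hi_hat", "overhead_L", "overhead_R"],
        "bass": ["bass_DI"],
        "guitar": ["guitar_DI"],
        "keys": ["keys_DI_L", "keys_DI_R"],
    }

    # System 2 (the consolidated session): band submixes plus every vocal mic,
    # capped at eight tracks so it can flip from record to playback quickly.
    system2_tracks = [
        "band_submix_L",   # stereo fold-down fed from System 1
        "band_submix_R",
        "vocal_1",         # vocal mics, split so both recorders get a copy
        "vocal_2",
        "vocal_3",
        "vocal_4",
        "drums_submix",    # the on-stage kit, also routed to monitoring
        "click",           # optional metronome feed for earwigs
    ]

    assert len(system2_tracks) <= 8, "consolidated session must stay playback-ready"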

I soon realized that this project was going to be a huge technical undertaking. It was going to require live recording, sound reinforcement, music mixing and music playback. I could do all of these things myself, but there was no way that I could do all of these things by myself at the same time.

For the first time in my 18-year career in music playback, I needed a crew. With the immediate and full support of the producers, I began my search.

I needed someone to run the first recording system and focus specifically on mic’ing and recording all the instruments, someone who had studio and live music recording experience, someone who would take control and make decisions without me having to hold their hand. (I was going to have a huge amount of work myself.) I needed someone who knew how to do it all low profile. Those people who wanted to bring in a recording studio in a semi-truck just wouldn’t do; this person had to have a small footprint and be mobile. Looking for someone with the rare complement of skills this job would require, I immediately thought of two people, but neither of them was in the industry anymore. I gave it a shot anyway. To my delight, one of them was all in. Tim Boot became my first crew member and he would turn out to be phenomenal, as I had expected. Next, we needed someone to primarily assist Tim with mic’ing all of the instruments each and every day, making sure the off-camera musicians had all the elements they needed for their headphone mixes, handing out earwigs, and other less-than-glamorous tasks. We needed someone to hold it all together, someone to keep us out of trouble. Along came Cristina Meyer. She too was phenomenal and truly invaluable to both Tim and me throughout the shoot. In fact, on Day One, even though Tim and I had only been working with her for a few hours, she was doing such an incredible job that we convinced her to ditch a scheduled gig so she could stay with us for the entire Jersey Boys shoot. What a relief!

Now that we had a team, I was excited to get to it. It was definitely the largest sound/music department I had ever worked in. On the regular music days, the combined department consisted of six people (three sound personnel and three music personnel). However, we had eight sound/music people working together on the really complex music days. There were a few days I specifically recall from the shoot, either because of their complexity, because they were a completely new experience, or because the amount of work we did in one day was greater than what I had done in an entire week on other shows. Let me tell you about a couple of them.

Day One: For some reason, I always remember the first day of a show. I remember this first day in particular because I got to meet Clint as we were setting up. (That was cool. He was cool.)

The location was a bar. There were four singers and a drummer on a very small stage. In an adjacent room, Tim and I had our systems set up. Tim used Boom Recorder on a Mac Mini and I used a Pro Tools HD Native Thunderbolt system on my MacBook Pro. Beside us were the guitar, bass and keyboard players. All of these off-camera instruments went through direct boxes. This gave us very clean recordings and created no extraneous sound on stage (above the foldback mix) to interfere with the acoustic recording.

To mic the drums, Tim used this wonderful set of miniature DPA microphones. He and Cristina developed a fantastic system of attaching the microphones to the rims (or sometimes the shells) of the drums and to the cymbal stands away from camera so they wouldn’t be seen.

For off-camera monitoring, we set up a small mixer near the off-camera musicians so that they could adjust their own headphone mix. The outputs of their direct boxes were split. One output went to Tim’s console to be recorded and the other output fed the headphone mixer. Since Tim was feeding me submixes of all of the instruments, I then routed whatever instruments were on stage (in this case the drums) to additional channels on the headphone mixer so the off-camera players could also hear the on-stage musicians.

The vocal microphones were another challenge. Since the time period of this film ranged from the ’50s into the ’90s, Props was going to have some pretty old microphones for several of the performances. Walt and I went through as many microphones as we could before we started shooting. Luckily, many of them worked well and sounded pretty damn good for being so old. There were a few, however, that were completely unusable for recording purposes. In these cases, Gail Carroll-Coe and Randy Johnson of the Sound Department did a fabulous job of attaching lavalier microphones to the faces of the old microphones, with great results. Then, since we did playback when the cameras went on stage to shoot the audience, the lavaliers would not be needed anymore and we removed them.

All of the vocal microphones were split at my cart. One set of outputs went to Walt for recording and I recorded a second set. I also sent a mix of the vocal microphones to an additional channel on the off-camera headphone mixer so those guys could hear the vocalists.

Finally, I hid a few speakers on stage so the performers there could hear their vocals and, most importantly, the off-camera instruments. That tied it all together.

As you can imagine, we had a pretty extensive setup each day. I initially asked for three hours and wasn’t sure even that would be enough. We weren’t just setting up a simple multitrack live recording, we were setting up a dual system multitrack recording session linked between two different rooms with a headphone monitoring system in one and a completely independent foldback system in the other. We just made it on Day One. Of course, we became more and more proficient with our setup as the show progressed. I think we may have cut the setup time down to an hour by the middle of the shoot. We had the process dialed in.

The most important thing was that the actors could hear everything they needed to hear in order to give their best performances. We had to provide whatever they needed to feel comfortable on stage.

Eventually, our setup was complete, and we did our first shot of the show. It was the songs “Apple of My Eye” and “I Can’t Give You Anything but Love.” We were all pleased that it went well and I must say, I was certainly a little relieved.

As we went along, there were, of course, minor adjustments/additions to the initial system design. The musicians that were hired were absolute professionals, but the first few performances were recorded completely free-time. No click (metronome) was used. Clint made it clear in the beginning that there was to be no click track. We weren’t even allowed to say the “c” word on set. The drummer was amazing. He truly kept as solid a tempo as anyone could. However, in order to help in the editing process, the producers wanted to be sure that the tempo did not slip between takes. As a result, we began sending a click to an earwig for the drummer. It was also fed to the off-camera musicians to keep everyone in time. The only problem that arose here was the drummer’s inability to hear the click through the tiny earwig. As anyone who’s ever used earwigs knows, they are best for hearing cues when the rest of the environment is relatively quiet. With the noise created by the drummer actually playing and the foldback speakers putting out a decent amount of level, it was understandably rather difficult to hear the click through the little earwig. Many times we had to give him one for each ear. This wasn’t a problem because there was hardly ever a close-up of the drummer.

As I’ve already said, it was rarely the same thing twice. Musicians were constantly being added to the ensemble the morning of, just prior to shooting. Sometimes they were off-camera, other times they were to be on-camera. We learned to expect and be prepared for anything, and being prepared simply meant being prepared to change.

There were certainly many other notable days on the shoot. A few of them occurred during what the 1st AD called Hell Week. Ha! That was an understatement.

During this week there were going to be three absolutely crazy days. Two of those days, each at a different location, would have multiple performance areas, and one day would have a single performance at each of two different locations. Yup, one day we were going to have performances at two different locations. WHAT? It takes us close to three hours to put together our basic setup. How are we going to have time to break that down, pack up, move everything, load into the next location, and set it all back up again? Not only that, this particular day was to begin at one location with our basic four-piece band/four-vocalist setup for “My Boyfriend’s Back” and “Walk Like a Man.” Then, we had a company move to a location where we were going to shoot a twenty-piece big band performance of “Can’t Take My Eyes Off You.” Were we going to have another three hours to set up at the second location? I thought, we are going to need a bit more than three hours for THAT setup. We would later be informed that, in addition to the twenty-piece big band, there would be a separate three-piece off-camera band at the same time. YIKES!! That day was going to be huge.

Our saving grace was that we weren’t shooting the day before. It was a holiday, and we were able to pre-rig the second location as much as possible. This meant setting up Tim’s gigantic Yamaha DM2000 console, plugging in and testing over 40 microphones (which then had to be disconnected because the set wasn’t finished), and putting up some extra speakers and amps I luckily had that wouldn’t be needed at the first location.

The plan was for Tim to immediately break down at the first location as soon as we entered playback mode and get any gear he needed over to the second location as fast as possible. I think we actually had Cristina transport a few of Tim’s things over there during the first part of the day when they were no longer needed at the first location. Cristina would follow him and they would begin reconnecting the big band microphones, setting up the off-camera band microphones and headphone system, and preparing everything they could before the rest of the company arrived. I was left behind to do playback for our final shots of the audience.

As soon as we were finished at the first location, I packed up and moved my gear over to the next one as quickly as possible. Tim and Cristina were flying through the setup. I dove full on into connecting my system to Tim’s, verifying signal flow from his console to mine, and making sure that the on-stage foldback system was happy. I happened to glance around and was amazed at how much gear we had at that location … and on the show itself. I remember having to pull something out of my garage in the middle of the project and thinking to myself how empty it looked. By that time, I had brought in just about every piece of gear I owned, and we used every bit of it at one point or another.

We were moving right along. Tim was going to be recording 48 tracks, so I quickly filled up and surpassed my self-allotted eight-track band submix limit for playback. Happily, there was only one vocal microphone.

Once again, we had barely finished our setup when the 1st AD called for the first rehearsal. There was so much equipment, so many microphones, so many cable runs, electrons flowing and neurons firing. When I think about it sometimes, with all of the thousands of components and interconnections, I am amazed and relieved when it all just works.

We eventually started shooting and everything went as smoothly as on Day One. It was really exciting to be involved in such a grand production. I felt proud to have been chosen to be a part of it.

As it would happen though, I had been so busy that I didn’t fully absorb the spectacular musical creation that had just taken place. Only after we had entirely finished the live recording segment of that location did it begin to sink in. It was time to go into playback mode and put together a nice mix for everyone to listen to while we did shots of the audience. I finally had a chance to breathe and something happened: I was now actually listening to the full mix of the performance. I wasn’t zoning in and soloing individual microphones to make sure the signal was clean. I wasn’t monitoring recording levels to be sure there weren’t any overloads. I wasn’t riding the vocal channel to be sure “Frankie’s” vocal level was consistent for him in the foldback speakers. I was now listening to over 20 musicians playing as one with “Frankie Valli” singing his heart out, and it sounded incredible. I was only listening in mono through a four-inch monitor on my cart, but it truly sounded amazing. When we did the first playback on the big speakers, I literally got goose bumps. The dynamics of the band, the smoothness of “Frankie’s” voice, the energy of all those involved was absolutely thrilling. All through my career there have been moments like this that have brought forth powerful and electrifying emotions I’ve never felt in any other line of work. To experience feelings like these on the job is my definition of success. As I had done so many times before, I thought to myself … there is, without a doubt, nothing else I’d rather be doing.

That day was the most exhilarating and, at the same time, most technically challenging movie-making experience of my career. Tim and Cristina did an exceptional job and would continue to excel throughout the rest of the shoot. I couldn’t have found a better team and can’t thank them enough. It was an absolute pleasure working with Walt and his team. They were always willing to lend a hand if we needed it.

In the end, it really felt great to have accomplished all that we did. To my knowledge, what we had done had never been done before. I am so thankful to Walt Martin, the producers, and Clint Eastwood for allowing me to join them in such an extraordinary adventure. As I was packing up on the last day, the producer, Rob Lorenz, said to me, “Thanks, Mark. Thanks for making it work.” That meant a lot.

Recording Sound for Fury

By Lisa Piñero

In June 2013, I was called to do a series of reshoots for Sabotage, the David Ayer–directed film that I’d worked on the previous fall. While shooting, I learned that Dave’s next film, Fury, was gearing up to shoot in England. I wanted in.

I love working with Dave; his unconventional shooting style and focused vision on End of Watch and Sabotage, the two films on which I’d previously collaborated with him, forced us to find creative ways to capture the dialog along with real-time sync tracks of actual environmental sounds.

But Fury was going to be different. This was a passion project for Dave; he had written a story that had attracted a fantastic cast, including Brad Pitt, Shia LaBeouf, Logan Lerman, Jon Bernthal, Michael Peña, Jason Isaacs and Scott Eastwood. It was set in the battlefields of World War II, in Germany near the end of the war, and would be shot on 35mm anamorphic film by Roman Vasyanov, the Director of Photography I had enjoyed working with on End of Watch. Dody Dorn was onboard to edit and Andrew Menzies, whose work I had admired on other films, would be the Production Designer.

For this film, Dave planned a much more conventional visual style. Although the idea of a conventional visual style may imply a comfort zone of a familiar process and “old school” sound recording techniques, this is never the case on a David Ayer project. Dave challenges everyone involved in his films to push the limits of their craft. He strives for a sense of reality in his work that forces us to re-think our assumptions about the filmmaking process.

Once I knew I was headed to England, priority one became finding a crew that could handle the job and that was able to work in the UK. I immediately thought of Ben Greaves, who I had enjoyed working with earlier in the year and who I knew had the demeanor and skills to get the job done properly. Ben currently works and resides in Los Angeles, but he has a UK passport, a flat in London and the contacts to pull together a good local crew for the show. Ben came aboard as my Boom Operator and we brought on local London Production Sound Mixer Tarn Willers to handle the sound utility position and act as our Second Unit Sound Mixer. We also brought on Tim Surrey to work as our fourth, along with Sound Utility Frank Barlow, who came in frequently as our top dailies hire.

At the end of August 2013, I set off for a month of prep at Pinewood Studios. Packed in my bags were manuals and notebooks filled with photos and diagrams of actual World War II tanks and tank crew field gear, including communications systems, along with actual pieces of US surplus Sherman tank communications gear, including plug-in BC-606 comm boxes, throat microphones and helmet headphone wiring. Forty-five cases of sound gear had been shipped and were on the way to Pinewood Studios, and a new sound cart, designed with our shooting environment in mind, was being built for me by Malcolm Davies in Manchester, UK.

Early in prep it was determined that we would have essentially three shooting scenarios involving tanks:

1) Exterior Tank Action, in which tank commanders would perform scenes with each other and need to speak/hear one another on one channel (tank-to-tank), while tank drivers (specialist/stunt drivers) would have to be on their own channel with our tank coordinator in order to hear commands and cues. In these situations, we would record our cast only through production microphones. We would wire all cast members and use either helmet or body-mounted microphones (DPA-4061 or DPA-4071).

2) Exterior Process Vehicle, in which our cast was riding in or on a custom-designed tracking “process vehicle.” This vehicle was essentially a highly detailed, life-size fiberglass model of our Sherman tank Fury, attached to the base and suspension of a heavy-duty military tracking vehicle. It featured a large steel platform apron, suitable for mounting up to two Chapman hydrascope cranes, lots of camera, lighting, and grip gear, and the necessary crew. In this case, as above, we would wire all cast, and the tank coordinator would be in direct communication with the process vehicle’s driver seated at the front of the vehicle.

3) Interior Tank, in which our cast played out scenes inside a gimbal-mounted interior tank set. Here, we would wire all the cast and either boom or plant microphones for production dialog. We would also find a way to record the cast through the microphones of a modified vintage tank communications system.

In our first discussions regarding this project, David Ayer indicated to me that, in addition to our production microphones, he wanted to try to record dialog tracks through the vintage microphones used in the original Sherman tank communications systems. Many World War II Sherman tank crews used an SCR-508 turret bustle-mounted radio/interphone system that allowed the five-person tank crew to communicate with each other (interphone) and allowed the tank commander to communicate via the FM radio set with other tank commanders and military personnel outside the tank. The tank crews had communications components, including their headphones and microphones, integrated into their military-issue apparel. The headphones were wired into the tanker’s helmet and connected to a push-to-talk switchbox and a throat microphone that was then connected to a communications box at each man’s station in the tank. The tank commander used the same style of helmet; however, his microphone was a push-to-talk handheld unit. Dave asked me to look into options for recording our cast’s battle scene dialog through these microphones, using either vintage radios in our tanks or through modifications that would leave the outward appearance of the vintage gear intact.

Before leaving Los Angeles, I had acquired several sets of T-30 throat mikes. T-30s are essentially two small carbon microphone elements encased in rubber that are attached to an elastic strap and worn snugly around the neck. The capsules are positioned on either side of the Adam’s apple. The microphone was designed to pick up sound vibrations through contact at the throat; this was more effective than relying on sound waves transmitted through the air in the extremely high noise environment of a tank’s interior. I needed to hear what these microphones sounded like; sadly, when tested, they produced a very low-level and noisy signal. After more research, I learned that, over time, the carbon powder in these old surplus microphones cakes into a solid mass, which does not allow the carbon granules to vibrate as they should with sound pressure and change the electrical resistance between the elements’ plates enough to significantly modulate the signal. These old microphones weren’t going to work without some modification.
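The failure makes sense in a simplified model of a carbon microphone (an illustration of the principle, not a T-30 spec): the element is a pressure-variable resistor in series with a DC supply, so the current through it is

i(t) = V / (R0 + ΔR·x(t))

where x(t) follows the sound pressure at the throat and the audio signal lives entirely in the resistance swing ΔR. When the granules cake into a solid mass, ΔR collapses toward zero and the output falls to the faint, noisy signal we were hearing.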

There were other complications with the surplus vintage gear as well. The connectors used in these systems were specific to military systems and in some cases very rare and difficult to locate. Also, some elements were considered expendable in the day and not field repairable, which made them difficult to modify. We investigated the possibilities of using the actual radio systems in the tanks, but were dissuaded by our period tank mechanics, who recounted stories illustrating the extremely unreliable performance of these old tube radios. We needed a clever, resourceful engineer who understood the filmmaking process and was interested in tackling this project. I called Production Mixer Chris Munro for ideas, and he immediately assured me that he had the man for the job: James McBride.

Jim is royalty in the sound recording world, yet he is such a humble man you would never know it by meeting him. He was a valued studio engineer and an important technical contributor at the legendary Olympic Studios in London during its heyday. Jim designed and built the facility’s Studio One recording console. Many of the most important acts of the ’60s and ’70s, including the Beatles, the Rolling Stones, Pink Floyd and Led Zeppelin, recorded their most famous records through the consoles at Olympic. (http://www.soundonsound.com/sos/aug12/articles/keith-grant.htm)

Jim has become Chris Munro’s go-to man for designing and building custom sound equipment for specific applications. One of Jim’s recent projects was finding a way to build a radio mike transmitter directly into a space helmet for the film Gravity. (Editor: See Gravity and Captain Phillips by Chris Munro, CAS, 695 Quarterly, Spring 2014)

Jim came aboard Fury, and after numerous meetings with all departments involved, we had a plan for our interior set vintage microphone recording scenario. Jim would modify enough existing vintage elements from the cast costumes, props and set dressing to give us a signal path from the vintage microphones (T-30 throat mikes and T-17 handheld mike) used by the actors to the inputs on my recorders. He would also modify the interior tank plug-in comm boxes (BC-606) to accept a return signal from my cart, so that each cast member could hear the mix-minus feed I was sending them via wireless monitoring from my cart. The actors would only have to plug in to what was now modified and practical set dressing in order to be recorded through their vintage microphones and hear each other (minus themselves) through their vintage helmet headphones.
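A mix-minus feed is simple arithmetic: each listener hears the full mix less his own channel, so no one is distracted by hearing his own voice return through the monitoring chain. Here is a minimal sketch of the idea in Python (the channel names and levels are hypothetical; this shows the concept, not the actual cart routing):

# Each crew position hears the sum of every channel except his own.
channels = {"commander": 1.0, "driver": 0.8, "gunner": 0.9}  # hypothetical levels

def mix_minus(channels, own):
    """Return the level sum of all channels except the listener's own."""
    return sum(level for name, level in channels.items() if name != own)

for name in channels:
    print(f"{name} monitor feed: {mix_minus(channels, name):.1f}")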

In order to make this happen, Jim and his assistant had to meticulously modify equipment that was manufactured to be expendable and certainly not accessible for modification. In the case of the T-30 throat mikes, Jim carefully sawed through the very small Bakelite connector material and internal contacts in order to replace the old and unusable cable. Then, he glued it all back together so it was impossible to see that they were different from the unmodified pieces. He used a similar surgical technique on the flexible rubber piece that contains the carbon throat elements and replaced those with new elements that, while not high fidelity, replicated the sound of the originals when they were new. He sourced and found all manner of parts, including new cable for our cast PTT switch boxes that looked exactly like the original; a new SM-58-like capsule to fit inside the T-17 microphone that our tank commander would use; and small headphone speakers that could be glued to the sawed-off back of the visible part of our vintage headphones, so that they would look original but work as practical for our cast. The list goes on and on. I asked him for miracles, and Jim delivered them time after time.

Jim also worked on the exterior tank communications with Wireless Works, the wireless company production had hired to work out a duplex radio system for all the exterior tank operations. They were responsible for three huge areas: the communications between the tank coordinator and the tank drivers, the communications between our cast of tank commanders during exterior maneuvers, and all RF coordination on our sets. Jim worked with the Wireless Works onsite technician to help them integrate their duplex equipment into the modified vintage equipment the cast was using. Jim’s ingenuity and tireless work made it possible to incorporate vintage communication equipment into the production process on this show.

I should also mention the invaluable assist provided to us by Rob Lihani, who also happened to be the EPK Producer hired by Sony Pictures to document the making of Fury. Rob is ex-military and an expert in World War II militaria. He utilized his many contacts in the world of military surplus dealers and collectors in order to help us acquire authentic pieces of unused military surplus parts and equipment when no one else could find them.

When it came to production dialog, we knew that we would be dependent on wireless microphones whenever the tanks or process vehicle was moving. The tanks are LOUD and cast members might be in any number of positions while the tank was moving, so it was important to test various wireless mike positions before we started shooting. Tarn Willers and I spent several days at the tank training grounds testing a variety of lavalier microphones and mike positions on subjects as we placed them in different positions on the running tanks and the tank process vehicle. We tested a number of microphones in various positions, including several in the cast tanker helmets.

The results:

1) The tanks were really loud.

2) The tank process vehicle was even louder than the tanks, and dialog recorded on it would in all likelihood have to be replaced.

3) The DPA-4061 sounded best when used in the helmet flap position and the DPA-4071 sounded best when mounted in a chest position.

I came onto the project fully understanding that tanks are loud and that a group of many tanks is louder still. The fact that the “process tank vehicle” was much louder than an actual tank was somewhat disheartening. I discussed this with Dave, and although he knew it was a challenge, he felt very strongly that a towed process vehicle would not move like a tank and would be even more problematic than having to ADR some dialog scenes shot on the existing tracking process vehicle.

Given our challenge, we started looking for the alternatives that would give us the best results.

Our costume technician, Mark Wyndham, worked with Ben to modify the hero cast helmets for permanent placement of a DPA-4061 in each. The microphone was fitted into the helmet’s leather lining and exposed through a hole that was punched through for the purpose. Ben had our textile specialists dye Rycote Overcovers to match the helmets, and in the end, it was difficult to see where the mike was located, unless you were looking for it. Richmond Film Services modified these microphones with screw-on extension cables, so that the helmets could be removed easily without de-rigging the microphone cable and transmitter from the actor. Alternate mike placements were worked out on all our regular cast members’ costumes.

Meanwhile, Tarn and I worked on fitting out the cart that Malcolm Davies built for me. The cart was based on a frame made of small diameter speed rail, with shelving and accessories designed and manufactured by Malcolm in his shop. Malcolm builds many carts of this style for the BBC and other production mixers in the UK and Europe. It was Ben Greaves who urged me to investigate this style of cart based on his experiences shooting in the wet and mud of the English countryside, where weight and unwieldiness cost valuable time. The cart was fitted with my gear, including a Sonosax SX-ST, Deva 16 recorder, a Denecke GR-2 Master Clock, two Lectrosonics WBL Venue Racks with a total of 12 VRT modules, a Marshall dual HD monitor rack and a Sennheiser EW-300 stereo transmitter, all powered by a Remote Audio Meon LiFE. My follow/support cart is a Backstage Equipment cart fitted out with a top shelf and an SKB case filled with foamed out rack drawers that hold microphones, transmitters and other sensitive pieces of gear. Our sound trailer was provided by English film production transportation provider Translux, and Ben worked with them to fit it out properly for our sound equipment. It was stocked with expendables, snacks, and, most importantly, a teakettle—that essential piece of equipment in British culture. For transporting our gear at location, the production company built us a small covered trailer with a ramp for our follow cart and gear along with a Gator to pull it with.

As shooting began, we received huge support from our 1st AD, Toby Hefferman, regarding a standalone mobile shooting platform for sound and video. We were given a 4×4 insert car vehicle with a driver. A “room” was built on the rear platform of the truck using a speed rail frame covered with a fitted weatherproof cover. An antenna rack was attached up high over the rear rollup cloth door and a Honda generator was rigged to the front of the truck. Voila! We were a powered off-road sound & video follow vehicle, with a wall of director’s monitors hung along one side. This setup allowed us to track with moving tanks over any surface and to be instantly ready to record as soon as our truck landed for static shots. The “sparks” (that’s British for “electricians”) wired a box next to our onboard generator, giving us the ability to kill it and receive quiet power from their blimped generator when we weren’t tracking with tanks. For shots when the tanks weren’t moving, Ben had our guys run cable out for boom and ambient microphone positions. There was a lot of shouting and gunfire, and good ol’ copper gave us the best signal-to-noise ratio along with more dynamic range than that available using radio mike transmitters. I developed a huge amount of respect for our crew as they unflinchingly ran hundreds of feet of microphone cable through deep, flinty, clay mud day after long, wet and cold day.

But we still had to solve the challenge of the extremely high noise environment of the working tanks. In the end, after a number of false starts at other solutions, we hired a truck designed for location dailies screening. We had construction soundproof it further, and Dody Dorn, our Editor, found ADR Mixer Jon Olive, who would bring a portable ADR setup and do all the prepping required for the necessary ADR sessions. Once the sessions were prepped, our ADs would schedule cast members into our ADR unit for Jon Olive to record. One of our Boom Operators was always on hand to mic the ADR sessions. We used exactly the microphones and mike placements used in the shots, added a boom microphone and encouraged our cast members to take the positions they were in during their on-camera performances. Using this method, we were able to get clean dialog for most of those scenes aboard the tracking process vehicle.

It was also extremely important to Dave for us to document the sounds of the many extremely rare, vintage World War II vehicles we would be using in the production. The British Tank Museum’s German Tiger I tank was a particularly important subject. It is the last surviving operational Tiger I; we had it at our base camp for only a few days and its use was severely restricted to a certain amount of running time, all of which David Ayer wanted to use on camera. We needed to bring in someone who was familiar with the workings of vintage military vehicles and was skilled in sound effects recording. That man was Eilam Hoffman, who has traveled the world seeking out and recording effects. He has an impressive reel of multitrack sound effects recordings that includes many of the rarest military vehicles in the world. Eilam and his assistant scheduled several sessions at our Bovingdon Field base camp and the British Tank Museum in Dorset, and they were on set to record the Tiger tank on the day we shot it, so he could get Dave the sound of the tank’s tracks as they ground through the actual muddy field surface we were shooting on. His recordings of this Tiger are the only known multitrack recordings of a working original-equipment Tiger tank. Eilam was a pleasure to work with, and yet another of the dedicated film sound professionals I had the honor of working with in the UK.

Fury was an amazing experience and adventure. I am incredibly fortunate to have had the opportunity to meet and work with such a talented group of filmmakers on such a remarkable project.

Remembering Walt Martin

In his long and varied career as a Production Sound Mixer, Walt was proudest of the fifteen pictures he did with Clint Eastwood starting with True Crime in 1999. He was Oscar-nominated for Flags of Our Fathers and recorded sound for Best Picture winner Million Dollar Baby and Best Picture nominees Letters From Iwo Jima and Mystic River. He was a longtime member of the Eastwood team and is affectionately remembered.

CLINT EASTWOOD (in a phone interview):

We completed American Sniper, everybody went home, and then we got the news after a couple weeks that Walt had passed away. It’s like losing a member of the family. Walt was a terrific guy and the easiest person I’ve ever worked with in my life. He brought no antagonism or clumsiness to the work; he was just always ready.

He was an interesting guy to shoot with because, you know, most crew members, you see ’em periodically, but you could go three or four days and not ever see Walt. He had a way of finding a spot for himself where he’s out of everybody’s way. I sometimes shoot quietly [especially] when working with children or active people who aren’t experienced. I would just wave and the boom operator would whisper, “They’re shooting.” I never heard him and it worked really great. Of all the people I’ve ever worked with, he was the most unobtrusive, still getting the job done and in fine fashion.

I’ll miss him on the next project. It’ll be like a missing link, missing part of the chain, because he was that good and that reliable.

GAIL CARROLL-COE

I had the good fortune of doing eight movies for Clint Eastwood with Walt Martin, plus a few other projects as well. He began his career by shooting documentaries with his father, who was a missionary. I think everyone knows of his accomplishments, but I thought I would share some personal things. We traveled quite a bit together for some of the projects, and he made sure he experienced every location to its fullest. The last project took us to Morocco and, although it was a difficult location, he made the most of it. Walt loved traveling with his wife Elena and his daughter Claudia. He took along his mother-in-law and sister-in-law as well on some trips and made sure everyone had a great time. He loved gardening and sharing the fruit from his trees with others. He loved the Beatles and every Sunday spent some time listening to a program that played their music. Lastly, he loved recording sound.

JONATHAN FUH

Working with Walt was a highlight of my career. He was quiet, unassuming and professional. He loved his family and was loyal to his friends. His wife Elena likes elephants; I remember, on his days off while on location, he would look for elephant-motif souvenirs to bring back for her. He will truly be missed by those of us who were fortunate to have worked with him.

RANDY JOHNSON

I came to know Walt late in both his career and mine. I was familiar with his large body of work and awards so, when he called to ask me about filling out his sound team on Jersey Boys along with Gail Carroll-Coe, I jumped at the chance to work with him and, of course, with the man, the icon, Clint Eastwood. Working with Walt was a pleasure every day. He had a big heart and was a gentle soul. His sharp sense of humor made the days go by quickly. He was never afraid to laugh at himself and I think the crews sensed that and held him in high regard as a bit of a father figure.

His work on Jersey Boys was multi-layered and thorough. With the help of Mark Agostino and Tim Boot, the task of capturing the live performances went very smoothly. As difficult and challenging as Jersey Boys was, it was still a local shoot and a stage show. Mr. Eastwood’s next project, American Sniper, demanded much more physical effort from Walt, but he embraced the adventure wholeheartedly. He had to climb on vehicles and ride tethered to the top of Humvees, but I think he was proud of his ability to do whatever was needed to capture the performances. Very sadly, it was his last adventure, but one he was very proud of. His talent and humility are his great legacy. I will miss Mr. Walter B. Martin.

Review of Sound Devices 970

by Richard Lightstone, CAS

Evolution/Revolution: How We Got to Now

When I began mixing some forty-four years ago, we shot on film with one camera. A second camera only came out during big stunt scenes. I mixed on a mono Nagra, recording to ¼-inch audiotape, and the production track I delivered to editorial was THE only track of dialog.

Production changed in the ’90s when two cameras were used for every setup. This usually meant simultaneous wide and close shots. Suddenly, we were using more wireless microphones and there was a need to have both mix and iso tracks of each wire in use.

There were several professional two-track audiotape-based recorders available then: the Nagra and the Stellavox. DAT recorders from Fostex, Stellavox (the Stelladat) and HHB soon supplanted the analog machines, and some enterprising Production Mixers embraced the technology of the music industry and started to use the newer eight-track recorders, either the Tascam DA-88 or the Alesis XT-8.

When Zaxcom, led by Glenn Sanders and Howard Stark, introduced the first portable four-channel hard-disc recorder in 1996, it revolutionized location recording. Sound Devices brought out their version of a nonlinear recorder in 2003 and we’ve rapidly come to the present where eight channels is the minimum. Some shows now expect to have individual tracks available for everyone in the cast and even eight tracks may not be enough.

I have used a Yamaha 01V96 console since 2004 and have always wanted to be able to record up to sixteen tracks if and when required. Previously, I could achieve that only with a kluge of eight AES and eight analog outputs. But the new Sound Devices 970, an audio-only version of the PIX 260i, offers capabilities that greatly simplify high-track-count recording.

Features

The 970 is a half-rack, 2U device capable of recording up to sixty-four tracks to multiple drives. There are two front-panel drive bays and two eSATA drives accessible from the rear panel. The drives may be configured for simultaneous or sequential recording as needed.

Eight line-level inputs permit connecting devices directly, but full use of the high track capabilities comes with connection to a mixer that can supply Ethernet-based Dante or either optical or coaxial MADI connections. The 970 will also accept eight tracks of AES via a DB-25 connection.

Dual power inputs through standard 4-pin XLRs provide operational redundancy. In the event of a failure of both sources, proprietary PowerSafe™ circuitry provides ten seconds of reserve and an orderly shutdown.

Sound Devices continues to use the rock-steady Ambient Recording Lockit timecode technology, offering sufficient accuracy and stability for use as a master clock.

A large five-inch screen provides visual metering for up to sixty-four tracks and fast, intuitive menu control. Many of the same button actions of the SD 788T are duplicated here on the front face of the 970.

For example, pressing the STOP + FF buttons increments the Scene or Slate. Pressing STOP + RW buttons allows you to delete false start takes. A window asks you to confirm the action, YES or NO, before proceeding.

Similar dedicated keystrokes give access to the Metadata screen where scene number, takes, notes and other functions may be rapidly edited. Commonly used phrases may be selected and edited from a list manager.

Pressing AUDIO + FILES, or Ctrl + P on an attached USB keyboard, creates a CSV Sound Report for the current folder on all applicable drives.

The real clincher to me was the ability to use the Audinate Dante network of up to sixty-four tracks!

Yes, far more than I might need, but I always believe in future-proofing my investment. Combined with a Dante card for the Yamaha, one Cat 5E cable gives me 16 x 16 I/O to the 970. I simultaneously record to both an SSD and a CF card, which are mounted via the Sound Devices PIX-CADDY and PIX-CADDY CF respectively.
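Some quick arithmetic shows why one network cable is so comfortable. Counting only the raw PCM payload (actual Dante traffic adds packet overhead):

64 tracks × 48,000 samples/sec × 24 bits = 73,728,000 bits/sec, about 74 Mb/s

which is a small fraction of the gigabit capacity a Cat 5E run can carry.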

The 970 also features an embedded Web-based control panel, PIXNET, for machine transport and setup control over Ethernet-based networks, as well as file transfer over the data network with SMB.

File metadata editing of scene name, take name, notes, track names and reel folders can be done across all drives during, before and after recording.

The 970 may also be controlled through an RS-422 port and GPIO (General Purpose Input/Output).

Use in the Field

I picked up my 970 in early April and, after installing it on my newly reconfigured cart, I had about two weeks of “sea trials” before beginning production on a television series in May. Having never used a PIX or Audinate Dante, I wanted to be comfortable with it well before I was on any set. This included familiarizing myself with the operation of the 970 and the routing of the Dante network.

The Dante Controller on a PC or Mac is the master for all the I/O to all the devices on the network.

The series shot for five months and the 970 worked flawlessly every day. I powered it on about a half an hour before call and shut it down at wrap. That added up to at least twelve hours a day of constant use.

My I/O setup was built around the Dante network. As I mentioned previously, I added the Dante MY16-AUD card to my Yamaha 01V.

I run the Dante Controller from a Mac Mini on my cart. The Controller routes the I/O from the Yamaha to the 970 (and any other Dante device on the network). PIXNET also runs from the Mac.

Each day I would create a new “Reel #” or folder. The 970 offers “Custom” (default) or “Daily.” Choosing “Custom” allows the Reel to be edited with any alphanumeric value. “Daily” will automatically generate a value derived from the System Date, i.e., YYMMDD.
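As an illustration of what “Daily” produces, the naming logic amounts to formatting the system date (a sketch of the behavior in Python, not the 970’s firmware):

from datetime import date

# "Daily" reel naming: derive a YYMMDD value from the system date.
reel = date.today().strftime("%y%m%d")
print(reel)  # e.g., "141103" for November 3, 2014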

Aside from our most important responsibility of mixing great tracks, we have the added duties of accurate metadata and arming and disarming of tracks. These operations are available both directly on the 970 and via PIXNET.

Pressing the AUDIO button on the PIX and rotating the Control Knob allows me to scroll to a track. Pushing the Knob highlights my selection, for example, to arm a track. Scrolling further, I can edit the Track Name. These features are duplicated on an attached keyboard as well as on PIXNET.

Conclusion

The most important part of any new HD recorder is how it sounds, and the answer is that the 970 sounds great, like all of the Sound Devices recorders. Sound Devices is known for rigorously testing its products before bringing them to market. They are also fast to respond and fix any software bugs reported by users and owners. This is a personal testimonial; they have my back.

The 970 is definitely a fixed installation recorder, not a bag-type machine, as it has no on-board mixing capabilities. However, its small footprint will easily fit any cart configuration.

With over five months of daily use, I can safely say I really enjoy the 970. It is reliable and well designed, with an easily accessible menu, and it does the job it was designed for.


Images of Richard Lightstone and his cart are courtesy of Richard Lightstone. Images of the 970 recorder are courtesy of Sound Devices.

The Serious Side of Comedy
On the Set of Anchorman 2

This account of the very complex business of building and operating a largely functional TV studio for Anchorman 2 was drawn from an interview with Todd Marks, Jeb Johenning and Perry Freeze on June 1 and an interview with Ben Betts on July 5.

It began, as these things do, with an availability check. The primary task was to build a functional CNN-style television studio circa 1980, and the secondary task would be to acquire and create era-specific video playback content for the studio.

The call went to Todd Marks; he would be the Computer and Video Playback Supervisor. Recognizing the scope and complexity of the assignment, he immediately set about assembling a team of Local 695 Video Engineers. He brought in Perry Freeze as Video Playback Coordinator, Jeb Johenning as Video Playback Engineer, and then Ben Betts as Supervising Engineer. Later, he added Chris Adams and Phil Haskell, who provided invaluable support.

Among the first tasks was to draw up a budget, an assignment made more difficult because they had yet to be entrusted with a script. There were general notes of what might be needed but no scene plan or comprehensive gear list. Todd and Ben had experience putting together a fictional television studio, working together on Deep Impact and independently on several other shows like Studio 60 on the Sunset Strip. They had some of the necessary elements and collaborated on a speculative budget. Jan Pascale, the Set Decorator, had a strong working relationship with Todd from their work together on The Internship. This helped as they wrestled with determining the needs and sourcing all the era-specific equipment. Perry worked with Todd breaking down the extensive list of graphics and playback content as the Art Department provided more details.

To dress the GNN studio, the fictional start-up news organization, they would need more than 125 CRT monitors of assorted sizes, four electronic lighting panels, four electronic control panels, four satellite trackers, four audio mixers, four video mixer boards, three studio news cameras, six portable news cameras and a bunch of other support gear.

Much of the gear would come from Ben’s company, Digital Image Associates, but they also drew on the resources of History for Hire, Todd Marks’ company, Production Suppliers, and Jeb Johenning’s company, Ocean Video, as well as from Playback Technologies.

Making a complete television broadcast studio is difficult enough but this was a period story, set in the early 1980s. In the video world, thirty-year-old equipment is antique and getting it all to work well enough to at least appear functional was one of the challenges of the project. To assist in this process, the team recruited John Monsour, a self-taught video engineer of legendary ingenuity.

Todd Marks: John is one of the original twenty-four-frame engineers. He’s the one who did the original Apple commercials. He’s the one who figured out how to take the Mac Pluses and modify them to go twenty-four frame.

The pedestal cameras rented from History for Hire had been gutted and were really just empty shells. Just to get an image to appear on the video viewfinders required reinstalling functional camera circuits within the empty boxes. They ended up fitting a smaller and more modern box camera within the empty lens housings. But then there was an issue of rigging control cables to the lenses of the cameras so that the operators could run the zoom lenses from the controls on the pan and tilt handles. The focus controls from the period were manual rather than electric and operated from lines similar to speedometer cables. Two factors made this complicated: first, the original thirty-two-inch cables were not long enough to reach within the camera shell and be routed to the replacement camera in the lens housing and, second, the original lines drove Fujinon lenses and these were Canon lenses with a different coupling. Longer cables were unavailable from any regular source but John Monsour found a Venice Beach supplier of speedometer cables for hot rods who was able to fashion a cable and coupling at the needed length.

By long tradition, movies are projected at twenty-four fps. Video is normally distributed at 30 fps (actually 29.97 frames) so, to avoid seeing a noticeable and distracting strobe effect on CRT monitors, some means must be found to reconcile the discrepancy whenever video monitors appear in the image. The workaround is to process the video so that it runs at exactly camera speed and can be locked to the camera, one frame of video to one frame of camera image. (We call this video sync.) This same practice applies even when an electronic camera, like an Arri Alexa, is used to make the movie, although the speed for a digital camera is 23.98 frames per second rather than the 24 frames of a film camera. This sort of alteration is necessary to allow the production cameras, the Alexas, to photograph the functional prop video cameras and also see the images on their viewing monitors. It’s also needed whenever the output of the functional prop cameras must appear on a monitor.
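The arithmetic behind those rates is the NTSC 1000/1001 factor; color video runs 0.1% slower than its nominal speed, and sync-sound digital cameras inherit the same offset:

30 × (1000/1001) ≈ 29.97 fps
24 × (1000/1001) ≈ 23.976 fps (written as 23.98)

Locking one frame of 23.98 video to each frame of a 23.98 camera keeps the CRT scan and the camera shutter in step, which is what eliminates the visible strobe and roll bars.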

So, everything must be changed over to twenty-four frames to be filmed by the production cameras. But not quite. There are several points in the routing process where a traditional image may still be needed. The vintage signal processing units were designed to run at thirty frames and would balk at a lower frame rate. These were used to provide Chroma Key or to insert “lower third” or period-specific graphics.

John Monsour came up with an effective but complex scheme to address this need for different frame rates. He modified the cameras and their genlock circuits to scan the CCD at 23.98 fps, instead of 29.97. This involved a custom circuit board, tightly integrated into the entire camera system, attached to the side of the existing box camera. The cameras were genlocked back at the CCUs via a custom 23.98 sync generator. The 24-frame composite video signal was then fed into the 24-frame matrix video router. The RGB signals were scan-converted back to 30 fps and fed into the 30-frame router. The CRT viewfinders in the vintage cameras had been replaced by History for Hire with LCDs, so it was necessary to send a 30 fps signal back to the cameras, strip the chroma out and then re-color correct them within the camera housings.

The cameras were further modified to have “tally” lights that could be activated on demand. For a long time, studio cameras have been fitted with a red light, called a tally light, that would identify the live camera for the talent. For use in the movie, where dramatic needs might not be exactly in sync with the operation of prop cameras, the tally lights had to be custom-wired. Relays were built and wired to a controller at the CCU station, so that the tally lights could trigger following the GlobeCaster switcher or be manually turned on/off.

Since they also incorporated video monitors, it was necessary to apply this same clock-rate adjustment to the teleprompters.

The first budget, providing for all of the capabilities Todd and the team thought would be needed, landed in the production office with a thud. Immediately there were questions like “why do we need this?” and “what’s that for?” Todd and Ben set about grinding the budget down to the bare essentials.

As luck would have it, they had a scene on the first day of production in a smaller video studio set. They shot a sequence with Will Ferrell (Ron Burgundy) and Christina Applegate’s character, Veronica, at WNBC, a working television station in Atlanta. This smaller studio served as a good shakedown for all of the gear and also provided production with a good opportunity to appreciate why these particular elements and capabilities were important to the overall film.

About three weeks later, the team returned to Atlanta, with gear optimized for the task, at least as much as these things can be worked out in advance. They had shipped the gear ahead by rail, 3,500 pounds on six pallets in the first shipment and more than double that in the second. The total shipping weight was more than four tons! The engineering work and operational functionality of this equipment gave the show capabilities that enhanced the whole process.

They had ten days to get everything wired and functional in the fictional GNN studio. There were many difficult moments due to the age of the gear, and the amount of graphics and playback necessary for Day One in the studio. Then the real work began. There were two aspects of this: operational and content.

Just keeping everything routed correctly, cued and ready to play in a constantly changing environment is a huge task. For instance, the vintage CRT monitors were all about thirty years old and tended to be balky and erratic; some days they’d work fine while other days individual units would refuse to work or would show color differently from their companions. Not all of the monitors were vintage CRTs; some of the camera viewfinders were LCD displays. Each design handled visual data a bit differently so keeping it all evenly illuminated, period-consistent and simply operational demanded constant attention.

Todd Marks: And also because we’re dealing with CRTs which are inherently finicky and these are twenty-, thirty-year-old machines sometimes—what looked good on one would look completely out of scale on another or the color was completely screwy. We went through, before we started the production, and tried to match as closely as we could but…

Jeb Johenning: Isn’t that what NTSC stands for? Never The Same Color twice.

Reasoning that the start-up of a new TV network might experience some glitches, Todd and Perry devised a clever way to cover the occasional slip-up:

Todd Marks: …in the news offices, in our “wall of fifteen,” the day before they go on the air, we set the horizontal off on [the bottom corner monitor] so that it was kind of rolling slowly [and] we put an extra in front of it pretending to tweak it…

Little tricks like that could cover the occasional glitch but, for the most part, the gear had to work flawlessly or it would draw attention away from the story. More than one hundred monitors had to display images in some shots, wall monitors had to show matching color, video camera monitors needed signal to be converted to black & white for period authenticity and everything had to be locked synchronously together.

Providing content for all those monitors was a large part of the assignment. This seems simple enough for the material generated by operating the functional prop cameras, but it was still necessary to route the signal appropriately: sending the 29.97 signal to the right place for Chroma Key or Schindler processing, sending the 24-frame version to the monitors that needed the slower frequency, and keeping it all coordinated. But there was also the matter of providing additional content beyond just the camera feeds.

A credible TV news center has more images up on the monitors than just its own anchors in the studio. There are reporters posting stories from the Mideast, from Washington and from across town. They needed images of those reporters, the baseball games they were covering, advertisements that would play between news stories; in short, anything that might play on a TV network. Just like a real television network, the fictional studio is a beast with a voracious appetite.

When they weren’t actually filming in the GNN studios, the entire playback crew was preparing a repertoire of clips to show on the monitors. They would pose their newspeople in front of a green screen and then composite in the Taj Mahal or the Capitol Building in the background. They also made fake period commercials:

Ben Betts: [For] one of the commercials Todd mocked-up in his hotel room, he literally bought a can of beans, put them into a saucepan, shot it with his camera, [and] that became a commercial that we used. I mean, they had no budget for most of this stuff.

Shawn had some stock footage. He made a fake airline commercial just out of stock footage. He added graphics and cut it together. [This is] on a Saturday, in Perry’s hotel room, working on little pieces trying to come up with more material ’cause we didn’t want to get caught with our pants down … ’cause commercials are a great thing for something like this. [When] you need to cut to something, a couple stock commercials gives you something that’s safe to roll to. It doesn’t tie you into any part of the story.

It is possible, of course, to burn in images on all the monitor screens throughout a scene. Production will often gravitate toward that solution because it means they can postpone decisions about what should appear on the screens. Doing it live on the day requires considerably more planning and coordination, but there are good reasons for making that commitment. First, it simply looks better and more natural to have all the elements together in one place.

Jeb Johenning: I mean, I don’t think it ever looks as good when you burn-in an image versus doing it for real because there are these subtle little things like just even the glow of the monitor reflecting onto the desk or onto a glass or all these little glints or some such—you know, again, like you said earlier, something’s just off.

Also, a task that’s pretty simple, inserting one image into one monitor face, becomes considerably more complex when a single shot requires fifty or more inserts.

Todd Marks: No, we want to do it live—as much as we possibly can—so as long as it’s not something that we’re supposed to be on the television that we haven’t shot yet, which is impossible for us to do, we try to do as much [as we can] practical because it makes it better for the director, it makes it better for the actors, it makes it better for the editors, you know, even if they have to go in and sweeten things a bit later…

The live interactive video performance makes it possible for the actors to react and improvise within the scene. One of the most iconic interactions in Anchorman 2 would not have been possible without this flexibility:

Ben Betts: The fact that everything works was great. ’Cause they really had a teleprompter there and the twenty-four-frame teleprompters were really used by the cast, just like they would in a real broadcast. We had that one infamous scene in the movie that was just a little throwaway thing in the original script, that just became hysterical.

[Steve Carell improvised a scene predicated on his weatherman character wearing green pants for St. Patrick’s Day and then discovering that his legs disappeared from the composite image.]

Ben Betts: It was one of the first trailers for the movie. Because Steve Carell got out there and just ran with it. An improv comedian and a green screen opens you up to a lot of possibilities…

This is the kind of thing that’s really only possible if all the components are working and interacting so the actor can see the effect and play with it. Done with images composited in Post, many of the comic possibilities would have gone unrealized.

Keeping all the elements running and coordinated live on set puts heavy demands on the agility of the production crew. Jeb Johenning, Perry Freeze, Ben Betts, GlobeCaster TD/Engineer Sharouz (Shawn) Noushinfar, Chris Adams and Phil Haskell were kept scrambling every day.

Todd Marks: …we had to be on our toes during the studio stuff and we had to change things up on the fly and, you know, they’d come up with stuff two days before and say, oh, we’re going to have a scene where they’re talking, Ron’s talking to four different people at the same time. And so, their thought is—well, we’ll have to shoot these individually and cut them and do it all in Post. And we’re like, no, we can do it all live.

Perry Freeze: It was a lot of planning and a lot of work going in—on top of all the other daily stuff that we were shooting in the studio. And then, I think they said, “Action!” and then did a couple takes—they were over and done in twenty minutes.

This determination to handle as many components as possible live requires planning and commitment but pays dividends in spontaneity and authenticity in the performances. For Anchorman 2, Todd and his team were up to the challenge.

Todd Marks: The Computer and Video Playback Supervisor, Todd was the leader of the Anchorman video team.

Ben Betts: As Supervising Engineer, Ben assembled the necessary gear and worked to make it, and keep it, operational.

Jeb Johenning: His responsibilities as Video Playback Engineer were to have all the needed elements on hand and cued as needed.

Perry Freeze: As Video Playback Coordinator, he assisted Jeb Johenning, as well as being Todd’s right-hand man.

John Monsour: John came on the project as a Consulting Engineer and was instrumental in rebuilding old, sometimes gutted, equipment to run in a production environment.

Chris Adams, Phil Haskell and Sharouz (Shawn) Noushinfar provided support on location.


Glossary for highlighted words

Schindler Imaging Standards Converter Custom-designed scan converter that converts, color corrects and genlocks 30 fps computer/video signals to 24 fps or 23.98 fps signals.

GlobeCaster The Broadcast DVE system and Production Switcher that switches live video signals, triggers tally lights, performs various key functions (Down Stream Key, luma/chroma keys) and Digital Video Effects.

Chroma Key A process for compositing two images so one appears atop the other. One figure is photographed against a color background, often green, that is not represented in the second image. A computer drops the green background and effects a seamless merger. Typically used to superimpose a weatherman over the weather map.
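
For the technically curious, the keying decision can be sketched in a few lines of Python. This is only an illustration of the principle described above, assuming images stored as NumPy arrays of RGB values between 0 and 1; real keyers, the GlobeCaster included, add edge softening and spill suppression well beyond this.

import numpy as np

def chroma_key(fg, bg, threshold=0.3):
    """Composite fg over bg wherever fg is not 'green enough.'"""
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    # A pixel is keyed out when green dominates red and blue by the threshold.
    is_green = (g - np.maximum(r, b)) > threshold
    out = fg.copy()
    out[is_green] = bg[is_green]   # show the background (e.g., the weather map)
    return out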

NTSC The National Television System Committee is responsible for specifying technical standards for broadcast television in the United States. The 120-volt, 60 Hz electrical power in use in the US requires different configurations than those used in Europe and other places with 220-volt, 50 Hz power. The term “NTSC” often refers to standard definition 30 fps (29.97) video signals.

Lower Third Literally, the lower third portion of a television image. Graphics, station IDs and text crawls are often placed in the lower third of the image.

CRT Short for Cathode Ray Tube, it denotes the older technology of televisions, before flat screens.

The Nagra Seven

by David Waelder and Brendan Beebe

The new Nagra Seven, a two-track recorder with a touchscreen interface, is a small jewel. Like all the iconic Nagras, it is made in Switzerland to exacting standards. Limited to two tracks, it is primarily intended for radio reporters and others who don’t need a high track count. Still, it can be fitted with a timecode circuit and is entirely suitable for production work as an adjunct recorder.

It’s also of interest to media professionals because of its flexible design concept. Software provides for multiple configurations that can be recalled to meet various assignment demands, and circuit boards may be installed to provide capabilities not commonly found in a battery-operated recorder.

The timecode system, when installed, operates from a very stable temperature-compensated crystal oscillator (TCXO). In addition to all the standard frame rates, it is capable of user-selected pull-ups and pull-downs.

All Nagra Seven recorders are provided with full iXML metadata and Ethernet connectivity that can send files by FTP. It is also possible to fit a circuit board to provide ISDN or Wi-Fi and 3G connectivity. The ISDN link can serve simply as a connection for recording phone conversations but is also potentially useful for in-the-field re-recording. The recorder can be set up so that a director in another town, or on another continent, can hear a recording as it is being made, at a quality level suitable for judging takes.

The editing function is another remarkable feature in a small recorder. When fitted, it provides non-destructive editing capabilities right on the recorder. The display screen shows an audio waveform that may be cut and pasted to a new file. An interview that wandered over topics might be edited before being uploaded to a radio station while preserving the original file. This might also be useful for occasions when the recorder is left running in a car that goes off to do remote work. The resulting hours-long file might be trimmed for inclusion with dailies materials without the risk of making permanent cuts.

Brendan Beebe took the machine for a week while working on The After for the Amazon network. He was positively impressed by its performance and specifically mentioned the extensive and flexible settings for the preamp filters. With the ability to tailor roll-off slopes and limiter parameters, it was, he said, more like a studio mixer than a recorder in its capabilities.

Brendan referred to it as a “gentleman’s portable recorder” and gave particularly high marks to the performance of the preamps and the headphone amp and to the flexible touch screen with intelligent shortcuts. “It would be difficult,” he said, “to have poor sounding audio on the Nagra Seven.”

At $3,300 for the base machine and $3,800 with the timecode option, it’s reasonably priced for a professional Swiss recorder. Being very light and offering outstanding performance out to a 192 kHz sampling frequency, it’s a good choice for sound effects recording. With only two tracks, its applications in production are limited, but it’s also a window into the thinking at Nagra.

Recording Sound For Edge of Tomorrow

by Stuart Wilson, AMPS

The first challenge for us was keeping up with the man! He doesn’t waste a second; he walks onto the stage, barely breaks step to have a mini-pit stop where he sits on a wooden box (no artist’s chair for him) while hair, makeup, costume and sound all do their finishing touches and bam!—he’s off again onto the set and straight into the scene. We needed to be there like coiled springs, ready to wire up this man or be left behind.

Matching pace with Tom Cruise (TC) as he powers his way through the project was just one of the elements that made work on Edge of Tomorrow exciting. Quite a bit of the action of this film, directed by Doug Liman and co-starring Emily Blunt, takes place on the battlefield. The brief was that Cruise, Blunt and their squad would be airdropped onto a beach in the middle of a huge battle and have to fight their way inland. The battlefield was full of shell craters, mud and water. Multiple cameras, both stationary and handheld, running simultaneously, would cover action that included partially improvised, seven-way dialog ranging from quiet muttering to full-scale yelling. Oh, and the actors would be wearing full-body armor, called Exo-suits, constructed from dozens of parts with several articulated joints and with weapons mounted on them.

The elements we faced included:

• The Exo-suit body armor could be noisy when they moved around.
• There was little chance of getting a boom microphone anywhere near.
• There would be wide shots for action and scale at the same time as intimate shots between the soldiers.
• The explosives and gunfire were going to be loud.
• The weather was set to be wet and windy.

It’s enough to set any Sound Mixer’s alarm bells ringing on multiple fronts, although the bells were largely inaudible over the din of the action.

I decided I had to get at least one wireless microphone, if not two, working on everyone at all times—and they had to work well! It was my first time working with Tom Cruise and I had the impression he wasn’t going to want to spend much time getting his microphone fitted and finessed every day.

I looked at the designs for the armor suits and consulted with the Prop-Modellers to find a way to build microphones into them. That way when Tom Cruise was in costume, he would already be mic’d.

Working cooperatively with Pierre Bohanna, James Barr and their crew, we built a box to be fitted on the front of the Exo-suit. The box was drilled out to make space for a lavalier with a furry windscreen that we concealed behind a mesh screen, and painted to match the Exo-suit. We found that even a millimeter of variation higher or lower in the mount made a huge difference to the coloration of the sound, so it was critical to have them precisely vertical and positioned as high in the box as possible. At the end of each day, we would open up all the boxes to dry them out and make sure there was no slippage in the orientation of the microphones inside.

We also fitted microphones in the helmets to help us cope with the huge dynamic range of the performances and give us coverage when a shell blast threw a load of wet debris on top of the chest-mounted boxes. We did have a couple of microphones destroyed during the shoot but, all in all, they survived pretty well.

The team who built the Exo-suits was responsible for getting the actors in and out of them every day, making necessary adjustments and keeping them all working smoothly. They were a crack team, and ready with a can of oil to prevent squeaks.

One of the important sets was a ‘Drop Ship,’ a troop-carrying aircraft that was built on a gimbal suspended from the roof of the stage. The actors were all hooked on with restraint wires in an upright position, already in their body armor, ready to be dropped when bomb-bay-type drop doors opened beneath them. They then dropped fifteen feet for real before being held by the wires.

Once the doors were closed and the access steps slid away, no one could get in or out. We built a PA system inside using some rugged horn speakers that would look right on-camera, so the Director or Assistant Director could communicate with the cast.

This was a phenomenally noisy rig, built twenty feet above the ground. There was no way to quiet it, but it sounded quite in keeping with the big machine it was supposed to be, so the best thing was to go with it. We suspended a couple of Schoeps omnis from the ceiling to capture FX in sync, and all the clanking, groaning and motor noise of the ship sounded great. These microphones were placed away from where the dialog was happening so they could pick up a continuous track of FX (recorded on discrete tracks) which could cover and blend edits from one shot to another. This recording was used to help keep the sync dialog usable.

I had fun putting a homemade contact microphone (from the Sound Artist/Field Recordist/Composer Jez Riley French) directly on the huge winches that drove the motion of the rig. That gave some really interesting low-frequency tones and clunks to add into the mix at the dramatic moment the drop-doors swung open. A benefit of the noise level was that the actors had to yell their lines, keeping the balance of voice to noise usable. Again, we used two microphones as much as possible, one in the helmet and one on the chest. We used Audio Ltd. mostly and Zaxcom for the really loud voices.

There are many benefits to working with an actor like Cruise, as he is aware of all aspects of the process, including a sensitivity to sound. When he walked onto a barracks set that had plastic interlocking floor tiles and said they were a bit noisy to walk on, the Art Department had to find a solution. There must have been around four hundred tiles in that floor and they put three men on the job of filling each of the thirty hollow sections on the underside of EACH moulded plastic tile with silicone glue and precisely cut pieces of carpet. I’d like to imagine that would happen if a Sound Mixer made a comment on a noisy floor!

A scene landing a helicopter in the middle of London’s Trafalgar Square, one of the main focal points of the city, presented another challenge. Nothing of the sort had ever been done before and the shot necessitated closing down the entire area for the morning. TC was to board the helicopter in another part of the city where Boom Operator Orin Beaton would put the microphones on while I set up with the filming unit in the Square. The plan was that, as the chopper landed, TC would step out, immaculately dressed in his U.S. officer’s uniform, flash that famous smile, meet up with a waiting British officer, and they would talk as they walked to a waiting car, get in and drive off.

I had told TC that I wanted to put two wireless microphones on him for this scene, one set to a high level, the other set low and in two different frequency blocks in case of any unpredictable interference on the day. He was open to this idea but wanted to test it a few days before, to see that it could be done without affecting the immaculate look of the costume. I was grateful for this as the shot was nerve-wracking enough. The cameras would be rolling from the moment the chopper appeared in the sky until they drove away in the car at the end of the scene—it had never been done before, we might get only one shot at it and we didn’t want to risk a visible sound pack spoiling the shot. We put the microphones on, mounted the packs on the ankles, and pulled and flapped the trousers to make sure we wouldn’t see any lumps in the downdraft of the helicopter blades. When TC gave the thumbs-up, we were good to go and it turned out to be a great scene. In the end there was a lot of RF around that morning that wasn’t present on the scout so it took an Xmas tree of antennae to bring in enough signal, but we got it! We all had to be in military costume in case we were seen from the air so we had to rationalise that we were in the Royal Engineers Corps and that wireless operators play a crucial role in modern warfare!

I have never encountered anyone so relentlessly POSITIVE as Tom Cruise. Every day for five months, he was always giving at least 100%. If there was anything he wasn’t happy about with the filming, he always had a good reason and whatever the problem was got fixed, which would invariably improve the shot. He’s very demanding of himself and those around him but if you do a good job, he shows his appreciation. He is absolutely in control of what he is doing in his performance and behaves like he is the luckiest guy in the world to be able to do what he does—making movies. His professionalism, talent and commitment both impressed me and took some of the edge off an assignment with more than the usual amount of trudging through mud.

A Profile of Hal Hanie
56 Years in Broadcasting

by David Waelder
Photos courtesy of Hal Hanie except where otherwise stated

Dwight David Eisenhower was President in 1957 when William (Hal) Hanie began his career in television at KRLD, the CBS affiliate in Dallas. Tailfins were all the rage for cars and The Howdy Doody Show, the iconic children’s show from the ’40s, was still on the air; it would run for another three years. Videotape had been introduced only a year prior and, in some markets, copying programs was still done by kinescope, a process that involved shooting a monitor screen with a motion picture camera.

Television in the ’50s was a young and rapidly developing industry but Hal Hanie entered the field well prepared for the rapid technological change he would experience. Drafted into the Army during the Korean War, he took advantage of an opportunity to complete his service in the Air Force. They gave him twenty-two weeks of training in electronics school and additional training in control tower school that included instruction in radar. On completing his four years of military service, he continued his training in trade school and also worked at the radio station run by the school. His first real job was with Collins Radio, now Rockwell Collins, a manufacturer of broadcast transmitters, microwave transmitters and relays. When he took the position at KRLD, Channel 4 in Dallas, he already had a solid background in electronics and related disciplines.

At KRLD, he worked nearly every position in television at one time or another. He also maintained the transmitter for the station’s sister radio facility located in the same building.

He did television remotes for events, like football games, all over Texas. He also did video recording and worked instant replay, a new feature developed at CBS by Tony Verna. In those days, sports events were recorded on two-inch videotape and any portion of the tape might be played back for on-air review. Locating the right cue point for the desired play was the difficulty in any on-the-fly playback situation. The video recorder was fitted with a mechanical counter and the operator would hold the timer at zero until the play started. For replay, he would back up the tape to the zero point, or a few seconds before, to provide time for lock-up. Later, with the use of one-inch machines, operators like Hal Hanie would often turn the reels by hand to find the cue point, and then turn the reels forward by hand to provide slow motion. With the later machines, the mechanical counter was replaced by a system of identifying plays with beep tones laid down on the cue track, audible to the operator on rewind. Providing instant replay was one of his responsibilities throughout most of his career, both in Dallas and here in Los Angeles, up until 2009 when the Clippers ended their over-the-air contract with the station and KTLA ceased original sports programming.

The Kennedy assassination in Dallas was his most memorable experience while at KRLD. He recalls seeing Lee Harvey Oswald at the Dallas police station and observing how cool and self-possessed he appeared to be. Jerry Hill, one of the policemen who found the sniper’s nest in the Texas School Book Depository and later helped capture Oswald at the Texas Theatre, was one of two police officers working part-time at KRLD as a police liaison and well known to the staff at the station. Hanie remembers this as a chaotic time, exciting but stressful and disturbing. And, he had occasion to evaluate the performance of the crack staff from CBS in New York who came to Dallas to cover events. Nelson Benton, later regarded as a veteran newsman, was just beginning his career and appeared to be a young fellow “shaking in his shoes” when Hanie observed him.

In June 1969, Hanie moved to Los Angeles and started work at KTLA. He joined IATSE at that time. (His work at KRLD had been under an IBEW contract.) He stayed at KTLA for forty-four years. Combined with his twelve years at KRLD, he has 56 continuous years of experience in television.

At KTLA he continued to do instant replay for sports and did videotape playback and recording for all sorts of programming. He did the recording for Donny and Marie and Dinah’s Place. He has fond memories of the people working both shows.

He worked many other shows including The Richard Simmons Show and Mary Hartman, Mary Hartman and others too numerous to recall. He edited Backstage with Johnny Grant and recalls that Grant could never remember names so he would call everyone “Tig,” short for “Tiger.” When Hal Hanie asked him what he would call a woman, he thought for a moment and answered, “Tigress.”

Gene Autry owned the station when Hal Hanie first came to work at KTLA. Hanie remembers him as a benevolent boss who often treated employees to lunch in his private box at Angels games. The Tribune Company purchased the station in 1985 and initiated policies that were more corporate. They sought to renegotiate the contract and eliminate seniority status. Hal Hanie was proud to walk a picket line to protest that move. He also served for a while as Shop Steward for the Videotape Department at KTLA.

He recalls a time at KTLA in 1991 when the station brought in some green production staff to work the morning news. They were so inept that they couldn’t coordinate the teleprompter copy to match the video clips and mismatches were common. Finding themselves adrift, the reporters would often break up laughing. The Producer of the KTLA Morning News encouraged them to play along with the errors rather than glossing them over and striving to retain dignity. The newscasters, thinking the show was probably on the verge of cancellation anyway, went along and discovered that ratings improved. Viewers liked the casual presentation. After that, every news program in town was copying the loose format. Three of the reporters from that time, Mark Kriski, Sam Rubin and Eric Spillman, are still with the station.

In addition to his regular work at KTLA, Hal Hanie operated a small, community radio station from a studio adjacent to his home. FCC regulations are quite demanding regarding regular broadcasts and he needed assistance to keep things running regularly. He used interns from Columbia School of Broadcasting, Santa Monica College and Cal State Northridge, trading technical training and experience for help with operations. He did regular remote broadcasts of high school football games, both home and away. That was a complex operation requiring stringers to prerecord interviews with the coaches that he would edit into a pre-game show. During the game itself, he had a professional announcer and a color man providing continuous coverage that he would feed into a phone line for broadcast. Eventually, he became the “sustaining member” of that particular charity and it became too much to carry while also working a full-time job at KTLA. The radio station is no more but he still maintains a recording studio that he uses to make demo tapes and transfers to digital media.

Operations at a TV station are now largely automated but, during his career, the systems required considerably more attention. Chroma Key demanded very exact lighting to prevent bleed at the edges. Genlock used to be so fragile that just touching a camera could cause the signal to lose lock. Equipment required alignment daily, or even more frequently, and he used to be responsible for tweaking color and density on a vectorscope. Now, a computer generally handles this chore digitally. And there was a time when he needed to keep a rag soaked with solvent to clean heads on the fly to prevent image breakup caused by oxide build-up.

The continuing process of automating procedures eventually encouraged Hal Hanie to retire in 2013. When the station completed the automation program and linked several tasks to one computer, they offered him the option of retraining. He had done that at several stages in the past but thought, at age eighty-two, it was time to step aside. William (Hal) Hanie retired as a Gold Card member of Local 695.

His other passion is flying. He used to own a 1977 Archer but airplanes are an expensive hobby and he had to let it go. But his license is still current and he was planning a trip to Texas when we interviewed him. We wish him blue skies.

P-Cap, MoCap and All That Jazz Part 2

by Jim Tanenbaum, CAS

Set Procedure

The capture techs will have an earlier call so they can calibrate the volume. This involves placing a single reflective marker at specified positions so the computer can associate them with the images in the capture cameras. The marker is mounted on a rod, usually the same length as the side of the grid squares. First, the rod is used as a handle to position the marker on the floor at each intersection of every grid line. The system will beep or chirp when it has calibrated that point so the tech can move on to the next one. When the floor grid is calibrated, the other end of the rod is placed at each of the intersections, and held vertically with the reflector at a fixed distance directly above the spot, and the procedure repeated. During the calibration, the volume needs to be kept clear of other crew people.
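
A purely illustrative sketch of the pattern being registered (the 10 x 10 grid size here is an assumption; the rod length equal to one grid square comes from the procedure above): every floor intersection is calibrated once at floor level and once at rod height.

GRID = 10        # assumed number of grid squares per side
ROD = 1.0        # rod length = side of one grid square, per the text

# Floor pass (z = 0), then the elevated pass (z = one rod length).
points = [(x * ROD, y * ROD, z)
          for z in (0.0, ROD)
          for x in range(GRID + 1)
          for y in range(GRID + 1)]
print(len(points))   # 242 registrations for a 10 x 10 grid, both passes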

Reflective objects are verboten in or even near the volume. Any Scotchlite strips on shoes or clothing need to be taped over, and if the anodizing is worn off of the clutch knobs on your fishpole, they will need to be covered with black paper tape. Some poles’ shiny tube sections are a problem too, and black cloth tubular shrouds can be purchased to slip over the entire fishpole. J.L. Fisher has black-anodized booms available to rent for use on capture shoots. If you have work lights on your cart, be sure their light bulbs are not directly visible to any of the capture cameras.

On most shoots, you will have only a single assistant, either to boom or to help with the wireless mikes. This means that the smaller and lighter your package is, the easier it will be to set up, move and wrap.

I make it a habit to run on batteries at all times. This avoids hum from the ground loops created when you tie into the studio’s gear through your audio sends, and also the possibility of having your power cord kicked out of the wall outlet. Being a belt-and-braces (suspenders) man, I also use isolation transformers in my audio-out circuits. (See my cable articles in the Spring, Summer and Fall 2012, and the Winter 2013 issues of the 695 Quarterly.)

The usual recording format is mix on Channel 1, boom (if used) iso’d on Channel 2, and wireless mikes (if used) iso’d on succeeding channels. You will send a line-level feed of your mix to the IT department, where it will be distributed to the reference cameras and imported into the editing software. Your isos may also be sent into the system during production.

Metadata may be conventional (Scene 37a, Take 6) or extremely esoteric and complex (195A_tk_00E_002_Z1_pc001_0A01_VC_Av001_LE). Hopefully, you will be allowed to abbreviate long ones like this—I was able to get away with Scene 195A_00E_002 and Take 2, but since the last digit of the “scene” number was also the take number, I had to manually advance it every take. Fortunately, the Deva allows me to make corrections retroactively, but it is still a nuisance, so I’m very careful when I enter the data initially. Discuss metadata requirements with production as soon as possible.
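
A minimal sketch (a hypothetical helper, not any recorder’s built-in feature) of keeping an esoteric scene field and the take number in sync, so the trailing digits of the scene string always match the current take:

def scene_and_take(base, take):
    """Build the scene-field string and take number for a slate.
    base: the fixed portion of the naming scheme, e.g. '195A_00E'
    take: the current take number"""
    return f"{base}_{take:03d}", take

scene_field, take_no = scene_and_take("195A_00E", 2)
print(scene_field, take_no)   # 195A_00E_002 2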

Digital sound reports are very convenient, but you need to secure your tablet carefully; the cost of replacing a dropped Galaxy or iPad outweighs any amount of convenience.

Comtek monitors can be a problem because of system delays in the video display screens, which are often non-standard and even variable. Many directors will want to see and hear playback during the day. I have found that the simplest solution is to get a feed of your mix back from IT and send that to the Comtek transmitter. That feed should automatically carry the correct delay for both direct and playback. Unfortunately, a number of new, smaller volumes have sprung up, and they sometimes do not have any means to compensate the audio for the video delay. Behringer makes an inexpensive variable-audio-delay unit called the “Shark,” and it is worthwhile to carry two of them along with an XLR switch so you can quickly feed your Comtek with the appropriate delay for direct and playback audio. Your direct mix will go into delay 1, and the mix return from playback will go into delay 2. The XLR switch will be used to select the output of either delay as required to feed your Comtek transmitter.
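
Converting a measured (or reported) video latency into the delay to dial in is simple arithmetic; this sketch assumes you know the display chain’s latency in frames:

def audio_delay_ms(latency_frames, fps=29.97):
    """Milliseconds of audio delay needed to match a video delay."""
    return latency_frames * 1000.0 / fps

# Example: a display chain running 3 frames late at 29.97 fps
print(round(audio_delay_ms(3, 29.97), 1))   # 100.1 (ms)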

A problem with sending your mix and isos into the capture system in analog form is that the gain structure of their audio channels may be less than optimal and, more importantly, may accidentally be changed after you have adjusted it initially. If you can have any control over the infrastructure, try to get a digital (AES/EBU) audio path so you won’t have to worry about this, or about hum/buzz pickup.

It is vitally (and virtually) important to discuss digital audio parameters with the IT department. The most common TC frame rates are 23.98 and 29.97, but 24 and 30 are also encountered, and you must be sure to use the correct one. Although you can use 29.97 with a 23.98 system, and 30 with a 24 system—the rates can be converted without too much trouble—it is much more difficult (and expensive) to use 30 with a 23.98 system, or 29.97 with a 24 system. Usually, you will get a TC feed from the capture system. Ask specifically about the user bits—some systems have fixed random digits that remain unchanged from day to day. If you are working more than one day on the shoot (and remember that sometimes a one-day job runs over and requires a second day), it is important to put the date (or some other incremented number) into the user bits yourself to avoid duplicate TCs.
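
The “without too much trouble” pairings work because 29.97 and 23.98 share the same 0.1% pull-down (they are exactly 30000/1001 and 24000/1001), so they drift together; crossing families costs roughly a frame of drift every 33 seconds at 30 fps. As for the user-bits advice, here is a minimal sketch of one way to pack the date into the eight user-bit digits (the YYMMDD-plus-two-spare-digits layout is my assumption, not a standard):

from datetime import date

def date_user_bits(d, spare=0):
    """Eight user-bit digits from a date, e.g. 2014-06-15 -> '14061500'."""
    return f"{d.year % 100:02d}{d.month:02d}{d.day:02d}{spare:02d}"

print(date_user_bits(date(2014, 6, 15)))   # 14061500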

There are two “standards” in TC circuitry: BNC connectors at 75 ohms and 3-pin XLRs at approximately 110 ohms. Unfortunately, these parameters are not universal, and to make matters worse, some facilities have built up their own infrastructure and have patch panels with connectors that are fed from equipment with the inappropriate impedance.

Unless long cable runs are involved, this impedance mismatch usually does not cause problems. (See the cable articles for using balun transformers.) The best you can do is to use mike cables with XLR TC sources and 75-ohm coax cables with BNC TC sources. If this does not match the TC input connector of your recorder, try a simple hard-wired adapter before going to a balun. If the recorder’s display shows a solid indication of the proper frame rate and there are no error flags, you are probably okay. If this is a long-term project, you should have time for a pre-production test; if not, cross your fingers. (Or invest $10,000 in a time-domain reflectometer to measure the jitter in the “eye pattern” and determine the stability of the TC signal at your end.)

When it comes to wireless-mic’ing the capture suits, there is good news and bad news. The good news is that you don’t have to hide the transmitter or mike. The bad news is:

1. There is a tremendous amount of Velcro used on capture suits, and it can make noise when the actor moves. Applying gaffer tape over the offending strip of Velcro will sometimes quiet it. For more obdurate cases, a two-inch-wide strip of adhesive-backed closed-cell neoprene foam (aka shoe foam) may prove effective. As a last resort, one or more large safety pins fastened through both sides of the Velcro usually works.

2. Mounting the mike capsule requires some forethought. If no facial capture camera is in use, the top of the helmet opening can be used to mount a short strut to hold the mike in front of the forehead. I use a thin strip of slightly flexible plastic, 1–2 inches in length. If a face-cap camera is used, its mounting strut can be used to secure the mike, but in both cases, be sure to keep the mike positioned behind the vertical plane of the performer’s face to help protect against breath pops. Also, the exposed mike is susceptible to atmospheric wind, or air flow from rapid movement of the actor. I have found that a layer of 100% wool felt makes an excellent windscreen, especially when spaced away from the microphone element about 1/8 inch. (Incidentally, felt can be used to windscreen mikes under clothing as well.)

3. Because the mike is located so close to the actor’s mouth, it is exposed to very high SPLs. Many lavaliers overload acoustically at these levels, so turning down the transmitter’s audio gain doesn’t reduce the distortion. Both Countryman and Sanken make transmitter-powered models designed for higher SPLs, but not quite high enough. The problem is that the mikes require at least 5 volts of bias for these peak levels, and most wireless mike transmitters supply only 3.3 to 4 volts. An inelegant fix is to use one of these mikes with an external, in-line battery power supply, because their extra bulk doesn’t have to be concealed. The other side of this coin is that these high-SPL mikes are noisier at low dialog levels. Be prepared to quickly switch back to the low-SPL mikes between loud and quiet dialog scenes. Another possibility, if you have stereo transmitters (currently only available from Zaxcom), is to employ two different mikes, one for high levels and the other for low, and iso them both.

4. There may be other electronics mounted on the actor’s suit that can interfere with your wireless mikes. If a face-cam is in use, there will be a digital video recorder and timecode source. This may be an onboard TCXO, or a receiver for an external reference. Another possibility is a transmitter to send locally generated TC to the capture system. If real-time face monitoring is present, there will be a video transmitter, either in the Wi-Fi band (2.4 GHz) or on a microwave (above 1 GHz) frequency. If active markers are functioning, they may receive and/or transmit an RF synchronizing signal. The RF from any of these transmitters can get into your wireless either through leakage in the transmitter case or through the lavalier’s capsule housing, cable or plug. Keeping your gear as far as possible from any of these transmitters and their cables is the first line of defense.

5. If motion control apparatus is being used, there may be multiple RF links involved, all at different frequencies. As soon as possible, coordinate frequencies with the appropriate department(s).

6. The reference video cameras, if camcorder types, may have video monitor transmitters. Some of them still use the old analog Modulus units, and they present very serious interference problems.

7. Walkie-talkies usually operate well above or below your wireless frequencies, but at 5 watts they can cause trouble if close to the actor or your sound cart.

8. For general wireless mike problems, see my radio mike article in the Spring 2011 issue of the 695 Quarterly.

When it comes to booming a CGI–capture scene, there is good news and bad news. The good news is that you don’t have to worry about boom shadows. The bad news is:

1. You can’t block the view of the reference cameras. When 12 of them are in use simultaneously, it can be hard to keep track of all of them. But the mike and boom can be visible in the reference camera(s) as long as they aren’t between the camera and an actor’s face (or key part of the body).

2. There is no such thing as “perspective” in a captured scene, since it can be rendered from a POV at any distance. Every shot needs to be mic’ed as closely as possible. Distance is easily added in Post, especially now that we have DSP (Digital Signal Processing), but cannot be removed.

When it comes to booming a live-action capture scene, there is good news and bad news. The good news (if any) is dependent on various factors. The good/bad news is:

1. Whether the mike and/or boom can be in frame depends on the particular project. For green/blue screen work, a green or blue cloth sleeve is available for the pole, and similarly colored foam windscreens for the mike. Also, appropriately colored paper tape can be used to cover the shockmount, or acoustically transparent colored cloth can shroud both mike and shockmount. Be sure the cloth is far enough from the mike that it does not rub when moved.

2. For non-screen work, the ordinary booming rules about shadows and reflections apply, except…

Now that HD video is the norm, there is no film “sprocket jitter” to make the matte lines stand out, and there is no “generation loss” from optical film processes. This, plus the much lower cost of video image processing compared to film, has made producers and directors less reluctant to use it. Offending objects can be removed from a shot relatively easily, and this can include mike booms. (Of course, this is no license for sloppy work.)

Another use of CGI solved a problem that has plagued filmmakers from the very beginning: reflections of lights, cameras and crew in shiny surfaces. Bubble faceplates on spacesuits were a particular problem. (We had to build a quarter-million-watt artificial sun for single-source lighting on the TV miniseries From the Earth to the Moon, in major part because of the astronauts’ mirrored visors.) For Avatar, most of the exopack masks were only open frames, with red fiducial (computer-tracking) dots around the edge. CGI faceplates were added in Post, complete with the appropriate reflections of trees, sky, other characters, etc. Many of the windows in vehicles were CGI’d in the same manner. This provided a rare benefit to the sound department: the ability to shoot through a “closed” window or a facemask with a boom mike.

When it comes to setting levels and mixing the production (real-time) scratch mix for a capture scene, the usual live-action esthetic and dramatic considerations do not apply:

1. As just mentioned, there is no “visual perspective” as such for a given take, because it can be rendered from any POV. Wireless mikes sound “close,” and you will try to boom mike as closely as possible, too. With every channel iso’d, there is the freedom in Post to mix them in any proportion, but remember that your work is normally judged in dailies. (Although nowadays, that usually means the immediate playback of the take.)

2. For your production mix, however, you will have to make certain choices without knowing what perspective image it will be mated to. EXCEPTION: When a virtual camera is in use, if you can see, or be told, what the composition is, by all means use that perspective, because it will most likely be seen (and heard) that way first, as in dailies.

3. The biggest problem (IMHO) concerns overlapping dialog when the characters are separated in the volume by a large distance. If you don’t have the virtual camera info mentioned above, try to imagine what the composition of the rendered shot(s) will be. Is a main character speaking with a secondary one? Then the main character will probably get the most screen time. Is one character reacting more emotionally than the other? Then they will probably get the close-up.

4. After you have determined (made your best guess) which character will be featured, mix them just noticeably hotter than the other one. The separation in levels should be just large enough that the lower level dialog doesn’t muddy the higher level dialog, but no more. Since both actors are close-mic’d, if they happen to feature the secondary one, the overlap will still work. EXCEPTION: If you know the purpose of the overlap, assign the higher level to the appropriate character’s dialog. This will call attention to the overlapping character, but that’s the reason for the overlap in the first place.

In addition to the usual noise problems on a live-action stage, the volume has some unique ones:

1. The area lighting is often supplied by ordinary fluorescent lamps, and many of them have older magnetic ballasts that emit 120 Hz hums and buzzes. Modern electronic (high-frequency) ballasts are usually quiet enough, and are available as direct replacements for the older magnetic ones.

2. There are usually a great number of computers on the stage, and their cooling fans are a significant source of noise. If the facility has been in existence for some time, this may already have been dealt with. If not, plywood baffles, covered with sound-absorbing material on the side that faces the computers, should quiet them sufficiently.

3. Some volumes’ floors are carpeted to eliminate footstep noise, but unfortunately, some are not. An adequate stock of foot foam should be on hand for this eventuality. Be sure to remove any dust or other loose material from the shoe soles before attaching the foam. I have found that repeatedly wiping the soles with the sticky side of gaffer tape, using a new length of tape each pass, does an outstanding job of preparing them. An expedient method when time is limited is to slip heavy wool socks over the shoes. You may have to cut holes in the socks for the foot markers. Unfortunately, the socks can slip around, and also have less traction on the floor than rubber soles. I keep a dozen 2’ x 5’ carpet rolls on my follow cart, and these can be laid down along the path taken by the actor(s) during the rehearsal. (Of course, they never deviate during the take.) Normally, the strips are taped in place, but when time is short, they can be attached with staple guns (unless the floor is concrete). IMPORTANT: Roll up the carpets with their upper surface out—this makes the strip curl downward when it is laid out, so the ends hug the floor and do not curve up to present a tripping hazard.

4. The floor-contour modules are another source of footstep sounds. Some of them are carpeted, but can still produce dull, hollow thumps from the impacts of running or jumping (which video games seem to have in abundance). The un-carpeted platforms are particularly loud. If at all possible, arrange to have them carpeted before shooting begins. Both types of modules benefit from having the underside of the top surface sprayed with sound-deadening material, such as automotive underbody coating. Using thicker (and unfortunately, heavier) plywood for the upper surface makes a big difference, too. During shooting, carpet strips can be utilized on the modules in the same manner as on the floor.

5. Front-projection video projectors have cooling fans that can be problematical. Ask if their use is absolutely necessary. Check their menus to see if they have a “low-noise/low-speed” option.

6. Props (and some set dressing) are usually not the real objects they represent. Rifles are plastic or wood pieces shaped like real guns, or toys or air rifles.

A solid oak dining table may in fact be only a row of folding card tables of the same height and overall size. Be alert to any sounds they produce—an object set down on the card table (not on a line) may make an effect at the appropriate level, but the sound will not match the heavy wood table seen in the CGI. There are two schools of thought in dealing with this: 1, eliminate as much noise as possible by padding the table so the effects cutter will have a clean room tone to lay the correct effect into; and 2, leave the production effect in, as a guide to synchronization when laying in the new effect. I suggest discussing the matter with Post ahead of time, but my personal preference is number 2, because the presence of the padding will affect the manner (body motion) in which the actor sets down the object. Of course, if the inappropriate sound is on a line, either pad the table or object, or record some clean wild lines.

Capture Procedure

When a capture scene begins, the actors will start by spreading out and taking a “T-pose” near the edge of the volume. If you haven’t been given a specific “Roll sound,” this is the time to go into Record. As an added precaution, be sure to set your recorder’s pre-roll to the maximum time. T-pose is a standing position with the legs slightly spread and the arms extended horizontally, which allows the capture techs to see that the system has properly recognized all the markers. When the techs give the okay, the actors will take their proper positions in the set and then the director will call “Action.”
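
Maximum pre-roll matters because it lets you catch sound from before the moment you actually press Record. Conceptually, it is just a ring buffer; a minimal, purely illustrative sketch:

from collections import deque

RATE = 48_000                 # samples per second
PREROLL_SEC = 10              # set to the recorder's maximum, per the advice above
buf = deque(maxlen=RATE * PREROLL_SEC)   # always holds the newest samples

def on_sample(s):
    buf.append(s)             # oldest samples fall off the far end automatically

def start_record():
    # The finished file begins with everything still in the buffer,
    # i.e., up to PREROLL_SEC seconds before the button press.
    return list(buf)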

At the end of the shot, after the director calls “Cut,” the actors will again move out and take the T-pose. When the capture techs are satisfied, they will announce that the capture system is stopped, and then you can stop recording.

The primary difference between a capture shoot and any other type is that you won’t have much free time once the process starts. Unlike live-action, there is no setup time for camera and lighting. And there are no setups for alternate camera angles, or retakes for bad camera moves, flyaway hair, or any of the multitude of other delays sound is used to. Once the scene has been performed to the director’s satisfaction, the action will move to the next one, which again requires no re-lighting, new camera setups, wardrobe changes, or makeup and hair. If any set or prop changes are necessary, they can be accomplished in a few minutes. Plan your bathroom breaks accordingly.

This high-density work can generate many GB of audio, so be sure to have a large amount of pre-formatted media on hand. Depending on your particular recorder, you may have your on-set archive on an internal or external hard drive, or a CF or SD card. Most productions want audio turned in on a flash memory card. SD cards are much cheaper than CF cards (and all those tiny fragile pins in the CF card socket scare me). If you are using a Deva with only CF card slots, consider an external SD dock connected to the Deva’s FireWire port. Depending on the particular job, you may or may not be required to turn in the flash card(s) during the day or at wrap. The audio may be imported immediately and the card(s) returned to you, or they may be kept overnight or longer. Use only name-brand cards, as the wear-leveling algorithms on the cheap ones can cause premature failure, with the possible loss of all your data.
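
As a rough worked example of why the media adds up (assuming eight tracks of 24-bit/48 kHz broadcast WAV and a recorder that rolls most of the day):

tracks, bits, rate = 8, 24, 48_000
bytes_per_sec = tracks * (bits // 8) * rate        # 1,152,000 bytes/second
gb_per_hour = bytes_per_sec * 3600 / 1e9
print(f"{gb_per_hour:.1f} GB per rolling hour")    # 4.1 GB per rolling hour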

The director may have several options to monitor the scene during capture:

1. The live video from the reference cameras.

2. A crudely rendered live CGI frame, with a fixed POV chosen in advance.

3. Using a “virtual camera,” pioneered by Cameron on Avatar. This is a small, handheld flat-panel monitor equipped with reflective markers. The capture system knows its location and the direction it is pointed, and renders a live CGI frame from that POV and “lens size.” The director can treat it as a handheld camera, pointing it as though it were a real camera in the virtual world. Incidentally, the camera does not have to actually be pointed at the actors—the CGI world seen by the virtual camera can be rotated so that the camera can be aimed at an empty part of the stage to avoid distractions. Another feature of the virtual camera is a “proportionality control.” Set to 1:1, the camera acts like a handheld camera. At 10:1, raising the camera two feet creates a 20-foot crane shot. With a 100:1 ratio, it is possible to make aerial “flyover” shots, because the entire extent of the virtual world is available in the computer’s database.
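
The proportionality control is nothing more than a scale factor on the operator’s physical motion; a minimal sketch (illustrative only, not the Avatar system’s actual software):

def virtual_move(physical_feet, ratio):
    """Scale a physical move of the handheld monitor into the virtual world."""
    return physical_feet * ratio

print(virtual_move(2, 1))     # 2   -- 1:1, ordinary handheld
print(virtual_move(2, 10))    # 20  -- 10:1, a 2-foot lift becomes a 20-foot crane
print(virtual_move(2, 100))   # 200 -- 100:1, flyover territory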

When a virtual camera is in use on a multi-day shoot, the capture days may not be contiguous. After a certain amount of capture has been done, the main crew and cast may be put on hiatus while the director wanders around the empty capture stage with the scene data being played back repeatedly. The crudely rendered video will appear in the handheld monitor, from the POV of its current position. The director can then “shoot” coverage of the scene: master, close-ups, over-the-shoulders, stacked-profile tracking shots, etc. This procedure ensures that all the angles “work.” If not, the director has two options: re-capture the scene on another day; or fix the problem in the computer by dragging characters into the desired position and/or digitally rearranging the props, set or background.

If this is the case, you have two choices: wrap your gear at the end of each capture session, and load in and set up at the beginning of the next one; or leave your gear in place during your off day(s). The trade-off is between the extra work (and payroll time) of wrapping and setting up, and the danger of the theft of the gear, or your getting a last-minute call for another job on the idle day(s). If you elect to leave your equipment, see if you can get a “stand-by” rental payment. Even if this is only a token amount, it establishes a precedent, and you may be able to raise the rate on the next job.

Conclusion

In addition to on-the-job training, if you know another mixer who will let you visit a capture set, take advantage of the opportunity as soon as possible. I probably would not have survived the first day of my first capture job (Avatar) if it were not for Art Rochester, who kindly let me shadow him before he left the show. I also got many hours of coaching from William Kaplan, who mixed the show before Art, and let me use his regular Boom Op, Tommy Giordano, to help with the load-in and setup of my gear. Bill also sent his son Jessie to work with me on the set. If at all possible, hire a boom op who has capture experience. (Note to boom ops: list your capture experience in your 695 directory listing.)

I wish you an absence of bad luck, which is more important than good luck in this business.

Text and pictures © 2014 by James Tanenbaum, all rights reserved.
Avatar set photo ©2009 Twentieth Century Fox. All rights reserved.

Working With Jim Webb

by Andy Rovins, CAS

One day in 1981, while standing in line at a bank, I struck up a conversation with an older gentleman who said he was a retired Prop Master. When I replied that I was a Boom Operator, he said that his son, Chris McLaughlin, was a Boom Operator. “Really? Chris McLaughlin is revered among boom operators. He works with Jim Webb and gets equal billing with Jim as the sound team.” The next day, I got a call from Chris. “Who are you, and why are you saying nice things about me to my pop?” We chatted a bit about mikes and booms and stuff. “What do you like?” he asked. “A Schoeps is my favorite.” “We use an 815 on everything. We did All the President’s Men with one 815 underneath and won an Oscar.” You had to be spot-on with an 815 or it would sound funky; if you could handle one all the time, you were a real Boom Operator.

A few days later, Chris asked me if I wanted to work on pickups for One From the Heart. Their regular third, Jim Steube, was on vacation. I jumped at the opportunity. I got to work with this famous team, and I’d heard about this film, with Francis Coppola directing from the Silverfish (a custom Airstream trailer stuffed with monitors and video gear), Vittorio Storaro’s lighting and Dean Tavoularis’ forced-perspective Las Vegas sets. I also got to meet Jim, whom I’d heard so much about. Jim was congenial and different from most Mixers I knew. He didn’t want to be near the set, but was content to cable in and give his Boom Operators autonomy. We did scenes with Teri Garr and Raul Julia, and one with Nastassja Kinski sitting in a big martini glass singing “Little Boy Blue.” Nastassja was a real flirt. I think every guy on the set had a crush on her.

At one point, Francis came on set and tried to talk Joanie Blum, the Script Supervisor, into directing the scene, but she wanted no part of it. I offered to do it, but Francis declined. “Who are you?” “I’m your new Sound Utilityman.” “Oh, yeah, I used to do that job.” He decided to direct it himself.

The last day, Chris felt ill so Jim told me I could boom. It was only a little announcement from Francis—they wanted to show the film to exhibitors, but the opticals weren’t done so it would have some slugs that he wanted to explain. Francis was late that day and we sat around. Finally, someone came up with an idea. Ron Garcia, DP for the pickups, looked kind of like Francis with his beard. So we sat Ron in a director’s chair holding a film can, while a prop guy dropped money into it from above, and Ron looked at the camera and said, “We will show no film before its time,” a goof on an Orson Welles wine commercial running at the time. I think Ron still has a print of it.

Jim brought me on some more projects after that. He would drive up in his white van, and we would pull out the Fisher boom—Jim was the only guy I knew who owned his own. He had it anodized black and changed out the platform wrench to a socket for faster action. When possible, Jim would mix from the van and we would run cable out. A cool thing about having our own Fisher was that we didn’t have to bargain for one; it was there if we needed it.

Jim got some interesting gigs in those days. He brought me on Get High on Yourself, an anti-drug special produced by Robert Evans as part of a plea bargain negotiated after an arrest for possession. It was a huge production with many bands and stars, including Ted Nugent, The Osmonds, Leif Garrett, Brooke Shields, Carol Burnett and Paul Newman. There were concerts and audience Q&As with lots of kids asking the celebrities questions. There was also a big production number with many stars and kids all singing the theme song in a style that would be mirrored by “We Are the World” a few years later. Jeff Fecteau and Chris Seidenglanz were the A2s and co-booms. There were many producers on that show; the event was kind of thrown together and disorganized but I think Jim thrived on being able to hold together challenges like that.

Jim liked my boom work so he asked me to come along for another unique show—recording a concert by pianist Mona Golabek in a women’s prison in Chino, California. It was an odd scene. The prison had been built as a luxury resort in the ’20s, so it was all marble and stone floors with high-vaulted ceilings.

The acoustics were somewhat echoey, so we put up furny pads when we could, mic’d the pianos with some Shure dynamics and set up an SM58 for Mona. We rolled the Fisher out of Jim’s white van for me (with the 815) to mike questions and reactions from the prisoners.

It was somewhat incongruous: we had a classical pianist in a stately, beautiful building playing Chopin and Liszt for an audience of very rough-looking women in prison fatigues, but they were an appreciative group and seemed to regard the experience as a treat. I think Jim still has a recording of that show.

Jim is not just a great Sound Mixer. He’s also one of the best raconteurs I’ve ever met. Just sit down with him if you get a chance. He tells me that one of his favorite stories comes from Get High on Yourself when I was working that long 815 on the Fisher boom to get unscripted audience responses. To be part of Jim’s stories is a real honor!

Blood, Guts, Gore … and Chiggers
Behind the Boom of THE WALKING DEAD

With Robert ‘Max’ Maxfield

Photos by Gene Page, courtesy of AMC TV, except where otherwise noted

Did you know that chiggers don’t really burrow under your skin?! Nope, actually they grab onto a hair follicle and inject a digestive enzyme into your skin cells. The enzyme ruptures your cells so that they can drink the resulting fluid containing a protein they need to grow. Your skin hardens around the area, forming a nice big red volcano-like sore. That enzyme-filled volcano keeps you itching for a good three to four days. And you’re almost never bitten by just one!

I came by this intimate knowledge of chiggers in the early fall of 2011 when I was invited to go down to Atlanta for three weeks to boom a show I’d never heard of called The Walking Dead. It was in the middle of Season Two, and I was told that another guy would come in after me to finish the last five weeks of the schedule. It’s not a good sign when you take over in the middle of a season and it’s even less promising when they’ve already scheduled another person to come in after you. I couldn’t shake the thought that I was just another piece of raw meat for zombie lunch. And the prospect of working nights on a project with blood, guts and gore (I’ve never really taken to B-movie horror) was not attractive. I had memories of working with slimy creatures in the early ’90s series Freddy’s Nightmares and wasn’t eager to revisit the experience. So, after a short deliberation, I told the Sound Mixer, “Thank you for inviting me, but I’m going to have to pass.”

Well, seven days went by and I still hadn’t booked anything for the following week, so I thought, “Heck, three weeks with a bunch of rotting corpses in sunny Georgia couldn’t be too disgusting, and it’s not like I have to eat lunch with them.” Like any Boy Scout film worker during lean times, I called the Sound Mixer back and asked, “Still looking for a good Boom Operator?” He said, “Yes, come on down.” I shook off the disquiet that they were only three days from needing someone and hadn’t yet filled the position. “Oh well, they’re paying me housing and per diem, plus a box rental and rental car … I’m outta here!”

Three days later, I was driving my rental car down a pitch-black country road at six o’clock in the morning, just outside the tiny rural town of Senoia, Georgia. The stages are situated in an old chemical plant on a dead-end road, one hour south of Atlanta in a thickly forested area that only chiggers could love. It’s shrouded by trees, stagnant ponds, railroad tracks and all of the little creatures that make for a great horror flick. I fought off the feeling of this being my worst nightmare.

I arrived to some good news. They told me that I was the ninth Boom Operator on the show since its inception a year prior. “You mean that in only 13 filmed episodes, you have been through nine Boom Operators!” “Yep,” the Sound Mixer said. This was not sitting well with me. By the end of that same Season Two, they would reach the milestone of 11 Boom Operators! To this day, they call me “Number 9, Number 9, Number 9…” There have also been several Mixers over the seasons, starting with my friend and supporter, Bartek Swiatek, CAS, a Local 695 colleague who left Georgia to move to California, and coming to the present day with Michael Clark, CAS.

Oh, and it turns out, I did have to eat with those zombie things. Nothing like lunch with a gooey corpse sitting across the table from me, spoon-feeding itself through displaced dentures into its black-and-blue prosthetic face—yummy. But, it’s those little tufts of half-dead hair that really creep me out.

The day before my arrival, they had filmed the Season Two farm scene where Rick, Shane and the others slaughtered the zombies that Hershel’s family had secretly kept in the barn. Our first setup had 12 cast members, spread 12 feet apart outside the barn, shuddering over the deaths of their kinfolk-turned-zombies. There were three cameras (a daily ritual) on three separate 30-foot lengths of dolly track that formed a large U around the actors. All of the cameras had long lenses. I was the solo Boom Operator, as the six remaining tracks on the Sound Devices 788T were allocated to the scripted speakers and the mix track. It was my first day with this Mixer, so I hoisted the boom, danced about the dollies and stretched with determination to prove that I could get some dialog; I wanted to stand out amongst the eight previous Boom Operators. My results seemed feeble, as I was only able to get a couple of lines. The Camera Operators and Dolly Grips were giving me funny looks like “What’s with the new guy? What number is he?” “Not the last,” said someone, “he’s walker bait … won’t last a week!” They all chuckled. What the hell had I gotten myself into?

Far in the back of the acting pack was Emily Kinney (playing Beth), who sobbed uncontrollably throughout the scene. She was not wired but she dominated each take with her emotional outcries. I mentioned that it would be best to pull a wire off someone I could cover with the boom and put that wire on her, but there was no enthusiasm for taking the time to make the transition. As the third take commenced, a loud jet entered the shooting zone. I immediately called for a hold, but the 1st AD cut me off. “We don’t hold for planes … roll sound!” The remaining three weeks of my stay were grueling, sweaty and filled with my first set of fluid-sucking chiggers.

Later, I learned that, due to time constraints, upper management restricted freedom to make corrections. The production schedule was so relentless that, at one time, they had adopted a policy of using radio microphones exclusively. They didn’t ever want to see a boom over any actor and were determined to fix any sound problems in Post. The Sound Mixer went on to tell me stories about how they would wait for “Roll Sound,” get the sticks and then, at the last second, slip the boom in for some of the close-ups.

The history of booming this show aside, there was a lot of pressure on me to boom some scenes because of challenges with wind, wardrobe, props and the active nature of the staging. One night, I had to boom a scene that took place in a tent. Both of the characters, Rick Grimes (Andy Lincoln) and Lori Grimes (Sarah Wayne Callies), entered the tent while talking and then disrobed and continued with their dialog. It was impossible to wire them, so I had to figure out a way to boom them in the tent. Do you have any idea how much space is available in a tent after two cameras, two operators and two assistants have been employed? Add in four apple boxes and two 4-foot sliders, and it’s really cramped. The only thing going for me was that they had to raise the side flap to position the cameras. There was barely enough room for me to insert a 12-foot boom pole with a Schoeps MK41 capsule on a GVC. I had to start the shot in a low crouch, still on my feet, so as to reach the tent’s entrance. Fortunately, I was able to aim the microphone straight through the fabric to bring them into the tent talking. I was just millimeters from the cloth ceiling, so I had to be extremely careful not to brush the microphone against the cloth, while keeping it equidistant from their mouths. After they entered and began taking off their clothes, I had to back up and get down on both knees. At one point, Rick delivered a couple of lines looking away from Lori. I couldn’t possibly get them both, so I put a plant microphone on a nearby table, and boomed Lori until Rick turned back. The plant did its job. It was four o’clock in the morning, the last scene of the night, and I was exhausted. It was truly my best booming feat during the entire three weeks. But, as the Camera Operator, Michael Satrazemis, said that first week, “It’s a tough show, but that’s what makes it great.”

Obstacles and frustrations aside, I figured I had better work hard, have patience and keep a good attitude. The actors were fabulous and supported my efforts from the beginning. In fact, I remember the Sound Mixer telling me, when he was trying to entice me to do the show, that the actors were very warm and accommodating and kept him motivated to do good work. People like Andy Lincoln (as Rick Grimes), Norman Reedus (as Daryl), Scott Wilson (as Hershel), IronE Singleton (as T-Dog), Jeffrey DeMunn (as Dale), Lauren Cohan (as Maggie) and Steven Yeun (as Glenn) would come up to me and give me a good-morning hug. I hardly knew these folks, and they welcomed me like family. Jeffrey DeMunn said it first, and he said it the most: “WE are the Walking Dead.” WE, the cast, crew and above-the-line executives, ARE THE WALKING DEAD! It’s still true of the cast to this day. The Georgia heat, the remote locations, the grueling production schedule, the absence of zombie hygiene and the chiggers make this a very difficult show, but the spirit the actors bring to the project keeps the crew working together as a team.

Yet I still wasn’t convinced that I wanted to be a part of it when I was asked to join Season Three full time. I had doubts, so many, in fact, that I said, “NO.” I continued to say, “NO” for about four weeks. The thing that really turned me around was the fact that the Sound Mixer went to the wall to get me a rate I couldn’t refuse. Yep, it came down to money. But now, after two full seasons, I look back, and I look forward, and I confess, it isn’t the money that makes working on The Walking Dead worth it, it’s the family spirit. It’s the excitement of being part of one of the most amazing TV shows ever.

The setting of this TV series is unique in that it takes place in a post-apocalyptic world. There is no electricity or running water, no trains, no planes, only a few cars and, so far, no boat. We do have one obnoxious motorcycle, and usually we can get Norman (as Daryl) to turn it off before he speaks, but sometimes this is logistically impossible. A post-apocalyptic world is a quiet world. But we shoot in rural Georgia. We have highways, lots of trains, farm implements, bustling towns, and our studios are right in the approach path of Atlanta Hartsfield International Airport. It’s not easy recording dead-quiet takes in our modern world.

Locations are often deep in the woods, on rarely traveled dirt roads, abandoned railroad tracks and around brush-shrouded ponds. This means that we have to load our equipment onto flatbed trailers and get pulled by four-wheel-drive vehicles to our locations. When the going gets rough, we all pitch in to get it done. And when a deluge of rain comes in, we all take the hit on another sloppy mud fest to get ourselves out of the swamp bog.

Many of these locations lie along the unused rail tracks that serviced the now-abandoned enterprises in this part of rural Georgia. The Construction Department built several wooden carts to help move gear along the tracks. They’re very helpful when they work, but they often break down and we’re forced to remove our carts and roll or drag them along the gravel rail-beds adjacent to the tracks. Zombie apocalypses don’t generally occur right next to the Walmart, so we often need to haul the gear a considerable distance. That the carts are wobbly and tend to squeak doesn’t really bother us except when they are pressed into service as camera dollies. Then the noise of the cart, layered with the sound of grips pushing it on the gravel rail-bed, does make recording a clean track difficult.

Actors on The Walking Dead roll around, run, shout, yell, fight, whisper, snap their heads from side to side, kneel, bend over and swing lots of props (guns, knives, katanas, crossbows, backpacks, hammers, crowbars, bottles, etc.), all in the same setup. And they do it amongst trees, vines, creeks, tall grass, railroad tracks, rubble, fences, furniture and the like. This constant activity makes booming the show a unique experience; my legwork has never been more tested. The dizzying array of props needed to combat zombies forces us to be creative in radio microphone placement. We’ve rigged collars, hats, hair, the props themselves and everywhere else imaginable.

Anyone who has viewed the show is well aware of how location-driven the sets are. We work in the forest a lot. We work on gravel roads a lot. We work in fields a lot. We work with the elements a lot. But we also work a lot indoors. The difference is, 99.9% of our locations, wherever they may be, are filthy. They’re dusted, shredded, destroyed, trashed, wetted, burned and pillaged. Everything is dirty on The Walking Dead. The only good thing, as far as sound goes, is that the mills and plants that serve as our sets are typically vacant and out of business. Turning off noisy appliances is not much of an issue for us.

But the sheer volume of filth complicates placing microphones on the actors. In fact, the costumers go to great lengths to pat them down with blood, dust, dirt and oil. Blood and oil are the real test. And this show really uses blood—lots of blood—gallons of blood. In fact, there’s something like 10 types of blood: live blood, dead blood, real dead blood, drippy blood, gooey blood, thick blood, blue blood, black blood. It’s bloody unbelievable! As much as we all love Joe’s clear butyl, it doesn’t work on bloody and oily clothes. Perspiration doesn’t make a good friend either. We end up having to sew most of the lavaliers into the actors’ clothes, especially during the warm months. This takes extra time. Fortunately, over the seasons we have conditioned the production staff to bring the actors to us extra early. Unfortunately, they have to be un-sewn whenever there is a technical issue with the lavalier, or when there is a wardrobe change. To avoid the need for re-sewing, we do tests, placing the microphone in the proposed position and having the actor go through anticipated body motions. Sometimes we’ll wire additional wardrobe in advance. We’ve wired as many as three shirts at one time. On this show, pre-planning is imperative, because we almost never do rehearsals, and time is so precious. Then, at wrap, the actors must come to us after a grueling day of rolling and fighting their way through the zombie apocalypse to have their microphones extracted from their costumes.

The sun provides its own little challenge. Ordinarily, “fly swatters” (20 x 20 diffusion frames rigged onto an overhead condor) would soften the shadows on exterior shots but they aren’t used on The Walking Dead. The difficulty of getting that kind of gear a quarter mile down rail tracks to a shooting location precludes their use. Sometimes this forces us to use two booms to capture shadow-free dialog from a single player.

Much of The Walking Dead is shot in the summer. Georgia is sweltering hot with singing cicadas, croaking frogs and stinging sunburns on the back of your neck. Every time I wring out my shirt, I have occasion to remember the line from David Lynch’s Wild at Heart, with Laura Dern and Nicolas Cage, “You’re as hot as Georgia asphalt.” Summer is when the chiggers, ticks, mosquitos and spiders are most plentiful. Sometimes, before we even arrive on set, we drench our bare legs, arms, necks and midsections with a not-so-healthy dose of DEET. It stings a bit at first but really does the job. We don’t like it, but it beats scratching for days on end until your flesh comes off. Ahhh, the glamour of Hollywood.

The Directors and Directors of Photography make full use, as they should, of all the visual tools at their disposal. They use Steadicams, GoPros, DSLRs, below-ground positions, hidden cameras, long custom-made sliders to go among vines and trees, high-angle crane shots and every device imaginable to achieve an expressive image. Typically, several of these elements will be combined for multiple views of the action. This adds to the challenge of getting good dialog tracks.

Episode 405, “Internment,” is illustrative. The Director, David Boyd, one of the former DPs on the show, believes in guerrilla-style filmmaking, using multiple cameras in obscure positions. The episode took place mostly in the prison cells, where Scott Wilson (Hershel) would tend to the near-death patients. These cells are really only about 10 feet by 10 feet, with a bunk bed on one side. Director Boyd staged the scene with three actors, three cameras and two operators. Radios couldn’t be used because the actors had blood on their chests and air masks on their faces, so my assignment was to squeeze into the cell with everyone else and get the dialog. My regular position in these scenes was either standing on the upper bunk or squeezed between an operator and the wall, only inches from the talent.

These same cells presented one of our biggest acoustic challenges. Although they were prison cells, and ought to sound like prison cells, they were really made of wood. We used furniture pads on the walls, and acoustic tiles in the corners when they wouldn’t be seen, but many times we couldn’t control things. Fortunately, the reverberant effects that Post Production added fixed the problem, and made for some very interesting character effects.

Despite these extraordinary elements, the soundtracks have been getting better and better. This is due to a solid proactive plan, teamwork, ample first-rate equipment and excellent execution. Production Sound Mixer Michael P. Clark, CAS, leads us through the process by being very involved. Days in advance, he will be analyzing scripts, talking with decision makers, preparing equipment and contemplating solutions. And his mixing skills are sharp, clean and logical. Dennis Sanborn, the Utility person, is assertively proactive in preparing equipment, securing locations and, most importantly, wiring all of the actors. He is both skillful and resourceful. As arduous as recording sound is on The Walking Dead, these guys step up to the challenge of working on one of the most difficult shows in television with grace and determination.

I’ve never before been part of something so deep, difficult and complex as the process of making this show. More than any project I have ever worked on, the shared sentiment that “We Are the Walking Dead” makes this one of the most remarkable career experiences that I have ever had. It has truly changed my life … and my career.

Gravity and Captain Phillips

Recording Captain Phillips and Gravity

by Chris Munro, CAS

It sounds somewhat ungrateful to complain about being nominated for two films in the same year. Though I was honored to receive both BAFTA and Academy Awards for Gravity, a part of me was disappointed that Captain Phillips was not equally recognized.

These are two very different films with different challenges for production sound. Gravity was completely different from anything I had done before, whereas Captain Phillips is a prime example of how drawing on previous experience enables us to be better at what we do. Having worked with Paul Greengrass on United 93, the film about the terrorist takeover of a passenger jet on 9/11, I knew that Paul likes to shoot in a documentary style, with no rehearsal and a lot of improvisation, and to cast non-actors in key roles. When I came to work with Paul again, on Captain Phillips, this experience was vital but we now had the added issues of shooting at sea on a container ship, a lifeboat and in the Somali skiffs.

Having worked on five James Bond films, I was no stranger to action sequences involving water, especially the boat-chase sequences on Quantum of Solace filmed in Panama. On Captain Phillips, I needed waterproof lavalier microphones that also sounded good out of the water, so I chose to use Da-Cappo DA04s (now the Que Audio performance series in the USA). These are very popular in theater because of their very small size but have great waterproof qualities because the inlet is smaller than a droplet of water. I mounted them upside down so that no water settled on the microphone. I had to develop a system for getting longer-range reception for recording in the high-powered pirate skiffs. I used Audio 2040 mini-tx radios in Aquapacs on the pirates. The receivers were built into secret compartments in the skiffs, where audio was recorded and re-transmitted to the bigger boat that we were all on. We were regularly recording up to 16 tracks and feeding a mix to Video Assist, the Director and Camera Operators. I recently wrapped on In the Heart of the Sea with Ron Howard, where again I was able to use what I had learnt. Months before I started on the film, I said to the boat builders, “I need you to build these secret compartments…”

On Captain Phillips, we were based in Malta on a container ship, which was our studio for much of the film. Each department had a base in one or more of the containers to store equipment and carry out any maintenance. We still needed to be highly portable as we would shoot inside the ship, perhaps in the engine room or cabins while heading out to sea and returning to port, and shoot on decks and the bridge when at sea. There were a lot of stairs, and some passageways were very narrow. Generally, we were shooting multi-camera without rehearsals and all with improvised dialog, sometimes with the scene playing out between several groups in different parts of the ship.

We were limited in the number of crew on the ship, but I was very fortunate to have a great crew with my usual UK Boom Operator, Steve Finn, and tech support from Jim McBride. Tim Fraser recorded 2nd Unit in Malta and in Morocco, and Pud Cussack looked after Boston and Virginia.

Oliver Tarney was Supervising Sound Editor. I had also worked with him on United 93 and the two Sherlock Holmes films with Guy Ritchie. One of the best things we were able to do was to get Oliver to spend a weekend with us on the ship recording sound FX. Not only did he get the FX that he needed, but he also got to experience the ship and to understand how it should sound at sea and its geography. He also got to experience being in the lifeboat—known by us as the vomit vessel—certainly not a pleasure craft!

Chris Burdon and Mark Taylor were Re-recording Mixers; I’ve worked with both on previous films.

Gravity was a completely different experience from anything I had previously worked on. When I first got the call and was told that there were only two actors in the film and that there is no sound in space, it sounded like the perfect job! Then when I met Alfonso Cuarón and he started to talk about his ideas for the film, I was hooked and immediately knew that this was going to be something special. Every few years there is a film that breaks the technological boundaries—this year it was Gravity. The first issue was that both the cameras and the actors could be on robotic arms. I had recently shot a small sequence with these and knew that, although the arms could move without too much noise, the associated power supplies and controllers were very noisy. So the first job was to negotiate having them extended and built into blimps far away from the action.

We had a very comprehensive previz of the film that we worked to. The previz helped us keep the VFX elements, still being designed, in sync with lighting, camera moves and sound. I had originally thought that we might be able to lock everything to the same timecode but, for a number of reasons, timecode wasn’t always practical as the controller. Touch Designer was used to control the robots and as a visual platform, sending MIDI triggers for us to sync to.

Alfonso Cuarón originally had a plan for all of the radio conversations and OS dialog to be live, and we had planned to set up separate rooms in the studio where those performances could take place. However, due to artist availability and other issues, this proved to be impractical, so we prerecorded as much as we could. Most of the pre-records were guides that were re-recorded as ADR in Post Production.

Will Towers was our Pro Tools operator. He made loops of the lines that we could play from a keyboard. The idea was that each line was on a separate loop, and there were alternative performances available for the on-screen actor to react and interact with. We would use different performances and adjust the timing for each take to create spontaneity, while still making sure that certain lines occurred at the correct frames allocated in the previz. All film is a collaboration, but on this film I was collaborating more with VFX and the actor than ever before. It was also necessary for us to work very closely with Editorial as the film took shape and timing parameters or dialog constantly changed.
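
For readers who like to see the mechanics, here is a minimal sketch of that kind of keyboard-triggered cue playback, written in Python with the mido MIDI library. It is an illustration of the idea only: the port name, note numbers and file names are invented, and the actual rig was built in Pro Tools.

    # Toy stand-in for keyboard-triggered line playback. The port name,
    # note-to-cue mapping and file names are all hypothetical.
    import mido

    CUES = {60: "radio_line_01_take_a.wav",  # one key per loop
            61: "radio_line_01_take_b.wav",  # alternate read of the same line
            62: "radio_line_02_take_a.wav"}

    def play(path):
        print(f"rolling {path}")             # stand-in for real audio playback

    with mido.open_input("Cue Keyboard") as port:
        for msg in port:                     # blocks, yielding MIDI messages
            if msg.type == "note_on" and msg.velocity > 0:
                cue = CUES.get(msg.note)
                if cue:
                    play(cue)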

Here was another opportunity to use the Da-Cappo microphones—this time because of their very small size. The helmet rig was a Da-Cappo capsule that Jim McBride, our tech support engineer, had fashioned onto an arm connected to the inner helmet, with a latex shield that we made both for visual accuracy and to reject noise from outside the helmet. A second Sanken COS-11 was sewn into the inner helmet, as were earpieces for communication. We also had in-ear molds made for some scenes. Each different piece of headgear that Sandra Bullock wears in the film contained practical microphones and earpieces. Even the classic Russian headset that she uses at one point has a built-in transmitter and receiver. We achieved this by borrowing bare 2040 mini-transmitter boards from Audio Ltd. and building them into the headsets.

I used a Cedar DNS1500 during shooting to reduce some of the fan noise from the LED lighting rig and the robotic arms. This was only on one mix track. The iso tracks and another mix track were left unprocessed.

The communication system could rival NASA Mission Control at Houston. In addition to feeding scripted lines that the actors would respond to, we also played atmospheric sounds to Sandra to set the mood for each sequence. Additionally, we played loops of her breathing from the preceding or following shots so that she was able to get the correct breathing rhythm for the shot. Often the shot could start at one pace but finish with breathing at another pace so it was important that we were able to give the correct breathing rhythms throughout the shot.

The Director and the 1st AD needed to be able to communicate with the actors and DP, Camera and other departments without distracting the actors when giving technical cues. The costumes and helmets so completely isolated the actors that they needed an audio feed both to hear each other and also to hear their own voices. Allowing them to hear themselves, but at a reduced level to avoid distraction, required a second layer of IFB feed to each.

Sandra Bullock and George Clooney could often be in rigs for hours on end so, as well as providing a system for them to communicate with each other, I also ran a kind of mini-radio station to play music, YouTube clips or anything to keep them entertained between shots. Sandra Bullock has often said that she had never previously had such interaction with the Sound Department yet we were at opposite sides of a dark stage for weeks on end. It was during one particular break during shooting that I discovered that both Sandra and George knew all the words to “Rapper’s Delight” and could sing a pretty good version!

You could be forgiven for thinking that most of Gravity was created in Post Production but, in fact, much of the shooting was oddly conventional. We had six weeks of pre-shoot, 12 weeks of principal photography and two weeks of additional photography, all with sound. Some of the sequences were shot on actual sets and boomed! For every shot, the DP concentrated on the camera angle and how the actor was lit. The Director concentrated on getting the performance that he needed and the Sound Department concentrated on capturing that performance the same way that we all do on every movie.


Glossary of highlighted words

Previz: Essentially an animated storyboard, a previz video shows a rough rendition of all the elements and special effects in a sequence so every department can see how it all fits together.

Touch Designer: A software program that facilitates production of animated videos and graphic sequences.

P-Cap, MoCap and All That Jazz

P-Cap, MoCap, and All That Jazz / Part 1

by Jim Tanenbaum, CAS

As sound people, we live in (according to the old Chinese curse) interesting times. Our technology is advancing at an exponential rate … with a very large exponent. The analog Nagra ¼-inch reel-to-reel tape recorder was used on almost all of the world’s movies for more than thirty years. Then DAT (Digital Audio Tape) cassette recorders (though more than one brand) held sway for another ten. Hard-drive recorders (I beta-tested a Deva I) led the race for five years, then DVD optical-disc recorders (albeit still with an internal HDD) for only three. Sony’s magnetic MiniDisc unit never made significant inroads in production recording. Now we’re using flash-memory cards, and I’m surprised they’ve held on for more than a year, but the original choice of CF cards seems to be giving way to SD ones (except for Zaxcom). Next year?

But it is not only the technology that is changing—so is the product. Made-for-Internet drama or documentary shows aren’t that much different from their predecessors, but reality shows certainly are a new breed: dozens of radio-mic’d people running around, in and out of multiple cameras’ view, and in and out of multiple Production Mixers’ receiver range. Fortunately, we have Zaxcom transmitters with onboard recorders. Still, things aren’t that different.

But “capture” shoots are. Almost entirely different from anything that has gone before. And capture for Computer-Generated Image (CGI) characters (sometimes called “virtual characters”) is different from capture for live-action shoots. Also, Motion Capture (MoCap) is different from Motion Control (MoCon), though these two techniques are sometimes used together, along with Motion Tracking (MoTrac). And then there is Performance Capture (P-Cap). They will be described in this order: CGI MoCap, P-Cap, live-action MoCap, MoCon, and MoTrac. Following that, working conditions and esthetics for all types will be discussed.

So now, for those of you who have yet to work on a capture job, here is a primer (pronounced “prim-er”; not “pry-mer”). The rest will be on-the-job training.

CGI MoCap

For starters, the capture stage is called a “volume”—because it is—a three-dimensional volume where the position and movement of the actors (often called “performers”) and their props are tracked and recorded as so many bits. Many, many bits—often terabytes of bits. You can expect to record many gigabytes of audio per day.

The stage containing the volume has an array of video cameras, often a hundred or more, lining the walls and ceiling, every one interconnected with a massive computer. Each camera has a light source next to, or surrounding, its lens, which special reflective markers on the actors will reflect back to that particular camera only. This is known as a “passive” system, because the markers do not emit any light of their own. The camera lights may be regular incandescents or LEDs, with white, red, or infrared output. More about that later.

The cameras are mounted either directly on the walls and ceiling, or on a latticework of metal columns and trusses. WARNING: It is vitally important not to touch these cameras or their supporting structure. If you do, you must immediately notify the capture techs so that they can check to see if the volume needs to be recalibrated.

The actors/performers wear black stretch leotards studded with reflective dots. The material is retro-reflective, which means it reflects almost all the light back in the direction it came from, in most cases utilizing tiny glass spheres. Scotchlite™ is a typical example, used on license plates, street pavement stripes, and clothing. For use with the capture suits, the reflective material is in the form of pea-sized spheres, mounted on short stalks to increase their visibility from a wider angle. The other end of the stalk terminates in a small disc of Velcro™ hooks, so it can be attached anywhere on the capture suit’s fabric.

As an aid in editing, the capture suit usually has a label indicating the character’s name. Hands and/or feet may be color-coded to distinguish left from right.

The markers in the image above are glowing because a flash was used when the picture was taken. The camera was very far away, and the stage lighting completely washed out the light from the strobe on the people and objects, but the markers reflected most of the flash back to the camera lens.

Capture cameras mounted on more rigid columns, but still subject to displacement if hit. [Formerly Giant Studios, now Digital Domain’s Playa Vista, California, stages]

If MoCap is to be used on the actors’ faces, smaller, BB-sized reflective spheres are glued directly to the skin, sometimes in the hundreds. When too many have fallen off, work stops until they can be replaced, a process that takes some time because they must be precisely positioned.

Props and certain parts of any sets or set dressing (particularly those that move, like doors) also get reflective markers. Unlike “real” movies, props and set dressing do not have to look like their CGI counterparts; only certain dimensions have to match. They are often thrown together from apple boxes, grip stands, and “found” objects, and may be noisy.

Here is a description of the mechanics of MoCap.

The floor of the volume is marked off in a grid pattern, with each cell about five feet square. This array serves two purposes: first, it allows the “virtual world” in the computer to be precisely aligned with the real world; and second, it allows for the accurate positioning of actors, props, sets, and floor contour modules.

The capture process is not like conventional imaging—there are no camera angles or frame sizes. The position and motion of every “markered” element is simultaneously recorded in three-dimensional space. Once the Director is satisfied with the actors’ performances in a scene, the capturing of the scene is finished. Later on, the Director can render the scene from any, and as many, POVs and “focal lengths” as he or she wishes.
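
To make the idea concrete, here is a minimal numpy sketch of re-imaging recorded 3-D data from an arbitrary virtual camera. The marker positions and camera parameters are invented, and a real renderer would also handle rotation, occlusion and shading; the point is only that once positions exist in 3-D, any POV and “focal length” can be chosen after the fact.

    # Re-render recorded 3-D marker positions from any virtual camera.
    # All positions and camera parameters below are made up.
    import numpy as np

    markers = np.array([[0.0, 1.7, 5.0],     # head marker (meters, world space)
                        [0.3, 1.4, 5.0],     # shoulder marker
                        [0.0, 0.9, 5.1]])    # hip marker

    def project(points, cam_pos, focal_mm, sensor_mm=36.0, width_px=1920):
        """Idealized pinhole camera at cam_pos, looking down +Z, no rotation."""
        f_px = focal_mm / sensor_mm * width_px   # focal length in pixels
        rel = points - cam_pos                   # camera-space coordinates
        return rel[:, :2] / rel[:, 2:3] * f_px   # perspective divide

    # The same captured take, "shot" two different ways after the fact:
    print(project(markers, cam_pos=np.array([0.0, 1.5, 0.0]), focal_mm=50))
    print(project(markers, cam_pos=np.array([0.0, 1.5, 2.0]), focal_mm=85))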

But for this to be possible, every actor must be visible to (most of) the capture cameras at all times. This means that there must not be any large opaque surfaces or objects to block the cameras’ view. If there need to be physical items in the volume for the actors to interact with, they must be “transparent.” But glass or plastic sheets can’t be used, because refraction will distort the positions of markers behind them as seen by the cameras. Instead, surfaces are usually made out of wire mesh or screening, e.g., a house will have thin metal tubing outlining the doors and windows (to properly position the actors), with wire mesh walls (so the actors don’t accidentally walk through them). In the virtual world, seen from a POV at some distance from the house, the walls will be solid and opaque, but as the POV is moved closer, at some point it will pass through the “wall” and now everything in the room is visible. Tree trunks can be cylinders of chicken-wire fencing, with strands of hanging moss simulated by dangling strings.

Props need only be the same size, overall shape, and weight, to keep the actions of the actors handling them correct. They will have a number of reflective markers distributed over their surface. Live animals, if not the actual living version, are made as life-size dolls with articulated limbs and appropriate markers, and puppeted by human operators. This gives the actor something “living” to interact with.

Since the motions and positions are captured in three dimensions, if the ground or floor in the virtual world is not flat and/or level like the volume’s stage floor, the bottom of the volume must be contoured to match it. This is done by positioning platform modules on the grid squares to adjust the surface accordingly. (More about this later.)

It is necessary to precisely align the real world of the capture volume with the CGI virtual world in the computer; otherwise, parts of the CGI characters’ bodies may become embedded in “solid” surfaces. The first step in this process involves a “gnomon” (pointer) that exists in both the real and virtual worlds.

The gnomon has three arms at right angles to each other, tipped with reflective markers to allow the MoCap system to create its CGI doppelganger in the virtual world. To align the real table with its “twin” in the virtual world, the gnomon is placed at one of the real table’s corners, and then the table is moved in the volume until the virtual gnomon is exactly positioned on the corresponding corner of the CGI table. This is usually the simplest method. Another possibility is to go into the virtual world and mouse-drag the CGI table until it lines up with the virtual gnomon. The entire virtual world could also be dragged to position the table, but this might throw other objects out of alignment. Global position shifts like that are limited to adjusting the virtual ground with the volume floor after the contour modules are in place.
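
In software terms, the gnomon gives you the same rigid frame measured in both coordinate systems, which is enough to solve for the rotation and translation between them. Below is a minimal numpy sketch of that computation. The marker positions are invented, and a production system would fit a least-squares solution over many markers rather than an ideal three-arm frame.

    # Derive the rigid transform aligning the virtual world with the volume,
    # from the gnomon's origin and three orthogonal arm tips as measured in
    # each coordinate system. All positions below are invented.
    import numpy as np

    def frame(origin, x_tip, y_tip, z_tip):
        """Orthonormal basis (columns) built from the gnomon's arm tips."""
        b = np.column_stack([x_tip - origin, y_tip - origin, z_tip - origin])
        return b / np.linalg.norm(b, axis=0)     # unit-length arms

    real_o = np.array([1.0, 0.0, 2.0])           # gnomon origin in the volume
    virt_o = np.zeros(3)                         # gnomon origin in the CGI world

    real_b = frame(real_o, real_o + np.array([0.7, 0.0, 0.0]),
                   real_o + np.array([0.0, 0.7, 0.0]),
                   real_o + np.array([0.0, 0.0, 0.7]))
    virt_b = frame(virt_o, np.array([0.0, 0.7, 0.0]),
                   np.array([-0.7, 0.0, 0.0]),
                   np.array([0.0, 0.0, 0.7]))

    R = real_b @ virt_b.T                        # rotation: virtual axes -> real
    t = real_o - R @ virt_o                      # translation lining up origins

    def to_real(p_virtual):
        """Map any virtual-world point into volume coordinates."""
        return R @ p_virtual + t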

Real-world alignment gnomon and “transparent” table with wire-mesh surfaces. (Photo: ‘AVATAR’ ©2009 Twentieth Century Fox. All rights reserved)

Multiple conventional HD video cameras are used in the volume for “reference.” These cameras cover the scene in wide shots and close-ups on each character. This allows the Director to judge an actor’s performance before the data is rendered into the animated character. A secondary function is to sort out body parts when the MoCap system gets confused and an arm sprouts out of a CGI character’s head. Looking at the reference shot, the Editor can figure out to whom it belongs, and mouse-drag it back into its proper place. In most stages, the cameras are hard-wired into the system so they have house-sync TC and do not normally require TC slating. They may use DV cassettes and/or send the video directly into the system.

Until a few years ago, it was not possible to see the CGI characters in real time, but now Autodesk MotionBuilder™ software allows real-time rendering, albeit in limited resolution. Warning: The flat-panel monitors on the stage have cooling fans that may need to be muffled or baffled. Video projectors’ fans are even louder.

Lighting in the volume is very uniform, soft and non-source, to ensure that the reference cameras always have a well-illuminated image. In addition, having no point-source lights ensures that there will be few, if any, specular (spot-like) reflections that might confuse the MoCap system’s cameras.

To capture motion effectively, the system must measure the marker positions at least twice as fast as the temporal resolution required. For 24-frame applications, this means a minimum 48 Hz rate. Currently, much higher rates are used, 120 Hz to 240 Hz. If “motion blur” is desired, it can be created in Post.
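
The arithmetic is simple enough to show directly (an illustration only; actual rates are set by the system vendor):

    # Nyquist-style floor on capture rate: sample at least twice the
    # delivery frame rate. Real systems run well above this minimum.
    for fps in (24, 30, 60):
        print(f"{fps} fps delivery -> {2 * fps} Hz minimum; 120-240 Hz typical")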

P-Cap

Motion Capture was developed first, and initially captured only the gross motions of the actor’s body. The facial features were animated later, by human operators who used mouse clicks and drags. Then, smaller, BB-sized reflective balls were glued to the faces, in an attempt to capture some of the expressions there. Unfortunately, this process couldn’t capture the movement of the eyes, or the tongue, or any skin wrinkles that formed. And since the “life” of a character is in the face, these early CGI creations failed the “Uncanny Valley” test.

It turns out that human beings evolved a built-in warning system to detect people who weren’t quite “right.” Back in the “cave people” days, subtle clues in a person’s appearance or actions were an indication of a disease or mental impairment that could be dangerous to your continued good health or even your very existence.

Multiple hard-wired HD reference cameras (although these have DV cassettes as well). (Photo: ‘AVATAR’ ©2009 Twentieth Century Fox. All rights reserved)

A graph of the “realism” of a character versus its acceptability starts at the lower left with obvious cartoon figures and slowly rises as the point moves to the right with increasing realism. But before the character’s image reaches a peak at the right edge, where photographic images of actual human beings fall, it turns sharply downward into the valley, and only climbs out as the character becomes “photo-realistic.” Even an image of a real human corpse (possible disease transmission) is in the valley, as would be that of a super-realistic zombie.

When you watch a Mickey Mouse cartoon, you know the character isn’t “real,” so its completely “inhuman” appearance is not a problem. Likewise, when you watch a live-action movie, the characters are real, so again there are no warning bells going off in your brain.

Current computer-animated cartoons like Despicable Me or Mars Needs Moms don’t have a problem because their “human” characters are so obviously caricatures. The trouble began when CGI characters developed to the point of being “almost” human, and started the descent into the uncanny valley. The 2001 video-game-based movie Final Fantasy: The Spirits Within was the first attempt at a “photo-realistic” CGI feature movie using MoCap. Although an amazing piece of work for its time, it didn’t succeed visually or at the box office. But it didn’t quite fall over the precipice into the uncanny valley, either. The characters’ faces all had that “stretchy rubber” look when they moved, the motion of their eyes and mouths weren’t close enough to human, and most of their exposed body parts (except for hair, which was quite good) were rigid and doll-like, moving only at the joints. It was still “only” video game animation, and back then, nobody expected that to be real.

The 2004 feature The Polar Express had an intentionally non-realistic, stylized look to its settings and characters, but since the MoCap process was used, the characters’ now much more realistic motions caused a slight uneasiness among some viewers.

It wasn’t until Beowulf (2007) that the CGI capabilities increased to the “almost photo-realistic” level and a larger portion of the audience was disturbed, albeit subliminally, by the characters being in the uncanny valley. It was mainly that the characters’ eyes were mostly “dead,” moving only on cue to look at another character, and never exhibiting the minor random movements that real, living eyes make continuously. The interior details of their mouths were also deficient.

Interestingly, the same capture volume that was used for The Polar Express and Beowulf was also used for Avatar (2009), but only after James Cameron spent a great deal of time and money to upgrade the system. Avatar successfully crossed the uncanny valley because the facial-capture cameras worn by the actors allowed for the recording and reproducing of accurate eye and mouth movements, and the formation and elimination of skin wrinkles. “Edge-detection” software made this possible. Thus was born the “Performance Capture” version of MoCap.

P-Cap volumes have the same soft, non-directional lighting as MoCap, plus additional lights mounted next to the facial capture cameras to make sure the face is never shadowed. Avatar used a single CCD-chip camera mounted on a strut directly in front of the performer’s face, and many systems still use this configuration. To avoid having the distraction of an object continuously in the actor’s line of sight, by the time A Christmas Carol went into production in 2009, four cameras were used, mounted at the sides of the face, and their images were rectified and stitched together in the computer.

At the beginning of the production of Avatar, Cameron used a live microwave feed from the face camera to “paint” the actor’s human eyes and mouth onto the CGI Na’vi’s face as an aid to judging performance. But after a while, this proved not to be that useful and was discontinued.

Face-Only P-Cap

For certain action scenes, the actors cannot safely wear a camera head rig. For these situations, only the body markers are used, and conventional MoCap is employed. Sound is recorded with a boom mike or a wireless mike with a body-mounted lavalier, but will (normally) serve only as a cue track. Afterward, P-Cap techniques will be used to capture the face and dialog. If the Director does not automatically ask for it, I recommend that you suggest he or she have the actors attempt to reproduce their body motions from the MoCap sessions as accurately as possible, because this will induce a form of realistic stress in their voices. These setups should be mic’d in the same manner as the rest of the project.

Alternate Techniques for Face-Only P-Cap

The capture infrastructure is continuously evolving, and several new technologies are emerging. Unfortunately, because of NDAs (Non-Disclosure Agreements), I cannot describe the projects I worked on in any detail. The information here comes from public sources such as Cinefex magazine and Wikipedia.org.

Real-time LIDAR (LIght Detection And Ranging) scanning is used to measure the shape and position of the performer’s head, down to sub-millimeter resolution. (This technique is also used to capture CGI data from large motionless objects like buildings, statues, vehicles, etc.)

Real-time multiple-camera, multiple-angle views are used to compute 3-D data from the different 2-D images of the performer’s face.

For both of these, you must usually keep the mike, the boom, and their shadows out of the working volume.

Live-Action MoCap

Live-action scenes, often shot against green- or blue-screen backings, need to have dramatic, sourced lighting. There are also many shiny wardrobe items and props, some of which even emit light themselves, and all these would confuse the passive MoCap system. Exterior scenes shot in direct sunlight can completely wash out the reflected capture-camera lights. For all these reasons, the reflective marker passive system cannot be used. Instead, “active” markers are used. These are larger, ½- to 1-inch cubes, with an LED array on each visible side. The markers emit a pattern of light pulses, either red or infrared, to uniquely identify each individual marker. Externally mounted markers that are visible in a shot can be eliminated with “wire-removal” software in Post. Infrared markers may sometimes be concealed under clothing to avoid this extra step, along with its attendant time and cost.
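
As a toy illustration of how a pulse pattern can uniquely identify a marker, the Python sketch below encodes a marker ID as a binary blink sequence, one bit per capture frame. The 8-bit word length is my assumption; real systems use their own proprietary codings.

    # Toy active-marker ID coding: blink the ID as binary, one bit per frame.
    # The 8-bit word length is an assumption for illustration.
    def blink_pattern(marker_id, bits=8):
        """LED on/off states for one ID, most significant bit first."""
        return [(marker_id >> b) & 1 for b in reversed(range(bits))]

    def decode(states):
        """Recover the marker ID from an observed on/off sequence."""
        return sum(bit << i for i, bit in enumerate(reversed(states)))

    assert blink_pattern(178) == [1, 0, 1, 1, 0, 0, 1, 0]
    assert decode(blink_pattern(178)) == 178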

MoCon

Motion Control was developed long before any capture processes. A camera was mounted on a movable multi-axis platform that ran on tracks, and had sensors to record its motion, position, and lens settings. The initial shot was made by a human operator, then the subsequent ones could be made by playing back the recorded data and using it to control servo motors that moved the camera in a duplicate of whatever dolly, pan, tilt, zoom, focus, etc., moves were made the first time. This allowed “in-camera” compositing of multiple scene elements without the need for optical film work in Post, with the attendant problems of generation loss, color shifts, etc. A typical use would be to shoot a night scene of model buildings with illuminated windows using a large outdoor model city street. To get uniform illumination, the tracking shot past the buildings is shot in daylight, with the camera stopped down to reduce the exposure. This would require impossibly intense (and hot) lights to illuminate the windows brightly enough to read in direct sunlight. Instead, a second, matching, pass is made at night with the lens opened up, so that low-wattage bulbs will provide the proper exposure. The original Star Wars movies used this method extensively. While this system is still in use, it is now possible to use markers to track camera position, particularly with handheld cameras.
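
The record-and-replay idea at the heart of MoCon can be caricatured in a few lines of Python. This is only a sketch of the concept: the axis list, sample rate and callback interface are all invented, and a real rig records lens data and drives servo controllers with far more precision.

    # Caricature of MoCon: sample the operator's axes on the first pass,
    # then feed the identical samples back to the servos on later passes.
    import random
    import time

    AXES = ("dolly", "pan", "tilt", "zoom", "focus")

    def record_pass(read_axis, duration_s, rate_hz=24):
        """Sample every axis at a fixed rate while the operator drives the rig."""
        samples = []
        for _ in range(int(duration_s * rate_hz)):
            samples.append({a: read_axis(a) for a in AXES})
            time.sleep(1.0 / rate_hz)
        return samples

    def replay_pass(samples, drive_axis, rate_hz=24):
        """Play the recorded move back through the servo motors."""
        for snapshot in samples:
            for axis, value in snapshot.items():
                drive_axis(axis, value)
            time.sleep(1.0 / rate_hz)

    move = record_pass(lambda axis: random.random(), duration_s=0.5)  # stand-in sensors
    replay_pass(move, lambda axis, value: print(axis, round(value, 3)))  # stand-in servos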

MoTrac

Motion Control requires a large amount of expensive equipment, but now that computers have become so much more powerful, digital manipulation can accomplish some, but not all, of the tasks formerly done with MoCon. And, of course, many that were impossible with MoCon. And sometimes MoTrac can be used instead of MoCap to record camera positions and moves.

MoTrac has two main applications. First, green- and blue-screen work where there will be camera moves that must be coordinated with an added background plate. To do this, an ordinary non-MoCon camera is used, and visible “fiduciary” marks are made on the screen as a reference for how the plate image must be shifted to have the proper parallax for the moving camera. Usually, the mark is simply an “X” made with pieces of contrasting color tape. Enough marks are placed on the screen to ensure that some of them will always be in frame. The computer tracks the motion of these Xs and then adjusts the position of the background plate to match (a sketch of the idea follows at the end of this section).

Second, smaller marks, often ¼-inch red dots, are stuck on real objects that will have CGI extensions added on to them. The moving ampsuits used in Avatar existed in the real world only as torsos on MoCon bases. The CGI arms, legs, and clear chestpiece were attached later in the virtual world. If you are planting/hiding microphones, be careful not to tape over or otherwise occlude any of these marks.

While not commonly used at present, it is possible to put fiduciary marks on a mike boom as an aid in removing it in Post. And the recent Les Miserables used them to help remove the exposed lavaliers that were mounted outside the wardrobe.
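
Returning to the first application, here is a minimal OpenCV sketch of the underlying idea: template-match one taped “X” from frame to frame and shift the background plate by the same offset. The file names, the starting box and the single-marker simplification are mine; real trackers follow many marks and solve a full camera move rather than a simple 2-D shift.

    # Track one fiduciary "X" on the screen and shift the plate to match.
    # File names and the starting box around the mark are placeholders.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("greenscreen_take.mov")
    plate = cv2.imread("background_plate.png")

    ok, first = cap.read()
    x, y, w, h = 900, 500, 40, 40                # hand-picked box around one "X"
    template = first[y:y+h, x:x+w]

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (bx, by) = cv2.minMaxLoc(scores)     # best match this frame
        dx, dy = bx - x, by - y                       # how far the mark drifted
        shift = np.float32([[1, 0, dx], [0, 1, dy]])  # move plate the same way
        shifted = cv2.warpAffine(plate, shift, (plate.shape[1], plate.shape[0]))
        # composite `shifted` behind the keyed foreground here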

MoTrac MoCap

This hybrid has limited capabilities, but is often used for live-action shoots on real locations or sets, with CGI characters that are human-shaped and slightly larger than the human performers. No reflective or active markers are used because the scenes often involve action and stunts, and the markers could injure the wearer or be damaged or torn off. Typical examples are the Iron Man suits and the humanoid droids in Elysium.

This method does not capture 3-D position information directly, and is used to simply “overlay” the CGI image on top of the capture performer’s image on a frame-by-frame basis. Perspective distortion of the shape and size of the marker squares can be analyzed by the software to properly rotate and light the virtual character.

The actors wear grey capture suits with cloth “marker bands,” consisting of strips ranging from ½ to 2 inches in width, having alternating white-and-black squares with a small circle of the opposite color in the center. The bands are fastened around the portions of the actor’s body that are to be captured: head, torso, arms, and/or legs. Only gross body movements are captured with this system, not details such as fingers or facial features.

If wireless mikes are used, there is no face-cam mounting strut available to mount the microphone, but neither it nor the transmitter has to be hidden. As with a regular shot, boom shadows have to be kept off anything visible in frame, except for the capture suit. (The shadow will not be dark enough to be mistaken for black markings.)

Editor’s note: Jim Tanenbaum’s explanation of P-Cap and MoCap practices will continue in the next issue of the Quarterly with specific guidance for sound technicians working these projects.

Text and pictures (except Avatar set pictures) © 2014 by James Tanenbaum, all rights reserved.

My Wild Ride: Booming in the ocean

My Wild Ride

by Coleman Metts, CAS
All images courtesy of Coleman Metts

My friend and colleague, Scott Harbor, who had a scheduling conflict, referred me to the movie Ride. He thought, “Coleman surfs and stand-up paddleboards, so he’ll be great for it.” When the Producers initially contacted me, they said, “We are keeping it simple, but we want the actors, Helen Hunt and Luke Wilson, to talk to each other as they’re going out through the waves and catching the waves.” Well, I thought, perhaps the new wireless microphone transmitters from Lectrosonics might work. I told them it would be an experiment, but I felt pretty good about being able to pull it off.

My initial plan was to cut holes in the wet suits and have the microphones exposed, to be digitally removed in Post. The Producers responded that they could not afford to do that for every shot. After considering a range of alternative options, we eventually agreed to cut small holes in the wet suits and attach the microphones behind each hole with tape. We then began an exhaustive process of trial and error in an attempt to mount the microphones. We could not find any tape that would hold in saltwater! Eventually, we settled on using Velcro to mount the microphones. However, it was not long before we learned that the Lectrosonics waterproof transmitters are not saltwater-proof!

Lectrosonics was very cooperative about minimizing the L&D expenses, but the wireless transmitter failures forced my crew to capture all the sound sequences, both on the ocean and in the surf zone, with an old-fashioned overhead boom microphone. On the water, I used a Sennheiser 60. We got basically traditional coverage, so we were very lucky in that respect, and the microphone worked perfectly. I used the Lectrosonics plug-on transmitters to get the sound from the boom microphone back to me. My recorder for the movie was the Zaxcom Fusion. Working in this environment is incredibly hard on every piece of equipment. I am still finding sand in various places among my sound gear, months later.

We did not make any technological leaps on this movie; it was just persistence, and positive attitude, that solved our problems.

The initial schedule showed us on and off the water a lot, so I had a small ENG-type package built for the ocean and left my main rig built for filming on land. Even at sea, I managed to send an IFB feed to video village and, particularly, to Helen Hunt, who was directing as well as acting. I also supplied signal to the Script Supervisor and the Video Playback Operator, fellow 695 member Anthony Desanto.

We did not have a lot of prep for this project, so we had to improvise and figure it out as we went along. A nice benefit of the show was that I was able to bike to work every day for three weeks. Also, I got to wear my sandals at work every day for about a month.

So began my two weeks out on the water. I was placed in everything from Zodiac-type boats to the back of wave runners. I eventually spent much of my time on a large stand-up paddleboard, as no motorized craft were allowed in the designated surf zone which, in the Marina Del Rey/Venice area, extends from the beach out three hundred yards.

My Boom Operators, Johnny Evans and Jim Castro, also operated from large stand-up paddleboards for significant portions of their time on location. When not on a stand-up paddleboard, the Boom Operators were standing directly in the surf zone. Doing so, however, required the use of Watermen/Stuntmen who would position themselves directly behind my crew and grip them tightly to prevent them from being knocked over by the oncoming waves. My Utility, Ace Williams, did a phenomenal job in these trying conditions. Oftentimes, I was texting Ace what resources I needed sent out on the next supply boat.

It was not long before I realized that working on the water is very different from playing on the water. Being out on various watercraft all day was pretty fatiguing. Communication was limited. In the beginning, we also had a failure in the transfer process when the facility transferred all the tracks for dailies. That added some stress at the start of the show. Eventually, we got it all sorted out—just about the time when we moved off the ocean and started filming on dry land.

What did I learn from this project? Well, I guess I learned that when they say it’s going to be simple, it’s not. And I learned that you need more than one plan to deal with any eventuality plus enough resources for almost any scenario.

The process of filming on the beach, on the water and on multiple locations throughout Venice made Ride the hardest show I’ve done by far. But the amazing people I worked with made it a memorable and positive experience. The Director of Photography and his crew, the Key Grip and his team and our Stunt Coordinator and the Waterman were outstanding. They solved amazingly hard challenges every day. They all displayed the best positive attitude and everything seemed easy for them. After eighteen years in the business, I’ve learned not to take these things for granted. Ride restored my enthusiasm for making movies; it was a bright spot in my career.

Jim Webb: A Profile

by David Waelder

“He was the most perfect Sound Mixer I ever worked with.”
–Chris McLaughlin

“I would say that Jim was the father of multi-track. I really would.”
–Harrison “Duke” Marsh

“He seemed to field a lot of curveballs very elegantly.”
–Robert Schaper

“He was a great educational source to learn from.”
–James Eric

“Jim Webb is a crusty old pirate of a man who has a heart bigger than words can describe.”
–Mark Ulano, CAS

James E. Webb Jr. is justifiably renowned for his work developing multi-track recording on a series of films for Robert Altman. He captured the dialog from multiple cast members and interlocking story lines on such iconic films as Nashville, Buffalo Bill and the Indians, 3 Women, and A Wedding. He pioneered the multi-track process.

The scenes were so complex, so intricate and so audacious that Altman himself parodied the style in The Player.

And yet, this was really just the beginning of Jim Webb’s career.

He studied film in college, first at Northwestern University in Evanston, Illinois, and later at USC in the Department of Cinema. In 1962, he was drafted into the Army and, after training in radio and as a radio teletype operator (RTTY), served in Germany at an Army Aviation Repair Company that occupied the old Luftwaffe hangars on the military side of Stuttgart’s main airport.

Discharged in 1964, he worked for about a year at USC and then took a job, first at KTLA and then at the CBS station KNXT. The stations had contracts with IATSE and he got his IA card at that time.

ROCK & ROLL

Work in feature films was the goal but opportunities were scarce for recent film school graduates and new members of the union with limited contacts and seniority. Seeking to create their own work opportunities, he formed an independent production company with Pierre Adidge, a friend from Northwestern, and Bob Abel.

The newly formed production company did music specials for PBS and also documentary concert features. The Joe Cocker film, Mad Dogs & Englishmen, was the first feature, followed by Soul to Soul (as a consultant) and Elvis on Tour. These projects taxed his technical skills to keep everything in sync and sensibly organized. He was well aware that Woodstock required a full year of work to get everything synced and worked strenuously to avoid a calamity of that sort. He insisted on shooting regular slates and on assigning one track on the eight-track recorder to a sync pulse. His commitment to good protocol was not always adhered to but his efforts were at least partially successful and the films were all released in a timely manner.

ROBERT ALTMAN AND MULTI-TRACK

His Army training and experience with radio mikes and multi-track music recording on concert films gave him a good foundation in skills needed to implement a production style that Robert Altman was developing. Traditionally, films had treated their subject matter as if they were stage plays seen with a camera. Close-ups and tracking shots provide changing perspective but the action unfolded as a linear narrative. Altman saw the world as a messy place where events didn’t always proceed in an orderly way. Sometimes everyone would speak at once. Sometimes, with multiple participants, it wouldn’t be clear who was driving the action until the event was over. He wanted to bring some of that messy uncertainty to his film projects.

Altman used a loose and improvisational style in films like MASH but encountered difficulties with sound for McCabe & Mrs. Miller. Without precise cues to know when each character might speak, it was difficult for the Sound Mixer to deliver a suitable track, especially when action was staged in authentic locations with hard floors and other acoustically difficult features. Having multiple microphones, and assigning the outputs to isolated tracks, was the obvious solution. Altman brought in Jack Cashin to design a system of multi-track recording that might be used on location. Most of the equipment then available was designed for use in a studio and it required some ingenuity to adapt it for location use. But assembling the hardware is only part of the equation; someone must operate it effectively and this presented challenges to the Production Mixer.

The system Cashin developed used a Stevens one-inch, eight-track recorder. With one track assigned to a sync signal, seven tracks were available for discrete audio. No multi-track mixing panels that could work off DC were available at that time, so two eight-input, four-output consoles were linked to supply the needed signal feed. The whole business ran off a 12-volt motorcycle battery with a converter circuit to provide the higher voltage needed by the recorder.

Paul Lohmann, Altman’s Director of Photography, recommended Jim Webb for the multi-track skills he had demonstrated on the concert films. And that was the beginning of collaboration among Robert Altman, Jack Cashin and Jim Webb on a series of films.

Jim Webb:

They had everything together but they didn’t have any idea about how to use it. And I said, “Well, the only thing that makes any sense is to put radio mikes on everybody.” You can’t have open mikes because if you add those back in the pre-dub, the background is going to be astronomical—you won’t be able to tell anything. You have to do a lot of close mic’ing to make this work. So my contribution was radio mikes.

The first picture made with this multi-track technique was California Split. It was a fortuitous choice because it made good use of improvisational technique but was less ambitious in that application than subsequent projects. It provided an opportunity to shake out the system.

By the time we got to Nashville, we pulled out all the stops and went blasting our way through it. We shot that film in eight weeks at a dead run.

Putting radio mikes on each performer and assigning them to discrete tracks was an obvious approach but there were also limitations. Post work required an additional two weeks to deal with all the different tracks. There was also an inherent lack of audio perspective. Jim Webb explains it best himself:

There’s no perspective. We ran into that immediately on Nashville. There’s this scene that opens the movie which is where they’re all in a recording studio and I went about putting radios on everybody, even the ones behind the recording glass. And I went over to Altman and I said, “Are you sure we’re doing this right? We’re throwing perspective just completely out the window.” And he said, “Yes, yes, of course we are.” I went back to putting radios on. About twenty minutes later, he comes over and says, “Are we doing this right?”

A little late to change the action now. And it worked out. I would have people come up to me and say, “That was the most realistic sound I’ve ever heard.” Well, there was nothing real about it. In real life, you don’t shoot through double-plate glass and hear all the conversation inside there, as well as what’s going on outside.

But it was primarily designed for overlapping dialog and improv and things of that nature where you never knew what anybody was going to say.

And you can’t possibly listen to it all because it’s just a Tower of Babel. So once I previewed all the radios and made sure they were working, you were just watching the meters.

Capturing the dialog with individual radio microphones was a complex undertaking that required all of Jim Webb’s skill, but it accomplished what Robert Altman needed to fulfill his vision for the film. According to the Supervising Sound Editor, only two lines weren’t recorded in the original production track: one lost to a failed radio on Henry Gibson, the other a line added over Allen Garfield’s back as he walked away from the crew. Everything else was original production sound.

THE ULTRASTEREO MIXER

Very little was available in the way of a portable mixing panel at the time Jim Webb was working the multi-track pictures with Robert Altman. The specialty mixing panels that Jack Cashin adapted for those pictures had liabilities that made them cumbersome for most pictures. Webb and Cashin set to work to address this need with a capable mixer of their own.

In the mid- to late fifties, [Perfectone] had a little three-pot black mixer that was very popular in the studios; it was a little rotary pot thing and everybody used it. And it was around a lot. And then they updated their little portable mixer with straight-line faders. And they had six in and one out—it was still a mono mixer. And I liked the straight-line faders because you could handle them a lot easier than trying to wrangle three, four, five rotary pots. So I said to Jack [Cashin], “Can we modify this and make it two-track?” And we looked it over and said, no, it’s going to be simpler to make our own version of this. And he designed it and I built it. I built a dozen of them, maybe 12 to 14. Sold them all.

ALL THE PRESIDENT’S MEN AND A RETURN TO BOOMING

Right after Nashville, Jim Webb was hired to do All the President’s Men, largely because his multi-track skills suited scenes where actors had to interact with video monitors playing in the newsroom. He was also particularly skilled at recording telephone conversations, and there were many of those in the script. Although he had his own working prop phones, the Special Effects Department supplied the multi-line key phones used in All the President’s Men. Webb provided a phone tap to record the phone-line conversations on a separate track from the on-camera dialog, and he supplied an audio feed to actors brought in just for their off-screen dialog. Because everyone heard everyone else, either through the phones or via a specially provided feed from the mixing panel, overlaps were possible and could be recorded naturally. It was expensive to bring in actors who never appeared on screen, but freedom from the pace-killing process of having lines read by a script supervisor let the filming fly and yielded more natural performances.

When we rehearsed it, it went like lightning. And when we got through, Bob [Redford] said, “Holy cow!” … He was shocked at how fast it went and that’s how we did the scene.

It’s not often that the Mixer gets a chance to dabble in how the scene plays.

All the President’s Men was more tightly scripted and allowed a more normal recording technique than the Altman pictures. It came at a good time:

I remember going into an interview one time and I said, “I’ve done this Altman this and that.” And the guy looks at me and says, “OK. What else have you done besides that?” And I didn’t have anything so I was thinking to myself, it’s better to work around; it’s better to do different formats and utilize them when you need them.

Chris McLaughlin was his Boom Operator on the film, but the newsroom scenes presented particular challenges. The Washington Post set was gigantic, spanning two linked stages, and lit naturalistically with overhead fluorescent lights. Fortunately, because of the heat they generated, the ballasts for all those lights were mounted in a shed outside the stage, so there wasn’t a serious problem with hum. Director of Photography Gordon Willis favored up-angle shots that showed all the lights in the ceiling. When Jim Webb asked if it would be OK to boom, Willis held out his hand, casting multiple soft shadows, and said, “I don’t care what you do as long as you don’t make any shadows on my set.” “That was the end of that conversation,” says Webb. Chris did manage to boom the picture, working primarily with a Sennheiser MKH 815 from below, flitting in and out of the performers’ legs.

According to Chris McLaughlin, Webb entrusted microphone selection to his Boom Operators. But the big Sennheiser was clearly a favorite. He describes using one on The Long Riders. The Keach brothers had been fitted with wireless mikes when Jim Webb learned that they intended to ride into the Chattahoochee River at the conclusion of the dialog. Concerned about immersing the radio packs in the river, Webb resolved to boom the scene instead. Chris McLaughlin thought he could capture the dialog with a Sennheiser 815 off a 10-foot ladder. He angled the mike away from the river for maximum rejection of its sound, and they accomplished the shot. At the end, the Keach brothers did ride into the river and Webb didn’t lose any mikes. “So I have a lot of respect for the 815,” he said. “It got me through a lot of tough places.”

Key to understanding his technique is that there was no agenda, no rules about how each scene needed to be recorded. Jim Webb approached each project with an eye to achieving the Director’s vision and capturing the elements needed for the picture as a whole. Duke Marsh says: “I think with Jim it was, if I’m [Post] mixing this thing, or I’m going to do the Post work on it, what do I want to hear?”

And Jim himself says, “You just gotta do what you gotta do, you know. And I never worried, pretty much at all, about what people thought about what I was doing. If I saw a way to do it and it felt right, that’s what I was going to do.”

Each project presented its own set of challenges to test his skills and preparation. Noises Off posed a particularly complex situation. Originally a stage play, it concerns an acting company rehearsing and presenting a play on an elaborate set. Come opening night, everything goes awry: cues are missed, props are misplaced, and the comic errors pile one atop the other. The two-story film set mirrored the set that would appear on stage. To accommodate the perspective of the Stage Manager, a key character, the entire set was built ten feet above the sound studio floor, complicating any work from below. Peter Bogdanovich, the Director, intended to shoot the entire film using a Louma crane that could swoop in on individual performers, further complicating efforts to capture the audio with a boom microphone. Moreover, the script took the actors up and down stairs and through doors at a frenetic pace.

The actors hoped to avoid using radio mikes, in part because there was often little costume to conceal them. They needn’t have worried: the pace and the frequent costume changes made radios an impractical choice anyway.

The original plan was to distribute plant microphones throughout the set and go from mike to mike as the action required. After a rehearsal, Webb said, “Guys, I don’t know.”

McLaughlin thought he could capture the dialog using Fisher booms and had a plan for how to accomplish it. They would use two of the big Fisher booms and, to get them high enough to work the elevated set, they would replace the regular bases with purpose-built scaffolds and mount the booms to the top rails of the scaffolding. Wheels fitted to the scaffolding allowed moving the booms into position as needed.

Jim Webb was open to the idea and brought in Fisher booms with 29-foot arms fitted with Neumann KMR 82 i microphones. Randy Johnson joined Chris to operate the second Fisher, and Duke Marsh was brought in to work from the greenbeds with a fishpole to catch anything that fell between them. After hearing a rehearsal, Jim Webb said, “This is the way to go. Pull those plants.” They did use a few of the plants to pick up dialog occurring well upstage, under the set overhangs where the booms couldn’t reach, but using the big Fisher booms simplified the plan considerably. The approach still demanded considerable mixing skill to blend the two main booms, the fishpole operated by Duke, and the occasional plant mike, but there was logic to the operation and the team successfully recorded all the dialog.

Other films presented challenges of their own. The Bette Midler films The Rose and For the Boys each presented playback challenges because of their large audiences and the complex shots envisioned by Mark Rydell, the Director. On For the Boys, Webb worked with Re-recording Mixer Robert Schaper to build modern elements into period microphones so they could accomplish live records at the highest quality levels. Robert Schaper recalls:

We ended up stealing vocals off of those mikes in the playback situations. One of Bette’s songs to her husband, when she is reunited with him, had a very silky, lovely studio playback [of] “I’m Going to Love You Come Rain or Come Shine.” [But] it didn’t match her acting performance at all because she was crying, overwhelmed at seeing the husband she hadn’t seen in months, and she was very worried about it and everything else. And we had planted … a Shure 55 with a rebuilt Shure capsule in it. Even with playback coming at her, the isolation was good enough on her actual live vocal—and she always actually sings all of her lip syncs. And she performed the heck out of the song … I ended up compiling all of that and using her live vocal—rather than the pre-record … from the plant that we had out there … and it turned out to be a really great acting performance.

CREW RELATIONS

“He left a lot of it to the boom man. He walked on and said the boom man was the money-man; the boom man, he believed, controlled the set.”
–Duke Marsh

“He put great trust and faith in his Boom Operator. It was a collaborative effort.”
–Chris McLaughlin

Over the course of a career, every Sound Mixer works with many Boom Operators, Utility Technicians, and Playback Operators. All who worked with Jim Webb praise his skills, his concentration, and his commitment to both the project and his crew. A few brief stories from Duke Marsh illustrate:

[From Beaches] I would go and grab the snakes at wrap and he comes up behind me with gloves on and he said, “No, I do that.” “But I’m the cable guy; that’s a cable.” And, instantly we were buddies. And he’d go, “But, Duke, you gotta understand, those snakes are for me so I can work off the truck.” And in my whole career with him, in the rain, in the mud, in the snow, he’d always come off that truck. And there were days when I would say, “But you’re the Mixer.” “Well, you got other stuff to do. Go do that, come back, give me a hand.” That was Jim. He would always back his crew.

And then, in 2001 when he was receiving the CAS Lifetime Achievement Award:

I get a phone call and Jim says, “I want you to come. You’ll be at my table.” Well, he invited Doug Vaughn and Chris McLaughlin. [He delivered a speech accepting the award] then he says, “Those three guys at that table are responsible for a lot of this in my career. If it wasn’t for the boom man, putting that mike in the right spot, I wouldn’t be here.” And he had us stand up and we got an ovation. And I’m thinking, how many mixers pay attention to the guy that’s out front?

AWARDS AND ACHIEVEMENTS

In addition to the CAS Award, Jim Webb won the Academy Award for All the President’s Men in 1977 and the BAFTA Award for Nashville in 1976. He received one other Oscar nomination and three additional BAFTA nominations.

Nashville and All the President’s Men are each featured in both the Criterion Collection and the Smithsonian List. While Robert Altman and Alan Pakula, respectively, are recognized for their vision, Jim Webb shares in the accomplishment through his skill and inventiveness in facilitating that vision.

It’s also instructive to note the Producers and Directors who worked with him repeatedly. Those with three or more films together include Robert Altman, Francis Ford Coppola, Garry Marshall, Walter Hill, Bette Midler, and Paul Mazursky. Mark Rydell is one of several directors who employed him twice.

For each of these directors, Jim Webb contributed a sense of the role of sound as part of the whole and adjusted his technique to meet the needs of each particular project and the vision of that particular filmmaker. In talking with him, it is apparent that he has taken great pleasure in the process.

Jim Webb: “Good production sound has production value! Don’t give up. Be consistent and do the best you can.”

Mic’ing the Instruments

In the course of interviewing for this profile, Jim Webb shared many great stories that didn’t fit neatly into the narrative. This is one of the stories rescued from the trim bin.

In California Split, it started there. At the end of the scene—and I found out about this maybe two minutes before we were going to shoot it—there was going to be a piano with tacked hammers in the bar, and a lady who would play the piano and sing. Elliott Gould was going to be sitting there and they were going to talk a bit while she was playing the piano—and eventually they were going to sing “Bye Bye Blackbird.”

And I said, oh my God, it would be nice to know about this a little bit earlier. So I ran back, and the only things I had around in those days were the old ECM-50s, some of the first electrets from Sony. And I had a bunch of those. So I ran over to the piano, raised the lid, taped one to the crossbar pointed down and the other pointed up toward the top end. I connected cables from the mixing panel, closed the lid, put a radio on each performer, ran back, turned the equalization all the way up (all I had) and prayed. So I laid down four tracks and it worked pretty well. In fact, they couldn’t duplicate it.

The scene didn’t really make the film but the song is in there, at the end, over the credits.

Anyway, they discovered that I could do that. So, in the smaller scenes in Nashville, where there were just the two gals singing in a place with the piano and whatever, I would do that mike trick: if it was an old upright, I would stick a couple of mikes in there … and as long as I hadn’t filled up the eight tracks, I could do that.

Well, they decided that I had so much dialog going on that I couldn’t cover all the music too; I just didn’t have enough tracks. So, they hired a guy named Johnny Rosen to come in and they had a sixteen-track truck and they hired him to do the Opryland stuff and all that. And their mixer, I think his name was Gene Eichelberger, was shadowing me just to see what I was doing. And he saw me doing this lavalier routine and I’m thinking to myself, I can’t tell anybody in Nashville that I’m using lavaliers to mike instruments because they’re going to laugh me out of town. Next thing I know when I get to Opryland, Eichelberger is over borrowing every ECM-50 I’ve got and he’s taping them to fiddles and everything in the orchestra he can find. So, I thought, well, OK, that’s how we’re going to do this. And that’s how it all kinda went down.


Interview Contributors

These colleagues of Jim Webb assisted in the preparation of this profile by making themselves available for interviews:

Crew Chamberlain was Webb’s Boom Operator on several films including The Milagro Beanfield War, Legal Eagles, and Down and Out in Beverly Hills.

James Eric knew Jim Webb from his days working the microphone bench at Location Sound. Later, he served as Utility Sound on Out to Sea.

Robert Janiger is a Sound Mixer and friend who collaborated on further development of the Ultrastereo mixer.

Harrison “Duke” Marsh worked with Jim Webb on seventeen films including Pretty Woman, For the Boys and Noises Off. He worked variously as Playback Operator, Utility Sound and Boom Operator.

Chris McLaughlin boomed twenty-one films for Jim Webb starting with California Split and continuing through Noises Off. Among others, he did Nashville, 3 Women, The Rose, The Long Riders, and Hammett.

Robert Schaper was Supervising Music Engineer on For the Boys.

Mark Ulano is an award-winning Sound Mixer who considers Jim Webb a mentor.

Ray Dolby

A Tribute to Ray Dolby

by Scott Smith, CAS and David Waelder

To be an inventor, you have to be willing to live with a sense of uncertainty, to work in this darkness and grope towards an answer, to put up with anxiety about whether there is an answer.

–Ray Dolby

The Dolby name appears so often on films that it has become like Kleenex or Xerox, a generic term for noise reduction. But the many innovations of Dolby Labs are largely the work of Ray Dolby, a man of prodigious ingenuity. He died of leukemia on September 12, 2013, at age eighty, at his home in San Francisco. Born January 18, 1933, in Portland, Oregon, Mr. Dolby was hired straight out of high school by Alexander Poniatoff of Ampex Corporation. Mr. Dolby had volunteered as a projectionist for a talk that Mr. Poniatoff was giving; impressed by his talents, Poniatoff invited the young Mr. Dolby to come work with him at Ampex, where he contributed to the design of the first quad videotape recorders.

After completing studies in electrical engineering at Stanford and physics at the University of Cambridge, Ray Dolby invented a system that compressed audio during recording and expanded it on playback, minimizing tape hiss. He formed Dolby Labs in 1965 to bring this noise reduction system, called Dolby A, to market. Mr. Dolby later turned his attention to the problems of sound recording for motion pictures, which still relied on decades-old technology. His endeavors led to the introduction of a surround sound system that could be duplicated using traditional optical soundtrack printing techniques, replacing the expensive and cumbersome processes previously reserved for big-budget films.

At Dolby Labs he is remembered as much for mentoring a new generation of scientist/engineers as for his particular innovations. He was a scientist who expanded creative horizons for artists.

His contributions are covered in greater detail in Scott Smith’s series “When Sound Was Reel” in the Summer 2011 and Winter 2012 issues of 695 Quarterly. There is also a very fine video tribute available on the Dolby website. These are available at:

https://www.local695.com/Quarterly/3-3/3-3-when-sound-was-reel-7/

https://www.local695.com/Quarterly/4-1/4-1-when-sound-was-reel-8/

http://www.dolby.com/us/en/about-us/who-we-are/leadership/ray-dolby.htm
