IATSE Local 695

Production Sound, Video Engineers & Studio Projectionists


Features

A Profile of Hal Hanie
56 Years in Broadcasting

by David Waelder
Photos courtesy of Hal Hanie except where otherwise stated

Dwight David Eisenhower was President in 1957 when William (Hal) Hanie began his career in television at KRLD, the CBS affiliate in Dallas. Tailfins were all the rage for cars and The Howdy Doody Show, the iconic children’s show from the ’40s, was still on the air; it would run for another three years. Videotape had been introduced only a year prior and, in some markets, copying programs was still done by kinescope, a process that involved shooting a monitor screen with a motion picture camera.

Television in the ’50s was a young and rapidly developing industry but Hal Hanie entered the field well prepared for the rapid technological change he would experience. Drafted into the Army during the Korean War, he took advantage of an opportunity to complete his service in the Air Force. They gave him twenty-two weeks of training in electronics school and additional training in control tower school that included instruction in radar. On completing his four years of military service, he continued his training in trade school and also worked at the radio station run by the school. His first real job was with Collins Radio, now Rockwell Collins, a manufacturer of broadcast transmitters, microwave transmitters and relays. When he took the position at KRLD, Channel 4 in Dallas, he already had a solid background in electronics and related disciplines.

At KRLD, he worked nearly every position in television at one time or another. He also maintained the transmitter for the station’s sister radio facility located in the same building.

He did television remotes for events, like football games, all over Texas. He also did video recording and worked instant replay, a new feature developed at CBS by Tony Verna. In those days, sports events were recorded on two-inch videotape and any portion of the tape might be played back for on-air review. Locating the right cue point for the desired play was the difficulty in any on-the-fly playback situation. The video recorder was fitted with a mechanical counter and the operator would hold the timer at zero until the play started. For replay, he would back up the tape to the zero point, or a few seconds before, to provide time for lock-up. Later, with the use of one-inch machines, operators like Hal Hanie would often turn the reels by hand to find the cue point, and then turn the reels forward by hand to provide slow motion. On the later machines, the mechanical counter was replaced by a system of beep tones laid down on the cue track, audible to the operator on rewind, that identified the plays. Providing instant replay was one of his responsibilities throughout most of his career, both in Dallas and here in Los Angeles, up until 2009 when the Clippers ended their over-the-air contract with the station and KTLA ceased original sports programming.

The Kennedy assassination in Dallas was his most memorable experience while at KRLD. He recalls seeing Lee Harvey Oswald at the Dallas police station and observing how cool and self-possessed he appeared to be. Jerry Hill, one of the policemen who found the sniper’s nest in the Texas School Book Depository and later helped capture Oswald at the Texas Theatre, was one of two police officers working part-time at KRLD as a police liaison and well known to the staff at the station. Hanie remembers this as a chaotic time, exciting but stressful and disturbing. And he had occasion to evaluate the performance of the crack staff from CBS in New York who came to Dallas to cover events. Nelson Benton, later regarded as a veteran newsman, was just beginning his career and appeared a young fellow “shaking in his shoes” when Hanie observed him.

In June 1969, Hanie moved to Los Angeles and started work at KTLA. He joined IATSE at that time. (His work at KRLD had been under an IBEW contract.) He stayed at KTLA for forty-four years. Combined with his twelve years at KRLD, he has 56 continuous years of experience in television.

At KTLA he continued to do instant replay for sports and did videotape playback and recording for all sorts of programming. He did the recording for Donny and Marie and Dinah’s Place. He has fond memories of the people working both shows.

He worked many other shows including The Richard Simmons Show and Mary Hartman, Mary Hartman and others too numerous to recall. He edited Backstage with Johnny Grant and recalls that Grant could never remember names so he would call everyone “Tig,” short for “Tiger.” When Hal Hanie asked him what he would call a woman, he thought for a moment and answered, “Tigress.”

Gene Autry owned the station when Hal Hanie first came to work at KTLA. Hanie remembers him as a benevolent boss who often treated employees to lunch in his private box at Angels games. The Tribune Company purchased the station in 1985 and initiated policies that were more corporate. They sought to renegotiate the contract and eliminate seniority status. Hal Hanie was proud to walk a picket line to protest that move. He also served awhile as Shop Steward for the Videotape Department at KTLA.

He recalls a time at KTLA in 1991 when the station brought in some green production staff to work the morning news. They were so inept that they couldn’t coordinate the teleprompter copy to match the video clips and mismatches were common. Finding themselves adrift, the reporters would often break up laughing. The Producer of the KTLA Morning News encouraged them to play along with the errors rather than glossing them over and striving to retain dignity. The newscasters, thinking the show was probably on the verge of cancellation anyway, went along and discovered that ratings improved. Viewers liked the casual presentation. After that, every news program in town was copying the loose format. Three of the reporters from that time, Mark Kriski, Sam Rubin and Eric Spillman, are still with the station.

In addition to his regular work at KTLA, Hal Hanie operated a small, community radio station from a studio adjacent to his home. FCC regulations are quite demanding regarding regular broadcasts and he needed assistance to keep things running regularly. He used interns from Columbia School of Broadcasting, Santa Monica College and Cal State Northridge, trading technical training and experience for help with operations. He did regular remote broadcasts of high school football games, both home and away. That was a complex operation requiring stringers to prerecord interviews with the coaches that he would edit into a pre-game show. During the game itself, he had a professional announcer and a color man providing continuous coverage that he would feed into a phone line for broadcast. Eventually, he became the “sustaining member” of that particular charity and it became too much to carry while also working a full-time job at KTLA. The radio station is no more but he still maintains a recording studio that he uses to make demo tapes and transfers to digital media.

Operations at a TV station are now largely automated but, during his career, the systems required considerably more attention. Chroma Key demanded very exact lighting to prevent bleed at the edges. Genlock used to be so fragile that just touching a camera could cause the signal to lose lock. Equipment required alignment daily, or even more frequently, and he used to be responsible for tweaking color and density on a vectorscope. Now, a computer generally handles this chore digitally. And there was a time when he needed to keep a rag soaked with solvent to clean heads on the fly to prevent image breakup caused by emulsion build-up.

The continuing process of automating procedures eventually encouraged Hal Hanie to retire in 2013. When the station completed the automation program and linked several tasks to one computer, they offered him the option of retraining. He had done that at several stages in the past but thought, at age eighty-two, it was time to step aside. William (Hal) Hanie retired as a Gold Card member of Local 695.

His other passion is flying. He used to own a 1977 Archer but airplanes are an expensive hobby and he had to let it go. But his license is still current and he was planning a trip to Texas when we interviewed him. We wish him blue skies.

P-Cap, MoCap and All That Jazz Part 2

by Jim Tanenbaum, CAS

Set Procedure

The capture techs will have an earlier call so they can calibrate the volume. This involves placing a single reflective marker at specified positions so the computer can associate them with the images in the capture cameras. The marker is mounted on a rod, usually the same length as the side of the grid squares. First, the rod is used as a handle to position the marker on the floor at each intersection of every grid line. The system will beep or chirp when it has calibrated that point so the tech can move on to the next one. When the floor grid is calibrated, the other end of the rod is placed at each of the intersections, and held vertically with the reflector at a fixed distance directly above the spot, and the procedure repeated. During the calibration, the volume needs to be kept clear of other crew people.
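To get a feel for the scale of that chore, note that every intersection is sampled twice, once on the floor and once at rod height. Below is a minimal Python sketch of the pattern; the grid size, spacing and rod length are hypothetical stand-ins, since no dimensions are given here.

```python
# Sketch of the calibration walk: every grid intersection is sampled
# twice, once with the marker on the floor and once at rod height.
# Grid size, spacing and rod length are hypothetical stand-ins.

GRID_LINES = 10      # intersections per side (hypothetical)
SPACING = 0.6        # metres between grid lines (hypothetical)
ROD_LENGTH = 0.6     # marker rod, usually one grid-square side

def calibration_points():
    """Yield (x, y, z) marker positions shown to the capture cameras."""
    for z in (0.0, ROD_LENGTH):          # floor pass, then elevated pass
        for i in range(GRID_LINES):
            for j in range(GRID_LINES):
                yield (i * SPACING, j * SPACING, z)

print(sum(1 for _ in calibration_points()))  # 200 positions to confirm
```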

Reflective objects are verboten in or even near the volume. Any Scotchlite strips on shoes or clothing need to be taped over, and if the anodizing is worn off of the clutch knobs on your fishpole, they will need to be covered with black paper tape. Some poles’ shiny tube sections are a problem too, and black cloth tubular shrouds can be purchased to slip over the entire fishpole. J.L. Fisher has black-anodized booms available to rent for use on capture shoots. If you have work lights on your cart, be sure their light bulbs are not directly visible to any of the capture cameras.

On most shoots, you will have only a single assistant, either to boom or to help with the wireless mikes. This means that the smaller and lighter your package is, the easier it will be to set up, move and wrap.

I make it a habit to run on batteries at all times. This avoids hum from ground loops (you are tied into the studio’s gear through your audio sends) and also the possibility of having your power cord kicked out of the wall outlet. Being a belt-and-braces (suspenders) man, I also use isolation transformers in my audio-out circuits. (See my cable articles in the Spring, Summer and Fall 2012, and the Winter 2013 issues of the 695 Quarterly.)

The usual recording format is mix on Channel 1, boom (if used) iso’d on Channel 2, and wireless mikes (if used) iso’d on succeeding channels. You will send a line-level feed of your mix to the IT department, where it will be distributed to the reference cameras and imported into the editing software. Your isos may also be sent into the system during production.

Metadata may be conventional (Scene 37a, Take 6) or extremely esoteric and complex (195A_tk_00E_002_Z1_pc001_0A01_VC_Av001_LE). Hopefully, you will be allowed to abbreviate long ones like this—I was able to get away with Scene 195A_00E_002 and Take 2, but since the last digit of the “scene” number was also the take number, I had to manually advance it every take. Fortunately, the Deva allows me to make corrections retroactively, but it is still a nuisance, so I’m very careful when I enter the data initially. Discuss metadata requirements with production as soon as possible.
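Since the last digit of the “scene” string doubles as the take number, the bookkeeping amounts to rewriting the trailing digits on every take. Here is a minimal sketch of that chore, assuming the naming scheme shown above; the function is illustrative, not a feature of the Deva or any logging software.

```python
import re

def advance_scene_take(scene: str, take: int) -> str:
    """Replace the trailing digits of the scene string with the take
    number, zero-padded to the same width."""
    match = re.search(r"(\d+)$", scene)
    if not match:
        return scene  # no trailing digits to advance
    width = len(match.group(1))
    return scene[:match.start(1)] + str(take).zfill(width)

# 'Scene 195A_00E_002' on Take 3 becomes 'Scene 195A_00E_003'
assert advance_scene_take("Scene 195A_00E_002", 3) == "Scene 195A_00E_003"
```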

Digital sound reports are very convenient, but you need to secure your tablet carefully; the cost of replacing a dropped Galaxy or iPad outweighs any amount of convenience.

Comtek monitors can be a problem because of system delays in the video display screens, which are often non-standard and even variable. Many directors will want to see and hear playback during the day. I have found that the simplest solution is to get a feed of your mix back from IT and send that to the Comtek transmitter. That return feed should automatically carry the correct delay for both direct and playback. Unfortunately, a number of new, smaller volumes have sprung up, and they sometimes do not have any means to compensate the audio for the video delay. Behringer makes an inexpensive variable-audio-delay unit called the “Shark,” and it is worthwhile to carry two of them along with an XLR switch so you can quickly feed your Comtek with the appropriate delay for direct and playback audio. Your direct mix will go into delay 1, and the mix return from playback will go into delay 2. The XLR switch will be used to select the output of either delay as required to feed your Comtek transmitter.
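Dialing in a unit like the Shark starts with converting the display chain’s latency from frames to milliseconds. The arithmetic is simple; the three-frame figure below is an illustration, not a measurement from any particular volume.

```python
def audio_delay_ms(video_latency_frames: float, fps: float) -> float:
    """Milliseconds of audio delay needed to match a video display
    that lags by the given number of frames."""
    return video_latency_frames * 1000.0 / fps

# e.g., a monitor chain that lags three frames at 29.97 fps
print(round(audio_delay_ms(3, 29.97), 1))  # ~100.1 ms of audio delay
```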

A problem with sending your mix and isos into the capture system in analog form is that the gain structure of their audio channels may be less than optimal, and more importantly, accidentally be changed after you have adjusted it initially. If you can have any control over the infrastructure, try to get a digital (SMPTE/EBU) audio path so you won’t have to worry about this, or about hum/buzz pickup.

It is vitally (and virtually) important to discuss digital audio parameters with the IT department. The most common TC frame rates are 23.98 and 29.97, but 24 and 30 are also encountered, and you must be sure to use the correct one. Although you can use 29.97 with a 23.98 system, and 30 with a 24 system—the rates can be converted without too much trouble—it is much more difficult (and expensive) to use 30 with a 23.98 system, or 29.97 with a 24 system. Usually, you will get a TC feed from the capture system. Ask specifically about the user bits—some systems have fixed random digits that remain unchanged from day to day. If you are working more than one day on the shoot (and remember that sometimes a one-day job runs over and requires a second day), it is important to put the date (or some other incremented number) into the user bits yourself to avoid duplicate TCs.
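One way to carry out the date-in-user-bits suggestion is to pack the date into the eight user-bit digits. A minimal sketch, assuming a DDMMYYYY layout; conventions vary, so confirm what the facility’s gear expects before committing to one.

```python
import datetime

def date_user_bits(d: datetime.date) -> str:
    """Pack a date into the eight user-bit digits as DDMMYYYY
    (one convention among several; check what your system expects)."""
    return d.strftime("%d%m%Y")

print(date_user_bits(datetime.date(2014, 3, 7)))  # '07032014'
```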

There are two “standards” in TC circuitry: BNC connectors at 75 Ω and 3-pin XLRs at approximately 110 Ω. Unfortunately, these parameters are not universal, and to make matters worse, some facilities have built up their own infrastructure and have patch panels with connectors that are fed from equipment with the inappropriate impedance.

Unless long cable runs are involved, this impedance mismatch usually does not cause problems. (See the cable articles for using balun transformers.) The best you can do is to use mike cables with XLR TC sources and 75 Ω coax cables with BNC TC sources. If this does not match the TC input connector of your recorder, try a simple hard-wired adapter before going to a balun. If the recorder’s display shows a solid indication of the proper frame rate and there are no error flags, you are probably okay. If this is a long-term project, you should have time for a pre-production test; if not, cross your fingers. (Or invest $10,000 in a time-domain reflectometer to measure the jitter in the “eye pattern” and determine the stability of the TC signal at your end.)
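For the curious, the size of a mismatch can be estimated with the standard transmission-line reflection coefficient. This is textbook math, not a measurement of any particular TC box, and it only matters over runs that are long relative to the signal’s wavelength, which is why short cables usually get away with it.

```python
def reflection_coefficient(z_load: float, z_source: float) -> float:
    """Fraction of the signal amplitude reflected at an impedance
    discontinuity: gamma = (ZL - Z0) / (ZL + Z0)."""
    return (z_load - z_source) / (z_load + z_source)

# Feeding a 110-ohm XLR input from a 75-ohm source:
print(round(reflection_coefficient(110, 75), 3))  # ~0.189, about 19% reflected
```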

When it comes to wireless-mic’ing the capture suits, there is good news and bad news. The good news is that you don’t have to hide the transmitter or mike. The bad news is:

1. There is a tremendous amount of Velcro used on capture suits, and it can make noise when the actor moves. Applying gaffer tape over the offending strip of Velcro will sometimes quiet it. For more obdurate cases, a two-inch-wide strip of adhesive-backed closed-cell neoprene foam (aka shoe foam) may prove effective. As a last resort, one or more large safety pins fastened through both sides of the Velcro usually works.

2. Mounting the mike capsule requires some forethought. If no facial capture camera is in use, the top of the helmet opening can be used to mount a short strut to hold the mike in front of the forehead. I use a thin strip of slightly flexible plastic, 1–2 inches in length. If a face-cap camera is used, its mounting strut can be used to secure the mike, but in both cases, be sure to keep the mike positioned behind the vertical plane of the performer’s face to help protect against breath pops. Also, the exposed mike is susceptible to atmospheric wind, or air flow from rapid movement of the actor. I have found that a layer of 100% wool felt makes an excellent windscreen, especially when spaced away from the microphone element about 1/8 inch. (Incidentally, felt can be used to windscreen mikes under clothing as well.)

3. Because the mike is located so close to the actor’s mouth, it is exposed to very high SPLs. Many lavaliers overload acoustically at these levels, so turning down the transmitter’s audio gain doesn’t reduce the distortion. Both Countryman and Sanken make transmitter-powered models designed for higher SPLs, but not quite high enough. The problem is that the mikes require at least 5 volts of bias for these peak levels, and most wireless mike transmitters supply only 3.3 to 4 volts. An inelegant fix is to use one of these mikes with an external, in-line battery power supply, because their extra bulk doesn’t have to be concealed. The other side of this coin is that these high-SPL mikes are noisier at low dialog levels. Be prepared to quickly switch back to the low-SPL mikes between loud and quiet dialog scenes. Another possibility, if you have stereo transmitters (currently only available from Zaxcom), is to employ two different mikes, one for high levels and the other for low, and iso them both.

4. There may be other electronics mounted on the actor’s suit that can interfere with your wireless mikes. If a face-cam is in use, there will be a digital video recorder and timecode source. This may be an onboard TCXO, or a receiver for an external reference. Another possibility is a transmitter to send locally generated TC to the capture system. If real-time face monitoring is present, there will be a video transmitter, either in the WiFi band (2.4 GHz) or on a microwave (above 1 GHz) frequency. If active markers are functioning, they may receive and/or transmit an RF synchronizing signal. The RF from any of these transmitters can get into your wireless either through leakage in the transmitter case or through the lavalier’s capsule housing, cable or plug. Keeping your gear as far as possible from any of these transmitters and their cables is the first line of defense.

5. If motion control apparatus is being used, there may be multiple RF links involved, all at different frequencies. As soon as possible, coordinate frequencies with the appropriate department(s).

6. The reference video cameras, if camcorder types, may have video monitor transmitters. Some of them still use the old analog Modulus units, and they present very serious interference problems.

7. Walkie-talkies usually operate well above or below your wireless frequencies, but at 5 watts they can cause trouble if close to the actor or your sound cart.

8. For general wireless mike problems, see my radio mike article in the Spring 2011 issue of the 695 Quarterly.

When it comes to booming a CGI–capture scene, there is good news and bad news. The good news is that you don’t have to worry about boom shadows. The bad news is:

1. You can’t block the view of the reference cameras. When 12 of them are in use simultaneously, it can be hard to keep track of all of them. But the mike and boom can be visible in the reference camera(s) as long as they aren’t between the cameras and an actor’s face (or key part of the body).

2. There is no such thing as “perspective” in a captured scene, since it can be rendered from a POV at any distance. Every shot needs to be mic’ed as closely as possible. Distance is easily added in Post, especially now that we have DSP (Digital Signal Processing), but cannot be removed.

When it comes to booming a live-action capture scene, there is good news and bad news. The good news (if any) is dependent on various factors. The good/bad news is:

1. It depends on the particular project as to whether the mike and/ or boom can be in frame. For green/blue screen work, a green or blue cloth sleeve is available for the pole, and similarly colored foam windscreens for the mike. Also, appropriately colored paper tape can be used to cover the shockmount, or acoustically transparent colored cloth can shroud both mike and shockmount. Be sure the cloth is far enough from the mike that it does not rub when moved.

2. For non-screen work, the ordinary booming rules about shadows and reflections apply, except…

Now that HD video is the norm, there is no film “sprocket jitter” to make the matte lines stand out, and there is no “generation loss” from optical film processes. This, plus the much lower cost of video image processing compared to film, has made producers and directors less reluctant to use it. Offending objects can be removed from a shot relatively easily, and this can include mike booms. (Of course, this is no license for sloppy work.)

Another use of CGI solved a problem that has plagued filmmakers from the very beginning: reflections of lights, cameras and crew in shiny surfaces. Bubble faceplates on spacesuits were a particular problem. (We had to build a quarter-million-watt artificial sun for single-source lighting on the TV miniseries From the Earth to the Moon, in major part because of the astronauts’ mirrored visors.) For Avatar, most of the exopack masks were only open frames, with red fiducial (computer-tracking) dots around the edge. CGI faceplates were added in Post, complete with the appropriate reflections of trees, sky, other characters, etc. Many of the windows in vehicles were CGI’d in the same manner. This provided a rare benefit to the sound department: the ability to shoot through a “closed” window or a facemask with a boom mike.

When it comes to setting levels and mixing the production (real-time) scratch mix for a capture scene, the usual live-action esthetic and dramatic considerations do not apply:

1. As just mentioned, there is no “visual perspective” as such for a given take, because it can be rendered from any POV. Wireless mikes sound “close,” and you will try to boom mike as closely as possible, too. With every channel iso’d, there is the freedom in Post to mix them in any proportion, but remember that your work is normally judged in dailies. (Although nowadays, that usually means the immediate playback of the take.)

2. For your production mix, however, you will have to make certain choices without knowing what perspective image it will be mated to. EXCEPTION: When a virtual camera is in use, if you can see, or be told, what the composition is, by all means use that perspective, because it will most likely be seen (and heard) that way first, as in dailies.

3. The biggest problem (IMHO) concerns overlapping dialog when the characters are separated in the volume by a large distance. If you don’t have the virtual camera info mentioned above, try to imagine what the composition of the rendered shot(s) will be. Is a main character speaking with a secondary one? Then the main character will probably get the most screen time. Is one character reacting more emotionally than the other? Then they will probably get the close-up.

4. After you have determined (made your best guess) which character will be featured, mix them just noticeably hotter than the other one. The separation in levels should be just large enough that the lower level dialog doesn’t muddy the higher level dialog, but no more. Since both actors are close-mic’d, if they happen to feature the secondary one, the overlap will still work. EXCEPTION: If you know the purpose of the overlap, assign the higher level to the appropriate character’s dialog. This will call attention to the overlapping character, but that’s the reason for the overlap in the first place.
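“Just noticeably hotter” usually works out to only a couple of decibels. As a worked example, take a 2 dB offset (an illustrative figure, not a rule from the text) and convert it to the linear gain ratio a fader applies:

```python
def db_to_gain(db: float) -> float:
    """Convert a level offset in decibels to a linear amplitude ratio."""
    return 10 ** (db / 20.0)

# A 'just noticeable' 2 dB favor toward the featured character:
print(round(db_to_gain(2.0), 3))  # ~1.259x amplitude
```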

In addition to the usual noise problems on a live-action stage, the volume has some unique ones:

1. The area lighting is often supplied by ordinary fluorescent lamps, and many of them have older magnetic ballasts that emit 120 Hz hums and buzzes. Modern electronic (high-frequency) ballasts are usually quiet enough, and are available as direct replacements for the older magnetic ones.

2. There are usually a great number of computers on the stage, and their cooling fans are a significant source of noise. If the facility has been in existence for some time, this may already have been dealt with. If not, plywood baffles, covered with sound-absorbing material on the side that faces the computers, should quiet them sufficiently.

3. Some volumes’ floors are carpeted to eliminate footstep noise, but unfortunately, some are not. An adequate stock of foot foam should be on hand for this eventuality. Be sure to remove any dust or other loose material from the shoe soles before attaching the foam. I have found that repeatedly wiping the soles with the sticky side of gaffer tape, using a new length of tape each pass, does an outstanding job of preparing them. An expedient method when time is limited is to slip heavy wool socks over the shoes. You may have to cut holes in the socks for the foot markers. Unfortunately, the socks can slip around, and also have less traction on the floor than rubber soles. I keep a dozen 2’ x 5’ carpet rolls on my follow cart, and these can be laid down along the path taken by the actor(s) during the rehearsal. (Of course, they never deviate during the take.) Normally, the strips are taped in place, but when time is short, they can be attached with staple guns (unless the floor is concrete). IMPORTANT: Roll up the carpets with their upper surface out—this makes the strip curl downward when it is laid out, so the ends hug the floor and do not curve up to present a tripping hazard.

4. The floor-contour modules are another source of footstep sounds. Some of them are carpeted, but can still produce dull, hollow thumps from the impacts of running or jumping (which video games seem to have in abundance). The un-carpeted platforms are particularly loud. If at all possible, arrange to have them carpeted before shooting begins. Both types of modules benefit from having the underside of the top surface sprayed with sound-deadening material, such as automotive underbody coating. Using thicker (and unfortunately, heavier) plywood for the upper surface makes a big difference, too. During shooting, carpet strips can be utilized on the modules in the same manner as on the floor.

5. Front-projection video projectors have cooling fans that can be problematical. Ask if their use is absolutely necessary. Check in their menus to see if they have a “low-noise/low-speed” option.

6. Props (and some set dressing) are usually not the real objects they represent. Rifles are plastic or wood pieces shaped like the guns they represent, or toys or air rifles.

A solid oak dining table may in fact be only a row of folding card tables of the same height and overall size. Be alert to any sounds they produce—an object set down on the card table (not on a line) may make an effect at the appropriate level, but the sound will not be appropriate to the heavy wooden table seen in the CGI. There are two schools of thought in dealing with this: (1) eliminate as much noise as possible by padding the table so the effects cutter will have a clean room tone to lay the correct effect into; and (2) leave the production effect in, as a guide to synchronization when laying in the new effect. I suggest discussing the matter with Post ahead of time, but my personal preference is number 2, because the presence of the padding will affect the manner (body motion) in which the actor sets down the object. Of course, if the inappropriate sound is on a line, either pad the table or object, or record some clean wild lines.

Capture Procedure

When a capture scene begins, the actors will start by spreading out and taking a “T-pose” near the edge of the volume. If you haven’t been given a specific “Roll sound,” this is the time to go into Record. As an added precaution, be sure to set your recorder’s pre-roll to the maximum time. T-pose is a standing position with the legs slightly spread and the arms extended horizontally, which allows the capture techs to see that the system has properly recognized all the markers. When the techs give the okay, the actors will take their proper positions in the set and then the director will call “Action.”

At the end of the shot, after the director calls “Cut,” the actors will again move out and take the T-pose. When the capture techs are satisfied, they will announce that the capture system is stopped, and then you can stop recording.

The primary difference between a capture shoot and any other type is that you won’t have much free time once the process starts. Unlike live-action, there is no setup time for camera and lighting. And there are no setups for alternate camera angles, or retakes for bad camera moves, flyaway hair, or any of the multitude of other delays the sound department is used to. Once the scene has been performed to the director’s satisfaction, the action will move to the next one, which again requires no re-lighting, new camera setups, wardrobe changes, or makeup and hair. If any set or prop changes are necessary, they can be accomplished in a few minutes. Plan your bathroom breaks accordingly.

This high-density work can generate many GB of audio, so be sure to have a large amount of pre-formatted media on hand. Depending on your particular recorder, you may have your on-set archive on an internal or external hard drive, or a CF or SD card. Most productions want audio turned in on a flash memory card. SD cards are much cheaper than CF cards (and all those tiny fragile pins in the CF card socket scare me). If you are using a Deva with only CF card slots, consider an external SD dock plugged into the Deva’s FireWire port. Depending on the particular job, you may or may not be required to turn in the flash card(s) during the day or at wrap. The audio may be imported immediately and the card(s) returned to you, or they may be kept overnight or longer. Use only “name-brand” cards, as the wear-leveling algorithms on the cheap ones can cause premature failure, with the possible loss of all your data.
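Sizing the media stock is straightforward arithmetic for uncompressed multitrack BWF. A sketch, assuming 24-bit/48 kHz recording and an eight-track count as an example; actual track counts and sample rates vary by job.

```python
def gb_per_hour(tracks: int, bit_depth: int = 24, sample_rate: int = 48000) -> float:
    """Approximate uncompressed BWF data rate in gigabytes per hour."""
    bytes_per_second = tracks * (bit_depth // 8) * sample_rate
    return bytes_per_second * 3600 / 1e9

# e.g., a mix plus seven isos rolling more or less continuously:
print(round(gb_per_hour(8), 2))  # ~4.15 GB per recorded hour
```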

The director may have several options to monitor the scene during capture:

1. The live video from the reference cameras.

2. A crudely rendered live CGI frame, with a fixed POV chosen in advance.

3. Using a “virtual camera,” pioneered by Cameron on Avatar. This is a small, handheld flat-panel monitor equipped with reflective markers. The capture system knows its location and the direction it is pointed, and renders a live CGI frame from that POV and “lens size.” The director can treat it as a handheld camera, pointing it as though it were a real camera in the virtual world. Incidentally, the camera does not have to actually be pointed at the actors—the CGI world seen by the virtual camera can be rotated so that the camera can be aimed at an empty part of the stage to avoid distractions. Another feature of the virtual camera is a “proportionality control.” Set to 1:1, the camera acts like a handheld camera. At 10:1, raising the camera two feet creates a 20-foot crane shot. With a 100:1 ratio, it is possible to make aerial “flyover” shots, because the entire extent of the virtual world is available in the computer’s database.
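The proportionality control is, in effect, a simple scale factor applied to the operator’s physical motion. The sketch below reproduces the crane-shot arithmetic just described; the function name and units are illustrative.

```python
def virtual_camera_move(operator_move_ft: float, ratio: float) -> float:
    """Scale a physical camera move into the virtual world using the
    proportionality ratio."""
    return operator_move_ft * ratio

print(virtual_camera_move(2.0, 10.0))   # 20.0 ft: the 'crane shot' example
print(virtual_camera_move(2.0, 100.0))  # 200.0 ft: flyover territory
```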

When a virtual camera is in use on a multi-day shoot, the capture days may not be contiguous. After a certain amount of capture has been done, the main crew and cast may be put on hiatus while the director wanders around the empty capture stage with the scene data being played back repeatedly. The crudely rendered video will appear in the handheld monitor, from the POV of its current position. The director can then “shoot” coverage of the scene: master, close-ups, over-the-shoulders, stacked-profile tracking shots, etc. This procedure ensures that all the angles “work.” If not, the director has two options: re-capture the scene on another day; or fix the problem in the computer by dragging characters into the desired position and/or digitally rearranging the props, set or background.

If this is the case, you have two choices: wrap your gear at the end of each capture session, and load in and set up at the beginning of the next one; or leave your gear in place during your off day(s). The trade-off is between the extra work (and payroll time) of wrapping and setting up, and the danger of the theft of the gear, or your getting a last-minute call for another job on the idle day(s). If you elect to leave your equipment, see if you can get a “stand-by” rental payment. Even if this is only a token amount, it establishes a precedent, and you may be able to raise the rate on the next job.

Conclusion

In addition to on-the-job training, if you know another mixer who will let you visit a capture set, take advantage of the opportunity as soon as possible. I probably would not have survived the first day of my first capture job (Avatar) if it were not for Art Rochester, who kindly let me shadow him before he left the show. I also got many hours of coaching from William Kaplan, who mixed the show before Art, and let me use his regular Boom Op, Tommy Giordano, to help with the load-in and setup of my gear. Bill also sent his son Jessie to work with me on the set. If at all possible, hire a boom op who has capture experience. (Note to boom ops: list your capture experience in your 695 directory listing.)

I wish you an absence of bad luck, which is more important than good luck in this business.

Text and pictures © 2014 by James Tanenbaum, all rights reserved.
Avatar set photo ©2009 Twentieth Century Fox. All rights reserved.

Working With Jim Webb

by Andy Rovins, CAS

One day in 1981, while standing in line at a bank, I struck up a conversation with an older gentleman who said he was a retired Prop Master. When I replied that I was a Boom Operator, he said that his son, Chris McLaughlin, was a Boom Operator. “Really? Chris McLaughlin is revered among boom operators. He works with Jim Webb and gets equal billing with Jim as the sound team.” The next day, I got a call from Chris. “Who are you, and why are you saying nice things about me to my pop?” We chatted a bit about mikes and booms and stuff. “What do you like?” he asked. “A Schoeps is my favorite.” “We use an 815 on everything. We did All the President’s Men with one 815 underneath and won an Oscar.” You had to be spot-on with an 815 or it would sound funky; if you could handle one all the time, you were a real Boom Operator.

A few days later, Chris asked me if I wanted to work on pickups for One From the Heart. Their regular third, Jim Steube, was on vacation. I jumped at the opportunity. I got to work with this famous team on a film I’d been hearing about, with Francis Coppola directing from the Silverfish (a custom Airstream trailer stuffed with monitors and video gear), Vittorio Storaro’s lighting and Dean Tavoularis’ forced-perspective Las Vegas sets. I also got to meet Jim, whom I’d heard so much about. Jim was congenial and different from most Mixers I knew. He didn’t want to be near the set, but was content to cable in and give his Boom Operators autonomy. We did scenes with Teri Garr and Raul Julia, and one with Nastassja Kinski sitting in a big martini glass singing “Little Boy Blue.” Nastassja was a real flirt. I think every guy on the set had a crush on her.

At one point, Francis came on set and tried to talk Joanie Blum, the Script Supervisor, into directing the scene, but she wanted no part of it. I offered to do it, but Francis declined. “Who are you?” “I’m your new Sound Utilityman.” “Oh, yeah, I used to do that job.” He decided to direct it himself.

The last day, Chris felt ill so Jim told me I could boom. It was only a little announcement from Francis—they wanted to show the film to exhibitors, but the opticals weren’t done so it would have some slugs that he wanted to explain. Francis was late that day and we sat around. Finally, someone came up with an idea. Ron Garcia, DP for the pickups, looked kind of like Francis with his beard. So we sat Ron in a director’s chair holding a film can, while a prop guy dropped money into it from above, and Ron looked at the camera and said, “We will show no film before its time,” a goof on an Orson Welles wine commercial running at the time. I think Ron still has a print of it.

Jim brought me on some more projects after that. He would drive up in his white van, and we would pull out the Fisher boom—Jim was the only guy I knew who owned his own. He had it anodized black and changed out the platform wrench to a socket for faster action. When possible, Jim would mix from the van and we would run cable out. A cool thing about having our own Fisher was that we didn’t have to bargain for one; it was there if we needed it.

Jim got some interesting gigs in those days. He brought me on Get High on Yourself, an anti-drug special produced by Robert Evans as part of a plea bargain negotiated after an arrest for possession. It was a huge production with many bands and stars, including Ted Nugent, The Osmonds, Leif Garrett, Brooke Shields, Carol Burnett and Paul Newman. There were concerts and audience Q&As with lots of kids asking the celebrities questions. There was also a big production number with many stars and kids all singing the theme song in a style that would be mirrored by “We Are the World” a few years later. Jeff Fecteau and Chris Seidenglanz were the A2s and co-booms. There were many producers on that show; the event was kind of thrown together and disorganized but I think Jim thrived on being able to hold together challenges like that.

Jim liked my boom work, so he asked me to come along for another unique show—recording a concert by pianist Mona Golabek in a women’s prison in Chino, California. It was an odd scene. The prison had been built as a luxury resort in the ’20s, so it was all marble and stone floors with high-vaulted ceilings.

The acoustics were somewhat echoey, so we put up furny pads when we could, mic’d the pianos with some Shure dynamics and set up an SM58 for Mona. We rolled the Fisher out of Jim’s white van for me (with the 815) to mike questions and reactions from the prisoners.

It was somewhat incongruous: we had a classical pianist in a stately, beautiful building playing Chopin and Liszt for an audience of very rough-looking women in prison fatigues, but they were an appreciative group and seemed to regard the experience as a treat. I think Jim still has a recording of that show.

Jim is not just a great Sound Mixer. He’s also one of the best raconteurs I’ve ever met. Just sit down with him if you get a chance. He tells me that one of his favorite stories comes from Get High on Yourself when I was working that long 815 on the Fisher boom to get unscripted audience responses. To be part of Jim’s stories is a real honor!

Blood, Guts, Gore … and Chiggers
Behind the Boom of THE WALKING DEAD

With Robert ‘Max’ Maxfield

Photos by Gene Page, courtesy of AMC TV, except where otherwise noted

Did you know that chiggers don’t really burrow under your skin?! Nope, actually they grab onto a hair follicle, and inject a digestive enzyme into your skin cells. The enzyme ruptures your cells so that they can drink the resulting fluid containing a protein they need to grow. Your skin hardens around the area, forming a nice big red volcano-like sore. That enzyme-filled volcano keeps you itching for a good three to four days. And you’re almost never bitten by just one!

I came by this intimate knowledge of chiggers in the early fall of 2011 when I was invited to go down to Atlanta for three weeks to boom a show I’d never heard of called The Walking Dead. It was in the middle of Season Two, and I was told that another guy would come in after me to finish the last five weeks of the schedule. It’s not a good sign when you take over in the middle of a season and it’s even less promising when they’ve already scheduled another person to come in after you. I couldn’t shake the thought that I was just another piece of raw meat for zombie lunch. And the prospect of working nights on a project with blood, guts and gore (I’ve never really taken to B-movie horror) was not attractive. I had memories of working with slimy creatures in the early ’90s series Freddy’s Nightmares and wasn’t eager to revisit the experience. So, after a short deliberation, I told the Sound Mixer, “Thank you for inviting me, but I’m going to have to pass.”

Well, seven days went by and I still hadn’t booked anything for the following week, so I thought, “Heck, three weeks with a bunch of rotting corpses in sunny Georgia couldn’t be too disgusting, and it’s not like I have to eat lunch with them.” Like any Boy Scout film worker during lean times, I called the Sound Mixer back and asked, “Still looking for a good Boom Operator?” He said, “Yes, come on down.” I shook off the disquiet that they were only three days from needing someone and hadn’t yet filled the position. “Oh well, they’re paying me housing and per diem, plus a box rental and rental car … I’m outta here!”

Three days later, I was driving my rental car down a pitch-black country road at six o’clock in the morning, just outside the tiny rural town of Senoia, Georgia. The stages are situated in an old chemical plant on a dead-end road, one hour south of Atlanta in a thickly forested area that only chiggers could love. It’s shrouded by trees, stagnant ponds, railroad tracks and all of the little creatures that make for a great horror flick. I fought off the feeling of this being my worst nightmare.

I arrived to some good news. They told me that I was the ninth Boom Operator on the show since its inception a year prior. “You mean that in only 13 filmed episodes, you have been through nine Boom Operators!” “Yep,” the Sound Mixer said. This was not sitting well with me. By the end of that same Season Two, they would reach the milestone of 11 Boom Operators! To this day, they call me “Number 9, Number 9, Number 9…” There have also been several Mixers over the seasons, starting with my friend and supporter, Bartek Swiatek, CAS, a Local 695 colleague who left Georgia to move to California, and coming to the present day with Michael Clark, CAS.

Oh, and it turns out, I did have to eat with those zombie things. Nothing like lunch with a gooey corpse sitting across the table from me, spoon-feeding itself through displaced dentures into its black-and-blue prosthetic face—yummy. But, it’s those little tufts of half-dead hair that really creep me out.

The day before my arrival, they had filmed the Season Two farm scene where Rick, Shane and the others slaughtered the zombies that Herschel’s family had secretly kept in the barn. Our first setup had 12 cast members, spread 12 feet apart outside the barn, shuddering over the deaths of their kinfolk-turned-zombies. There were three cameras (a daily ritual) on three separate 30-foot lengths of dolly track that formed a large U around the actors. All of the cameras had long lenses. I was the solo Boom Operator, as the six remaining tracks on the Sound Devices 788T were allocated to the scripted speakers and the mix track. It was my first day with this Mixer, so I hoisted the boom, danced about the dollies and stretched with determination to prove that I could get some dialog; I wanted to stand out amongst the eight previous Boom Operators. My results seemed feeble, as I was only able to get a couple of lines. The Camera Operators and Dolly Grips were giving me funny looks like “What’s with the new guy? What number is he?” “Not the last,” said someone, “he’s walker bait … won’t last a week!” They all chuckled. What the hell had I gotten myself into?

Far in the back of the acting pack was Emily Kinney (playing Beth), who sobbed uncontrollably throughout the scene. She was not wired but she dominated each take with her emotional outcries. I mentioned that it would be best to pull a wire off someone I could cover with the boom and put that wire on her, but there was no enthusiasm for taking the time to make the transition. As the third take commenced, a loud jet entered the shooting zone. I immediately called for a hold, but the 1st AD cut me off. “We don’t hold for planes … roll sound!” The remaining three weeks of my stay were grueling, sweaty and filled with my first set of fluid-sucking chiggers.

Later, I learned that, due to time constraints, upper management restricted freedom to make corrections. The production schedule was so relentless that, at one time, they had adopted a policy of using radio microphones exclusively. They didn’t ever want to see a boom over any actor and were determined to fix any sound problems in Post. The Sound Mixer went on to tell me stories about how they would wait for “Roll Sound,” get the sticks and then, at the last second, slip the boom in for some of the close-ups.

The history of booming this show aside, there was a lot of pressure on me to boom some scenes because of challenges with wind, wardrobe, props and the active nature of the staging. One night, I had to boom a scene that took place in a tent. Both of the characters, Rick Grimes (Andy Lincoln) and Lori Grimes (Sarah Wayne Callies), entered the tent while talking and then disrobed and continued with their dialog. It was impossible to wire them, so I had to figure out a way to boom them in the tent. Do you have any idea how much space is available in a tent after two cameras, two operators and two assistants have been employed? Add in four apple boxes and two 4-foot sliders, and it’s really cramped. The only thing going for me was that they had to raise the side flap to position the cameras. There was barely enough room for me to insert a 12-foot boom pole with a Schoeps MK41 capsule on a GVC. I had to start the shot crouched down, yet standing, so as to reach the tent’s entrance. Fortunately, I was able to aim the microphone straight through the fabric to bring them into the tent talking. I was just millimeters from the cloth ceiling, so I had to be extremely careful not to whisk the microphone on the cloth, while keeping it equidistant to their mouths. After they entered and began taking off their clothes, I had to back up and get down on both knees. At one point, Rick delivered a couple lines looking away from Lori. I couldn’t possibly get them both, so I put a plant microphone on a nearby table, and boomed Lori until Rick turned back. The plant did its job. It was four o’clock in the morning, the last scene of the night, and I was exhausted. It was truly my best booming feat during the entire three weeks. But, as the Camera Operator, Michael Satrazemis, said that first week, “It’s a tough show, but that’s what makes it great.”

Obstacles and frustrations aside, I figured I better work hard, have patience and keep a good attitude. The actors were fabulous and supported my efforts from the beginning. In fact, I remember the Sound Mixer telling me, when he was trying to entice me to do the show, that the actors were very warm and accommodating and kept him motivated to do good work. People like Andy Lincoln (as Rick Grimes), Norman Reedus (as Daryl), Scott Wilson (as Hershel), IronE Singleton (as T-Dog), Jeffrey DeMunn (as Dale), Lauren Cohan (as Maggie) and Steven Yeun (as Glenn) would come up to me and give me a good-morning hug. I hardly knew these folks, and they welcomed me like family. Jeffrey DeMunn said it first, and he said it the most: “WE are the Walking Dead.” WE, the cast, crew and above-the-line executives, ARE THE WALKING DEAD! It’s still true of the cast to this day. The Georgia heat, the remote locations, the grueling production schedule, the absence of zombie hygiene, and the chiggers make this a very difficult show, but the spirit the actors bring to the project keeps the crew working together as a team.

Yet I still wasn’t convinced that I wanted to be a part of it when I was asked to join Season Three full time. I had doubts, so many, in fact, that I said, “NO.” I continued to say, “NO” for about four weeks. The thing that really turned me around was the fact that the Sound Mixer went to the wall to get me a rate I couldn’t refuse. Yep, it came down to money. But now, after two full seasons, I look back, and I look forward, and I confess, it isn’t the money that makes working on The Walking Dead worth it, it’s the family spirit. It’s the excitement of being part of one of the most amazing TV shows ever.

The setting of this TV series is unique in character, in that it takes place in a post-apocalyptic world. There is no electricity or running water, no trains, no planes, only a few cars and so far, no boat. We do have one obnoxious motorcycle, and usually we can get Norman (as Daryl) to turn it off before he speaks, but sometimes this is logistically impossible. A post-apocalyptic world is a quiet world. But, we shoot in rural Georgia. We have highways, lots of trains, farm implements, bustling towns and our studios are right in the approaching pathway of Atlanta Hartsfield International Airport. It’s not easy recording dead quiet takes in our modern world.

Locations are often deep in the woods, on rarely traveled dirt roads, abandoned railroad tracks and around brush-shrouded ponds. This means that we have to load our equipment onto flatbed trailers and get pulled by four-wheel-drive vehicles to our locations. When the going gets rough, we all pitch in to get it down. And when a deluge of rain comes in, we all take the hit on another sloppy mud fest to get ourselves out of the swamp bog.

Many of these locations lie along the unused rail tracks that serviced the now-abandoned enterprises in this part of rural Georgia. The Construction Department built several wooden carts to help move gear along the tracks. They’re very helpful when they work, but they often break down and we’re forced to remove our carts and roll or drag them along the gravel rail-beds adjacent to the tracks. Zombie apocalypses don’t generally occur right next to the Walmart, so we often need to haul the gear a considerable distance. That the carts are wobbly and tend to squeak doesn’t really bother us except when they are pressed into service as camera dollies. Then the noise of the cart, layered with the sound of grips pushing it on the gravel rail-bed, does make recording a clean track difficult.

Actors on The Walking Dead roll around, run, shout, yell, fight, whisper, snap their heads from side to side, kneel, bend over and swing lots of props (guns, knives, katanas, crossbows, backpacks, hammers, crow bars, bottles, etc.), all in the same setup. And, they do it amongst trees, vines, creeks, tall grass, railroad tracks, rubble, fences, furniture and the like. This constant activity makes booming the show a unique experience; my legwork has never been more tested. The dizzying array of props needed to combat zombies forces us to be creative in radio microphone placement. We’ve rigged collars, hats, hair, the props themselves and everywhere else imaginable.

Anyone who has viewed the show is well aware of how location-driven the sets are. We work in the forest a lot. We work on gravel roads a lot. We work in fields a lot. We work with the elements a lot. But we also work a lot indoors. The difference is, 99.9% of our locations, wherever they may be, are filthy. They’re dusted, shredded, destroyed, trashed, wetted, burned and pillaged. Everything is dirty on The Walking Dead. The only good thing, as far as sound goes, is that the mills and plants that serve as our sets are typically vacant and out of business. Turning off noisy appliances is not so much an issue with us.

But, the sheer volume of filth complicates placing microphones on the actors. In fact, the costumers go to great lengths to pat them down with blood, dust, dirt and oil. Blood and oil are the real test. And this show really uses blood—lots of blood—gallons of blood. In fact, there’s something like 10 types of blood: live blood, dead blood, real dead blood, drippy blood, gooey blood, thick blood, blue blood, black blood, it’s bloody unbelievable!

As much as we all love Joe’s clear butyl, it doesn’t work on bloody and oily clothes. Perspiration doesn’t make such a good friend either. We end up having to sew most of the lavaliers into their clothes, especially during the warm months. This takes extra time. Fortunately, over the seasons we have conditioned the production staff to bring the actors to us extra early. Unfortunately, they have to be un-sewn whenever there is a technical issue with the lavalier, or when there is a wardrobe change. To avoid the need for re-sewing, we do tests, placing the microphone in the proposed position and having the actor go through anticipated body motions. Sometimes we’ll wire additional wardrobe in advance. We’ve wired as many as three shirts at one time. On this show, pre-planning is imperative, because we almost never do rehearsals, and time is so precious. Then, at wrap, the actors must come to us after a grueling day of rolling and fighting their way through the zombie apocalypse to have their microphones extracted from their costumes.

The sun provides its own little challenge. Ordinarily, “fly swatters” (20 x 20 diffusion frames rigged onto an overhead condor) would soften the shadows on exterior shots but they aren’t used on The Walking Dead. The difficulty of getting that kind of gear a quarter mile down rail tracks to a shooting location precludes their use. Sometimes this forces us to use two booms to capture shadow-free dialog from a single player.

Much of The Walking Dead is shot in the summer. Georgia is sweltering hot with singing cicadas, croaking frogs and stinging sunburns on the back of your neck. Every time I wring out my shirt, I have occasion to remember the line from David Lynch’s Wild at Heart, with Laura Dern and Nicolas Cage: “You’re as hot as Georgia asphalt.” Summer is when the chiggers, ticks, mosquitoes and spiders are most plentiful. Sometimes, before we even arrive on set, we drench our bare legs, arms, necks and midsections with a not-so-healthy dose of DEET. It stings a bit at first but really does the job. We don’t like it, but it beats scratching for days on end until your flesh comes off. Ahhh, the glamour of Hollywood.

The Directors and Directors of Photography make full use, as they should, of all the visual tools at their disposal. They use Steadicams, GoPros, DSLRs, below-ground positions, hidden cameras, long custom-made sliders to go among vines and trees, high-angle crane shots and every device imaginable to achieve an expressive image. Typically, several of these elements will be combined for multiple views of the action. This adds to the challenge of getting good dialog tracks.

Episode 405, “Internment,” is illustrative. The Director, David Boyd, one of the former DPs on the show, believes in guerrilla-style filmmaking, using multiple cameras in obscure positions. The episode took place mostly in the prison cells, where Scott Wilson (Hershel) would tend to the near-death patients. These cells are really only about 10 feet by 10 feet with a bunk bed on one side. Director Boyd staged the scene with three actors, three cameras and two operators. Radios couldn’t be used because the actors had blood on their chests and air masks on their faces, so my assignment was to squeeze into the cell with everyone else and get the dialog. My regular position in these scenes was either standing on the upper bunk or squeezed between an operator and the wall, only inches from the talent.

These same cells presented one of our biggest acoustic challenges. Although they were prison cells, and ought to sound like prison cells, they were really made of wood. We used furniture pads on the walls, and acoustic tiles in the corners when they wouldn’t be seen, but many times we couldn’t control things. Fortunately, the reverberant effects that Post Production added fixed the problem, and made for some very interesting character effects.

Despite these extraordinary elements, the soundtracks have been getting better and better. This is due to a solid proactive plan, teamwork, ample first-rate equipment and excellent execution. Production Sound Mixer Michael P. Clark, CAS leads us through the process by being very involved. Days in advance he will be analyzing scripts, talking with decision makers, preparing equipment and contemplating solutions. And his mixing skills are sharp, clean and logical. Dennis Sanborn, the Utility person, is assertively proactive in preparing equipment, securing locations and most importantly, wiring all of the actors. He is both skillful and resourceful. As arduous as recording sound is on The Walking Dead, these guys step up to the challenge of working on one of the most difficult shows in television with grace and determination.

I’ve never before been part of something so deep, difficult and complex as the process of making this show. More than any project I have ever worked on, the shared sentiment that “We Are the Walking Dead” makes this one of the most remarkable career experiences that I have ever had. It has truly changed my life … and my career.

Recording Captain Phillips and Gravity

by Chris Munro, CAS

It sounds somewhat ungrateful to complain about being nominated for two films in the same year. Though I was honored to receive both BAFTA and Academy Awards for Gravity, a part of me was disappointed that Captain Phillips had not been equally recognized.

These are two very different films with different challenges for production sound. Gravity was completely different from anything I had done before, whereas Captain Phillips is a prime example of how drawing on previous experience enables us to be better at what we do. Having worked with Paul Greengrass on United 93, the film about the terrorist takeover of a passenger jet on 9/11, I knew that Paul likes to shoot in a documentary style, with no rehearsal and a lot of improvisation, and to cast non-actors in key roles. When I came to work with Paul again, on Captain Phillips, this experience was vital but we now had the added issues of shooting at sea on a container ship, a lifeboat and in the Somali skiffs.

Having worked on five James Bond films, I was no stranger to action sequences involving water, especially the boat-chase sequences on Quantum of Solace filmed in Panama. On Captain Phillips, I needed waterproof lavalier microphones that also sounded good out of the water so I chose to use Da-Cappo DA04s (now Que Audio performance series in the USA). These are very popular in theater because of their very small size but have great waterproof qualities due to the inlet size being smaller than a droplet of water. I mounted them upside down so that no water settled on the microphone. I had to develop a system for getting longer-range reception for recording in the high-powered pirate skiffs. I used Audio 2040 mini-tx radios in aquapacs on the pirates. The receivers were built into secret compartments in the skiffs where audio was recorded and re-transmitted to the bigger boat that we were all on. We were regularly recording up to 16 tracks and feeding a mix to Video Assist, the Director and Camera Operators. I recently wrapped on In the Heart of the Sea with Ron Howard, where again I was able to use what I had learnt. Months before I started on the film I said to the boat builders, “I need you to build these secret compartments…”

On Captain Phillips, we were based in Malta on a container ship, which was our studio for much of the film. Each department had a base in one or more of the containers to store equipment and carry out any maintenance. We still needed to be highly portable as we would shoot inside the ship, perhaps in the engine room or cabins while heading out to sea and returning to port, and shoot on decks and the bridge when at sea. There were a lot of stairs, and some passageways were very narrow. Generally, we were shooting multi-camera without rehearsals and all with improvised dialog, sometimes with the scene playing out between several groups in different parts of the ship.

We were limited in the number of crew on the ship, but I was very fortunate to have a great crew with my usual UK Boom Operator, Steve Finn, and tech support from Jim McBride. Tim Fraser recorded 2nd Unit in Malta and in Morocco, and Pud Cussack looked after Boston and Virginia.

Oliver Tarney was Supervising Sound Editor. I had also worked with him on United 93 and the two Sherlock Holmes films with Guy Ritchie. One of the best things we were able to do was to get Oliver to spend a weekend with us on the ship recording sound FX. Not only did he get the FX that he needed, but he also got to experience the ship and to understand how it should sound at sea and its geography. He also got to experience being in the lifeboat—known by us as the vomit vessel—certainly not a pleasure craft!

Chris Burdon and Mark Taylor were Re-recording Mixers; I’ve worked with both on previous films.

Gravity was a completely different experience from anything I had previously worked on. When I first got the call and was told that there were only two actors in the film and that there is no sound in space, it sounded like the perfect job! Then when I met Alfonso Cuarón and he started to talk about his ideas for the film, I was hooked and immediately knew that this was going to be something special. Every few years there is a film that breaks the technological boundaries—this year it was Gravity. The first issue was that both the cameras and the actors could be on robotic arms. I had recently shot a small sequence with these and knew that, although the arms could move without too much noise, the associated power supplies and controllers were very noisy. So the first job was to negotiate that these could be extended and built into blimps far away from the action.

We had a very comprehensive previz of the film that we worked to. The previz helped us keep the VFX elements, still being designed, in sync with lighting, camera moves and sound. I had originally thought that we might be able to lock everything to the same timecode but, for a number of reasons, timecode wasn’t always practical as the controller. Touch Designer was used to control the robots and as a visual platform, sending MIDI triggers for us to sync to.
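
For readers curious what syncing to one of those triggers can look like in software, here is a minimal sketch using the open-source mido library. The port name and note number are hypothetical, and this illustrates the general idea only, not the production’s actual tooling.

```python
# Minimal sketch: block until an agreed MIDI trigger arrives, then hand
# back a timestamp so playback or recording can start on that cue.
# Assumes the open-source "mido" library; port name and note number are
# invented for illustration.
import time
import mido

CUE_NOTE = 60  # hypothetical note assigned to "start of move"

def wait_for_trigger(port_name="TouchDesigner Out"):
    """Return a monotonic timestamp when the trigger note arrives."""
    with mido.open_input(port_name) as port:
        for msg in port:  # blocks until each message arrives
            if msg.type == "note_on" and msg.note == CUE_NOTE and msg.velocity > 0:
                return time.monotonic()

if __name__ == "__main__":
    t0 = wait_for_trigger()
    print(f"Trigger at {t0:.3f}s -- roll cue here")
```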

Alfonso Cuarón originally had a plan for all of the radio conversations and OS dialog to be live, and we had planned to have different rooms in the studio for those to be performances. However, due to artist availability and other issues, this proved to be impractical so we prerecorded as much as we could. Most of the pre-records were guides that were re-recorded as ADR in Post Production.

Will Towers was our Pro Tools operator. He made loops of the lines that we could play from a keyboard. The idea was that each line was on a separate loop, and there were alternative performances available for the on-screen actor to react and interact with. We would use different performances and adjust the timing for each take to create spontaneity, while still making sure that certain lines landed at the correct frames allocated in the previz. All film is a collaboration, but on this film I was collaborating more with VFX and the actor than ever before. It was also necessary for us to work very closely with Editorial as the film took shape and timing parameters or dialog constantly changed.
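
The looping scheme is easy to picture in code. Below is a toy sketch of the idea only (the real work was done in Pro Tools): each scripted line maps to several alternate takes, and each trigger serves up a different performance. The line key and file names are invented.

```python
# Toy sketch of keyboard-triggered line loops: every scripted line maps
# to several alternate takes, and each trigger returns a different one
# so the off-screen voice never feels canned.
import itertools

class LineBank:
    def __init__(self, takes_by_line):
        # Cycle through the alternates so repeated triggers vary the read.
        self._cycles = {line: itertools.cycle(takes)
                        for line, takes in takes_by_line.items()}

    def next_take(self, line):
        """Return the audio file for the next alternate performance."""
        return next(self._cycles[line])

bank = LineBank({
    "line_42": ["line_42_t1.wav", "line_42_t2.wav", "line_42_t3.wav"],
})
print(bank.next_take("line_42"))  # -> line_42_t1.wav
print(bank.next_take("line_42"))  # -> line_42_t2.wav
```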

Here was another opportunity to use the Da-Cappo microphones—this time because of their very small size. The main microphone was a Da-Cappo capsule that Jim McBride, our tech support engineer, had fashioned to an arm connected to the inner helmet, with a latex shield that we made both for visual accuracy and to reject noise from outside the helmet. A second Sanken COS-11 was sewn into the inner helmet, as were earpieces for communication. We also had in-ear molds made for some scenes. Each different piece of headgear that Sandra Bullock wears in the film contained practical microphones and earpieces. Even the classic Russian headset that she uses at one point has a built-in transmitter and receiver. We achieved this by borrowing bare 2040 mini-transmitter boards from Audio Ltd. and building them into headsets.

I used a Cedar DNS1500 during shooting to reduce some of the fan noise from the LED lighting rig and the robotic arms. This was only on one mix track. The iso tracks and another mix track were left unprocessed.

The communication system could rival NASA Mission Control at Houston. In addition to feeding scripted lines that the actors would respond to, we also played atmospheric sounds to Sandra to set the mood for each sequence. Additionally, we played loops of her breathing from the preceding or following shots so that she was able to get the correct breathing rhythm for the shot. Often the shot could start at one pace but finish with breathing at another pace so it was important that we were able to give the correct breathing rhythms throughout the shot.

The Director and the 1st AD needed to be able to communicate with the actors and DP, Camera and other departments without distracting the actors when giving technical cues. The costumes and helmets so completely isolated the actors that they needed an audio feed both to hear each other and also to hear their own voices. Allowing them to hear themselves, but at a reduced level to avoid distraction, required a second layer of IFB feed to each.

Sandra Bullock and George Clooney could often be in rigs for hours on end so, as well as providing a system for them to communicate with each other, I also ran a kind of mini-radio station to play music, YouTube clips or anything to keep them entertained between shots. Sandra Bullock has often said that she had never previously had such interaction with the Sound Department yet we were at opposite sides of a dark stage for weeks on end. It was during one particular break during shooting that I discovered that both Sandra and George knew all the words to “Rapper’s Delight” and could sing a pretty good version!

You could be forgiven for thinking that most of Gravity was created in Post Production but, in fact, much of the shooting was oddly conventional. We had six weeks of pre-shoot, 12 weeks of principal photography and two weeks of additional photography, all with sound. Some of the sequences were shot on actual sets and boomed! For every shot, the DP concentrated on the camera angle and how the actor was lit. The Director concentrated on getting the performance that he needed and the Sound Department concentrated on capturing that performance the same way that we all do on every movie.


Glossary of highlighted words

Previz: Essentially an animated storyboard, a previz video shows a rough rendition of all the elements and special effects in a sequence so every department can see how it all fits together.

Touch Designer: A software program that facilitates production of animated videos and graphic sequences.

P-Cap, MoCap and All That Jazz / Part 1

by Jim Tanenbaum, CAS

As sound people, we live in (according to the old Chinese curse) interesting times. Our technology is advancing at an exponential rate … with a very large exponent. The analog Nagra ¼-inch reel-to-reel tape recorder was used on almost all of the world’s movies for more than thirty years. Then DAT (Digital Audio Tape) cassette recorders (though more than one brand) held sway for another ten. Hard-drive recorders (I beta-tested a Deva I) led the race for five years, then DVD optical-disc recorders (albeit still with an internal HDD) for only three. Sony’s MiniDisc unit never made significant inroads in production recording. Now we’re using flash-memory cards, and I’m surprised they’ve held on for more than a year, but the original choice of CF cards seems to be giving way to SD ones (except for Zaxcom). Next year?

But it is not only the technology that is changing—so is the product. Made-for-Internet drama or documentary shows aren’t that much different from their predecessors, but reality shows certainly are a new breed: dozens of radio-mic’d people running around, in and out of multiple cameras’ view, and in and out of multiple Production Mixers’ receiver range. Fortunately, we have Zaxcom transmitters with onboard recorders. Still, things aren’t that different.

But “capture” shoots are. Almost entirely different from anything that has gone before. And capture for Computer-Generated Image (CGI) characters (sometimes called “virtual characters”) is different than capture for live-action shoots. Also, Motion Capture (MoCap) is different from Motion Control (MoCon), though these two techniques are sometimes used together, along with Motion Tracking (MoTrac). And then there is Performance-Capture (P-Cap). They will be described in this order: CGI MoCap, P-Cap, live-action MoCap, MoCon, and MoTrac. Following that, working conditions and esthetics for all types will be discussed.

So now, for those of you who have yet to work on a capture job, here is a primer (pronounced “prim-er”; not “pry-mer”). The rest will be on-the-job training.

CGI MoCap

For starters, the capture stage is called a “volume”—because it is—a three-dimensional volume where the position and movement of the actors (often called “performers”) and their props are tracked and recorded as so many bits. Many, many bits—often terabytes of bits. You can expect to record many gigabytes of audio per day.

The stage containing the volume has an array of video cameras, often a hundred or more, lining the walls and ceiling, every one interconnected with a massive computer. Each camera has a light source next to, or surrounding, its lens, which special reflective markers on the actors will reflect back to that particular camera only. This is known as a “passive” system, because the markers do not emit any light of their own. The camera lights may be regular incandescents or LEDs, with white, red, or infrared output. More about that later.

The cameras are mounted either directly on the walls and ceiling, or on a latticework of metal columns and trusses. WARNING: It is vitally important not to touch these cameras or their supporting structure. If you do, you must immediately notify the capture techs so that they can check to see if the volume needs to be recalibrated.

The actors/performers wear black stretch leotards studded with reflective dots. The material is retro-reflective, which means it reflects almost all the light back in the direction it came from, in most cases utilizing tiny glass spheres. Scotchlite™ is a typical example, used on license plates, street pavement stripes, and clothing. For use with the capture suits, the reflective material is in the form of pea-sized spheres, mounted on short stalks to increase their visibility from a wider angle. The other end of the stalk terminates in a small disc of Velcro™ hooks, so it can be attached anywhere on the capture suit’s fabric.

As an aid in editing, the capture suit usually has a label indicating the character’s name. Hands and/or feet may be color-coded to distinguish left from right.

The markers in the image above are glowing because a flash was used when the picture was taken. The camera was very far away, and the stage lighting completely washed out the light from the strobe on the people and objects, but the markers reflected most of the flash back to the camera lens.

Capture cameras mounted on more rigid columns, but still subject to displacement if hit. [Formerly Giant Studios, now Digital Domain’s Playa Vista, California, stages]

If MoCap is to be used on the actors’ faces, smaller, BB-sized reflective spheres are glued directly to the skin, sometimes in the hundreds. When too many have fallen off, work stops until they can be replaced, a process that takes some time because they must be precisely positioned.

Props and certain parts of any sets or set dressing (particularly those that move, like doors) also get reflective markers. Unlike “real” movies, props and set dressing do not have to look like their CGI counterparts; only certain dimensions need to match. They are often thrown together from apple boxes, grip stands, and “found” objects, and may be noisy.

Here is a description of the mechanics of MoCap.

The floor of the volume is marked off in a grid pattern, with each cell about five feet square. This array serves two purposes: first, it allows the “virtual world” in the computer to be precisely aligned with the real world; and second, it allows for the accurate positioning of actors, props, sets, and floor contour modules.

The capture process is not like conventional imaging—there are no camera angles or frame sizes. The position and motion of every “markered” element is simultaneously recorded in three-dimensional space. Once the Director is satisfied with the actors’ performances in a scene, the capturing of the scene is finished. Later on, the Director can render the scene from any, and as many, POVs and “focal lengths” as he or she wishes.

But for this to be possible, every actor must be visible to (most of) the capture cameras at all times. This means that there must not be any large opaque surfaces or objects to block the cameras’ view. If there need to be physical items in the volume for the actors to interact with, they must be “transparent.” But glass or plastic sheets can’t be used, because refraction will distort the positions of markers behind them as seen by the cameras. Instead, surfaces are usually made out of wire mesh or screening, e.g., a house will have thin metal tubing outlining the doors and windows (to properly position the actors), with wire mesh walls (so the actors don’t accidentally walk through them). In the virtual world, seen from a POV at some distance from the house, the walls will be solid and opaque, but as the POV is moved closer, at some point it will pass through the “wall” and now everything in the room is visible. Tree trunks can be cylinders of chicken-wire fencing, with strands of hanging moss simulated by dangling strings.

Props need only be the same size, overall shape, and weight, to keep the actions of the actors handling them correct. They will have a number of reflective markers distributed over their surface. Live animals, if not the actual living version, are made as life-size dolls with articulated limbs and appropriate markers, and puppeted by human operators. This gives the actor something “living” to interact with.

Since the motions and positions are captured in three dimensions, if the ground or floor in the virtual world is not flat and/or level like the volume’s stage floor, the bottom of the volume must be contoured to match it. This is done by positioning platform modules on the grid squares to adjust the surface accordingly. (More about this later.)

It is necessary to precisely align the real world of the capture volume with the CGI virtual world in the computer; otherwise, parts of the CGI characters’ bodies may become embedded in “solid” surfaces. The first step in this process involves a “gnomon” (pointer) that exists in both the real and virtual worlds.

The gnomon has three arms at right angles to each other, tipped with reflective markers to allow the MoCap system to create its CGI doppelganger in the virtual world. To align a real table with its “twin” in the virtual world, the gnomon is placed at one of the real table’s corners, and then the table is moved in the volume until the virtual gnomon is exactly positioned on the corresponding corner of the CGI table. This is usually the simplest method. Another possibility is to go into the virtual world and mouse-drag the CGI table until it lines up with the virtual gnomon. The entire virtual world could also be dragged to position the table, but this might throw other objects out of alignment. Global position shifts like that are limited to adjusting the virtual ground with the volume floor after the contour modules are in place.
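
The arithmetic behind that final mouse-drag is simple. Here is a toy version in Python with numpy, using made-up coordinates; real capture systems perform this alignment with their own calibrated tooling, not user code.

```python
# Toy version of the gnomon alignment: the translation that drags a CGI
# object onto its real-world twin is just the difference between the
# gnomon's position and the matching corner of the virtual object.
# Coordinates are invented for illustration.
import numpy as np

def alignment_offset(gnomon_corner, virtual_corner):
    """Translation to apply to the CGI object (or its whole world)."""
    return gnomon_corner - virtual_corner

real = np.array([10.0, 0.0, 15.0])      # gnomon at the real table's corner (feet)
virtual = np.array([9.5, 0.0, 15.25])   # matching corner of the CGI table
print(alignment_offset(real, virtual))  # -> [ 0.5   0.   -0.25]
```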

Real-world alignment gnomon and “transparent” table with wire-mesh surfaces. (Photo: ‘AVATAR’ ©2009 Twentieth Century Fox. All rights reserved)

Multiple conventional HD video cameras are used in the volume for “reference.” These cameras cover the scene in wide shots and close-ups on each character. This allows the Director to judge an actor’s performance before the data is rendered into the animated character. A secondary function is to sort out body parts when the MoCap system gets confused and an arm sprouts out of a CGI character’s head. Looking at the reference shot, the Editor can figure out to whom it belongs, and mouse-drag it back into its proper place. In most stages, the cameras are hard-wired into the system so they have house-sync TC and do not normally require TC slating. They may use DV cassettes and/or send the video directly into the system.

Until a few years ago, it was not possible to see the CGI characters in real time, but now Autodesk MotionBuilder™ software allows real-time rendering, albeit in limited resolution. Warning: The flat-panel monitors on the stage have cooling fans that may need to be muffled or baffled. Video projectors’ fans are even louder.

Lighting in the volume is very uniform, soft and non-source, to ensure that the reference cameras always have a well-illuminated image. In addition, having no point-source lights ensures that there will be few, if any, specular (spot-like) reflections that might confuse the MoCap system’s cameras.

To capture motion effectively, the system must measure the marker positions at least twice as fast as the temporal resolution required. For 24-frame applications, this means a minimum 48 Hz rate. Currently, much higher rates are used, 120 Hz to 240 Hz. If “motion blur” is desired, it can be created in Post.
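
That requirement is just the Nyquist criterion applied to marker sampling; as a quick sanity check:

```python
# Nyquist-style sanity check for marker sampling: the capture rate must
# be at least twice the temporal resolution the final frames require.
def min_capture_rate_hz(frame_rate_fps):
    return 2 * frame_rate_fps

print(min_capture_rate_hz(24))  # -> 48 (minimum; stages actually run 120-240 Hz)
```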

P-Cap

Motion Capture was developed first, and initially captured only the gross motions of the actor’s body. The facial features were animated later, by human operators who used mouse clicks and drags. Then, smaller, BB-sized reflective balls were glued to the faces, in an attempt to capture some of the expressions there. Unfortunately, this process couldn’t capture the movement of the eyes, or the tongue, or any skin wrinkles that formed. And since the “life” of a character is in the face, these early CGI creations failed the “Uncanny Valley” test.

It turns out that human beings evolved a built-in warning system to detect people that weren’t quite “right.” Back in the “cave people” days, subtle clues in a person’s appearance or actions were an indication of a disease or mental impairment that could be dangerous to your continued good health or even your very existence.

Multiple hard-wired HD reference cameras (although these have DV cassettes as well). (Photo: ‘AVATAR’ ©2009 Twentieth Century Fox. All rights reserved)

A graph of the “realism” of a character versus its acceptability starts at the lower left with obvious cartoon figures and slowly rises as the point moves to the right with increasing realism. But before the character’s image reaches a peak at the right edge, where photographic images of actual human beings fall, it turns sharply downward into the valley, and only climbs out as the character becomes “photo-realistic.” Even an image of a real human corpse (possible disease transmission) is in the valley, as would be that of a super-realistic zombie.

When you watch a Mickey Mouse cartoon, you know the character isn’t “real,” so its completely “inhuman” appearance is not a problem. Likewise, when you watch a live-action movie, the characters are real, so again there are no warning bells going off in your brain.

Current computer-animated cartoons like Despicable Me or Mars Needs Moms don’t have a problem because their “human” characters are so obviously caricatures. The trouble began when CGI characters developed to the point of being “almost” human, and started the descent into the uncanny valley. The 2001 video-game-based movie Final Fantasy: The Spirits Within was the first attempt at a “photo-realistic” CGI feature movie using MoCap. Although an amazing piece of work for its time, it didn’t succeed visually or at the box office. But it didn’t quite fall over the precipice into the uncanny valley, either. The characters’ faces all had that “stretchy rubber” look when they moved, the motion of their eyes and mouths weren’t close enough to human, and most of their exposed body parts (except for hair, which was quite good) were rigid and doll-like, moving only at the joints. It still was “only” video game animation, and back then, nobody expected that to be real.

The 2004 feature The Polar Express had an intentionally non-realistic, stylized look to its settings and characters, but since the MoCap process was used, the characters’ now much more realistic motions caused a slight uneasiness among some viewers.

It wasn’t until Beowulf (2007) that CGI capabilities increased to the “almost photo-realistic” level and a larger portion of the audience was disturbed, albeit subliminally, by the characters being in the uncanny valley. Mainly, the characters’ eyes were mostly “dead,” moving only on cue to look at another character, and never exhibiting the minor random movements that real, living eyes make continuously. The interior details of their mouths were also deficient.

Interestingly, the same capture volume that was used for The Polar Express and Beowulf was also used for Avatar (2009), but only after James Cameron spent a great deal of time and money to upgrade the system. Avatar successfully crossed the uncanny valley because the facial-capture cameras worn by the actors allowed for the recording and reproducing of accurate eye and mouth movements, and the formation and elimination of skin wrinkles. “Edge-detection” software made this possible. Thus was born the “Performance Capture” version of MoCap.

P-Cap volumes have the same soft, non-directional lighting as MoCap, plus additional lights mounted next to the facial capture cameras to make sure the face is never shadowed. Avatar used a single CCD-chip camera mounted on a strut directly in front of the performer’s face, and many systems still use this configuration. To avoid having the distraction of an object continuously in the actor’s line of sight, by the time A Christmas Carol went into production in 2009, four cameras were used, mounted at the sides of the face, and their images were rectified and stitched together in the computer.

At the beginning of the production of Avatar, Cameron used a live microwave feed from the face camera to “paint” the actor’s human eyes and mouth onto the CGI Na’vi’s face as an aid to judging performance. But after a while, this proved not to be that useful and was discontinued.

Face-Only P-Cap

For certain action scenes, the actors cannot safely wear a camera head rig. For these situations, only the body markers are used, and conventional MoCap is employed. Sound is recorded with a boom mike or wireless mike with a body-mounted lavalier, but will (normally) serve only as cue-track. Afterward, P-Cap techniques will be used to capture the face and dialog. If the director does not automatically ask for it, I recommend that you suggest he or she have the actors attempt to reproduce their body motions from the MoCap sessions as accurately as possible, because this will induce a form of realistic stress to their voices. These setups should be mic’d in the same manner as the rest of the project.

Alternate Techniques for Face-Only P-Cap

The capture infrastructure is continuously evolving, and several new technologies are emerging. Unfortunately, because of NDAs (Non-Disclosure Agreements), I cannot describe the projects I worked on in any detail. The information here comes from public sources such as Cinefex magazine and Wikipedia.org.

Real-time LIDAR (LIght Detection And Ranging) scanning is used to measure the shape and position of the performer’s head, down to sub-millimeter resolution. (This technique is also used to capture CGI data from large motionless objects like buildings, statues, vehicles, etc.)

Real-time multiple-camera, multiple-angle views are used to compute 3-D data from the different 2-D images of the performer’s face.

For both of these, you must usually keep the mike, the boom, and their shadows out of the working volume.

Live-Action MoCap

Live-action scenes, often shot against green- or blue-screen backings, need to have dramatic, sourced lighting. There are also many shiny wardrobe items and props, some of which even emit light themselves, and all these would confuse the passive MoCap system. Exterior scenes shot in direct sunlight can completely wash out the reflected capture-camera lights. For all these reasons, the reflective marker passive system cannot be used. Instead, “active” markers are used. These are larger, ½- to 1-inch cubes, with an LED array on each visible side. The markers emit a pattern of light pulses, either red or infrared, to uniquely identify each individual marker. Externally mounted markers that are visible in a shot can be eliminated with “wire-removal” software in Post. Infrared markers may sometimes be concealed under clothing to avoid this extra step, along with its attendant time and cost.

MoCon

Motion Control was developed long before any capture processes. A camera was mounted on a movable multi-axis platform that ran on tracks, and had sensors to record its motion, position, and lens settings. The initial shot was made by a human operator; then the subsequent ones could be made by playing back the recorded data and using it to control servo motors that duplicated whatever dolly, pan, tilt, zoom, focus, etc., moves were made the first time. This allowed “in-camera” compositing of multiple scene elements without the need for optical film work in Post, with the attendant problems of generation loss, color shifts, etc. A typical use would be to shoot a night scene of model buildings with illuminated windows on a large outdoor model city street. To get uniform illumination, the tracking shot past the buildings is shot in daylight, with the camera stopped down to reduce the exposure. Making the windows read in direct sunlight would require impossibly intense (and hot) lights, so instead a second, matching pass is made at night with the lens opened up, so that low-wattage bulbs will provide the proper exposure. The original Star Wars movies used this method extensively. While this system is still in use, it is now possible to use markers to track camera position, particularly with handheld cameras.
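
Stripped of the servo hardware, that record-then-replay idea reduces to a few lines. The sketch below is illustrative only; read_axes and drive_axes stand in for whatever encoder and motor interfaces a real rig exposes.

```python
# Illustration of the motion-control principle: sample the operator's
# first pass at a fixed rate, then step the servos through the same
# timestamped values on every later pass. The axis callables are
# hypothetical stand-ins for real rig hardware.
import time

def record_pass(read_axes, duration_s, rate_hz=100):
    """Sample axis values (dolly, pan, tilt, focus...) at a fixed rate."""
    samples, dt = [], 1.0 / rate_hz
    t0 = time.monotonic()
    while time.monotonic() - t0 < duration_s:
        samples.append((time.monotonic() - t0, read_axes()))
        time.sleep(dt)
    return samples

def replay_pass(samples, drive_axes):
    """Feed the recorded move back with the original timing."""
    t0 = time.monotonic()
    for t, axes in samples:
        while time.monotonic() - t0 < t:
            time.sleep(0.001)
        drive_axes(axes)
```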

MoTrac

Motion Control requires a large amount of expensive equipment, but now that computers have become so much more powerful, digital manipulation can accomplish some, but not all, of the tasks formerly done with MoCon. And of course, many that were impossible with MoCon. And sometimes MoTrac can be used instead of MoCap to record camera positions and moves.

MoTrac has two main applications. First, green- and blue-screen work where there will be camera moves that must be coordinated with an added background plate. To do this, an ordinary non-MoCon camera is used, and visible “fiducial” marks are made on the screen as a reference for how the plate image must be shifted to have the proper parallax for the moving camera. Usually, the mark is simply an “X” made with pieces of contrasting color tape. Enough marks are placed on the screen to ensure that some of them will always be in frame. The computer tracks the motion of these Xs and then adjusts the position of the background plate to match.
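
A toy version of that tracking loop, using OpenCV template matching, might look like the following. The image inputs are hypothetical and robustness is omitted; production match-move software is far more sophisticated.

```python
# Toy fiducial tracking: find the taped "X" in each frame by template
# matching, then report how far it has drifted from its position in
# frame one -- the shift to apply to the background plate.
import cv2
import numpy as np

def track_mark(frame_gray, template_gray):
    """Return the (x, y) top-left corner of the best template match."""
    scores = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, _, _, best_xy = cv2.minMaxLoc(scores)
    return np.array(best_xy, dtype=float)

def plate_offsets(frames_gray, template_gray):
    """Per-frame displacement of the mark relative to frame one."""
    anchor = track_mark(frames_gray[0], template_gray)
    return [track_mark(f, template_gray) - anchor for f in frames_gray]
```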

Second, smaller marks, often ¼-inch red dots, are stuck on real objects that will have CGI extensions added on to them. The moving ampsuits used in Avatar existed in the real world only as torsos on MoCon bases. The CGI arms, legs, and clear chestpiece were attached later in the virtual world. If you are planting/hiding microphones, be careful not to tape over or otherwise occlude any of these marks.

While not commonly used at present, it is possible to put fiducial marks on a mike boom as an aid in removing it in Post. And the recent Les Miserables used them to help remove the exposed lavaliers that were mounted outside the wardrobe.

MoTrac MoCap

This hybrid has limited capabilities, but is often used for live-action shoots on real locations or sets, with CGI characters that are human-shaped and slightly larger than the human performers. No reflective or active markers are used because the scenes often involve action and stunts, and the markers could injure the wearer or be damaged or torn off. Typical examples are the Iron Man suits and the humanoid droids in Elysium.

This method does not capture 3-D position information directly, and is used simply to “overlay” the CGI image on top of the captured performer’s image on a frame-by-frame basis. Perspective distortion of the shape and size of the marker squares can be analyzed by the software to properly rotate and light the virtual character.

The actors wear grey capture suits with cloth “marker bands,” consisting of strips ranging from ½ to 2 inches in width having alternating white-and-black squares with a small circle of the opposite color in the center. The bands are fastened around the portions of the actor’s body that are to be captured: head, torso, arms, and/or legs. Only gross body movements are captured with this system; not details such as fingers or facial features.

If wireless mikes are used, there is no face-cam mounting strut available to mount the microphone, but neither it nor the transmitter has to be hidden. As on a regular shot, boom shadows have to be kept off anything visible in frame, except for the capture suit. (The shadow will not be dark enough to be mistaken for black markings.)

Editor’s note: Jim Tanenbaum’s explanation of P-Cap and MoCap practices will continue in the next issue of the Quarterly with specific guidance for sound technicians working these projects.

Text and pictures (except Avatar set pictures) © 2014 by James Tanenbaum, all rights reserved.

My Wild Ride: Booming in the Ocean

by Coleman Metts, CAS
All images courtesy of Coleman Metts

My friend and colleague Scott Harbor had a scheduling conflict and referred me to the movie Ride. He thought, “Coleman surfs and stand-up paddleboards, so he’ll be great for it.” When the Producers initially contacted me, they said, “We are keeping it simple, but we want the actors, Helen Hunt and Luke Wilson, to talk to each other as they’re going out through the waves and catching the waves.” Well, I thought, perhaps the new waterproof wireless microphone transmitters from Lectrosonics might work. I told them it would be an experiment, but I felt pretty good about being able to pull it off.

My initial plan was to cut holes in the wet suits and have the microphones exposed, to be digitally removed in Post. The Producers responded they could not afford to do that for every shot. After considering a range of alternative options, we eventually agreed to cut small holes in the wet suits and attach the microphones behind each hole with tape. Then began an exhaustive process of trial and error: we could not find any tape that would hold effectively in saltwater! Eventually, we settled on using Velcro to mount the microphones. However, it was not long before we learned that the Lectrosonics waterproof transmitters are not saltwater proof!

Lectrosonics was very cooperative about minimizing the L&D expenses, but the wireless transmitter failures forced my crew to capture all the sound sequences, both on the ocean and in the surf zone, with an old-fashioned overhead boom microphone. On the water, I used a Sennheiser 60. We got basically traditional coverage, so we were very lucky in that respect, and the microphone worked perfectly. I used the Lectrosonics plug-on transmitters to get the sound from the boom microphone back to me. My recorder for the movie was the Zaxcom Fusion. Working in this environment is incredibly hard on every piece of equipment. I am still finding sand in various places among my sound gear, months later.

We did not make any technological leaps on this movie; it was just persistence, and positive attitude, that solved our problems.

The initial schedule showed us on and off the water a lot, so I had a small ENG-type package built for the ocean, and I left my main rig built for filming on land. Even at sea, I managed to send an IFB feed to video village and, in particular, to Helen Hunt, who was directing as well as acting. I also supplied signal to the Script Supervisor and the Video Playback Operator, fellow 695 member Anthony Desanto.

We did not have a lot of prep for this project, so we had to improvise and figure it out as we went along. A nice benefit of the show was that I was able to bike to work every day for three weeks. Also, I got to wear my sandals at work every day for about a month.

So began my two weeks out on the water. I was placed in everything from Zodiac-type boats to the back of wave runners. I eventually spent much of my time on a large stand-up paddleboard as no motorized craft were allowed in the designated surf zone which, in the Marina Del Rey/Venice area, extends from the beach out three hundred yards.

My Boom Operators, Johnny Evans and Jim Castro, also operated from large stand-up paddleboards for significant portions of their time on location. When not on a stand-up paddleboard, the Boom Operators were standing directly in the surf zone. Doing so, however, required the use of Watermen/Stuntmen who would position themselves directly behind my crew and grip them tightly to prevent them from being knocked over by the oncoming waves. My Utility, Ace Williams, did a phenomenal job in these trying conditions. Oftentimes, I was texting Ace what resources I needed sent out on the next supply boat.

It was not long before I realized that working on the water is very different from playing on the water. Being out on various watercraft all day was pretty fatiguing. Communication was limited. In the beginning, we also had a failure in the transfer process when the facility transferred all the tracks for dailies. That added some stress at the start of the show. Eventually, we got it all sorted out—just about the time when we moved off the ocean and started filming on dry land.

What did I learn from this project? Well, I guess I learned that when they say it’s going to be simple, it’s not. And I learned that you need more than one plan to deal with any eventuality plus enough resources for almost any scenario.

The process of filming on the beach, on the water and on multiple locations throughout Venice made Ride the hardest show I’ve done by far. But the amazing people I worked with made it a memorable and positive experience. The Director of Photography and his crew, the Key Grip and his team and our Stunt Coordinator and the Waterman were outstanding. They solved amazingly hard challenges every day. They all displayed the best positive attitude and everything seemed easy for them. After eighteen years in the business, I’ve learned not to take these things for granted. Ride restored my enthusiasm for making movies; it was a bright spot in my career.

Jim Webb: A Profile

by David Waelder

“He was the most perfect Sound Mixer I ever worked with.”
–Chris McLaughlin

“I would say that Jim was the father of multi-track. I really would.”
–Harrison “Duke” Marsh

“He seemed to field a lot of curveballs very elegantly.”
–Robert Schaper

“He was a great educational source to learn from.”
–James Eric

“Jim Webb is a crusty old pirate of a man who has a heart bigger than words can describe.”
–Mark Ulano, CAS

James E. Webb Jr. is justifiably renowned for his work developing multi-track recording on a series of films for Robert Altman. He captured the dialog from multiple cast members and interlocking story lines on such iconic films as Nashville, Buffalo Bill and the Indians, 3 Women, and A Wedding. He pioneered the multi-track process.

The scenes were so complex, so intricate and so audacious that Altman himself parodied the style in The Player.

And yet, this was really just the beginning of Jim Webb’s career.

He studied film in college, first at Northwestern University in Evanston, Illinois, and later at USC in the Department of Cinema. In 1962, he was drafted into the Army and, after training in radio and as a radio teletype operator (RTTY), served in Germany at an Army Aviation Repair Company that occupied the old Luftwaffe hangars on the military side of Stuttgart’s main airport.

Discharged in 1964, he worked for about a year at USC and then took a job, first at KTLA and then at the CBS station KNXT. The stations had contracts with IATSE and he got his IA card at that time.

ROCK & ROLL

Work in feature films was the goal but opportunities were scarce for recent film school graduates and new members of the union with limited contacts and seniority. Seeking to create their own work opportunities, he formed an independent production company with Pierre Adidge, a friend from Northwestern, and Bob Abel.

The newly formed production company did music specials for PBS and also documentary concert features. The Joe Cocker film, Mad Dogs & Englishmen, was the first feature, followed by Soul to Soul (as a consultant) and Elvis on Tour. These projects taxed his technical skills to keep everything in sync and sensibly organized. He was well aware that Woodstock had required a full year of work to get everything synced and worked strenuously to avoid a calamity of that sort. He insisted on shooting regular slates and on assigning one track on the eight-track recorder to a sync pulse. His protocols were not always adhered to, but his efforts were at least partially successful and the films were all released in a timely manner.

ROBERT ALTMAN AND MULTI-TRACK

His Army training and experience with radio mikes and multi-track music recording on concert films gave him a good foundation in skills needed to implement a production style that Robert Altman was developing. Traditionally, films had treated their subject matter as if they were stage plays seen with a camera. Close-ups and tracking shots provide changing perspective but the action unfolded as a linear narrative. Altman saw the world as a messy place where events didn’t always proceed in an orderly way. Sometimes everyone would speak at once. Sometimes, with multiple participants, it wouldn’t be clear who was driving the action until the event was over. He wanted to bring some of that messy uncertainty to his film projects.

Altman used a loose and improvisational style in films like MASH but encountered difficulties with sound for McCabe & Mrs. Miller. Without precise cues to know when each character might speak, it was difficult for the Sound Mixer to deliver a suitable track, especially when action was staged in authentic locations with hard floors and other acoustically difficult features. Having multiple microphones, and assigning the outputs to isolated tracks, was the obvious solution. Altman brought in Jack Cashin to design a system of multi-track recording that might be used on location. Most of the equipment then available was designed for use in a studio and it required some ingenuity to adapt it for location use. But assembling the hardware is only part of the equation; someone must operate it effectively and this presented challenges to the Production Mixer.

The system Cashin developed used a Stevens one-inch, eight-track recorder. With one track assigned to a sync signal, seven tracks were available for discrete audio. No multi-track mixing panels that could work off DC were available at that time, so two eight-input, four-output consoles were linked to supply the needed signal feed. The whole business ran off a 12-volt motorcycle battery with a converter circuit to provide the higher voltage needed by the recorder.

Paul Lohmann, Altman’s Director of Photography, recommended Jim Webb for the multi-track skills he had demonstrated on the concert films. And that was the beginning of collaboration among Robert Altman, Jack Cashin and Jim Webb on a series of films.

Jim Webb:

They had everything together but they didn’t have any idea about how to use it. And I said, “Well, the only thing that makes any sense is to put radio mikes on everybody.” You can’t have open mikes because if you add those back in the pre-dub, the background is going to be astronomical—you won’t be able to tell anything. You have to do a lot of close mic’ing to make this work. So my contribution was radio mikes.

The first picture made with this multi-track technique was California Split. It was a fortuitous choice because it made good use of improvisational technique but was less ambitious in that application than subsequent projects. It provided an opportunity to shake out the system.

By the time we got to Nashville, we pulled out all the stops and went blasting our way through it. We shot that film in eight weeks at a dead run.

Putting radio mikes on each performer and assigning them to discrete tracks was an obvious approach but there were also limitations. Post work required an additional two weeks to deal with all the different tracks. There was also an inherent lack of audio perspective. Jim Webb explains it best himself:

There’s no perspective. We ran into that immediately on Nashville. There’s this scene that opens the movie which is where they’re all in a recording studio and I went about putting radios on everybody, even the ones behind the recording glass. And I went over to Altman and I said, “Are you sure we’re doing this right? We’re throwing perspective just completely out the window.” And he said, “Yes, yes, of course we are.” I went back to putting radios on. About twenty minutes later, he comes over and says, “Are we doing this right?”

A little late to change the action now. And it worked out. I would have people come up to me and say, “That was the most realistic sound I’ve ever heard.” Well, there was nothing real about it. You’re not hearing people shooting through a double-plate glass and hearing all the conversation inside there, as well as what’s going on outside.

But it was primarily designed for overlapping dialog and improv and things of that nature where you never knew what anybody was going to say.

And you can’t possibly listen to it all because it’s just a Tower of Babel. So once I previewed all the radios and made sure they were working, you were just watching the meters.

Capturing the dialog with individual radio microphones was a complex undertaking that required all of Jim Webb’s skill, but it accomplished what Robert Altman needed to fulfill his vision for the film. According to the Supervising Sound Editor, only two lines weren’t recorded in the original production track. One was lost to a failed radio on Henry Gibson and the other was an added line of Allen Garfield’s, delivered as he was walking away from us. That was it; the rest was all stuff that we did.

THE ULTRASTEREO MIXER

Very little was available in the way of a portable mixing panel at the time Jim Webb was working the multi-track pictures with Robert Altman. The specialty mixing panels that Jack Cashin adapted for those pictures had liabilities that made them cumbersome for use on most pictures. He and Jack Cashin set to work to address this need with a capable mixer.

In the mid- to late fifties, [Perfectone] had a little three-pot black mixer that was very popular in the studios; it was a little rotary pot thing and everybody used it. And it was around a lot. And then they updated their little portable mixer with a straight line. And they had six in and one out—it was still a mono mixer. And I liked the straight-line faders because you could handle them a lot easier than trying to wrangle three, four, five rotary pots. So I said to Jack [Cashin], “Can we modify this and make it two track?” And we looked it over and said, no, it’s going to be simpler to make our own version of this. And he designed it and I built it. I built a dozen of them, maybe 12 to 14 of them. Sold them all.

ALL THE PRESIDENT’S MEN AND A RETURN TO BOOMING

Right after Nashville, Jim Webb was hired to do All the President’s Men, largely because his multi-track skills were applicable to situations where actors might have to interact with video monitors playing in the newsroom. He was also particularly skilled at recording telephone conversations and there were many of those in the script. Although he had his own working prop phones, the Special Effects Department supplied the multi-line key phones used in All the President’s Men. Webb provided a phone tap to record the phone-line conversations on a separate track from the on-camera dialog. He would supply an audio feed to actors brought in just for their off-screen dialog. Because everyone heard everyone else, either through the phones or via a specially provided feed from the mixing panel, overlaps were possible and could be recorded naturally. It was expensive because of the need to bring in actors who didn’t appear on screen, but freedom from the pace-killing process of having lines read by a script supervisor allowed the filming to fly and yielded more natural performances.

When we rehearsed it, it went like lightning. And when we got through, Bob [Redford] said, “Holy cow!” … He was shocked at how fast it went and that’s how we did the scene.

It’s not often that the Mixer gets a chance to dabble in how the scene plays.

All the President’s Men was more tightly scripted and allowed a more normal recording technique than the Altman pictures. It came at a good time:

I remember going into an interview one time and I said, “I’ve done this Altman this and that.” And the guy looks at me and says, “OK. What else have you done besides that?” And I didn’t have anything so I was thinking to myself, it’s better to work around; it’s better to do different formats and utilize them when you need them.

Chris McLaughlin was his Boom Operator on the film but the newsroom scenes presented particular challenges. The Washington Post set was gigantic, consuming two linked stages, and lit naturalistically from overhead fluorescent lights. Fortunately, due to the heat they generated, the ballasts for all those lights were mounted in a shed outside the stage so there wasn’t a serious problem with hum. Director of Photography Gordon Willis favored up-angle shots that showed all the lights in the ceiling. When Jim Webb asked if it would be OK to boom, Willis held out his hand, casting multiple soft shadows, and said, “I don’t care what you do as long as you don’t make any shadows on my set.” “That was the end of that conversation,” says Webb. Chris did manage to boom the picture, using primarily a Sennheiser MKH 815 from below, flitting in and out between the performers’ legs.

According to Chris McLaughlin, Webb entrusted the microphone selection to his Boom Operators. But the big Sennheiser was clearly a favorite. He describes using one on The Long Riders. The Keach brothers were fitted with wireless mikes when Jim Webb learned that they intended to ride into the Chattahoochee River at the conclusion of the dialog. Concerned about immersing the radio packs in the river, Webb resolved to boom the scene. Chris McLaughlin thought that he could capture the dialog with a Sennheiser 815 off a 10-foot ladder. He turned the mike back for maximum rejection of the sound of the river and they accomplished the shot. At the end, the Keach brothers did ride into the river and Webb didn’t lose any mikes. “So I have a lot of respect for the 815,” he said, “it got me through a lot of tough places.”

It’s key to an understanding of his technique that there was no agenda, no rules about how each scene needed to be recorded. Jim Webb approached each project with an eye to achieving the Director’s vision and capturing the elements needed for the picture as a whole. Duke Marsh says: “I think with Jim it was, if I’m [Post] mixing this thing, or I’m going to do the Post work on it, what do I want to hear?”

And Jim himself says, “You just gotta do what you gotta do, you know. And I never worried, pretty much at all, about what people thought about what I was doing. If I saw a way to do it and it felt right, that’s what I was going to do.”

Each project presented its own set of challenges to test his skills and preparation. Noises Off presented a particularly complex situation. Originally a stage play, it concerns an acting company rehearsing and presenting a play on an elaborate set. Come opening night, everything goes awry, cues are missed, props misplaced, and the comic errors pile one atop the other. The two-story set mirrored the set that would be on stage. To accommodate the perspective of the Stage Manager, a key character, the entire set was built ten feet above the sound studio floor, complicating any work from the stage. Peter Bogdanovich, the Director, intended to shoot the entire film using a Louma crane that had the ability to swoop in on individual performers, further complicating efforts to capture the audio with a boom microphone. Moreover, the script took the actors up and down stairs and through doors at a frenetic pace.

The actors hoped to avoid using radio mikes, in part because there was often little costume to conceal them. But they needn’t have worried as the pace and frequent costume changes made that an inconvenient choice.

The original plan was to distribute plant microphones throughout the set and go from mike to mike as the action required. After a rehearsal, Webb said, “Guys, I don’t know.”

McLaughlin thought he could capture the dialog using Fisher booms and had a plan for how to accomplish it. They would use two of the big Fisher booms and, to get them high enough to work the elevated set, they would replace the regular bases with purpose-built scaffolds and mount the booms to the top rails of the scaffolding. Wheels fitted to the scaffolding allowed moving the booms into position as needed.

Jim Webb was open to the idea and brought in Fisher booms with 29-foot arms fitted with Neumann KMR 82i microphones. Randy Johnson joined Chris to operate the second Fisher and Duke Marsh was brought in to work from the greenbeds with a fishpole to catch anything that fell between them. After hearing a rehearsal, Jim Webb said, “This is the way to go. Pull those plants.” They did use a few of the plants to pick up dialog occurring well upstage, under the set overhangs where the booms couldn’t penetrate, but using the big Fisher booms simplified the plan considerably. The plan still demanded considerable mixing skill to blend the two main booms, the fishpole operated by Duke and the occasional plant mike, but there was logic to the operation and the team successfully recorded all the dialog.

Other films presented challenges of their own. The Bette Midler films, The Rose and For the Boys, each presented playback challenges because of the large audiences or the complex shots envisioned by Mark Rydell, the Director. Webb worked with Re-recording Mixer Robert Schaper on For the Boys to build modern elements into period microphones so they might accomplish live-records at the highest quality levels. Robert Schaper recalls:

We ended up stealing vocals off of those mikes in the playback situations. One of Bette’s songs to her husband, when she is reunited with her husband, had a very silky, lovely, studio playback [of] “I’m Going to Love You Come Rain or Come Shine.” And she had a very silky rendition of that. [But] it didn’t match her acting performance at all because she was crying, overwhelmed with seeing her husband that she hadn’t seen in months and she was very worried about it and everything else. And we had planted … a Shure 55 with a rebuilt Shure capsule in it. Even with playback coming at her, the isolation was good enough on her actual live vocal—and she always actually sings all of her lip syncs. And she performed the heck out of the song … I ended up compiling all of that and using her live vocal—rather than the pre-record … from the plant that we had out there … and it turned out to be a really great acting performance.

CREW RELATIONS

“ He left a lot of it to the boom man. He walked on and said the boom man was the money-man, the boom man, he believed, controlled the set. ”
–Duke Marsh

“He put great trust and faith in his Boom Operator. It was a collaborative effort.”
–Chris McLaughlin

Over the course of a career, every Sound Mixer works with many Boom Operators, Utility Technicians, and Playback Operators. All who worked with Jim Webb praise his skills, his concentration, his commitment both to the project and his crew. A few brief stories from Duke Marsh illustrate:

[From Beaches] I would go and grab the snakes at wrap and he comes up behind me with gloves on and he said, “No, I do that.” “But I’m the cable guy; that’s a cable.” And, instantly we were buddies. And he’d go, “But, Duke, you gotta understand, those snakes are for me so I can work off the truck.” And in my whole career with him, in the rain, in the mud, in the snow, he’d always come off that truck. And there were days when I would say, “But you’re the Mixer.” “Well, you got other stuff to do. Go do that, come back, give me a hand.” That was Jim. He would always back his crew.

And then, in 2001 when he was receiving the CAS Lifetime Achievement Award:

I get a phone call and Jim says, “I want you to come. You’ll be at my table.” Well, he invited Doug Vaughn and Chris McLaughlin. [He delivered a speech accepting the award] then he says, “Those three guys at that table are responsible for a lot of this in my career. If it wasn’t for the boom man, putting that mike in the right spot, I wouldn’t be here.” And he had us stand up and we got an ovation. And I’m thinking, how many mixers pay attention to the guy that’s out front?

AWARDS AND ACHIEVEMENTS

In addition to the CAS Award, Jim Webb won the Academy Award for All the President’s Men in 1977 and the BAFTA Award for Nashville in 1976. He received one other Oscar nomination and three additional BAFTA nominations.

Nashville and All the President’s Men are each featured in both the Criterion Collection and the Smithsonian List. While Robert Altman and Alan Pakula, respectively, are recognized for their vision, Jim Webb shares in the accomplishment through his skill and inventiveness in facilitating that vision.

It’s also instructive to note the Producers and Directors he’s worked with multiple times. The list of three or more film collaborators includes Robert Altman, Francis Ford Coppola, Garry Marshall, Walter Hill, Bette Midler, and Paul Mazursky. Mark Rydell is one of several directors who employed him twice.

For each of these directors, Jim Webb contributed a sense of the role of sound as part of the whole and adjusted his technique to meet the needs of each particular project and the vision of that particular filmmaker. In talking with him, it is apparent that he has taken great pleasure in the process.

Jim Webb: “Good production sound has production value! Don’t give up. Be consistent and do the best you can.”

 

Mic’ing the Instruments

In the course of interviewing for this profile, Jim Webb shared many great stories that didn’t fit neatly into the narrative. This is one of the stories rescued from the trim bin.

In California Split, it started there and at the end of the scene there was going to be—and I found out about this about two minutes before we were going to shoot it—there was a piano with tacked hammers in the bar and there was a lady that would play the piano and sing. Elliott Gould was going to be sitting there and they were going to talk a bit while she was playing the piano—and eventually they were going to sing “Bye Bye Blackbird.”

And I said, oh my God, it would be nice to know about this a little bit earlier. So I ran back and the only things I had around in those days were the old ECM-50s which were some of the first electrets from Sony. And I had a bunch of those. So I ran over to the piano, raised the lid and put one taped to the cross bar pointed down and the other one pointed up to the top end. I connected cables from the mixing panel, closed the lid, put a radio on each performer, ran back, turned the equalization all the way up – all I had – and prayed. So I laid down four tracks and it worked pretty well. In fact, they couldn’t duplicate it.

The scene didn’t really make the film but the song is in there, at the end, over the credits.

Anyway, they discovered that I could do that. So, in the smaller scenes in Nashville, where there were just the two gals singing in a place and the piano and whatever, I would do that mike, if it was an old upright, I would stick a couple of mikes in there … and as long as I hadn’t filled up the eight tracks, I could do that.

Well, they decided that I had so much dialog going on that I couldn’t cover all the music too; I just didn’t have enough tracks. So, they hired a guy named Johnny Rosen to come in and they had a sixteen-track truck and they hired him to do the Opryland stuff and all that. And their mixer, I think his name was Gene Eichelberger, was shadowing me just to see what I was doing. And he saw me doing this lavalier routine and I’m thinking to myself, I can’t tell anybody in Nashville that I’m using lavaliers to mike instruments because they’re going to laugh me out of town. Next thing I know when I get to Opryland, Eichelberger is over borrowing every ECM-50 I’ve got and he’s taping them to fiddles and everything in the orchestra he can find. So, I thought, well, OK, that’s how we’re going to do this. And that’s how it all kinda went down.


Interview Contributors

These colleagues of Jim Webb assisted in the preparation of this profile by making themselves available for interviews:

Crew Chamberlain was Webb’s Boom Operator on several films including The Milagro Beanfield War, Legal Eagles, and Down and Out in Beverly Hills.

James Eric knew Jim Webb from his days working the microphone bench at Location Sound. Later, he served as Utility Sound on Out to Sea.

Robert Janiger is a Sound Mixer and friend who collaborated on further development of the Ultrastereo mixer.

Harrison “Duke” Marsh worked with Jim Webb on seventeen films including Pretty Woman, For the Boys and Noises Off. He worked variously as Playback Operator, Utility Sound and Boom Operator.

Chris McLaughlin boomed twenty-one films for Jim Webb starting with California Split and continuing through Noises Off. Among others, he did Nashville, 3 Women, The Rose, The Long Riders, and Hammett.

Robert Schaper was Supervising Music Engineer on For the Boys.

Mark Ulano is an award-winning Sound Mixer who considers Jim Webb a mentor.


A Tribute to Ray Dolby

by Scott Smith, CAS and David Waelder

To be an inventor, you have to be willing to live with a sense of uncertainty, to work in this darkness and grope towards an answer, to put up with anxiety about whether there is an answer.

–Ray Dolby

The Dolby name appears so often on films that it has become like Kleenex or Xerox, a generic for noise reduction. But the many innovations of Dolby Labs are largely the work of Ray Dolby, a man of prodigious ingenuity. He died of leukemia on September 12, 2013, at age eighty, at his home in San Francisco. Born January 18, 1933, in Portland, Oregon, Mr. Dolby was hired straight out of high school by Alexander Poniatoff of Ampex Corporation. At the time, Mr. Dolby had volunteered as a projectionist for a talk that Mr. Poniatoff was giving. Impressed by his talents, Poniatoff invited the young Mr. Dolby to come to work with him at Ampex, where he contributed to the design of the first quad videotape recorders.

After completing studies in electrical engineering at Stanford and physics at the University of Cambridge, Ray Dolby invented a system of high-frequency compression and expansion that minimized recorded hiss. He formed Dolby Labs in 1965 to bring this noise reduction system, called Dolby A, to market. Mr. Dolby later turned his attention to the problems of sound recording for motion pictures, which still relied on decades-old technology. His endeavors would lead to the introduction of a surround sound system that could be duplicated using traditional optical soundtrack printing techniques. It replaced the expensive and cumbersome printing techniques previously used for big-budget films.

At Dolby Labs he is remembered as much for mentoring a new generation of scientist/engineers as for his particular innovations. He was a scientist who expanded creative horizons for artists.

His contributions are covered in greater detail in Scott Smith’s series “When Sound Was Reel” in the Summer 2011 and Winter 2012 issues of 695 Quarterly. There is also a very fine video tribute available on the Dolby website. These are available at:

https://www.local695.com/Quarterly/3-3/3-3-when-sound-was-reel-7/

https://www.local695.com/Quarterly/4-1/4-1-when-sound-was-reel-8/

http://www.dolby.com/us/en/about-us/who-we-are/leadership/ray-dolby.htm


Nashville

by Anna Wilborn

When Joe Foglia rang me up to offer me a spot as the Utility Sound Tech on ABC’s Nashville, I fell on the floor laughing. Move to Nashville? I had a new house, a new baby, a three-year-old and a husband who was neck deep in a new VH1 show. I’d already heard the stories of ungodly hours, the daily multiple locations, the stake beds, the stairs, the tiny costumes, the non-soundstages, the lack of a great Thai restaurant … “I’m fine thanks,” I chuckled to Joe, as I tossed my kid a toy. I got in my car and headed to Costco. Forty-five minutes later, I pulled into the parking lot. It’s a mile and a half away. Then I had to get some diapers at Target; that was another two-hour ordeal complete with honking and expletives (not from me, of course!). Six weeks later, my whole house was packed up, boxes shipped, and I was bouncing my baby on my lap as my flight to Music City lifted up out of the smog.

It had been over two years since I’d worked with Joe and, thankfully, nothing had changed. Except the recorder. And the monitors. And the sound reports. And the media. And some of the microphones. And the timecode boxes. And the IFBs. And the follow cart. I soon realized the only thing recognizable was Joe’s smile. Even the boom guy, Scott Solan, was different. He hails from an Irish, hockey-playing borough of Syracuse, NY, with a long list of credits, including the new Star Trek features, Transformers: Dark of the Moon and Thor. Scott is a thoughtful perfectionist. He forgets nothing and leaves no stone unturned in his drive for a quiet, locked-up location. Scott has the unique ability to up everyone’s game, both within and beyond our department. Our very first scene of the new season was indicative of the ten months to come: 6 earwigs, music playback, live stage microphones, a PA system, 4 wires, 50 extras and 3 RED cameras. I suddenly yearned for a forty-five-minute drive to Costco with a toddler and a teething baby. I pondered the validity of the lease agreement just signed by my new tenants back in Los Angeles. I suppose I could get a lawyer … Joe just smiled and shrugged, “Welcome to Nashville!”

Matt Andrews is at the helm of our music playback. He is the Chief Engineer at Sound Emporium in Nashville and a bona fide Grammy Award winner (I know ’cause I kinda stole it off his mantle one night when we were shooting down the street from his house). Matt’s credits include Playback Tech on Walk the Line and 2nd Studio Engineer for the O Brother, Where Art Thou? soundtrack. Joe and Matt are a match made in heaven. Watching them together is like a Martin and Lewis film. There’s nothing better than seeing the two of them behind the racks, heads down in a flurry of cables and connectors, troubleshooting and finishing each other’s sentences. They are the yin to each other’s yang. Given that Joe spent his formative years at Criteria Recording Studios in Miami Beach, it’s no surprise.

Each episode offers up four to five musical numbers with anywhere from one to a dozen performers. Matt’s playback paraphernalia includes a Pro Tools 10 rig in a small, red, rolling rack, closely followed by Playback Utility Cassidi Spurlock, dragging The Biggest Pelican Case In The World. Seriously. If Nashville ever floods again, we can just ditch the cables and all hop in. He runs the Pro Tools via a Quad-core Mac Mini with a Focusrite Rednet 2 interface. He typically comes armed with twenty-four tracks of music all broken out in stereo pairs from vocals to cowbell. Matt is all about the Dante matrix system. Over the next few months, we plan to fully integrate Dante so both he and Joe can pull any track they choose out of a thin little Ethernet cable. It will also drastically cut down on the cabling which, in turn, will reduce the propensity to pick up hums and ground loops along the way, a typical nuisance of our large-scale music venue locations.

Six years ago, Joe looked at his shiny new eight-channel Sonosax mixer and thought, what am I going to do with all these inputs? Now he knows. Problem is, the board is continually maxed out with all the live vocal microphones, booms, wires and music. An upgrade looms in the near future. For now, the new addition to the family is a Dante-compatible Soundcraft Si Expression digital mixer. It has fourteen faders with four layers for up to fifty-six tracks. “It’s great,” Matt says, “we just hit a button and the board instantly switches to a whole different mix.” To it we input the Shure handheld wireless stage microphones and Matt’s music and timecode tracks. It allows instant access to all audio on Matt’s playback rig as well as all live stage microphones. From there we feed customizable mixes to the QSC PA and the actors’ Ultimate Ears custom-molded in-ear monitors. Our actors sing aloud to their pre-records, and are then recorded by Joe. (Y’all following this? There will be a quiz at the end.) This gives our Music Editor a more precise way to sync, rather than relying solely on timecode.

Joe’s primary recorder is the Sound Devices Pix 260i. It is capable of up to thirty-two channels and is also Dante compatible. It carries a 250-gigabyte solid-state drive and a compact flash card which gets turned in for dailies. He backs up to a Sound Devices 788T which simultaneously mirrors to a one-terabyte hard drive. We mostly use Sanken COS-11 wires with Lectrosonics SMV and SMQV transmitters, matched to a six-channel Lectrosonics Venue receiver. Schoeps CMIT shotguns are used with Cinela mounts, K-Tek boom poles and Lectrosonics HM plug-on transmitters. Scott and I use Shure P9RA receivers to listen to Joe’s mix. The clarity is remarkable and the channels are mixable, so we can have boom in one ear and wires in the other if we choose. These are the same receivers we use on our actors for their in-ear monitoring.

In the early days of Season One, the performance playback music was fed only to the actors via in-ears, Phonak earwigs, or small stage monitors. Famed music producer T-Bone Burnett noticed during a performance shoot that the audience wasn’t getting as excited about the music as they could be. He wanted speakers blasting the crowd with music. Joe then contacted Ray Van Straten at the speaker company QSC in Costa Mesa, CA, about a possible relationship. A love affair was born. We now receive both practical and mock-up KW series speaker arrays to pump music to the crowd for a real concert look and feel.

Normally, being this far away from Los Angeles would spell the usual equipment and expendables headaches. Thankfully, in Nashville we have Trew Audio right in our backyard. When Joe arrived in town, he wheeled the carts right into the middle of the shop like a sound pit stop. Software updates, new cables, batteries, fluids, tires pumped up, and we were off and running. Rob Milner has been a big part of our crew and it goes something like this: “Hey Rob, I need a seven-foot cable to run from the Zaxcom wireless to a split XLR with a four-pin.” An hour later, we send the drivers. Having them here has made the transition to the South seamless. Glen Trew was the Sound Mixer on the pilot and the first three episodes before Joe took over. He still comes in from wherever he is around the world (last time it was Amsterdam) to do our 2nd-unit days. He’s like a rock star around here. It takes him a half-hour to get from crafty to the cart with all the hugs and handshakes in his way.

Nashville has been the best thing to happen to my little world in quite some time. We’re having a blast both on set and off. Our hours are sane, the people are jaw-droppingly friendly and there’s never a lack of fun things to do, with festivals and concerts every weekend. I can say the road signs are more confusing than anything I’ve ever seen (even the locals admit that), but when people actually let you merge with a friendly wave and a smile, all is forgiven. The other day, my husband found himself stranded in the rain with a dead car battery and a flat tire, and yes, two very disgruntled kids in the back seat. Before he could find his AAA card, someone had pulled over, jumped the car, fixed his flat (with a plug!) and bid him a good day. Can you picture that happening in Los Angeles? Gotta love this sweet Southern country livin’! Viva La Nashville!


 

Glossary of highlighted words

IFB Interruptible Fold Back: A system for supplying audio as it is being recorded to artists and technicians. The signal path from the microphones is “interrupted” before going to the recorder and “folded back” so it may be heard by the people involved in the process of making or supervising the recording.

Focusrite Rednet 2 An audio interface for distributing audio over Ethernet cable, part of the RedNet line of networked audio products manufactured by the Focusrite company.

Dante A system of hardware, software and network protocols for delivering digital audio through Ethernet cable.

QSC A manufacturer of speakers, amplifiers and signal processing equipment.

Ultimate Ears A manufacturer of speakers and custom-molded, in-ear monitors.


The Road to 600: The Evolution of Playback on Glee

by Phillip W. Palmer, CAS

Pilot and Run of Show

When I got the call asking if I was interested in mixing a pilot for Ryan Murphy and Fox Television, the Producer asked an interesting question. He asked how comfortable I was doing a musical pilot, and whether I could manage the production side of things for a group of Producers who, while experienced, had never done this type of project before. Looking back now, almost five years later, I had no idea what I was in for.

The pilot had elements of several processes: live-record, live-record to playback, playback only and combinations of all three. What we learned from the pilot, and how our company and cast operated, set the tone and process for a long journey. Soon after we started work on the pilot, we knew we were in for something special. Since October 2008, we have produced close to one hundred episodes and nearly six hundred musical numbers.

The music production and playback for the pilot was a completely different situation from the run of show. The pilot’s music had mostly been prerecorded well in advance, giving us time to figure things out and adjust our production process accordingly. For the run of show, music production has been a race against the schedule.

Glee remains bound by the network episodic schedule, which we attempt to hold to eight days per episode. When the script is released, the music team goes to work immediately, arranging and composing anywhere from five to as many as eleven musical numbers per episode. The temp versions are sent back and forth to our Executive Producers for notes and preliminary approval. Then the cast members are brought in to record their specific vocal tracks. The completed music mix is then sent back to our Producers. Upon final approval, the music goes to David Klotz, our Music Editor, for preparation of playback on set. The Pro Tools sessions he builds are specific to our purpose and include timecode as an audio track, click, thumper, music mix, any special music stems, vocal and vocal-effects stems, and background vocal and effects tracks. The playback session track count frequently runs upward of twenty-five stems.

Live-Records vs. Playback

The advantages of live-records are obvious on camera. The drama of the moment and the nuances of the performer yield an authenticity that is often undeniable. What we learned on Glee is that this works for us only sometimes. We discovered early on that repeated live performances, especially when sung “all out” take after take, had a detrimental effect as the day went on. Essentially, after ten or more takes, the moment was lost, as was the performer’s ability to continue working through the day and into the next on a TV production schedule. We had to decide which songs needed this live performance effect, and plan our production accordingly.

In the pilot, all the Glee Club student auditions were recorded live on set, including the piano, with the exception of “On My Own,” performed by Lea Michele. Her audition intercut with her singing the same song in several locations, so it was prerecorded for playback and lip sync on set. The shower scene with Cory Monteith singing “I Can’t Fight This Feeling” a cappella was recorded live as well. Our vocal coach gave Cory a pitch and then we ran a thump track for tempo. The thump track is essentially a 40 Hz click track played at a low level through an eighteen-inch subwoofer. The 40 Hz thump can be removed later in Post by the use of a notch filter, leaving the vocal recording unaffected. The artist can feel and maintain a tempo and Editorial can easily cut back and forth between takes of live recording. The remainder of the musical performances in the pilot were prerecorded and played back for lip sync.
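
For the technically curious, here is a minimal sketch of that Post step in Python, assuming SciPy and the soundfile library; the file names and Q value are invented for illustration, not taken from the show’s workflow.

import soundfile as sf                      # assumed I/O library
from scipy.signal import iirnotch, filtfilt

audio, fs = sf.read("vocal_with_thump.wav")   # hypothetical take

# Narrow notch centered on the 40 Hz thump; a high Q keeps the
# filter well away from the vocal range above it.
b, a = iirnotch(w0=40.0, Q=30.0, fs=fs)

# Zero-phase filtering avoids any timing shift; axis=0 handles
# mono or multichannel files alike.
cleaned = filtfilt(b, a, audio, axis=0)

sf.write("vocal_cleaned.wav", cleaned, fs)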

Pro Tools

While there are many Digital Audio Workstations available to the Production Sound Mixer today, we use Pro Tools for several reasons, foremost among them that the music production team uses Pro Tools for their recording process and we can easily modify their sessions for our use. Pro Tools on set has been invaluable to our playback workflow. We can easily manipulate any session to match what we are currently filming and, if need be, send that same session back to the Music Editor so he can prep it accordingly for Editorial. We can easily do things such as change level and volume to match camera angles, or make music edits at the request of the Director. The Music Editor can then load our session files to see what we’ve done on set.

Our playback Pro Tools sessions have evolved since the pilot. In the beginning we simply had a mix, essentially the Producers’ approved demo, with a click track added. As the seasons have progressed, we have added regular stems to our sessions that we find very useful. Our Music Editor adds a thump stem, which matches the click track, so we can assign a separate output for the thumper. We can do it on the fly, or program it in the automation, to dump the music at any point and go to a thump to record dialog during the song. We also add a timecode stem as an audio track, which comes in very handy when creating any off-speed versions of the song. The timecode will always stay locked at any speed if it is an audio track and part of the session. We keep the music stems combined as a mix, unless there is a specific stem that needs to be split out, such as a piano track. The vocal stems are all separated by lead vocal and effects: if there are six lead parts in a song, there will be six stems and six effects stems. The background vocals are combined, but sometimes they are difficult to distinguish in the overall mix; being able to boost the background vocal stem by 4 dB to 6 dB during playback helps our cast follow their cues. When the sessions are completed and sent to us, there are frequently dozens of stems to manipulate.
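
For a sense of scale, a 4 dB to 6 dB boost is roughly a 1.6x to 2x increase in amplitude. A quick Python check of the standard conversion:

# dB to linear amplitude: gain = 10 ** (dB / 20)
for db in (4.0, 6.0):
    print(f"{db:.0f} dB boost -> x{10 ** (db / 20):.2f} amplitude")
# prints roughly x1.58 and x2.00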

Playback Equipment and Installations

For the pilot, our playback gear on set was a simple Pro Tools Mbox Mini audio interface and a MacBook. We made the most of it, but quickly knew we had to improve the rig to handle more complex playback situations. After the first season, we built a cart that had a dedicated Mac Mini, twenty-inch monitor, Pro Tools Mbox Pro audio interface, a backup Mbox Mini, Command8 control surface, Mackie 1402, Comtek transmitter for earwigs, Sennheiser receivers for VOG, and video monitors. This rig stayed fairly unchanged until an overhaul this past summer for our fifth season. The current playback cart has a new Mac Mini and monitor, MOTU Traveler audio interface, Lectrosonics Venue for VOG, Blackmagic SmartView Duo HD monitors, and the Mackie 1402 and Comtek base station from the older rig for audio distribution, earwigs and monitoring. Our mobile speaker complement consists of two JBL EON10 speakers for small sets and two Mackie SRM450 speakers for larger sets and exteriors. We built several lengths of custom speaker snakes so both power and signal can be run from the playback cart. Also included in the speaker arsenal is an eighteen-inch powered subwoofer for both thumper and low end when needed.

As we progressed through the first season, we began to see the need for speaker installations in our main sets. Speaker placement became difficult as we battled multiple camera angles, Steadicam 360s, set walls and crew. We found the only good place to put them was up in the air. The first set to get this dedicated installation was the McKinley High Choir Room. This set saw the most playback by far, and still does to this day. For the Choir Room we mounted four JBL EON10s surrounding the set. They are permanently hung and wired to a space just off set where we park the playback cart for 99% of the music on that stage. From that position we have “drive lines” to several places on the stage where we can drop a speaker and tie right in. This makes music playback in the hallways very easy, as we are able to place speakers at either end of our long hallways without dragging cables through the set.

Season Two saw the construction of the McKinley High School Auditorium on stage. With this build, we installed six Mackie SRM450 speakers, two on each stage wing and a pair in the house, plus an eighteen-inch powered subwoofer for thumper. They are all wired, both power and audio, to a distribution amp and power-control rack placed above the Stage Manager’s desk on stage right. They exist as a functional part of the set decoration. Both the playback and the main cart are set up in the same spot each time we work this set, so all the cable runs, including power, audio, video and bell/light, are permanently run underneath the set.

For Season Four, we built a new set for the storyline set at NYADA, a New York dramatic arts school. This new set is a dance rehearsal space, large and open, with high ceilings and giant windows that look out on Manhattan. We faced the same issues as with the McKinley Choir Room, and chose to suspend a pair of Mackie SRM450s above the greenbeds, aimed down through the fabric ceiling and into the set. As with all previous installations, they are prewired with both power and drive lines to one central spot for the playback station.

The most recent set construction is a New York City diner, built for Season Five on the Paramount backlot. This was an incredible undertaking for the construction department, in both scope and speed. They used an existing space on the backlot but expanded upward to create a two-story, high-ceilinged, Broadway performance diner. For this installation, the speakers are incorporated into the set design and mounted on the set’s west wall as part of the set decoration. We used a pair of the new QSC K10 speakers with QSC yoke mounts for a permanent installation. We ran power and signal wires through the set walls to a drop point to facilitate connecting to the playback cart.

Playback Process

Most of the music scenes on Glee happen within a normal dialog scene. Occasionally we have a stand-alone music piece but, for the most part, we fold the music playback into the dialog as best we can. The playback volume often reaches rock-concert level. As we go from dialog recording to music playback, the transition is often abrupt and becomes difficult for Editorial; anything that happens within the song is lost under the high playback level. We attempt to bridge this transition between dialog and music with a blending element.

 

The key to making this work is recording the elements we see during the playback as wild sound, so Editorial and Post Sound can add these tracks under the prerecorded music. Due to our very tight episodic television schedule, Editorial doesn’t have the time to build the background noise and Foley for our multiple music scenes. So we make every attempt to do a “Foley Pass” of things like laughter, whistles, footsteps, hand claps, crowd applause, set pieces moving or falling, or anything that makes noise during the musical number. We record this wild track with the music playback at a very low volume. For the Editor, the Foley Pass becomes an important element in making the musical number feel real.

When we choose to record a performance live, we often prerecord the music stems and record the actor singing on set. The music is fed to the actor via earwig and we record the vocal as usual, with a boom microphone. We try, not always successfully, to leave the temp vocals in for the wide shots, and go to the live-record when we get into close-ups. In our experience, it saves the actor and the performance. I do my best to create a mix in the Comtek public IFB so the Director can get a feel for what we are recording. For the IFB feed to the Boom Operators and set crew, I leave the playback track out or run it at a low volume. I split tracks one and two as a post-fader mix for Editorial: track one is the live microphone and track two is music. Everything is ISO-tracked pre-fader so it can be adjusted or rebuilt as needed.

Often we are tasked with strange and challenging playback situations. Midseason Three, we had a scene and musical number that took place at a swimming pool with synchronized swimmers. Having a beat to follow underwater is one thing, but having to do lip sync is another. Luckily, after some tests, we found the synchronized swim music equipment “Oceanears” worked very well for our needs. The swimmers and our cast were able to hear the playback feed from the underwater transducers. I was quite impressed by the clarity and the distance the music could travel underwater at nominal levels.

One script called for a musical number sung from a moving golf cart. That works well if it’s traveling a short distance, but that wasn’t the plan. They wanted to load down the golf cart with cameras and drive the entire length of the song, some two-plus minutes. We negotiated for an additional golf cart, placed a speaker with a wireless receiver in the picture cart and transmitted from our “sound golf cart,” which slowly became the “everyone else” golf cart. We essentially did two angles several times, first leading then following. The playback rig was fairly simple: a MacBook Pro, MOTU Traveler, and a Lectro UH transmitter. The speaker on the cart was a battery-powered Sound Projections SMP1 fed from a Lectro UCR411. We had a good time with this one.

Certain musical performances call for special shots that require playback manipulation—specifically, off-speed filming for in-camera effect. Frequently, we speed up both camera and playback by as much as three times normal. When the image is played back at normal speed and the music is laid back in, the artist appears to be singing in sync while everything moves in slow motion. This is achieved by speeding up both the music stems and the timecode stems. We transmit the high-speed timecode to a slate and roll camera, then play back as you would for a music video. Post can then manually sync the music to the displayed timecode, as it’s locked in the session as an audio stem.
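
As a rough sketch of the off-speed idea, the snippet below builds a 3x-speed playback file in Python. SciPy and the soundfile library are assumed and the file names are invented; this only illustrates the arithmetic, not the show’s actual Pro Tools process.

import soundfile as sf                      # assumed I/O library
from scipy.signal import resample_poly

SPEED = 3                                   # camera and playback at 3x

stems, fs = sf.read("music_stems.wav")      # hypothetical pre-record

# Keeping one of every three samples (with proper filtering) yields a
# file that runs 3x fast when played at the original sample rate.
fast = resample_poly(stems, up=1, down=SPEED, axis=0)
sf.write("music_stems_3x.wav", fast, fs)

# A 24 fps finish shot this way is filmed at 3 * 24 = 72 fps; slowed
# back to 24 fps in Post, lips stay in sync while motion plays at
# one-third speed.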

The Crew

Commitment and cooperation from the entire shooting company from the beginning has been the key to making this all work seamlessly (or appear to). I can’t imagine what this would be like if the crew didn’t understand how challenging it is on a daily basis. It’s difficult for each department in its own way, and we respect one another and strive to work together to make it happen. We have had three Directors of Photography for the run of show: Christopher Baffa, Michael Goi and Joaquin Sedillo. Each one of them has worked with us to get what we need to achieve our goals, both with sound recording and the music. It’s a cooperative effort, as always, and I’m grateful for our working relationship. When our needs impact the way the show is shot, we have to have a plan and options; I can’t stress how important it is to have multiple plans of operation. My sound crew has undergone some changes since the pilot but, for the most part, has remained constant. Patrick Martens has been my Boom Operator for the entire run. Devendra Cleary was Utility Sound Technician and Playback Operator for the first two seasons, then moved up to Playback only in Season Three. Mitchell Gebhard joined the crew as Utility full time in Season Three. After Season Three, Devendra moved on to mixing full time, and Jeff Zimmerman joined us as Playback Operator beginning Season Four. Without the unbelievable ability and flexibility of these people, I would be completely useless as their Sound Mixer. They show incredible professionalism on a daily basis and shine in their abilities to do the job. I provide the guidance, but they get the job done.


Glossary for highlighted words

Click Track A series of audio cues in time to a piece of music. Typically, the click track is generated in a DAW and used by musicians or dancers to keep time to the music.

Thumper A playback system to reproduce the beat of music as a series of low-frequency thumps. The tones are typically about 40 Hz so they may easily be removed from a track without harm to recorded vocals. A special thumper speaker system optimized for low-frequency reproduction is used to play the track. The thumps permit performers to follow the beat of the music without musical playback that might interfere with dialog recording. Originally invented by Hal and Alan Landaker for Warner Bros. Studios. (See 695 Quarterly, Volume 2, Issue 1, Winter 2010)

Stem A mix of multiple audio sources. Example: A blend of music and effects, without dialog. The use of a stem allows complex source material to be treated as a single unit in the final mix or as a temporary part of the process of editing and recording audio.

Mbox An audio interface manufactured by Avid for use with its Pro Tools software.

Command8 A mixing panel control surface manufactured by Avid for use with their Pro Tools audio editing software.

VOG Voice of God. A portable public address system that allows a Director to address groups of performers and technicians with an authoritative voice.

MOTU Traveler The Traveler is an audio interface for connecting multiple microphones, and other audio inputs, to a computer. It is made by MOTU (Mark of the Unicorn), a manufacturer of hardware and software for computer recording.

Blackmagic A manufacturer of cameras, monitors and other video production and post-production equipment.

Earwig A miniature monitor designed to fit within the ear canal like a hearing aid.

Greenbeds A series of catwalks above the sets in a studio.

Foley Pass An alternative to the studio process of Foley recording. The Foley Pass is recorded on-set at the time of principal photography. At the completion of the shot, the AD, at the request of the Mixer, calls for a Foley Pass and the performers go through all of the motions of the scene but without dialog and either without playback or with the music played very softly. This makes it possible to record all the natural sounds as an element separate from the music and speech. The Editor can use these sounds to add a natural background to the scene. It is an expedient alternative to the more elaborate process of the Foley stage but it also can preserve some of the immediacy of the scene.

IFB Interruptible Fold Back: A system for supplying audio as it is being recorded to artists and technicians. The signal path from the microphones is “interrupted” before going to the recorder and “folded back” so it may be heard by the people involved in the process of making or supervising the recording.


File Formats for Music Playback

by Gary Raymond

I was asked to discuss optimum file formats for Music Playback (PB). This is an important topic that continues to evolve. Traditionally, the media and file parameters have mirrored the Production Sound Mixer’s formats.

When I started in the ’90s, most Mixers were using Nagras. As a result, the spare Nagra ended up being the logical (convenient) machine to also use for playback, and tape speed was typically the same as the Mixer’s. There were definite limitations to the two-track format. When I worked on For the Boys in 1990, we had several large master shots that Mark Rydell, the Director, decided he wanted to shoot from scene beginning to end. Unfortunately, no one told Editorial, as they had prepped all the reel-to-reel tapes as separate beginning, middle and end segments. To make matters worse, they didn’t know what combination would be desired, so we had tapes with Orchestra-L, Bette Midler Vocal-R; Orch. & Bette-L, Jack Sheldon Trumpet-R; Orch. without Vocal-L, Jack-R; and about a half dozen other permutations. I remember the Editor bringing down this big box of about 50 seven-inch reels and us sorting through them. Then Mark announced he wanted to do the master shot all the way through. Duke Marsh, who was doing the playback with me, grabbed a second Nagra and we loaded the first part of the desired mix of the song on Nagra 1, the middle of the same song on Nagra 2, and stood by holding the pinch roller, ready to let it fly on playback. As Nagra 1 was playing, we had to start Nagra 2 at the correct spot and then, while it was playing, reload Nagra 1 with the end of the desired mix. I remember Mark Rydell came up to us after our successful playback day and said he wouldn’t do that job if someone held a gun to his head.

Keith Wester, who I worked with on Never Been Kissed, told me he started as a Playback Operator and, in those days, it was off a record. He’d find the groove (literally), mark it with a piece of white chalk and hope the needle didn’t bounce when he dropped it.

In the late ’90s, there was a flirtation with DAT (introduced by Sony in 1987). Format choices were limited to what DAT supported. The DAT was more convenient in some ways than the Nagra (you could auto-cue to preset markers) but it suffered the problems of any tape-based system. One was that the position-coding information would actually get worn off after 20–30 repeated rewinds. Another disadvantage relative to the Nagra was that DAT couldn’t be edited the way reel-to-reel tape could be (with razor blade in hand); all editing had to be done off line and retransferred.

For this reason, in 1993 I switched to Pro Tools, a nonlinear computer-based system. If we had been using Pro Tools in 1990 when we did For the Boys, we could have loaded all the various playback combinations into one session and been happy as clams. Pro Tools (computer-based recording, editing and playback) was vastly superior to tape systems in function (the ability to manipulate audio), although not necessarily in performance (sound quality). It took a while for computers to catch up with the sound quality of a Nagra; however, for playback applications, the tradeoff between function and audio performance was decidedly biased toward function. This is why the computer-based system (Pro Tools or similar) has become the de facto standard.

There have been many shows I’ve worked on where I had to do on-the-fly things that would have been impossible with an analog or digital tape-based system. This includes pitch shifting; I transposed the playback songs on the Britney Spears movie Crossroads the first day on set, when it was determined the songs had been recorded in the wrong key.

On House, I used Pro Tools to provide PB for a slow-motion scene. This was a helicopter crash scene with dialog that the Director wanted to play in slow motion but not pitch shifted. The scene was shot in real time at twenty-four frames per second and then I did some tests at various frame rates to see how fast the actors could lip sync to their playback. Interestingly, it’s a function of the complexity of the particular spoken words. In this case, forty-four frames per second was the fastest the actors could sync convincingly. So, camera matched that frame rate and we shot the playback version of the scene. In post, everything was slowed down to normal twenty-four frames so, when viewed, it looked like the actors were talking in slow motion but with their voices’ normal pitch (something that would have been impossible with tape).
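
The arithmetic behind that test is simple enough to spell out; in this sketch (plain Python, using the figures from the scene), the on-set speed-up factor is just the shooting rate divided by the finish rate.

shoot_fps = 44       # fastest rate at which the actors could still sync
finish_fps = 24      # normal projection rate

factor = shoot_fps / finish_fps             # about 1.83x
print(f"playback ran {factor:.2f}x fast on set")
print(f"a 60 s scene was filmed in {60 / factor:.1f} s")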

On Drag Me to Hell, a séance scene required reverse playback of the actors’ live lines. These effects could not normally have been done on set with a tape-based system.

This brings us to the key issue, which is often either:

1) the PB material is not prepared for what is eventually desired on the set, or 2) more frequently, a live-record is used as the playback master.

In both these cases, the sampling rate and bit depth must be decided.

When performing a live-record (as I did on Almost Famous, Rock Star, 8 Mile and The Hangover), I usually match the Production Mixer’s settings. This is important if timecode will be used. That’s pretty straightforward, as it’s a “closed information loop” between the Mixer and me.

When using straight PB tracks or files prepared by someone else, I also will usually consult with the Production Mixer and match rates.

However, even when you ask, you don’t always get what you requested.

The evolution of current Music Playback is that half the time I get music tracks from the Director’s Assistant off their iPhone five minutes before they want to roll. This is often the case even when I ask for a better format a few days in advance. They may provide me something in advance, but often it’s not what they ultimately want to use on set.

We are seeing a revolution in how material reaches the set, driven by computer media and smartphone capabilities. The ability to send information from a personal smartphone is conditioning people to expect any bit of information to be instantly produced. The misperception is that all information is equally available. To a person who never has to create information, only download commercially available product, there is little appreciation of the technical creative process. As a result, creative decisions that used to be made weeks or days in advance are now made on the fly to suit the creative process.

The good side is that this has allowed more spontaneous creativity on the part of the Director. The bad side is the expectation that anything can be ready on the spur of the moment. So, in this sense, with regard to prepared material provided by others, we have de-evolved to the point where probably half the playback-only projects I work on now are iPhone downloads. The first thing to suffer is audio quality, of course.

When prepping a film, television show or commercial, I still ask for WAV or AIFF files when possible, plus an audio CD backup. A good conversation with the Editor (if there is one at that point in the film) can also be valuable.

If timecode will be used, I match the desired rate, which is of course dictated by the camera format; if there’s no TC, I follow the Mixer’s preference. With the aforementioned “iPhone” transfers, I’ll convert them to the preferred formats.

In live-record situations, much the same applies. Obviously, the higher the sampling rate and bit depth, the better the sonic quality. However, digital conversion must be considered, because converting from one sampling rate to another, whether up or down, degrades the sound quality. For that reason, I’ll normally record at the highest sampling rate that I think will ultimately be used. Balancing the highest quality sound against the convenience of various formats will continue to be an issue.
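
For what it’s worth, here is a minimal Python sketch of one such conversion, taking a 44.1 kHz consumer file up to a 48 kHz production rate. SciPy and the soundfile library are assumed and the file names are invented; a real session might simply use the DAW’s own conversion.

from math import gcd
import soundfile as sf                      # assumed I/O library
from scipy.signal import resample_poly

TARGET = 48000
audio, fs = sf.read("track_from_iphone.wav")   # e.g., 44.1 kHz

if fs != TARGET:
    g = gcd(TARGET, fs)                     # 44100 -> 48000 reduces to 160/147
    audio = resample_poly(audio, up=TARGET // g, down=fs // g, axis=0)

sf.write("track_48k.wav", audio, TARGET)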

I’m expecting the next stage of this evolution to be direct brain scan downloads off the call sheet.

Happy Playback.


Glossary for highlighted word

Live-Recording The process of recording a musical performance on set rather than having the players mime to the playback of a studio session. Sometimes a live-recording will be used to generate a playback master that is immediately put into service to shoot alternate angles and closeups.


From My Perspective:
Music Playback and Live-Record

by Joseph Magee, CAS

As Local 695 professionals, we hear a lot of crazy things at work, and no, I’m not talking about that sick old generator staring at you fifty feet from set.

Have you ever heard one of these gems?

• Your Producer says, “I have a friend who knows Pro Tools and should do playback,” and the Music Supervisor says, “Right on, man.”

• Your Producer appointed to watch over the musical scenes in the film wants it all recorded live, with no tempo glue for editorial. He says, “That’s the only way to have a real performance, no click ever, live pre-records and live on the day. Our Editor will make it all work in Post.”

• The Director has a relative with an amazing home studio (in his garage) for pre-record. “The tracks will rock for sure. They will prep everything.”

• The UPM tells you, his faithful Sound Mixer, that music playback needs to happen the next day without a hitch; we don’t have a track yet. “Also, we don’t have a budget for a music playback person so you guys figure it out. Remember, you have a whole trailer full of gear I’m paying for.”

• The Music Supervisor has an MP3 they will email you sometime soon; it’s all good.

• And last, but not least, the Film Editor wants you to get playback timecode on the slates because that’s how he used to do it when he was doing music videos.

Oh brother!

I’ve been privileged to work on production music for feature films for more than two decades now. Before coming into the world of on-camera musical performances, I recorded classical and jazz records and broadcasts, worked as an orchestral scoring mixer for features and mixed front-of-house live sound for large venues including the Hollywood Bowl. Over the years, I’ve developed a keen sense of the procedures that facilitate a smooth production and the elements that enhance an artist’s ability to give a great performance. My projects have given me the chance to work in feature film pre-production, prerecord, production and post with many acclaimed music producers, composers, musicians and recording artists all facilitating the filmmakers’ vision. I do believe I have a unique perspective that starts from the very beginning and extends to the bitter end in final Post.

Although every project is slightly different, each usually starts with the music team, Director and Producers visualizing how the scene will play, then planning so that all the elements are in place on the shoot day. This is essentially the same as for any other scene in a feature film, except that a music performance adds the complexity of managing creative work across three separate periods: the initial composition/rehearsal/pre-record, the on-set performance to camera, and the creation of the scene in Post. Unlike the rest of the film, however, these three distinct periods are tied together by synchronous performance locked to the established timeline of the music track. This makes the music scene full of its own technical and artistic challenges.

How a production approaches the pre-record sessions influences the success of the whole venture. A good pre-record session should take place with awareness of how the scene is to be shot and the pace of the performance should mesh with the demands of dancing, screen action and other visual elements. Ideally, the same singers who appear on camera should record their own performances for playback (PB) tracks. It’s more natural for actors to match their own performances rather than a hired studio singer. The transition from dialog to music to dialog is more believable if the voice is the same throughout. And, if done well, the pre-record functions as a first rehearsal for the scene. It should be executed long enough in advance so that the musical performance can “season” in the actor’s brain for at least a few days.

The ideal scenario is to execute pre-records that will make it to final dub. During my many features with Disney, this also proved to be financially prudent. Yes, the tracks will be sweetened, edited, fixed to picture and stem mixed in the film’s final theater presentation. But the musical, artistic content will be set and adhered to, creating the exact intention of the musical moment, the storyline and the actors’ performances.

A synth track mock-up will not achieve this; it may get you through the day but that’s about it. The mock-up has a very good chance of not feeling the same, or sounding anywhere as good as the final track. The hastily assembled temporary track does a poor job of conveying the emotions of the scene for cast and crew—a sure recipe for a lifeless performance. Even if the track exactly matches every beat and every note, music is a “feel” thing and if the performers don’t feel it, the audience in the theater likely won’t either. The substitution of better music in Post might improve the scene technically but won’t do anything to breathe life into the unmotivated performances during production. I’ve found this to be a common theme—time spent in preparation makes filming go better and lessens the need to spend time in Post fixing mistakes.

A well-prepared playback should have vocals that are dry and relatively free of compression or processing. Vocal FX should be available as separate stems and mixed to the environment on the day. A believable music scene requires natural bridges between dialog and music, and the performer can best deliver these transitions when every syllable of the recording can be easily heard in the playback. Pro Tools is the industry-standard software/hardware for feature films. The sound FX, dialog and music teams all use Pro Tools, and it is the standard on the dubbing stages as well. So it saves a lot of time if Pro Tools is also the software of choice for on-set music playback. The technical platform for communication from the beginning to the end of a production should be standardized; when someone chooses different software, it just creates conversion issues. Fortunately, Pro Tools is easily accessible on many levels and with many types of hardware. The one exception to this standard is often the score composer’s personal studio, but this can be worked out by conversion to Pro Tools before the score leaves to see the outside world.

The Pro Tools session that goes to set for music playback should have the music locked to a bars/beats grid. This will enable very quick edits if you are called upon to create magic while a 1st AD waits, not so patiently. The grid is easily achieved in advance, not so quickly on set at the last minute. I also believe in using your prep time to print a click and thump track, beginning to end. Even though your grid is functioning and your click is a plug-in firing off the grid, it is easier to show and cut a visual region when folks are at the rig trying to work out cues. We are lucky today that most choreographers and music folks have common ground in Pro Tools and are able to use the visual aid of the screen to communicate with each other. I also have my memory locations already set for song structure before anyone steps to my screen to talk cues. Another detail most often missed in session prep is that PB timecode should advance to a new hour for each different song. This will help Editorial in the long run.
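
That hour-per-song convention is trivial to plan during prep; a toy Python sketch (the cue list and starting hour are invented) shows the idea:

songs = ["Opening Number", "Ballad", "Finale"]   # hypothetical cue list
BASE_HOUR = 1                                    # assumed first playback hour

for i, title in enumerate(songs):
    print(f"{title}: playback code starts at {BASE_HOUR + i:02d}:00:00:00")
# Each song owns its own timecode hour, so takes are unambiguous when
# Editorial sorts them later.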

I believe that a music-intensive show should not rely on PB timecode on an audio track alone. An Avid Sync HD I/O should be used on films with music-intensive scenes, and the device should be synchronous with a video sync reference. The good news is there are a few ways of meeting this requirement that now make the on-set hardware complement much lighter.

In many situations live-music-record is very important. Combinations of music playback and live-record performances, if executed properly, are often worth their weight in gold in Editorial. Even a few words of live-record cut into the pre-record in Post enables the audience to believe the musical performance in the final cut.

On the other hand, a show built from all live-record can be a disaster in Post. Folks giving their accounts of “all live-record” shows don’t always tell the whole story. Often these shows require extensive editing and pitch work to correct meandering tempos and modulating keys. I have worked on a long list of projects with well-meaning Directors who have gone down this road from the excitement during production to frustration in Post.

If you do have to go “all live” during production, you’ll need to provide the performers some sort of mapped tempo either using a click track through earwigs or a thumper or both. If the singing is a cappella, you’ll also need to play a pitch reference at the right moments. Even so, some key modulation and tempo variations are likely to occur.

Modern earwigs are very useful although limited in volume and fidelity. I started doing this on-set work back in the day, first with earwig inductance loops taped into the set, and then with neck loops, so I am comfortable explaining the current limitations to talent and creating an environment that helps the devices do their jobs. For example, when transitioning from speaker playback to earwigs and back to speakers, I like to leave the thumper running at a very low level the entire time. The pulse provides the “rhythmic glue” that ties the separate moments into one seamless feeling. A thumper quietly pulsing away also helps keep the full-range speaker volume lower throughout the day.

Active eighteen-inch subwoofers today are very affordable and do a great job. The thump sample itself is easily tuned in Pro Tools, either in advance or on set; I have used the same sample for many years. With the current state of the art in active loudspeaker design, I think everyone should take advantage of better fidelity playback on set. A speaker system with higher-than-average Total Harmonic Distortion (THD) and poor crossover points is fatiguing to the cast and crew. When music plays on set and sounds great, the day goes by more smoothly. It’s easier for performers to follow lyrics that are clearly articulated, and better fidelity helps them “feel” the music and translate that energy to the performance. New, high-quality designs are affordable and durable. Passive speakers with amp racks on set, and drive racks with crossovers and EQs, are basically a thing of the past; I worked through those days and am happy not to use that gear anymore. If a production requires very high sound pressure level (SPL) playback, or on-set monitor mixing becomes critical, I recommend employing a professional touring company to join the team.

The Playback Engineer should try to coordinate his efforts with both Editorial and the Production Mixer; a conversation with each before the assignment starts can sort out issues and make the process smoother. This is the best time to bring up the issue of playback timecode. Having time-of-day (TOD) code and playback code married and available in burn-in windows for Editorial is the best way to load and edit synchronous music playback scenes. When loaded correctly, endless hours of sliding sync or making on-the-fly corrections are completely avoided for the editorial team.

This production workflow is easily accomplished. For the Production Mixer, it’s only necessary to print the PB timecode as audio on one track of the multi-track and the mono music playback reference on another; the multi-track is already synchronous with TOD code.

The media management company contracted for dailies and editorial workflow can then easily meet the need for PB code in a second window, if requested. On a show where Editorial is taking your tracks directly, they can create the second code window on their own. Either way, it will save numerous days of questionable sync work.

The relationship between the Avid assistant and the Playback Engineer is vital to maintaining sync in the music scenes. The initial conversation between Playback and the Assistant Editor responsible for loading each day’s work into the Avid will set the tone between departments.

The Playback Engineer should provide Editorial a master playback 48 kHz, 24-bit stereo interleaved file for each musical piece performed. The file should be created from the exact playback session and carry a positional timecode reference identical to the day’s playback work. This file, with the correct timestamp, will enable the correct loading of all of the takes with music playback timecode. Sent at day’s end, the file, labeled PB Edit Master, should go directly to the Avid assistant editor; I deliver this file via Aspera, with explanations regarding the use of the playback in the scene.
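As a technical footnote: in a Broadcast Wave file, that positional timestamp is carried in the bext chunk as TimeReference, a 64-bit count of samples since midnight, which is what lets the Avid drop the file at its true position. A minimal sketch of the arithmetic, assuming a 48 kHz file and a hypothetical 01:00:00:00 session start:

SR = 48000  # sample rate of the PB Edit Master

def tc_to_time_reference(tc: str, fps: int = 24, sr: int = SR) -> int:
    # Convert 'HH:MM:SS:FF' (non-drop) to a BWF TimeReference sample count.
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    seconds = hh * 3600 + mm * 60 + ss + ff / fps
    return round(seconds * sr)

print(tc_to_time_reference("01:00:00:00"))  # 172800000 samples past midnight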

I’ve found that it takes a complete team effort to pull off a complicated PB, live-record, earwig and thumper day on set. Technology has gotten more complicated and offers more production possibilities, but it also increases the workload. Personally, my favorite shows are a team effort with playback integrated into the sound crew. Coordinating cable runs, speaker and thumper placement, music edits and session maintenance, music cues with the 1st AD, and earwigs to talent is all very doable when executed by the whole team.

In my experience, the most effective way to operate PB is to coordinate with all the departments responsible for the creative process, before stepping onto set. The Playback Engineer can act as a bridge between Production and Post Production on the music scenes, assisting workflow and maintaining accountability. From my perspective, an effective Playback Engineer is always prepared before coming to set each day. Wise colleagues in Production and Post should bring him aboard early enough to make those preparations.

 


Glossary for highlighted words

Stem: A mix of multiple audio sources. Example: a blend of music and effects, without dialog. The use of a stem allows complex source material to be treated as a single unit in the final mix or as a temporary part of the process of editing and recording audio.

Live-Recording: The process of recording a musical performance on set rather than having the players mime to the playback of a studio session. Sometimes a live-recording will be used to generate a playback master that is immediately put into service to shoot alternate angles and closeups.

Earwig: A miniature monitor designed to fit within the ear canal like a hearing aid.

Thumper: A playback system to reproduce the beat of music as a series of low-frequency thumps. The tones are typically about 40 Hz so they may easily be removed from a track without harm to recorded vocals. A special thumper speaker system optimized for low-frequency reproduction is used to play the track. The thumps permit performers to follow the beat of the music without musical playback that might interfere with dialog recording. Originally invented by Hal and Alan Landaker for Warner Bros. Studios. (See 695 Quarterly, Volume 2, Issue 1, Winter 2010)
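Because the thump sits so far below the voice, the cleanup this entry alludes to is a straightforward high-pass filter. A minimal sketch in Python, assuming scipy is available; the cutoff and filter order are illustrative choices, not a prescription:

import numpy as np
from scipy.signal import butter, sosfiltfilt  # assumption: scipy installed

SR = 48000

# Steep high-pass: well above the ~40 Hz thump, below the vocal range.
sos = butter(8, 80.0, btype="highpass", fs=SR, output="sos")

dialog = np.random.randn(SR)        # stand-in for one second of a real track
cleaned = sosfiltfilt(sos, dialog)  # zero-phase, so dialog timing is untouched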

Aspera: A company making software to facilitate transfers of large data files.

Cinegear Expo

Paramount Studios hosted Cinegear Expo for three days this year from May 31 to June 2. More than 250 companies set up booths along the New York street sets and in half a dozen soundstages.

Although the expo is primarily a camera and lighting event, Trew Audio had a booth, and Sound Devices was also present to show its recorders, including the new Pix 220(i) and 240(i) audio/video recorders.

There were also many companies whose products are useful for professional sound and video techs. They included Filmtools (tools and accessories), G-Technology and SanDisk (hard drives and digital storage), IDX (batteries), Insurance West and Insure My Equipment.com (insurance), Marshall Electronics and Nebtek (video monitors), Packair Airfreight and Global Express (cargo expediting), and Studio Carts and Innovative (equipment carts).

Exhibitors of interest to people involved in video assist and data asset management included AJA Video Systems, BlackMagic Design, Codex, EVS and Light Iron.

The Annual J.L. Fisher Barbecue

A Boom and a BBQ

by Laurence B. Abrams

The Fisher microphone booms that we use in production today are the evolution of a design first manufactured by James L. Fisher in 1951, when he was working in the shop at Republic Studios in Studio City.

So successful was his design that it is the only one to survive that era … and after evolving somewhat since then, it is the only major studio boom in use today. The studios needed microphone booms that could hold the heavier mikes in use at that time and that would also permit the operator to swing and extend the arm and cue the mike as needed. These studio booms used a system of sliding weights to keep the boom arm balanced as it was extended or retracted to follow action. Mole-Richardson and several of the sound shops at the motion picture studios, such as Paramount, 20th Century Fox and Republic, had each developed their own proprietary studio booms. Thinking he could do better, Mr. Fisher began working on his own design in his spare time and came up with a boom that turned out to be lighter and more functional than the competition.

After a long career developing and manufacturing sound and camera booms, Mr. Fisher passed away in 2005. But more than 60 years after its introduction, his booms are still in use today and were on display at Fisher’s annual Open House and BBQ Lunch, held this year on May 18. Now in its eighth year, this all-day free event was conducted at the Fisher facility in Burbank and featured product displays from a variety of camera, grip and lighting equipment manufacturers along with Fisher’s complete product line, including of course, the full selection of Fisher microphone booms.

As in the past, Local 695 Microphone Boom Operators Andy Rovins and Laurence Abrams were on hand to demonstrate the 23-foot Model 7 boom arm and Model 6E base. Lots of Local 695 members came by during the day to chat, share production stories, do a little networking, and jump up on the boom to give it a quick run. Some of the folks who stopped by were experienced Fisher boom ops from way back and some were seeing it for the first time. Plenty of camera operators and grips and electricians came by, as well, and got a chance to try out the boom for themselves and gain some new insight into what we do. More often than not, they’d jump down and say something like “Hey, this isn’t as easy as it looks!”

The company’s current president, Jim Fisher, son of the boom’s designer, offered guided tours of the facility and machine shop. Fisher sales reps Frank Kaye and Cary Clayton were there to answer questions … and there was plenty to eat and drink, with food trucks and BBQ grills serving burgers and dogs, chicken and steaks, and our personal favorite … BBQ pizzas.

If you missed it, watch for next year’s announcement and when you’re there, be sure to stop by to say hello. If you still need to learn how to use the Fisher boom, be sure to take advantage of Local 695’s unique Fisher Microphone Boom: One-on-One Intensive training program. To sign up for a personal training session, see www.local695.com/mbr/edu-fbt.php for details.

How I Spent My Summer Vacation

By Jim Tanenbaum CAS

Editor’s note: An abbreviated version of Jim Tanenbaum’s story about his recent journey to Viet Nam appears below. Jim’s complete, lavishly illustrated 150-page journal, detailing his encounters with poltergeists in two of the three hotels and the novel recording techniques invented by a Vietnamese videographer (which Jim has not yet dared to try), is available to read in PDF format. (If photo reproduction is poor, save the downloaded file to disk and view in Adobe Acrobat.)

In 2010 and 2011, I spent autumn in Beijing, China, at the BIRTV (Beijing International Radio and TeleVision) trade show, courtesy of John and Nina Coffey and some of the companies they represent. I was looking forward to going back again in 2012, but alas, it was not to be. Probably because I told all and sundry what a great time I had before, the owner of one of the companies that helped defray my expenses decided to go himself instead of sending me.

Of course, I was not happy about this turn of events, as I love traveling, especially when someone else foots the bill. To me, the most interesting aspect of being in another country is the people there. Second is the food, and a distant third are the museums, palaces, and all the other touristy stuff. I do go to see those places, but they’re at the bottom of the list. However, I was looking forward to seeing the Great Wall this time.

My disappointment was short-lived, however. Soundman Steve Miller was looking for a replacement to take over his teaching position in Viet Nam, and Laurence Abrams (who creates the great diagrams for my 695 Quarterly articles) recommended me. The client was VTV (Vietnam TV), the government-run national TV network. The rest, as they say, is pho (Vietnamese rice-flour noodles, pronounced more like “fuh” than “foe” or “poe”).

My travels and adventures are far too extensive to fit in the print version of the Quarterly, but will appear here soon in the unabridged version.  Check back to find out what happened when I asked for a “hot dog” in Viet Nam or my attempt to climb the “Stairway to Heaven” to see the Buddha.

Here are a few brief excerpts:

1. The wrap party for my Da Nang class was held at a local restaurant. When I arrived, all the students were there, seated at a long table. I was greeted by a large poster with my picture, and my name spelled correctly (unlike China, where a large red banner read “James Tanen Baum” and my exhibitor’s badge had yet another misspelling).

This dinner lasted much longer than the one in Ho Chi Minh City, with courses separated by just enough time that I was never sure if there would be another one.  

Finally, the meal was over, but I wasn’t taken back to my hotel. Oh no, now there was going to be a “Karaoke Party.” My protests that I only worked “behind the microphone” were to no avail. The karaoke unit did have songs with English lyrics, but the remote control was malfunctioning, and even with repeated banging by the operator, it failed to produce any songs I was even remotely familiar with. I had to make do with an a cappella rendition of…

2. My teaching style was “foreign” to the students in several ways. I use elements of Zen in teaching, and also real-world examples to aid in understanding what would otherwise be sterile academic concepts.

“Imagine you are at the beach, and the tide is coming in. If you stick a surfboard in the sand and stand behind it, will your feet get wet? Of course they will, because the water will simply wash around the narrow obstacle, just like low-frequency sound will. And when the waves crash against the board, they will knock it down even if you try to hold it upright, just as low-frequency sounds will push and pull on a flimsy wall to pass through it. (Actually, the original sound waves will be stopped by the wall, and new ones generated on the other side, but you get the idea.)

“Now imagine that kids are throwing rocks at you. Will the surfboard protect you if you hide behind it? Yes, because it can easily stop the small rocks, which cannot go around it, just as the small high-frequency sound waves are blocked. And you can hold the board upright when the rocks hit it, just as even a lightweight wall will stop high-pitched sounds.

“Another point: imagine there’s a small hole in the surfboard—a rock can pass through without losing any of its energy, but only a small amount of the water in a wave can get through. A large amount of high-frequency noise can enter through a small opening, but only a small amount of low frequency can get in, provided the wall is rigid enough to prevent flexing.”

This not only teaches about acoustic shadows, but also gives the students the meta-knowledge to handle any specific noise infiltration problems I haven’t mentioned in class, when they are out shooting in a practical location.

3. Sunday was my last day in Viet Nam. I chose to walk south from my hotel, rather than north as I had the Sunday before. I wanted to check out the large lake near the hotel, and the interesting bridge and island temple.

On the way there, I stopped at a small park with a large statue. There are many of these scattered throughout Ha Noi and other cities. While I was taking pictures, a young woman approached me with a large sack of what had to be tourist merchandise. I motioned her away, but she was persistent. She thrust a “Viet Nam” cap at me and waved it. “How much?” I asked automatically.

“150,000 dong.” The dong is the Vietnamese monetary unit, equal to 1/20,000 of a U.S. dollar, so the cap would cost me $7.50.

“That’s too much. No thank you.” I went back to my picture taking.

She was not to be gotten rid of that easily. I should never have spoken English. Usually I speak gibberish (“bohg pretzam etza eesh”), because these peddlers know a great many languages well enough to be a nuisance. But I was distracted watching kids on skateboards with only a single wheel fore and aft, and spoke without thinking. (Maybe they have these two-wheelers in Los Angeles and I never noticed.) She removed other colors of caps from her bag. I had seen them in stores and from other street vendors, and the going price was $5 American … after you haggled them down from $20.

“How much you give?” Never, never, speak a recognizable tongue to a street vendor.

“50,000 ($2.50).”

“Too little. You give me 100,000.” She opened and closed the cap’s Velcro strap to demonstrate this valuable feature.

“No, 50 or nothing.” I put my camera away and turned to leave.

“What color you want?”

I picked out a red one, checked to see if the seams were good, and stuck it in my (very large) pants pocket. I deliberately paid her with small bills, which I keep in a separate place from the big ones like 200,000s or 500,000s.

Never, never, never buy something from a street peddler. She held out the remaining caps.

“You buy more.” It was not a question.

“No, I have only one head.” She didn’t get the joke. She put the caps back and drew out a stack of guidebooks for various Vietnamese cities. In English, but I’m sure her sack held copies in all the major languages. But even at a distance I could see they were bootleg photocopies. I spread my hands out. “No thank you.” Postcards and picture books were next.

I gave up and walked away. She followed me for a quarter block, calling out “CD … DVD … SIM Card,” then went back to her spot in the park, like a spider in the center of its web.

Jim

Nagra Memories

Editors’ note: With the invention of the Nagra recorder, Stefan Kudelski made high-quality recordings possible without the need for a truck full of equipment. He enabled location recording in the same way that the substitution of film for glass plates enabled photography. Moreover, his commitment to quality in both design and construction helped define excellence in our profession. In a continuing tribute to his contributions, we are printing accounts of first experiences with the recorder. We’ll continue to feature stories of working with the man and his inventions as they become available to us.

Jerry Zelinger:

I was starting to write about my experiences with the Nagra and was thinking only of the model III, and then it occurred to me that my earliest experience was with the Nagra II. I had just graduated from high school and was working at the new listener-sponsored FM radio station in Los Angeles, KPFK.

I was producing programs for children, among other things, and one day the production manager showed me this portable wind-up tape recorder that was donated to the station. He called it a Nagra. I had never heard of such a thing … made in Switzerland.

It certainly beat an Ampex 600 with a very long extension cord. He asked me if I could use it for any of my programs. After thinking about it for a couple of days, I came up with a concept for a man-on-the-street radio program I titled Street Thoughts (not a children’s program). At the time, man-on-the-street or M-O-S (not to be confused with “mit-out-sound”) shows were a question by an interviewer, then the answer by whomever, then the question repeated, and then another answer. My show was to be the “big question” followed by a montage of answers, occasionally inserting the question rephrased. It was only 5-10 minutes long but took hours of cutting and splicing. Boy, would Pro Tools have helped then.

That Nagra II served us well. It had great sound quality and the spring never failed me.

One of my first experiences with a Nagra III was back in 1965. I didn’t own one yet but I had a friend, Flynt Ranney, owner of Spectra-Sound Recording Studios, who did, and he was generous enough to loan it to me when I started out making films with my friend Bob Abel. I was making a little documentary with Bob about Christmas in Los Angeles.

We were shooting a Christmas Mass at the Greek Orthodox Church in downtown L.A. and for some odd reason, I had to rewind the roll of tape. I had my earphones on and didn’t realize that I was rewinding with the speaker on. Parishioners around me were smiling and nodding at me (which I thought “how nice”) but I was unaware until I took my earphones off that everyone around me could hear the “chipmunks.” I turned red with embarrassment and immediately turned off the speaker.

On another early Nagra outing, I was making another film with Bob Abel about drag racing called Seven Second Love Affair. We were at Lions Drag Strip in Long Beach and we wanted to capture the incredible sound the dragster makes as it accelerates when you’re sitting in it. No wireless mikes could do the job (not then), so I put the Nagra in the nose of the dragster and used an Altec 21-BR-180 high-level condenser mike capable of handling the 150 dB sound levels (I had to build a battery power supply for the mike). The dragster roared off the starting line and we all prayed that it didn’t crash or blow up (we didn’t have the $1,800 to replace the Nagra).

I still remember that sound: like a rocket, and then the parachute is released, and just silence and the sound of the tires on the gravel.

Obviously, I didn’t tell Flynt about putting his Nagra in such a precarious situation. And I still have the recording.

I eventually bought a brand-new Nagra III from Ron Cogswell at Ryder Sound. I do remember that I had to put down something like $200 as a down payment and that it was several months before it arrived. Ron said not to worry; if I didn’t want it when it arrived, someone else would be standing in line to buy it.

It served me well on a lot of documentaries, commercials, TV shows, some features and even some music records.

I still have it.

Kirk Francis:

It was late 1968 and I had been working for about nine months at a big L.A. ad agency, running their small recording studio— voiceovers, radio spots, etc., on big old Ampex 351 ¼” recorders. I had no real idea what I was doing but, compared to what those ad agency folks knew, I was a damned genius—some things never change. Anyway, I quickly grew tired of that and began looking for other gigs. I recorded a few bad rock and roll bands at various studios around Hollywood, but even at that young age, quickly burned out on the late nights and long hours spent indoors. Someone suggested that I get into movie sound—often done in the daytime and outdoors, every shot being different, and the pay wasn’t too bad either. Before I knew it, a trusting fellow from New York named Jim Datri handed me an elegant-looking metal box called a Nagra III, a converted Bolex mono-pod with a Sennheiser 404 on the small end plugged into a KAT-11 preamp, and a set of Beyer headphones which seemed to weigh about 13 pounds. To my studio-inured eyes, the whole rig looked like some sort of arcane scientific testing apparatus. Suddenly, I was in charge of recording sound for a motocross documentary, lugging the thing over hill and dale someplace in the depths of Orange County—and tethered to a 16mm Arri S by a sync cable, like the ass-end of a donkey at a costume ball—as dirt bikes roared around us menacingly. Good thing I was only 21 years old…

I still love those old recorders, in no small measure because they remind me of what the job I have been doing ever since used to be but sadly isn’t anymore: The crew would assemble, the director would actually make a plan, and then we’d all shoot it—usually in well under 10 hours (!). The sound crew’s task in this process was to create, as best we could, a one-track representation of what it all sounded like. A big day might involve three mikes, as radio mikes were yet to be “perfected” and the idea of shooting both a wide and a tight shot at the same time was considered to be very bad manners. Now, we have got to the point where our job is less like that of a framing carpenter and more like that of a clearcut logger. The Nagra III, IV-L, and then the IV-S were the rocks upon which our livelihoods were built. We depended upon them, and they always delivered. In my eyes they remain to this day iconic, soulful works of practical art.

 

Behind the Candelabra

There’s No Place to Hide Behind the Candelabra

by Javier M. Hernandez
(Photos by Claudette Barius/HBO)

The scene started in a wide shot and we planted two mikes just in case they started early. We hadn’t seen the rehearsal, so we needed to be ready for any possibility. In the tub, Douglas and Damon’s close-ups were shot at the same time so we covered them with two booms. A mirror behind Damon reflected most of the bathroom so we had to work from below and our mikes were almost touching the bath bubbles. Even the camera needed to be wrapped in a towel. The one thing we had in our favor was that the Jacuzzi wasn’t actually running this time.

How I ended up on my knees in Liberace’s bathroom is a tale.

I first worked with Sound Mixer Dennis Towns on the HBO series Unscripted, produced by Steven Soderbergh’s company. We then worked together on some movies Soderbergh directed including The Informant, Haywire and Contagion. Over those years, a movie about Liberace was always in the air. When the call came with an official start date and the news that Michael Douglas would be playing Liberace and Matt Damon his young lover, Scott Thorson, we all knew it would be a special project. To make it even more special, Soderbergh announced this would be his last film.

I had been casually looking at clips of Liberace on YouTube since I first heard that Soderbergh was interested in making a movie about his life. The numerous challenges this project would present quickly became obvious.

Soderbergh has his own style of filmmaking: most importantly, he likes things to be real. With this project, that meant many practical locations and sets full of mirrors. And not the “set mirrors” that you can gimbal; they would be real. And often quite large. And reflective surfaces would be the norm for almost every scene. Even Liberace’s piano and clothing were reflective. Oh, and there would be musical numbers, some involving complicated vari-speed playback and other fancy tricks.

When you work with Soderbergh, the days are short but intense. Soderbergh knows exactly what he wants to shoot; his preparation and vision are clear from the moment he starts describing the setup. Everyone on set knows what is expected of them and he hires the kind of people who can work with minimal need for explanation.

Unlike working with more conventional directors, you can’t assume with Soderbergh that you’ll get it in coverage if you miss a line or two in the master. There are also not many takes. If he likes the first couple of takes, why do it again? If he likes the way the scene plays in the master, why not play it in a oner? This often means having everyone on wires and booming only when possible.

Knowing the challenges we would be facing, I recommended to Dennis Towns that we hire Gerard Vernice as our utility. I had just worked four seasons with him on Chuck. I knew he was a master with wires, that we worked well together and that he would fit in perfectly with the pace and style of a Soderbergh film.

From day one Candelabra was a challenge.

Everything in Liberace’s wardrobe was silk, polyester, and various unknown fabrics, topped off by tons of sequins, rhinestones and noisy jewelry. It became apparent that Gerard would need to wire Douglas in his dressing room as he had to come up with something new and inventive for every outfit.

I thought I had the easy job, as I ended up with the responsibility of wiring Damon. Once we solved the dilemmas of the day with our principals, we would wire the rest of the cast. They weren’t exactly easy to wire either, as they were also dressed in period garb. Skimpy costumes, noisy fabrics, bare chests and lots of gold chains were the norm. I had it easy for a while but, as the story progressed, Damon’s wardrobe became more difficult. His character started to wear polyester shirts unbuttoned to the navel and more of those damn gold chains. Sometimes he wore nothing more than a speedo—not many places to put a wire!

We got very lucky: I was able to work a boom for most of the scenes where the wardrobe was noisy or nonexistent. But getting a boom in often meant crawling on my knees, popping up and down and even jumping over a couch in one scene. For the scenes where the boom couldn’t be in the room because of reflections, we made the wires or plant mikes work. Sometimes in this business, you just have to have luck on your side.

And that was just an average day at work.

One of the most difficult scenes started with Douglas and Damon in the hot tub. They got into a fight, got up out of the tub, walked through the bathroom to a dressing area, went into a closet, and then crossed to a mirrored vanity. Often for sensitive scenes they had private rehearsals, meaning we couldn’t see the blocking and had little time to work out any possible issues. In this case, after they privately rehearsed, Soderbergh walked us through the scene pointing out the four different spots in the bathroom and dressing area where he planned for them to talk. He said it casually, but Soderbergh knew this wouldn’t be easy for us. He trusts his crew to get the job done with minimal fuss or delay. No biggie: just wire two naked men in a tub or get a boom in without a reflection in a bathroom filled with shiny objects.

As I described at the beginning, we had two plants to cover the wide shot and worked two booms from the soap suds for the matching close-ups. When Douglas got out of the tub, I was still on my knees, booming from underneath as we were still limited in where we could be. Then we cut to the shot of Damon in the tub with the champagne bottle in the foreground. The huge mirror behind Damon required him to be on a plant mike. Douglas then crossed into the closet to put on his robe where Gerard was waiting with a boom to get his offscreen dialog.

Douglas then re-entered the bathroom and went to the vanity. As he made the cross, I came in underneath to get his lines. And then things got interesting: the rest of the scene played out in one take.

At the vanity, we were shooting into the mirror and Douglas was speaking into a plant mike while Damon’s off-screen lines were on the plant mike by the tub. Damon then crossed to the closet where Gerard was still waiting to boom Damon’s lines as he got dressed. Damon then walked back into the room where his lines were picked up by a plant mike by the doorway. As Damon walked toward Douglas at the mirror, I picked him up on the boom, still from underneath, and then the camera panned from the mirror reflection into an over on Damon. At this point, I was able to boom both actors from underneath as the camera moved from the over on Damon, past Douglas’s back and more mirrors, into another over, this time on Douglas.

Although it’s only part of one scene, this shot required two booms and three plants.

Oh, by the way, did I mention that Soderbergh doesn’t use a video feed so Dennis had to mix all of this blind?

While the tub scenes involved the most mikes and presented some unique challenges, I still felt lucky to be able to boom at least some of the dialog, even if I was on my knees the whole scene. You see, as a boom operator, sometimes the hardest thing is to rely entirely on wires. There can be a helpless feeling in the pit of your stomach as the cameras roll because, if something doesn’t work, you’re not able to fix it on the fly.

On Candelabra we dealt with this on a regular basis. Sometimes it was due to wardrobe, sometimes the sets and sometimes because Soderbergh wanted to shoot a long scene in a wide shot oner.

Liberace’s wardrobe presented unique challenges with every different shirt, cape or wig. Each change of wardrobe required Gerard to go to Douglas’ dressing room and come up with something new and inventive. The “backstage” scenes would be the first time Liberace was in full performance wardrobe. We had a chance to look at the wardrobe the day before but, frankly, seeing it didn’t help much; it just added to our concerns. Gerard went off to wire Douglas, not really knowing what the solution would be, but he was smiling when he returned to the sound cart. At first he had trouble finding a quiet place to put the mike. The jacket was quite tight fitting and made of a very noisy material. Then a brooch was added and Gerard quickly put a Countryman B6 with a small amount of butyl gum adhesive behind the brooch. The butyl served two purposes: it held the mike in place and isolated the mike from touching the brooch itself. Instead of trying to work around all the necklaces, jewels and sequins, Gerard decided to use them in his favor. Often he threaded the B6 mike through one of Liberace’s many necklaces and placed the element within a link or charm, leaving the mike concealed, yet out in the open. Doing this helped us achieve the cleanest audio by allowing us to place the mike in a perfect spot for dialog while minimizing clothing rustle and rubbing.

We shot many scenes at the LVH in Las Vegas. The set designers and their crew meticulously dressed Liberace’s penthouse to look as it did back in the 1970s when it was called the Hilton Hotel in Las Vegas. Did I mention Liberace’s love of mirrors yet? The very last scene we shot in the penthouse was another of those scenes where we would have no choice but to rely mostly on the wires.

It was a Friday night and we had been having a good day. Most of the scenes were in the bedroom and we had been able to get it all on the boom. Looking at the sides, I knew we had an almost three-page scene coming up in the living room area. I was just glad it was no longer playing in the Jacuzzi area as it had originally been written. They had planned a small party after wrap. After shooting in Liberace’s penthouse, we were going to get to socialize, relax and enjoy the view from the top floors of the LVH. The party was scheduled to start at 9 p.m. It was about 7 p.m. as we set about blocking the scene. This gave us about two hours to set up, rehearse and shoot a three-page scene. The living room was in typical Liberace style: mirrors and windows and a ceiling covered with recessed lighting.

As Soderbergh started walking with the actors and talking about the scene, it became apparent that most of the scene would be done in a oner. A wide oner. The scene was set up as follows: Douglas and Damon would walk into the penthouse arguing, with dogs barking at their feet, and then walk over to the bar where Douglas was to make a drink. Then they would both walk over to the couch where they continued arguing as they sat down. At the end of the scene, Douglas would come over and give Damon a hug. Soderbergh then confirmed to me that he planned for the scene to be a oner until the end, at the couch, where he intended coverage for the last couple of lines.

While bringing in food for the party, which would be held in the penthouse next door, one of the guys from craft service asked me, “So, how we doing?” I told him we had a three-page scene left to shoot and he replied, “Well, I guess we are not starting the party at nine.” I asked him why. Did he think we couldn’t finish three pages in an hour and a half? And then I reassured him. “I’m sure we will be done in time for the food to stay fresh.”

Since the shot would involve a big dolly move throughout the penthouse, the camera guys rehearsed the move a few times while Gerard and I wired our actors. Damon’s shirt was made of polyester, but he only had one chain for this scene, so I knew I could make it work. I used a vampire clip and a little piece of moleskin to help lift and isolate the mike from the shirt. Gerard also needed to use a vampire clip on Douglas but, being Liberace, he had a bigger chain.

The rehearsal went perfectly. It sounded so great that Gerard and I walked over to Dennis, who was hiding in the hallway behind a statue, and we started high-fiving each other. We were ecstatic that it was going to work on the wires, knowing full well that there was zero chance of getting the boom in for this wide, constantly moving shot. During our celebration I noticed a discussion going on around Damon, so I walked over to see what was going on. They were adding more gold chains. I knew it had been too easy.

After wardrobe had added those extra gold chains, “we” were ready to shoot, but I needed a couple of minutes to find a way to make Damon’s wire work as well as it had in the rehearsal when he had only the one chain. Now one of the chains was right on the mike and I didn’t have many options. I moved the mike higher, fitting it between the chains. I then put the vampire clip behind a button, using a white mike and a white clip in the hopes that it wouldn’t be seen on the white shirt. It was right on the edge. As Gerard would often say: “We are flirting with disaster.”

As I was walking along with the dolly, I realized what a great shot it was. The camera was seamlessly following Damon and Douglas through the beautiful penthouse, with its mirrors and large windows. It’s another example of the kind of shot that Soderbergh is so good at designing: a shot where he can create dynamic action and allow three pages of dialog to just flow naturally. And all I could think was “We better get it. This is a great shot.” It was sounding great, and the whole time I couldn’t take my eyes off Damon’s shirt, looking for any chance that the mike might become visible as he moved. Douglas sounded great; even though his chain moved a little bit, it wasn’t on his dialog. Everything was working.

When we cut there was a long beat and Soderbergh said: “That was great. I have it.” Douglas and Damon had a little conference. Soderbergh was happy with the take and so were we. Personally, I didn’t want to do it again. It was perfect; doing it again felt like tempting fate. They decided to do one more, for protection. The second take was OK, as I could hear a little bit of the chains; it wasn’t bad, but not as good as Take One. Unlike many directors who might “chase the dragon” in search of another perfect take, Soderbergh realized he had what he wanted in Take One, so we moved on. We did a couple of closeups for the last lines. And that was it. The three-page scene was done. It was 8:45 p.m. and the party would start on time.

The last scene in the movie was also the last scene we filmed. It involved Douglas flying up to a piano high on a platform where he would sit and sing a song. Since it was a fantasy, there was no handheld mike, unlike in the other performance scenes. He would then stand up from the piano, say goodbye, and fly away. This was a complicated scene involving a big dance number, a flying rig, and recording Douglas singing live. On our day off we spent the day rehearsing the scene.

It was great to get to see Douglas in his wardrobe in advance. Unlike some of his other performance outfits, this one didn’t have a brooch that might hide the mike, and yet it couldn’t go on the jacket: Douglas would be wearing a flying harness and the chances of the mike picking up clothing noise were too great. We all looked at each other and said, “It has to go in the hair.” Going into this project, we had thought that a mike in the hair would be something we would use a lot, but it had never worked out before because Douglas’ hair was too short in the back and you could see the cable. For this outfit he had a Dracula-type collar that stood up and would hide the cable for us. Gerard had a quick word with the hair department and they agreed to help us put the mike in Douglas’ wig. They were out of New York and had the kind of theater experience to do a great job.

Having a day to rehearse was a great luxury; it gave us time to spot the problems and work them out without being under the stress of shooting. We worked it out that Gerard would wire the wig and the hair department would help hide the cable. The mike was hot, Douglas was put in the flying rig and away he went. When he got to the piano and started singing, I was so relieved that it not only worked but sounded great. It had to work: there would be no adjusting the wire or getting a boom in, and a plant mike just wouldn’t work. We had tried a plant mike in the piano, but it was too noisy, picking up the “clink” of the piano keys being pressed.

It was an emotional day for everybody. It had been a challenging show and the end was near. Would it be Soderbergh’s last film? As Douglas soared up into the air, I was able to step back and enjoy the magic of movie-making. I just felt lucky to be a part of this film. Despite the crazy day-to-day problem solving, this was the most fun I’ve had on a job in a long time. And none of this even mentions shooting in Palm Springs and Las Vegas in weather so hot the cameras had to be wrapped in ice packs. I went home exhausted every night but proud of the work we were able to do.

The Nagra Recorder – Stefan Kudelski Tribute

A Tribute to Stefan Kudelski and the Nagra Recorder

by Scott D. Smith, CAS

Long considered the “gold standard” for location sound, the Nagra recorders established a level of technical superiority and reliability that to this day is unmatched by almost any other audio recorder (with the possible exception of the Stellavox recorders, designed by former Nagra engineer Georges Quellet).

With the death of Stefan Kudelski in January of this year, this would seem an appropriate time to look at the history of the Nagra recorders and the man responsible for their huge success.

The Early Years

It should probably come as no surprise that Stefan Kudelski was destined for great works. Born in Warsaw, Poland, on February 27, 1929, to Tadeusz and Ewa Kudelski, he made clear to those around him early on that he possessed a level of intelligence and ambition exhibited by few other young men his age. His father had studied architecture at Lvov Polytechnics, but later went into chemical engineering. His mother was an anthropologist. Despite this, his childhood years were far from idyllic. With the Nazi attack on Poland imminent in September of 1939, the 10-year-old Kudelski and his family fled Warsaw, first to Romania, then to Hungary, and finally to France. He resumed his high school education at the Collège Florimont in Geneva, and later studied electrical engineering at the Ecole Polytechnique in Lausanne, Switzerland.

As with many successful endeavors, the idea of a portable tape recorder did not come to Kudelski directly. His initial interest was sparked by the terribly inefficient work he saw being done at a machine shop in Geneva, where each piece was turned by hand. Realizing that much of this repeatable work could be done by automation, he set about designing what would have been one of the first CNC machine tools. However, he lacked a method to record and store the data necessary to control the motors, and began to look at magnetic recording as a possible medium for data storage.

After dismantling an old recorder to study its design, Kudelski then set about designing a new recorder from scratch. This recorder would be destined to become the Nagra I. However, as the son of a poor refugee family, he was unable to interest anyone in his CNC machine tool project, so he turned his focus to designing a recorder suitable for broadcast use.

Working from his apartment in Prilly, he managed to scrape together enough money to design and build a prototype machine. It was an instant success, and he sold his first machine for the sum of 1,000 CHF. (While this only amounted to about $228 USD in 1952, it was still a significant amount of money for the young Kudelski.) This initial sale was followed by orders from both Radio Lausanne and Radio Geneva.

In May of 1952, on the heels of interest from some well-respected European reporters, he received an order for six Nagra I’s from Radio Luxembourg, which convinced Kudelski that he was on the right path. It was at this time that he left the Ecole Polytechnique to pursue development of the Nagra full time. (Years later, he would receive an “honoris causa” degree from the Ecole Polytechnique in recognition of his work in developing the Nagra recorder.)

By the end of 1953, Kudelski had established manufacturing operations at a house in Prilly (west of Lausanne) and employed a staff of 11. Toward the end of 1954, improvements were made to the machine (now called the Nagra II), with printed circuit boards being implemented for the audio electronics. The orders continued to roll in, virtually all from word of mouth, and by the end of 1956 the staff numbered 17. Despite this success, Kudelski recognized that there were still improvements to be made, especially in the area of the drive mechanism. He continued development of the machine, but opted for a ground-up redesign, as opposed to the incremental changes between the Nagra I and II. The result was the Nagra III, introduced in 1958.

The Nagra III Makes Its Debut

The design of the Nagra III marked a significant departure from the Nagra II. Gone was the spring-wound drive mechanism, replaced by an extremely sophisticated servo-drive DC motor. Also absent was the tube-based amplifier circuitry. In its place was a series of modules, each encased in metal, which contained the individual components of the machine. It also sported a peak-reading meter (the “Modulometer”), which set it apart from most of the other recording equipment of the period, which still relied on VU meters. It was designed for rugged operating conditions and could be powered from 12 standard “D” cell batteries.

Acceptance of the Nagra III was almost instantaneous. Some 240 machines were built in 1958, and in 1959, the Italian radio network RAI (Radio Audizioni Italiane) ordered 100 machines to cover the Olympic Games in Rome, paying cash in advance. With this rapid expansion, larger premises were acquired in Paudex (near Lausanne). Since the Nagra III relied heavily on custom-machined parts, a significant investment in machine tooling, along with skilled machinists to run the tools, was required to keep pace with orders that were now coming in from networks around the world, including the BBC, ABC, CBS, NBC and others. By 1960, there were more than 50 employees working in Switzerland, and a network of worldwide sales agents was established to support the sale and service of the machines.

Nagra Enters the Film Business

The application of portable sound recording to the film industry was not lost on Kudelski or his agents. In 1959, French director Marcel Camus used a Nagra II to record part of the sound on the feature production of Black Orpheus, shot on location in Brazil. Sensing that this could be a burgeoning market, Kudelski quickly set about designing a version of the Nagra III that could utilize a pilot system for synchronous filming (referred to as the PILOTTON system).

The early version of this system was based on technology initially developed in 1952 by Telefunken and German Television, which consisted of a single center-channel pilot track about .5 mm wide. However, the pilot did not have HF bias applied to it, which made distortion rather high and allowed it to bleed into the audio track. Realizing that a better solution was needed, Kudelski invented the Neopilot system to replace PILOTTON. This design consisted of two narrow tracks, recorded out of phase with each other, so that the pilot signal cancelled out when reproduced by a full-track head. The addition of HF bias helped reduce distortion, which resulted in minimal interference with the program audio.
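The cancellation is easy to demonstrate numerically. A hedged sketch of the principle only, in Python (a 50 Hz pilot is typical of the era; nothing here is Nagra’s actual implementation):

import numpy as np

SR = 48000
F_PILOT = 50.0  # mains-referenced pilot tone in Hz (assumption)

t = np.arange(SR) / SR  # one second of signal
pilot = np.sin(2 * np.pi * F_PILOT * t)

track_a = +pilot                             # upper pilot stripe
track_b = -pilot                             # lower stripe, opposite polarity
full_track_read = (track_a + track_b) / 2.0  # what a full-track head sums

print(np.max(np.abs(full_track_read)))  # ~0.0: the pilot self-cancels

A pilot head that senses the two stripes differentially recovers the tone at full strength, while the audio head, summing across the full track width, sees essentially nothing.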

A companion synchronizer (the SLP) was developed at about the same time, which provided a method to resolve synchronous recordings on the Nagra III. The design of the DC servo motor system provided for an elegant approach to this task, making the AC motor drive systems of the day look archaic in comparison.

The first of the Nagra III’s equipped with the new Neopilot system were delivered in 1962, resulting in a huge increase in sales. Lead times for the Nagra III grew to six to eight months, requiring yet more space for production. There were also restrictions placed on business by the Swiss government regarding how many workers could be hired, which hampered the growth of the company.

In 1964, additional office and production space was rented in Renens, with further premises acquired in 1965 in Malley. By the end of 1965, the decision was made to purchase a factory in Neuchâtel. Finally, a huge tract of land was purchased in Cheseaux-sur-Lausanne, which allowed for the construction of a dedicated factory.

Nagra IV Debuts

In 1967, the company celebrated the sale of the 10,000th Nagra III, and in 1969 it moved into its new facilities in Cheseaux-sur-Lausanne. 1969 also brought the introduction of the Nagra IV recorder, which marked yet another significant improvement in analog recording technology. While the basic transport design mimicked that of the Nagra III, the new machine used much more reliable silicon transistors and sported two mike inputs. The pilot system was also improved, with the flux level on tape being standardized regardless of the voltage present at the pilot input. The signal was also filtered, which significantly reduced the amount of noise that could bleed through into the audio track. Approximately 2,510 of the new machines were built in 1969.

Not content to leave well enough alone, one year later Kudelski introduced the Nagra 4.2L recorder. While the 4.2L offered a few improvements over the IV, they were not as significant as the changes between the model III and IV. If some industry observers were of the opinion that Kudelski had begun to slow down further development of analog recorders, they were significantly underestimating his ambitions…

If One Channel Is Good, Why Not Two?

Seeing further opportunities in the sale of machines to the broadcast and film markets, in 1971 Kudelski introduced a stereo version of the Nagra 4.2, called the IV-S. Built on the same platform as the 4.2, the machine offered many of the same features, but with two channels of recording in the same footprint as the mono recorder. It also marked the introduction of a new pilot system, called NagraSync FM, which records an FM-modulated pilot signal at 13.5 kHz between the two audio tracks. This allows for synchronous recordings without having to reduce the width of the audio tracks, and neatly solves the problem faced when trying to use the older Neopilot system for two-channel recording. It also allows for a limited-bandwidth commentary track to be recorded on the same channel, which aids in slating for production situations where a standard “clapper” slate can’t be used, without interfering with the program being recorded. While stereo recorders were certainly nothing new at this point, all the commercially available machines were bulky AC-operated recorders, giving Kudelski yet another significant entry into the audio recorder market.

1971 also saw the introduction of the unique SNN recorder, a miniature recorder using 1/8”-wide tape, but in a reel-to-reel configuration as opposed to a cassette. Like its predecessors, it also had the ability to do synchronous recording. Although Kudelski had begun development work on the SNN about a decade earlier, he waited until 1971 to bring it to market. This year would also mark the introduction of equipment destined for applications outside of the traditional film and broadcast arena.

Diversification

Whether driven by the need to invent or by the recognition that the market for portable audio recorders would eventually become saturated, it was about this time that Kudelski began to design and manufacture equipment destined for applications outside of the traditional film and broadcast market. While he had designed a recorder for military applications as early as 1967 (called “Crevette”), 1971 would mark a significant departure in the direction of the company.

On the heels of the Nagra SNN and IV-S recorders, in 1972 Kudelski introduced the Nagra IV-SJ, a two-channel instrumentation recorder aimed at scientific and industrial markets. Recognizing the application of the SNN recorder for law enforcement use, Kudelski also introduced the SNS, a half-track version of the SNN. Then, recognizing the need for a more economical ¼” mono recorder for broadcast, in 1974 Kudelski introduced the Nagra IS, originally designed as a single-speed mono recorder aimed at reporters. With a footprint and weight almost half that of the 4-Series recorders, this machine gained rapid acceptance among broadcasters looking for a high-quality, economical recorder. Like other Nagra products, variations of the basic recorder soon appeared, offering Neopilot sync for film use as well as two-speed operation. Two years later, the Nagra E was introduced, a further simplification of the IS recorder.

Despite the simplification of these products, both maintained the unique trademark characteristics of Kudelski’s design approach, and would never be mistaken for some mass-market cassette recorder.

Just the FAX Ma’am

While Kudelski was known worldwide for his unique audio design talents, somewhat less well known was his keen interest in both sailing and aviation. In fact, Kudelski established “Air Nagra” in the 1960s, which operated a few Cessna twin-engine planes used primarily to transport businessmen in the local area. Ever aware of the opportunity to bring a new product to market, in 1977 Kudelski introduced the “NAGRAFAX,” a unique portable weather facsimile machine aimed at the maritime market. While the military had a similar system in use, the NAGRAFAX was aimed at the commercial and private yacht market, and also saw use in airports, ski resorts and coast guard stations. This product would mark Kudelski’s first departure from recording equipment.

1977 saw the introduction of yet another instrumentation recorder, the Nagra TI, which offered four channels of recording (as opposed to the two channels of the Nagra IV-SJ). It also boasted a unique dual-capstan transport, which minimized disturbances in the tape path, a critical design consideration when the recorders were employed in military operations. This transport would become the basis for the Nagra TA recorder introduced in 1981. Essentially a two-channel analog version of the TI recorder, the Nagra TA had the unique ability to chase timecode in forward and reverse, and was specifically aimed at the telecine post market.

While the T Audio recorder boasted the most sophisticated transport design of any of the Nagra analog audio recorders, its complex logic circuits caused many users to shy away from it, except for telecine applications, where it had no rival. Despite this, it is still highly prized among audiophiles for its stellar tape-handling features.

Nagra and Ampex—Strange Bedfellows

The year 1983 would see an unlikely alliance take place, with Nagra and Ampex embarking on a joint venture to introduce a portable 1” Type-C video recorder aimed at the broadcast market. While Sony already had a small 1” video recorder on the market, in predictable fashion, the design efforts of Kudelski raised the bar significantly. Employing a lightweight transport and surface-mount devices, the new recorder (dubbed the VPR-5) brought a level of sophistication to the broadcast video recorder market that has never been seen since. While the VPR-5 enjoyed a brief period of popularity (with 100 machines ordered for use at the 1986 Mexico World Cup), the ever-changing “format wars” brought a premature end to its use.

Nagra and the Cold War

In yet another somewhat unlikely alliance, soon after the introduction of the VPR-5, Nagra joined with the Honeywell Corporation with the intent to produce a highly specialized recorder designed expressly for military use. However, this venture, which utilized all of Nagra’s R&D operations, never brought a product to market. The project was quickly abandoned after the fall of the Berlin Wall in 1989. The only remnant of the effort is a prototype recorder called the “RTU.” This would be the last project that Stefan Kudelski would be engaged with directly in an engineering capacity.

Despite this misstep, much was learned during the development of the RTU, and in 1992 Nagra introduced the Nagra D, a unique (and proprietary) four-channel digital recorder aimed at the film and music recording markets. While the Nagra D gained some adherents, by this time Nagra had unfortunately begun to lose its dominance in film sound to DAT technology, which had made inroads in the market while the company was still distracted by the Honeywell venture. (In fact, Nagra never did produce a DAT recorder, moving directly from the Nagra D open-reel digital recorder to the introduction of the ARES-C tapeless digital recorder in 1995.)

Despite losing some market share in traditional film sound recording to new players, Kudelski continued to design and innovate. In 1997, the company introduced a line of high-end audiophile components, starting with the PL-P vacuum tube preamplifier, and later adding the VPA mono-block tube power amplifier, as well as the MPA 250-watt MOSFET power amplifier.

Even further afield from the original focus of the company was the establishment of a division devoted to pay-TV set-top boxes for CANAL+ in 1989. This would turn into a very successful growth operation and continues to be the main business of the firm.

Nagra Today

In 2002, Nagra introduced the Nagra V hard drive recorder, which was intended as the replacement for the Nagra 4-Series analog recorders. However, despite the excellent design, by this time Nagra had lost some of its footing in the film recording market, overshadowed by the development of DAT recording in the 1980s and the introduction of the Deva hard drive recorder in 1997. Nonetheless, Nagra still enjoys a significant share of the broadcast journalism market with products such as the ARES series solid-state recorders. Currently operated as a separate entity located in Romanel under the moniker Audio Technology Switzerland, the firm continues to pursue the film recording market, with the introduction of the Nagra VI hard drive/CF card recorder in 2008. Stefan Kudelski’s son, André Kudelski, continues as CEO and Chairman of the firm.

Despite the changes in technology that have taken place in the years since the introduction of the first Nagra recorder, every sound mixer “of a certain age” I’ve spoken with can still recall the first time they used one. Likewise, the stylistic contributions made to the film business by Kudelski’s introduction of the Nagra are immeasurable. Films such as D.A. Pennebaker’s Don’t Look Back would simply have been impossible without the aid of lightweight cameras and recorders. The entire French New Wave movement, led by directors such as Francois Truffaut and Jean Luc Godard, would arguably not even have existed without the Nagra recorder and the Éclair camera. Thank you, Mr. Kudelski, for your marvelous invention.

The author wishes to thank Omar Milano for generously sharing the transcript of an interview he conducted with Mr. Kudelski. I am also grateful for the opportunity to have accepted the Wings Award on behalf of Mr. Kudelski at the Polish Film Festival in America in 2008. It was an honor.

© 2013 Scott D. Smith, CAS

Editors: We will present other articles in coming issues to explore the accomplishments of Stefan Kudelski. We invite members to submit stories and anecdotes of their experiences with the man and his recorders. Please send your anecdotes to: nagra@local695.com

Recording Les Misérables – Part 2: Implementing the Plans

by Simon Hayes AMPS
Photos by Laurie Sparham/Universal Pictures

Beginning my assignment on Les Misérables, I had some enviable, even unprecedented, advantages. I had support from the producers at Working Title Pictures and the Director, Tom Hooper, to use every resource available to achieve live recording of all the vocals without any ADR. And I had a crew of seven skilled associates to help achieve this goal, all handpicked from the best technicians I know, all excellent choices for their ability to work together as a team. But it still remained to coordinate with other departments and develop a plan for how this goal might be accomplished.

Meeting Supervising Music Editor Gerard McCann was the next step and a defining moment in the planning stage. Right away we agreed to join forces and merge his four-man department with my seven-man team. Whatever demarcation had existed, we relegated to history and agreed that the teams would share all the tasks of the daily technical grind including rigging, cabling and loading gear.

Music Supervisor Becky Bentham was also part of this first meeting. She is a legend in the UK film industry. Both Gerard and I had worked with her before and had great respect for her abilities.

The three of us discussed the project in detail and worked out a plan of attack. We would have two live pianists on set at all times. Both were part of Cameron Mackintosh’s team and had years of experience with the orchestrations of Les Mis. One pianist would work with the shooting crew and the other would be available at all times for warm-ups and rehearsal. Whichever one was on set that day would work inside a soundproofed plywood box fitted with ventilated Perspex windows so that the mechanical sound of the Korg electric keyboard would be confined. The player would wear headphones with an IFB feed of the vocal mix in one ear. The pianist was also fitted with a radio mike for direct communication with the actors via their “earwig” feed.

The piano would then be routed both to Pro Tools Rig #1 and also to the sound cart for transmission to the actors’ earpieces. Says Gerard McCann: “We had our live piano performing and three Pro Tools systems, operated by Music Editors Rob Houston, John Warhurst and myself. Simon was able to route that live piano feed into earpieces worn by the actors, who were then able to sing to live accompaniment. Our Pro Tools systems had three roles: one was dedicated to playback for tracks that required a fixed tempo, like chorus material. For the larger crowd songs, we would record a rehearsal of the ensemble cast on set on the day, and use that as a playback for shooting so that the crowd could follow along singing in the correct tempo, with this live singing recorded by Simon. This was to allow Tom maximum freedom to use as much of this sometimes rough, raw, but very real sounding live chorus as he chose, together with additional layers he might record later in post. A second machine was dedicated to recording the live vocal and piano mixes from Simon, and the third was used to turn around this recorded material almost instantly for playback.”

In working out the production sound methodology, I was keen to stick to a comfortable workflow; this wasn’t the time to be introducing new or untested equipment into the recording chain. I needed to be using equipment that was second nature to me so my attention might be on capturing performance rather than technical issues.

I chose to gang together two Zaxcom Devas, one the Deva 16 and the other a Deva 5. This would give us 26 tracks. I would give the picture editor two mix tracks to use on his Avid timeline: Mix-1 had the vocals and the mono piano; Mix-2 had the vocals only, without the piano. This gave the editor the facility to adjust the blend of voice and accompaniment as needed.

We linked the two recorders together so they would have identical timecode. The Deva 16 had the two mix tracks plus isolated mikes on tracks 3–16. Machine 2’s ten tracks were all assigned to ISOs.

After the two mix tracks, the linked machines gave us a total of 24 ISO tracks. Since we might need to use radio links for the two mono booms and the stereo boom, four of those tracks were reserved for the booms (two mono plus the two channels of the stereo boom), which limited us to 20 radio mikes. I already had two fantastic Audio Developments mixers with eight channels each, modified to supply either analog or digital signal on all the outputs, so we were well equipped for 16 tracks. We reasoned that we would need every available track only when recording the chorus, not the solo performers, so on those occasions we could connect the additional mikes directly to the Devas and use the front-panel faders.
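
For readers who like to see the arithmetic laid out, here is a minimal sketch in Python of the track budget described above (the variable names are mine and purely illustrative):

```python
# Track-budget sketch for the two ganged Zaxcom recorders.
# The numbers follow the article; the names are illustrative.

DEVA_16_TRACKS = 16            # machine 1
DEVA_5_TRACKS = 10             # machine 2
total = DEVA_16_TRACKS + DEVA_5_TRACKS        # 26 tracks in all

mix_tracks = 2                 # Mix-1 (vocals + piano), Mix-2 (vocals only)
iso_tracks = total - mix_tracks               # 24 ISO tracks

boom_tracks = 2 + 2            # two mono booms + one stereo boom (2 channels)
radio_mics = iso_tracks - boom_tracks         # 20 radio mikes

print(f"total={total}, ISOs={iso_tracks}, radio mikes={radio_mics}")
# -> total=26, ISOs=24, radio mikes=20
```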

I also ran a safety copy of the mix tracks on a 24-bit Nagra V in case of a hard disk failure on the primary machines. That covered us in the event of an equipment failure on a magical “perfect take.”

Running 20 radio microphones without intermodulation or interference is not easy. Luckily, the UK was in the middle of switching the legal film industry channels from one band to another to make way for digital television, and we took full advantage of the temporary window available to us to use both channel 38 and channel 69. As Gerard worked out the need for five different Comtek feeds—that’s right, five mixes—our special good fortune became more apparent. Our plan called for Mix-1 to be piano and vocals while Mix-2 would be piano only for members of the music department who needed to concentrate on that element. (Quite a few members of the music team kept two receivers on their belts so they could swap between these two mixes as they wished.)

Mix-3 would be vocal only for use by dialog coaches working on accents. The pianists also used this mix while listening to a direct feed from the electric piano in the other ear.

Mix-4 was a special mix that Tom Hooper and DP Danny Cohen required for themselves and the camera crew. The music was such a large part of the tempo and timing that the camera crew needed to hear the piano and voices to motivate their action. We added a talkback mike—a Shure SM58 with a transmitter—to permit Tom to communicate with camera operators and grips even during the takes.

Mix-5 was the boom operators’ headphone feed, much the same as Mix-4 but with my voice alongside the singing and piano instead of Tom’s. I was, of course, using the onboard talkback mike on my mixer rather than a handheld SM58. This permitted me to talk to the three boom ops throughout takes about lens sizes, shadows, etc. With the 20 radio mikes, five wireless headphone feeds and Tom’s SM58 transmitter, we would be using up to 26 separate frequencies at any time. The responsibility for wrangling all these frequencies fell to 1st Assistant Sound Robin Johnson. Without his skill and experience, I doubt we would have been able to run that many channels.
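
To make the channel count concrete, and to hint at the kind of intermodulation checking a frequency wrangler does, here is a small Python sketch. The frequencies are invented placeholders, not the production’s plan, and real coordination software weighs far more than simple third-order products:

```python
# Channel count from the article: 20 radio mikes, five Comtek feeds,
# one SM58 talkback transmitter.
print(20 + 5 + 1)   # 26 simultaneous frequencies

def third_order_hits(freqs_mhz, guard_mhz=0.1):
    """Find 2*f1 - f2 intermod products that land near another carrier."""
    hits = []
    for f1 in freqs_mhz:
        for f2 in freqs_mhz:
            if f1 == f2:
                continue
            product = 2 * f1 - f2
            for f3 in freqs_mhz:
                if f3 not in (f1, f2) and abs(product - f3) < guard_mhz:
                    hits.append((f1, f2, f3))
    return hits

# An evenly spaced plan is the worst case: products fall on other carriers.
print(len(third_order_hits([606.0, 606.5, 607.0, 607.5])))   # -> 4
```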

All of this equipment would live on two sound carts that could be moved around on location. We were becoming technically ready. The next step was to consider the “in-the-ear” monitors for the actors.

We considered several in-ear monitors and decided early on to use a traditional induction loop system rather than the newer radio systems. To fit within the ear, all of these systems are restricted to a very small driver that severely limits sound quality; none of the present designs sound very good. Since the units with a built-in radio receiver offered no audio advantage, we couldn’t justify their extra expense, particularly considering the number of units we would need. We concentrated our efforts on finding the best induction loop amplifiers and on optimizing the performance of the traditional design.

We confronted two problems with the available earwigs: their small drivers limited bandwidth, and they were not very loud. An orchestra, with its broad mixture of bass and high frequencies, would confuse the tiny driver and muddle the output. The problem was less acute with the Korg electric keyboard, as its output is simpler and tends toward the midrange. The pianists were a great help here, adjusting their playing accordingly, and we also adjusted the EQ settings on the keyboard to suit the earwigs.

The loudness issue was not so easily resolved. These earpieces were originally designed to assist people with hearing difficulties, not to be used as a reference while singing “Who Am I?” or “I Dreamed a Dream” at the top of one’s voice. We contacted the manufacturer and they were very helpful and supplied us with louder units. We also had them come out and make ear casts of each principal actor to supply them with custom-fitted earpieces both left and right. This helped in several ways. The custom earwigs fit deeper in the ear canal and were less visible to camera. Also, a precise fit ensured that the earpiece was optimally positioned, and its tiny outlet hole unblocked, so it could deliver its maximum output. Having both left- and right-fitted earpieces also gave the option for using both if an actor were struggling to hear. This was really a last resort because it would interfere with the actors hearing their own vocals.

We decided early on not to feed vocals into the earwigs both because of the frequency response issues and also because we would forever be discussing individual preferences on the balance between vocals and piano. This would present an impossible situation because we could only provide one earwig mix on the induction loop. But there are always exceptions—on “I Dreamed a Dream” Anne came to me after the first take and asked to wear both earwigs with the piano as loud as possible and a tiny amount of her own vocal added. Since she was singing a solo, and we didn’t need to provide earwigs to others, we were able to accommodate her.

For a couple of monumentally challenging sequences, Tom staged two actors at locations hundreds of yards apart, harmonizing together in real time but shot with separate cameras. In those instances, we fed their vocals to their earwigs so they could keep pace with one another. This created much hilarity on set as Hugh and Russell realized they could communicate with each other and began comparing progress on the setup and which camera crew might be ready first. There were other exceptions to our no-vocals-in-the-earwigs rule but we generally tried to keep the playback practice as simple as possible.

With recorders, track assignments, piano accompaniment and earpiece distribution worked out, Gerard McCann and I had a good plan for recording the vocals. But we needed to meet with Orchestrator and Music Producer Anne Dudley and her team to confirm that our efforts would meet her needs. We met her and Music Supervisor Becky Bentham at the famous Abbey Road Studios in London. They told us that their engineers would like to hear the mikes we intended to use so we set up some test sessions. The Neumann U87 is the standard condenser microphone in a music studio. Its accuracy is unexcelled and its large diaphragm produces a smooth response to rapid transient changes. The music studio also offers acoustic excellence and the ability to place the microphone in optimum position. No location recording plan we might devise would ever be able to equal that performance. But the live recording offers the advantage of immediacy and an emotional link to the acting so the operative question was whether the fidelity of our system would meet listening expectations.

I chose the Schoeps Super CMITs for our boom operators. These new microphones use DSP noise-canceling technology to reject off-axis background sound. This capability is a great advantage but demands a high level of skill from the boom operator. When the Schoeps were used in testing it became clear that, if they were in an optimum position, the kind possible while shooting a close-up, they could compete on a level playing field with the music studio mikes.

We also tested the DPA lavaliers and Lectrosonics radio mikes. In my opinion, the DPA matches the Schoeps Super CMIT more closely than any lavalier I’ve heard. During the demo at Abbey Road, the engineers, despite initial skepticism, were suitably impressed. They felt they were getting approximately 60 percent of the quality of a Neumann U87 when I believe they were expecting much less. When you consider that the studio mike is placed on a stand in the best possible position while the DPA is rigged on the actor’s chest, that is an excellent result.

Paco Delgado, the Costume Designer, was extremely helpful and collaborative in this process. To hide the lavalier mikes, he and his team supplied us with the necessary cuts of fabric from each costume and also allowed us to make the holes needed to hide cables. He encouraged us to take the lavalier rigging to a level that enabled us to record absolutely clean singing with no clothing rustle. As we started shooting, it became clear that the process of mic’ing the cast was far more time-consuming than on a “normal” film not just because of the need to match fabrics but also because there were so many radio mikes used.

It’s always my aim to deliver as natural a dynamic range as possible, so I was in full agreement with the engineers’ request that we not use any compression or limiters in the recording chain. To make the full 24-bit dynamic range available, this meant not only refraining from tripping limiters in the equipment but also not riding gain during the take. We used the Lectrosonics transmitters at a very low gain setting to ensure that the limiters would never be engaged. Historically, the higher gain settings needed with radio mikes to keep the signal above noise and artifacts meant that limiters were required to prevent overloads on louder passages. The ability of the current generation of Lectrosonics gear to capture clean signal at lower settings, even with whispered delivery, was impressive and a key reason we were able to take on the project. By agreement between the Music Department and the Sound Department, we used no limiters or EQ anywhere while recording Les Misérables.
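
A back-of-envelope calculation shows why 24-bit recording makes this low-gain approach practical; the 20 dB figure below is an illustrative number, not a setting from the production:

```python
import math

# Theoretical dynamic range of linear PCM: roughly 6.02 dB per bit.
bits = 24
dynamic_range_db = 20 * math.log10(2 ** bits)   # about 144.5 dB

# Even after backing transmitter gain off by an assumed 20 dB so the
# limiters can never engage, ample usable range remains.
print(f"{dynamic_range_db:.1f} dB theoretical, "
      f"{dynamic_range_db - 20:.1f} dB with 20 dB in hand")
```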

With everyone in agreement on the methodology, we turned our attention to the challenges of recording live singing on a movie set. We had to consider the scale of the Paris street scenes and how to manage them. Tom asked me if I would prefer to shoot the exteriors on a soundstage or on location. I knew that Tom wanted to shoot the scenes, some as long as 14 minutes, from start to finish without a cut. I didn’t see how this would be possible outdoors in a modern, aircraft-infested environment but the only stage large enough for the planned scenes, the 007 stage at Pinewood, is not really a soundstage and has poor acoustics. Just a few weeks into preproduction, Tom contacted me to tell me about a new stage being built in Pinewood—the Richard Attenborough Stage—that would be the biggest in the UK. (After our good fortune with the transitional availability of radio frequencies, we began to think someone upstairs was smiling on our project.)

Eve Stewart, the Production Designer, asked me about ways that set design could help with Tom’s vision of a live musical. I commented that for live sound we wanted reality. If they are in shot, the cobbles should be real cobbles, the oak door frames should be real oak, so that any sounds we picked up would be as authentic as possible. She took my suggestion and filled every inch of the 30,000-square-foot stage with sets built with the characteristics of permanent structures.

Our interest in solid oak and stone applied only to areas seen in the shots; outside what the cameras saw we tried to make the set and crew sonically disappear. Our efforts extended even to fitting rubber shoes on all the horses’ hooves.

For Eponine’s number, “A Little Fall of Rain,” we faced the additional challenge of recording the entire number in the rain. We worked with the Special Effects Department to get the best possible rain that would show on camera without drowning the mikes or making too much noise. We covered every part of the set not seen by the camera, every rooftop and every piece of floor, with rubberized horsehair to deaden the raindrops; an entire truckload of horsehair was delivered to Pinewood. We also had a horsehair cover to provide quiet protection for the camera and asked the camera technicians to wear black “Bolton” cloth (Duvateen) ponchos over their Gore-Tex to soak up the sound of the water hitting. We even had a second boom operator shadow the primary boom with a horsehair roof on the end of his boom pole to shield the primary mike.

That was the attention to detail we exercised, and it was possible because of an outstanding seven-man team. With a truck full of rubber-backed carpet, this team padded every dolly track and every walk-and-talk to keep the set as quiet as possible, recovering the carpets as soon as the shot was completed so they never held up the shooting. These efforts paid off not just by reducing noise from footfalls; they helped deaden sound reflections throughout the set and augmented the many sound blankets we hung for that purpose.

Wind to flutter hair and costumes is a necessary element in creating the illusion that players are outside and not on a set. Traditionally, large fans or wind machines provide this, but they are quite noisy and compel ADR whenever they’re used. We coordinated with the FX Department to place the wind machines outside the stage and pipe in the wind through flexible air-conditioning hose. The mikes didn’t pick up the sound of the electric motors at all, just the sound of moving air that mimicked actual wind. And, since its frequency content fell outside the range of the voices, it could be effectively removed in post.
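
As a rough illustration of how such noise might be removed in post, here is a minimal low-cut filter sketch, assuming the moving-air sound is dominated by energy below the voice band. The 80 Hz corner frequency and filter order are illustrative choices, not the production’s actual processing:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def remove_rumble(audio, fs, corner_hz=80.0, order=4):
    """Attenuate content below corner_hz with a Butterworth high-pass."""
    sos = butter(order, corner_hz, btype="highpass", fs=fs, output="sos")
    return sosfilt(sos, audio)

# One second of 30 Hz "wind" rumble plus a 220 Hz tone standing in for voice:
fs = 48_000
t = np.arange(fs) / fs
mixed = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 220 * t)
cleaned = remove_rumble(mixed, fs)   # the 30 Hz component is sharply reduced
```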

After all the technical planning, we were ready to put our methodology to the test. The film had engaged the actors for an eight-week rehearsal period directly prior to shooting. Such a lengthy rehearsal period isn’t the norm, but Les Mis was a complex project. I felt it important for the whole sound crew to be involved from the beginning, but there was a move to exclude us. I can certainly understand the budget implications of adding a large sound crew for an extra eight weeks. And the performers can be self-conscious as they develop their performances. Working with playback or with a piano accompaniment will mask errors in pitch or delivery, but singing a cappella leaves every performance mercilessly exposed. I could understand the reluctance, but I felt it important that everyone become committed to the live recording protocols from the beginning. I worried that, after eight weeks of rehearsal with the blanket of protection afforded by an amplified piano, the cast might balk at the introduction of the earwigs on the first day of shooting. If they felt they couldn’t work without the live piano, the whole plan of live recording would founder. We needed the collaboration between Cast and Sound to begin on the first day of rehearsals.

I also felt that the long rehearsal period was important to more than just the cast. I wanted to use earwigs and radio mikes on every rehearsal so that the Pianists, Roger Davison and Jennifer Whyte, could become comfortable with the process of working within a sound booth and following the pace of the singers from their own headphones. And, I wanted the practice time for the Sound Department so that we might become familiar with the songs, the staging, the head turns, the extremes in dynamics, and work out solutions to the challenges in advance. Sometimes a single performer would need two mikes, one on each side or one close to the mouth and one lower, to handle these variables.

Even more important than the technical issues was the opportunity to become acquainted with the cast and earn their trust that we would deliver quality recordings of their live performances. I pressed these points with the producers and with Tom and eventually we were invited to participate.

By the end of the rehearsal period, the cast was completely unfazed by using the earwigs and having direct communication with the pianists through their lavaliers. They would stop at our sound carts upon entering the rehearsal stage to ask for their mikes and earwigs before proceeding to the set, and they enjoyed being able to communicate directly with the pianists without having to raise their voices to draw attention.

It was going well but we were developing a new process and everyone, Tom Hooper, the Producers, the Music Department and our own Sound Department, wanted a test to confirm that it would all work through editing and mixing to a final product. The “Red and Black” number performed by the students in the café was a good selection for our test. With multiple solo lines from the cast and an ensemble of about 20 students, it provided a taste of most of the circumstances we would encounter throughout the film. From the beginning, I had requested that rehearsals take place in the proper acoustic environment so that we might make test recordings and check the results later through studio monitors. Consequently, our rehearsal space was a proper soundstage at Pinewood that was suitable for a film test. Tom decided to shoot the test with a full camera crew and three 35mm cameras.

The test shoot proved challenging, exciting and interesting. Although Tom had discussed the visual style he had worked out with DP Danny Cohen, nothing quite prepared me for his single-minded enthusiasm for shooting every take all the way through from beginning to end. For the sake of performance and energy, Tom would shoot numbers in their entirety, so I needed to be ready at all times. For me this meant multi-tracking and mixing 20 mikes on every take from 8 a.m. to 8 p.m. It was mentally demanding; I had to find a zone and stay focused. My own mixing improved with the constant practice, but that was a small benefit as we always intended to remix from the ISO tracks in post. More importantly, the boom operators thoroughly learned the intricacies of every move by both cameras and cast members and became adept at following the singers exactly. Since the long takes forced a camera reload for nearly every take, my crew had an opportunity to act on every little problem revealed by the previous take. Carpet placement could be optimized, a cast member standing on a squeaky floorboard could be shifted slightly, and chorus members or extras who were whispering when they should have been miming could be advised. (Many members of the chorus ensemble came from a theater background where ad-libs would enhance the performance. It took a while before they became comfortable with the understanding that film editing needed consistent, i.e., silent, backgrounds.)

Silencing the ad-libs and background action was a huge undertaking that continued throughout the movie and was a constant negotiation with Tom. He liked the way the ad-libs tended to increase energy in the performances and used them to motivate the soloists to project their singing above the clutter. But I maintained that working this way would force ADR when the ad-libs and chatter didn’t match in the cuts. Tom understood; while he encouraged active participation in the rehearsals, he shot the takes with mimed background action.

We finished the test shoot and I was mentally and physically wiped out. It had been the most challenging day I had ever recorded and it dawned on me that we had 70 days of this in front of us, many without the comfort and acoustic security of a soundstage. Every single day would require immense focus and energy from all of us. We got word very quickly, as the test was edited and orchestrated, that the vocal recordings were a complete success. Everyone was euphoric that our workflow had proved not just possible but hugely effective. There were lots of extremely happy producers after the test.

I’m glad I experienced the test before we started shooting because it gave me a chance to prepare myself for an incredibly demanding shoot. The first part of the shoot was a reduced unit in the French Alps shooting Valjean (Hugh Jackman) traveling on foot from the port to the Bishop’s chapel. We arrived and the 1st Assistant Director told me Tom had chosen a location on the highest mountain peak and it was impossible to access it in vehicles. He asked me to go ‘handheld’ because carrying the kit up the mountain would be impossible. I told him that I wasn’t prepared to compromise sound quality in any way and we set about carrying my 180-pound sound cart up the mountain. It took four men nearly an hour to make a 20-minute trip across the boulder-strewn pass. It was a Herculean effort but we arrived at the summit with all the equipment—the proper D/A converters, the big mixing panel, the high-gain antennas—we needed to do a first-class job. It was just this kind of single-minded purpose and resistance to compromise that got us great production tracks.

Quickly, we learned from Tom and the 1st AD exactly where Hugh would be walking and singing and we set about running a battery-powered induction loop under the rocks. With Tom and Hugh’s permission, we had prerecorded the piano track in rehearsals. If Hugh was comfortable setting a pace in rehearsal, we could run playback from a Mac laptop using Audacity rather than take a piano and Pro Tools up the mountain. Valjean was to walk across the summit covered by a single handheld camera. Arthur, my Key 1st Assistant Sound, asked if he could work with a radio boom to help with the uneven surface at the summit. I asked that he remain on a cable for every shot apart from a 360-degree pan so we might minimize radio electronics in the boom signal chain and maximize sound quality. We fitted Hugh with two radio mikes, one tight and one slightly wider. Tom asked us to shoot the rehearsal so I had no idea of the volume to expect. As Tom, Arthur and the camera crew tracked with Hugh and he began to sing, it became clear we were capturing something magical. I quickly listened to the ISO tracks and decided that Arthur’s boom with the Super CMIT was the best sounding track. Due to the tight headroom Tom was maintaining, it was in a perfect position 10 inches above Hugh’s head. I concentrated my attention on Arthur’s boom track in subsequent takes. There was no background noise apart from Hugh’s wooden clogs and his walking stick tapping the granite. They were not compromising the vocal performance and I decided not to bother Hugh about them so he might get on with his acting.

I should add that before shooting, I spoke with Tom, the DP and the camera crew and told them, “Guys, I know you aren’t going to like this and I know we are in freezing temperatures up a mountain but, if this is going to work, I need you all to take off your Gore-Tex trousers and, if you are tracking with the action, just wear your jeans. Otherwise, all I am going to record is the swooshing of Gore-Tex.” This was one of those moments where all the talking about the importance of sound quality and performance was truly put to the test and it was time to see if the crew really understood what that meant. One by one they duly removed their Gore-Tex trousers.

When we arrived home from France and started setting up to shoot in Pinewood Studios, I went to watch dailies at Editorial, viewing on an Avid machine through near-field studio monitors. It was just the raw mix track, which in this case was the boom only. As I saw Valjean walk wearily across the mountain range and into a close-up, I could hear his breathlessness from the altitude and see the fog of his breath on screen. As he started to sing, with such fragility from the effect of the altitude, it sounded so real. I was completely spellbound and I knew in that moment that we were creating something special. Never before had I experienced such a connection while watching a musical.

As we shot, it became clear that we needed to be flexible and use the best method available to record each scene. Scenes like the factory women singing “At the End of the Day” were staged with multiple solos and hard light that made swinging booms to each player difficult. Those scenes were best recorded on radio mikes, with the booms playing a secondary role and the stereo boom serving to add dimension to the radio mikes used on the chorus. That was also the technique for “Lovely Ladies,” but for Hugh Jackman’s “Who Am I?,” Anne Hathaway’s “I Dreamed a Dream” and Eddie Redmayne’s “Empty Chairs at Empty Tables,” the boom was the primary recording device.

On “I Dreamed a Dream,” we were shooting with three cameras and, from the first take it became clear that Anne was going to clutch her chest during the emotive parts of the performance. Of course, that was right where the lavalier was placed. To ask her not to do this action, a part of her instinctive body language during the scene, would have been to stifle the truth and honesty in the performance. After the first take, I told Tom that we couldn’t rely on the radio mike any longer and had to get the boom closer. The “A” camera was shooting a close-up, “B” camera was shooting a wider close-up with the same top-line but “C” camera was shooting a classic wide with three feet of headroom. I reasoned that it was unlikely the wide shot would be used for long and the boom could be painted out if necessary, so I asked Tom if he would permit us to bring the boom into the wide shot. The VFX supervisor was present and instantly said, “The shot is static. If you just keep the boom out while the clapper board is going on before the performance starts, we will get a clear piece of the background needed to matte the boom out.” It was this kind of instant answer and collaborative teamwork that enabled Tom to make quick decisions and keep shooting.

The boom was also invaluable on all the sewer scenes, where the radios would have become waterlogged. One of my favorite songs in the movie, “Empty Chairs at Empty Tables,” sounds beautiful on the boom, and that was possible because all three cameras were shooting close-ups from different angles so the headroom was the same.

Recording singing differs from recording dialog in that the acoustic on the singing needs to remain consistent throughout. It would be wrong to have the wide shots sounding “wide” and the close-ups sounding “close” because, when the orchestral music is added, the balance of music and vocal would change shot by shot and draw attention to the shifts in camera angle. While recording dialog, by contrast, it is generally accepted that an acoustic change matching shots of differing sizes actually helps a scene sound real to the audience. For singing, whatever mikes are used must be in a close and uniform position throughout the song.

It is possible to use slightly different widths of mike placement as long as there isn’t a noticeable acoustic shift. We often used two radio mikes on an actor whose performance required extreme dynamic range: one lavalier rigged close to the mouth to capture the whispers very closely mic’d, and another five or six inches further away to pick up the louder passages while sounding a little more open. The mikes were recorded on separate tracks, of course, so the dialog editor had a choice depending on what sounded better in the final context of the scene, once orchestration had been added.
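
A rough inverse-square estimate suggests how much headroom the second mike buys. The distances below are assumptions for illustration, and a chest-mounted lavalier is not in a true free field, so treat this as a guide rather than gospel:

```python
import math

def level_difference_db(near_inches, far_inches):
    """Free-field level drop between two mic distances from the same source."""
    return 20 * math.log10(far_inches / near_inches)

# If the close lav sits about 2 inches from the mouth and the second one
# roughly 6 inches further out (about 8 inches total), the far mike runs:
print(f"{level_difference_db(2, 8):.1f} dB quieter")   # -> 12.0 dB quieter
```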

Another break with filmmaking tradition was bringing a dialog editor, the extremely skilled Tim Hands, aboard just a few weeks into shooting. He was based at Pinewood while we were shooting and I was in contact with him daily, explaining how we covered scenes, which tracks I thought were best, and pointing out any issues I thought he needed to know about. It was his job to clean and edit the vocals in Pro Tools. He was extremely subtle in his work and mindful of Tom’s admonition not to remove anything that would diminish the audience’s connection to the actor; he concentrated on removing background noises that had nothing to do with the on-screen performance. When a scene started to take shape in the Avid, the Picture Editing Department would give Tim the EDL and he would give them back a bounce of the edited audio from my ISO tracks. This meant Tim often worked on a scene many times as the picture editor and Tom made changes, but it also had the valuable ‘knock-on’ effect of immersing him in the material so that he became completely familiar with all of it. When Alastair Sirkett joined him in the post-production process, this intimate familiarity helped them get the best from the recordings, and the pair delivered an outstanding finished product.
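
The conform step can be pictured as a timecode-to-samples lookup against the ISO recordings. This is a hypothetical sketch, not Tim’s actual tooling; the event list and simplified “EDL” entries are stand-ins, and 24 fps is an assumed frame rate:

```python
FPS = 24                 # assumed film frame rate
SAMPLE_RATE = 48_000     # standard production audio rate

def tc_to_samples(tc: str) -> int:
    """Convert 'HH:MM:SS:FF' timecode to an audio sample offset."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    frames = ((hh * 60 + mm) * 60 + ss) * FPS + ff
    return frames * SAMPLE_RATE // FPS

# (source roll, source in, source out): illustrative values only.
events = [("SND_042", "01:02:10:00", "01:02:24:12")]

for roll, tc_in, tc_out in events:
    start, end = tc_to_samples(tc_in), tc_to_samples(tc_out)
    print(f"{roll}: samples {start}..{end} "
          f"({(end - start) / SAMPLE_RATE:.2f} s to pull from the ISOs)")
```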

After this film-style cleanup, the tracks passed to John Warhurst, the Music and Sound Editor, who went through them using music-industry techniques to make them sound their best going into the final mix. This process exemplifies the special collaborative workflow of this movie. Supervising Music Editor Gerard McCann pointed out at the beginning of our planning that the skills and objectives of a film dialog editor and those of a music vocal editor are very different. A music editor would be working outside his usual skill set if presented with generator noise or lighting hum, while a dialog editor would not be at home adding reverb to enhance vocals. That is an oversimplification, but because the vocals on Les Mis were essentially a crossover of the two mediums, we needed to make sure they benefited fully from both methodologies.

Although Re-recording Mixer Andy Nelson’s main contribution comes at the very end of the process, his involvement began at the project’s conception. He has extensive experience with musicals, including work on Evita and The Phantom of the Opera. Tom Hooper was familiar with his work on Alan Parker’s The Commitments, a project that featured some live recording to a prerecorded backing track, so he sought out Andy when he was first considering live recording for Les Mis. Andy confirmed the success of the live recording on The Commitments and encouraged Tom to take on the larger challenge of Les Misérables.

Tom encouraged me to contact Andy Nelson when I was first hired. Gerard McCann and I had a long conference call with him to discuss workflow and methodology, check that he agreed with our plans and receive any advice he might offer. We kept in contact thereafter and he regularly listened to and commented on material as we worked.

Andy was particularly keen on not using EQ or compressors and limiters in the recording chain. He also asked that processing done by the dialog and music editors be “virtual” so that changes could be reversed and the material returned to a raw state at the touch of a button. He wanted to have complete control at the final mix where all the elements of score, sound effects, Foley and vocals could be evaluated together and judged as a whole.

For instance, he wanted us to avoid using plug-ins to clean up camera noise because they often have a slight effect on the vocal tone and he thought that the orchestration might effectively hide the camera noise.

Jonathan Allen, a Re-recording Mixer from Abbey Road Studios, was also generous with help and advice throughout the project. He worked on the orchestrations in post, and also joined me on days with big chorus ensembles, assisting with both advice and mike placement.

The whole production was a collaborative effort from the outset. It set out to bring the audience the in-the-moment emotions and live singing of the cast, and the success of that endeavor demonstrates what can be accomplished when everyone works together.

Cameron Mackintosh offered daily support and input for the project. He commented that “Music, if used correctly, should pull the heartstrings.” I believe that the filming of Les Misérables, as envisioned by Tom Hooper and with the support of Producers Eric Fellner, Tim Bevan, Debra Hayward and Sir Cameron Mackintosh, and each and every crew and cast member, really does “pull the heartstrings.” It was a fantastic piece of work.
