Prometheus Takes Flight With Cutting-Edge VFX Technology

May 16, 2012
Ridley Scott utilizes the latest digital animation and 3D techniques to deliver a blockbuster sci-fi film.

Sometime in the late 21st century, a team of explorers discovers a star map among other ancient alien artifacts. The spaceship Prometheus then embarks on an expedition to find the origins of the human race, leading to the darkest corners of the universe where the crew must battle to save mankind (Fig. 1).

1. The Prometheus is on a mission to a distant world to investigate the origins of mankind. But this adventure never would have left the ground without significant innovations in special effects technologies.

Ridley Scott’s long-awaited sci-fi epic hits theaters on June 8 (see “So What’s Prometheus About?”). But don’t expect the same special effects you’ve seen in other Hollywood blockbusters since the 3D renaissance began a few years ago. The cutting-edge director and producer tapped a host of new technologies to bring his singular futuristic vision to life.

Behind The Scenes

Scott’s movies, which include sci-fi classics Alien and Blade Runner, always push the envelope in terms of technology, and Prometheus is no different. He shot the film completely in 3D using Red Epic cameras (Fig. 2) and 3ality Technica’s Atom 3D rigs (see “Video Technology Takes Prometheus Into The Third Dimension”). It includes more than 1400 3D visual effects (VFX).

2. A pair of Red Epic cameras was mounted on 3ality Technica Atom 3D rigs.
Director Ridley Scott regularly used four rigs to shoot Prometheus.

Ten different companies handled the effects, though four companies managed the bulk of the work. Headquartered in London, the Moving Picture Company (MPC) was the film’s leading VFX facility. MPC also created the viral ad for David 8, a robot played by Michael Fassbender in the movie.

The effects artists used Autodesk's Maya 3D animation software to complete much of the work. Autodesk is also well known for its 3D CAD tools, including AutoCAD and AutoCAD Civil 3D, which are frequently used for mechanical designs in embedded applications.

The VFX teams also used Pixar's RenderMan and Adobe After Effects. Often run on large render farms, RenderMan generates ray-traced scenes. It can handle details like lens distortion and add surfaces such as hair and fur to provide lifelike results. Its displacement support uses displacement maps to add interesting detail to everything from faces to backgrounds.
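As a rough illustration of the displacement idea, each vertex is pushed along its surface normal by a sampled height value. This is only a NumPy sketch with made-up data, not RenderMan's shading language:

```python
import numpy as np

def displace(vertices, normals, heights, scale=1.0):
    """Push each vertex along its unit normal by its sampled height value."""
    return vertices + normals * (heights[:, None] * scale)

# Toy data: a flat three-vertex patch displaced upward by a tiny height map.
verts = np.zeros((3, 3))
norms = np.tile([0.0, 0.0, 1.0], (3, 1))
bumps = np.array([0.02, 0.05, 0.01])
print(displace(verts, norms, bumps))
```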

The filmmakers used NUKE from the Foundry for most of the compositing work, which combines visual elements such as animation and live action, including scenes shot with a green screen. Compositing is more challenging in 3D because of the need to maintain camera angles as well as object position.
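The core of that combination is the Porter-Duff "over" operation. A minimal NumPy sketch, assuming a premultiplied foreground with an alpha channel (this is the textbook math, not any particular tool's code):

```python
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    """Composite a premultiplied foreground over a background: fg + (1 - a) * bg."""
    a = fg_alpha[..., None]            # broadcast alpha across the RGB channels
    return fg_rgb + (1.0 - a) * bg_rgb
```

Every merge node in a compositor ultimately reduces to per-pixel math like this, evaluated once per eye in a stereo show.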

Originally developed by Bill Spitzak, NUKE is a node-based compositing tool that's popular in TV and movie post-production. It won an Academy Award in 2001 for Technical Achievement. NukeX, the extended version, supports Python scripting for pipeline integration.
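That scripting is what lets studios wire shots together automatically. A minimal sketch of assembling a node graph with Nuke's Python API, meant to run inside Nuke itself; the file paths, frame range, and knob values are placeholders:

```python
import nuke  # available inside Nuke's own Python interpreter

bg = nuke.nodes.Read(file="plates/shot010_bg.####.exr")    # scanned live-action plate
fg = nuke.nodes.Read(file="renders/shot010_cg.####.exr")   # CG element with alpha

# Merge2 is Nuke's standard A-over-B node; input 0 is the background.
comp = nuke.nodes.Merge2(operation="over", inputs=[bg, fg])

out = nuke.nodes.Write(file="comp/shot010_comp_v001.####.exr", inputs=[comp])
nuke.execute(out, 1001, 1100)  # render frames 1001 through 1100
```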

Like most films these days, Prometheus had an extensive storyboard created well in advance. Storyboards are necessary because so many aspects of the film are spread across different domains. They show VFX developers what they will be creating as well as the context in which it will be used. Likewise, the director will sometimes use this previsualization to help actors understand what they will be seeing “virtually” when the film is complete.

Filming Prometheus

Four fixed Atom rigs were used during filming, with a fifth mobile rig on a Steadicam for special shots. The four main rigs had the same configuration and provided different viewpoints on the set. The consistent configuration makes transitions between views significantly easier during editing. The film was shot at ISO 800 and 1000 speeds. This level of camera sensitivity delivered a great picture with few artifacts.

The cameras were connected to 3ality Stereo Image Processors (SIPs). The SIP-2101 has a Web-based interface and is housed in a 1U rack-mount chassis. Multiple SIPs can be combined to handle multiple camera clusters. The SIP records all the information streaming from the cameras, including time codes and convergence information. It also reduces setup time during lens changes and realignment. This is key because with 3D, changes always mean adjusting two cameras, not one, and proper configuration is critical.

Two large 3D plasma screens and a pair of smaller monitors provided real-time feedback to Scott and the rest of the crew. Each screen could be connected to any feed. Real-time mixing also was possible. The SIP provided on-screen advice for convergence, interaxial settings, and user-selectable errors and warnings.

Convergence is vital in 3D filming and editing. It provides the 3D depth cues. Also, it needs to be the same for all of the components that might be combined into a single frame. Similarly, transitions between shots cannot deviate significantly from each other, or the audience will be very annoyed. The SIP handled convergence and other 3D issues during filming.

According to Richard Stammers, VFX supervisor for Prometheus, the Red Epic cameras used on the shoot have a 5K resolution at 120 frames/s, but the final film is delivered at a 2K resolution. The lower resolution has a number of advantages in addition to the ability to handle digital pan and zoom.

In particular, only 93% of the 5K resolution is used initially for video recording. The rest, the overscan area around the periphery of the image, is used for convergence adjustments (Fig. 3). This helps eliminate edge artifacts and address other issues such as keystone effects, where images are skewed.

3. The digital intermediate (DI) version of the film uses a central subsection of the image from the Red Epic cameras. The camera’s viewfinder shows the framing guide so the cameraman can capture the desired image.
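Some back-of-the-envelope numbers make the overscan idea concrete. The sketch below assumes a 5120 x 2700 "5K" Epic frame and applies the 93% figure per axis; the exact frame size and how the percentage is split are assumptions, so the margins are only illustrative:

```python
# Illustrative overscan arithmetic (assumed 5K frame size, 93% applied per axis).
full_w, full_h = 5120, 2700
usable = 0.93

active_w, active_h = int(full_w * usable), int(full_h * usable)
margin_w, margin_h = (full_w - active_w) // 2, (full_h - active_h) // 2

print(f"active image:    {active_w} x {active_h}")
print(f"overscan border: {margin_w} px left/right, {margin_h} px top/bottom")
```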

The general idea is to keep as much useful resolution through the editing process as possible to get the sharpest picture. The digital intermediate (DI) version has a resolution of 2.2K. The DI version is used internally to generate the final version of the movie.

The video stream also includes color information. The director and producer can use this information to handle color grading and adjustment in post-processing.

The filmmakers used a process Stammers calls “plate triage” before they sent video to the special effects companies. It incorporates horizontal and vertical camera alignment information, which enables companies like Reliance MediaWorks to generate a distortion and disparity map.

This information is needed because of the differences that may exist between the left eye and right eye cameras due to issues such as reflection and polarization differences. When these differences occur, the right eye is designated the “hero” eye, and adjustments are applied to the left eye.
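A disparity map can be approximated with off-the-shelf computer vision tools. The sketch below runs OpenCV's block matcher on a left/right frame pair; it is a stand-in for the proprietary plate-triage pipeline described above, not the tools actually used on the film, and the file names are placeholders:

```python
import cv2

right = cv2.imread("shot010_right.png", cv2.IMREAD_GRAYSCALE)  # the "hero" eye
left = cv2.imread("shot010_left.png", cv2.IMREAD_GRAYSCALE)    # the eye that gets corrected

# Block-matching stereo: the result is fixed point, scaled by 16.
matcher = cv2.StereoBM_create(numDisparities=128, blockSize=15)
disparity = matcher.compute(left, right).astype("float32") / 16.0

# Large or inconsistent disparities flag regions where the two eyes disagree.
print("disparity range:", disparity.min(), disparity.max())
```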

Reliance MediaWorks handled the 3D alignment and color management. Scott also worked with Company 3's Stephen Nakamura to adjust everything from color to sharpness throughout the film. These adjustments not only provide a visual balance, they also can highlight or provide the proper mood for a particular scene. This is slightly more difficult to do in a 3D movie.
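The basic math behind that kind of grade is simple, even if the artistry isn't. Here is a minimal sketch of a slope/offset/power (ASC CDL-style) adjustment in NumPy, with illustrative values rather than anything taken from the film:

```python
import numpy as np

def grade(rgb, slope, offset, power):
    """Per-channel CDL-style grade: out = clip(rgb * slope + offset) ** power."""
    out = np.clip(rgb * np.asarray(slope) + np.asarray(offset), 0.0, 1.0)
    return out ** np.asarray(power)

frame = np.random.rand(4, 4, 3)   # stand-in for a scanned frame, values in [0, 1]
warm = grade(frame, slope=(1.05, 1.0, 0.95), offset=(0.0, 0.0, 0.0), power=(1.0, 1.0, 1.1))
```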

Only about half of Prometheus has digital effects. There also was a significant number of sets and exterior shots in a variety of locations from Iceland to Scotland to Jordan. Chroma key compositing can be used in some cases. “Green screen” is a variation of chroma key compositing where an actor is photographed in front of a green background that is then digitally replaced in post-production by another background. More often, the VFX support required a significant amount of rotoscoping to help replace the background.
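A toy version of a chroma key shows the idea: any pixel where green clearly dominates is swapped for the background plate. This NumPy sketch ignores everything a production keyer handles, such as spill suppression, soft edges, and motion blur:

```python
import numpy as np

def green_key(fg, bg, gain=1.2):
    """fg, bg: float RGB images in [0, 1]. Replace strongly green fg pixels with bg."""
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    is_screen = g > gain * np.maximum(r, b)   # green well above the other channels
    out = fg.copy()
    out[is_screen] = bg[is_screen]
    return out
```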

Rotoscoping is a manually intensive process where an animator edits each frame to cut out what isn't wanted. A spline outline from an adjacent frame is typically reused for each new frame, since consecutive frames tend to be similar and require only minor adjustments. The process is the same for 3D, but there are twice as many frames to consider and 3D placement is an issue. Similar editing challenges arise in blending dusty environments.

Scott didn't use motion capture in Prometheus, though one scene took a low-tech approach as actors carried torches in a dark tunnel. An alien character was equipped with regular red LED markers on its joints. The actors then were able to interact with the alien, and the hand animation was added in post-production based on the LEDs. Other approaches using infrared LEDs or other motion capture sensors would have been more expensive and time consuming, while the red LEDs still got the job done.
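Picking bright red points out of a dark frame is straightforward. Here is a hedged sketch of how such markers might be located, with made-up thresholds and SciPy doing the blob grouping; this is not the production's actual tracking code:

```python
import numpy as np
from scipy import ndimage

def find_red_markers(frame):
    """frame: uint8 RGB image. Return (row, col) centroids of bright red blobs."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    mask = (r > 180) & (g < 80) & (b < 80)   # bright, saturated red only
    labels, count = ndimage.label(mask)      # group adjacent pixels into blobs
    return ndimage.center_of_mass(mask, labels, range(1, count + 1))
```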

Some objects were real but augmented, like the RT01 Transport (Fig. 4). It’s almost always easier to film a prop like this when the actors will interact with it. Of course, special effects sometimes are required, like when David 8 puts on a 3D light show after a discovery on an alien vessel (Fig. 5).

4. The RT01 provides the Prometheus crew with ground transportation. The transport itself is a mix of practical props and augmented effects.
5. David 8 makes a discovery on an alien vessel, leading to significant consequences. Three-dimensional digital technology blends with live-action visuals to bring the hologram to life in your local multiplex.

The Sounds Of Sci-Fi

Prometheus was recorded using Dolby Surround 7.1, the industry-standard eight-channel sound system. It already provides 3D audio that naturally complements 3D video. Doug Hemphill and Ron Bartlett were part of the post-production team responsible for combining the film’s dialog, music and sound effects.

Hemphill and Bartlett worked closely with Ridley Scott to deliver a “gritty track with highly styled acoustic environments” in this sci-fi horror film. Bartlett recorded Sanskrit syllables that were modified and used with controls, buttons, and background sounds to provide ancient language audio motifs. They also had to deal with a number of audio challenges throughout the process.

For example, the actors often wore clear spherical “bubble” helmets. Microphones were placed inside the helmets to record their dialog. The inside of the helmet was a tough acoustical environment because a fan was required to provide air for breathing and cooling. An impulse response of the interior was used to help match newly recorded dialog with the original production sound. This sampled impulse response was obtained by placing a speaker inside the helmet and recording the acoustics of the helmet (Fig. 6).

6. The production needed the impulse response in the bubble helmets that the actors wore because of the challenge of minimizing interior noise.
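Applying a sampled impulse response is just convolution. A minimal sketch with SciPy and the soundfile library, assuming mono recordings; the file names are placeholders, not assets from the production:

```python
import soundfile as sf
from scipy.signal import fftconvolve

dialog, rate = sf.read("adr_line.wav")         # cleanly recorded replacement dialog (mono)
helmet_ir, _ = sf.read("helmet_impulse.wav")   # impulse response sampled inside the helmet

wet = fftconvolve(dialog, helmet_ir)           # the line as it would sound in the helmet
wet /= abs(wet).max()                          # normalize to avoid clipping
sf.write("adr_line_helmet.wav", wet, rate)
```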

Some of the audio used inside the helmets and elsewhere was processed using Speakerphone and Altiverb. These lines included the various male and female computer warning messages.

Sometimes vintage equipment was used. Listen for variations of chandeliers tinkling and wind chimes ringing during a visually impressive storm on the planet. The particulate matter storm has a jewel-like quality that matches the audio. There's also a nod to Alien's robot, Ash, in David 8's gargling sound near the end of the film.

The filmmakers took advantage of 3D sound effects, which are the norm in the movie industry today, during the film's production as well. For example, audio can be matched with the video to highlight visual elements that are in the foreground.

The audio effort was almost as massive as the video work. A 100-person choir was recorded at the Abbey Road studio. This recording was added at epic moments within the film. At one point, the producers were generating five different audio mixes of the entire film in one week.

Now Playing

Scott can attribute part of the movie's success to getting the 3D production process as streamlined as existing 2D technology. Much of this streamlining was due to the 3ality Technica hardware as well as the companies involved in the special effects.

According to the movie’s trailers, “They went looking for our beginning. What they found could be our end.” Like most moviegoers, I’m looking forward to seeing the final film. And as awesome as the imaginary technology on screen will be, I’ll also take a more critical view of its 3D presentation now that I know about the innovation behind it.

So What’s Prometheus About?

Ridley Scott and his fellow producers have been very quiet when it comes to revealing anything about the plot of Prometheus aside from its quest to uncover mankind’s origins. Still, we know a few details.

For example, the fictional Weyland Industries built the Prometheus spaceship. Scott directed the first movie in the Alien series, which doesn’t name “the company” that built its ship, the Nostromo, but still includes some shots of a logo for something called Weylan-Yutani. The second film, Aliens, directed by James Cameron, reveals a company called Weyland-Yutani (with a “d”) as the evil megacorp that wants to use the alien for its own ends.

While Prometheus can be considered a prequel to Alien, it is very much its own film. Captain Janek, played by Idris Elba, leads the Prometheus crew. Elizabeth Shaw and Charlie Holloway, played by Noomi Rapace and Logan Marshall-Green, are the archeologists on the expedition. The crew also includes Milburn, a botanist played by Rafe Spall, and Fifield, a geologist played by Sean Harris.

And, the team includes the Weyland Industries David 8 robot, played by Michael Fassbender (see the figure). “I can carry out directives that my human counterparts may find distressing or unethical,” he tells the other members of the crew. So much for Asimov’s Three Laws of Robotics. It also foreshadows some of the company’s unsavory actions in the Alien movies.

David 8, a robot on board the Prometheus, may have an agenda that could jeopardize his human shipmates.

Video Technology Takes Prometheus Into The Third Dimension

Director Ridley Scott used multiple 3ality Technica Atom 3D rigs to film Prometheus (Fig. 1). Shooting a movie in 3D can add 20% to 30% to its cost, so minimizing these expenditures can have a huge effect on the budget. The Atom rig usually held a pair of Red Epic cameras in an over/through configuration (Fig. 2).

1. 3ality Technica’s Atom 3D rigs can mount a pair of digital cameras in a variety of arrangements like this over/through configuration.
2. An over/through configuration has one camera aimed through the beam splitter. The second camera is mounted on top looking down at the beam splitter, which is a 50/50% or 60/40% mirror.

According to Stephen Pizzo, 3ality Technica senior vice president, the over/through and complementary under/through configurations use a beam splitter. The beam splitter is usually a 50/50% or 60/40% transmissive/reflective mirror. It’s good for regular and closeup work. However, it’s susceptible to dust contamination, polarization effects, and color casting.

The beam splitter allows the center axis of the cameras to be significantly closer than what would be possible with a side-by-side approach often used in long distance 3D shots. The distance between the center points is called the interaxial separation.

Interocular separation is the distance between a person’s eyes. This is typically 61 mm for an adult male and 58 mm for an adult female. The interaxial separation normally will be shorter, requiring the cameras to be very thin or positioned using optics such as the beam splitter approach. Other methods are possible as well.

Viewers normally see an ortho-stereo effect when the interaxial distance is similar to the interocular distance. Hypo-stereo occurs when the interaxial distance is smaller than the interocular distance, and hyper-stereo when it is larger. Decreasing the interaxial separation allows macro closeups, while larger separation is often used for long-range views of skylines or mountain ranges.

The depth of a 3D image comes from “good disparity,” or parallax. This is what the brain expects, and a good 3D rendering will maintain this relationship. Other disparities such as camera misalignment, keystone effects, and temporal effects will cause eye strain or nausea as the brain tries to adjust to them.
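For parallel cameras, the useful relationship is that on-screen disparity falls off with depth: d = f * b / Z, where f is the focal length in pixels, b is the interaxial separation, and Z is the distance to the object. A small sketch with illustrative numbers, not the production's actual lens or rig settings:

```python
def disparity_px(focal_px, interaxial_m, depth_m):
    """Horizontal disparity in pixels for a point at depth_m (parallel cameras)."""
    return focal_px * interaxial_m / depth_m

# Nearby objects separate far more on screen than distant ones.
for depth in (2.0, 5.0, 20.0):   # meters from the rig
    d = disparity_px(focal_px=2000, interaxial_m=0.05, depth_m=depth)
    print(f"{depth:5.1f} m -> {d:6.1f} px")
```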

Configuring the cameras on the Atom rig for a fixed distance is easy. But unlike a single camera with its own lens system, a 3D setup requires the pair of cameras to operate in sync with each other. Lens adjustments on one camera must be matched with similar adjustments on the other camera at the same time, or there will be a disparity between the two. The same is true for the f-stop adjustment. For 3D films, the match must be microscopically accurate.
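The principle is easy to express in code, even if the precision is not. Below is a purely hypothetical sketch in which every focus or iris change is fanned out to both cameras in a single call; the classes are invented for illustration, and real rigs do this with motorized lens servos:

```python
class CameraLens:
    """Stand-in for one camera's motorized lens (hypothetical)."""
    def __init__(self):
        self.focus_m = 3.0
        self.f_stop = 2.8

class StereoLensControl:
    """Apply every adjustment to both eyes at once so they never drift apart."""
    def __init__(self, left: CameraLens, right: CameraLens):
        self.pair = (left, right)

    def set_focus(self, meters: float) -> None:
        for lens in self.pair:
            lens.focus_m = meters

    def set_f_stop(self, stop: float) -> None:
        for lens in self.pair:
            lens.f_stop = stop
```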

Lens control isn't the only issue when dealing with 3D cameras. All alignment aspects must be addressed. In fact, the Atom platforms have a number of servos for each camera that are under program control. Likewise, the information about all these settings as well as the camera settings is recorded along with the video stream. This metadata allows for adjustments. It's also used in the post-editing process where video is mixed and when special effects are added.

Timing is key to successful 3D recording as well. The timecode metadata can be used to keep images in sync, but typically cameras are genlocked to maintain synchronization. One camera or an external source generates a synchronization timebase so each camera records an image at the same time.

The alignment of the cameras with respect to each other and the beam splitter affects the convergence and parallax of the recording. A stereographer normally will monitor this information during filming. There may even be a convergence puller with a small 3D LCD screen to track what is being recorded in real time.

All this information is coming from different sources, including the two cameras. 3ality's approach is to add a “kitchen sink” box to the system that's connected to all the devices, including power. The box then has a single connection to the outside world. Moving a rig is now a simple matter of working with one cable. On-board power from the box allows the rig cable to be swapped without losing synchronization and configuration information.

The cable typically plugs into a device like 3ality Technica’s Stereo Image Processor (SIP). The SIP helps control the cameras to keep them aligned and in focus. It also processes the data streams.

