By CHRIS McGOWAN
VR party games are a relatively new subgenre of VR games that are bringing people together in an old-fashioned way – with gatherings in the same place in the real world. While multiplayer VR games and social VR platforms bring groups together, it is usually in a virtual world where users tend to be home alone with their virtual reality headsets. VR party games like Loco Dojo, Late for Work, Keep Talking and Nobody Explodes and Acron: Attack of the Squirrels! take a different approach.
“VR party games are generally played together in one room,” says Yacine Salmi, Managing Director of Munich-based Salmi Games. “One person under the headset, the others on gamepads at a PC/console or on their phones. They are generally meant to be enjoyed together in a setting and to encourage taking turns under the VR headset.” One of the studio’s titles is Late for Work, in which a rampaging giant gorilla (the player in the headset) is pitted against up to four human opponents playing on a PC.
“A VR party game has a focus on fun and hilarity first, competition second,” comments Sam Watts, Immersive Partnerships Director of Make Real, which publishes Loco Dojo and is based in Brighton, U.K. The multiplayer title features 16 humorous mini-games to play to gain the favor of the “Grand Sensei.” It seeks to bring multiple VR headset users together in one room. “Loco Dojo was meant to be played together, [with] all having a great time virtually but also physically present with one another, which is why it’s been a great success in location-based entertainment VR arcades.”
Watts explains, “VR party games are quick to play, across multiple devices if available, with the emphasis on throughput and making sure as many people as possible at the party can play.” At the very least, there’s a great reason to be in the audience if you are not playing. In contrast, “straight-up multiplayer games are more about competition and typically last longer per session.”
“The biggest distinction between VR party games and other VR multiplayer games is that you will likely have only one VR headset available in a party scenario. That means the game must be designed to involve multiple local players using just one headset,” comments Taraneh Dohmer, Game Studio Operations and Communications Lead for Ottawa-based Steel Crate Games, which publishes Keep Talking and Nobody Explodes. In the game, players – one with a VR headset and the others with no electronic devices at all – must work together to defuse a bomb.
Acron: Attack of the Squirrels! is a VR party game that enables one player to be on a VR device and up to eight others on mobile devices at the same time. Acron was created by Resolution Games, which focuses on VR and AR titles and is based in Stockholm, Sweden. Leo Toivio, the producer for Acron and a game developer at Resolution, comments, “Prior to Acron: Attack of the Squirrels!, no other multiplayer party game had combined VR and mobile gaming at this scale. But we had bigger ambitions for the platform and wanted to make a fun and engaging experience for everybody.
“Acron brings VR and mobile players together to compete or cooperate in high-stakes thievery,” adds Toivio. “One player in VR takes the role of the tree that is the sole protector of the golden acorns and needs to use hand controls to grab and hurl wood chunks, boulders and sticky sap to keep the squirrels at bay. The other team consists of one to eight frenemies who use their mobile devices to cooperate as rebel squirrels, each with their unique abilities and tools, as they try to steal the tree’s golden acorns.”
Resolution Games was co-founded in 2015 by CEO Tommy Palm, who previously was the Games Guru and spokesperson at King Digital Entertainment, which created Candy Crush Saga. Most of Resolution’s games are built with Unity. One of Resolution’s top titles is Angry Birds VR: Isle of Pigs, a virtual reality version of the popular game.
With Acron: Attack of the Squirrels!, Toivio explains, “the intent is to bring people together with the VR experience as the focal point but allowing everyone around to also experience the content and fun in one way or another.
“A lot of our work has been focused on driving mass adoption of VR and eventually AR devices,” he adds. “By creating titles that can reach a larger audience and letting those players have a touch point with VR in a way where they can see VR as the ultimate experience, we are hoping to create a bridge for adoption for some who may not otherwise get exposed to VR.”
“A lot of our work has been focused on driving mass adoption of VR and eventually AR devices. By creating titles that can reach a larger audience and letting those players have a touch point with VR in a way where they can see VR as the ultimate experience, we are hoping to create a bridge for adoption for some who may not otherwise get exposed to VR.”
—Leo Toivio, Producer/Game Developer, Resolution Games
Meanwhile, Loco Dojo can be experienced with up to four players using multiple tethered “PC VR” headsets, including HTC Vive, Valve Index, Sony VR and Oculus Rift. Loco Dojo Unleashed (launched in October) is “essentially the same game, with a couple extra game modes, but for Oculus Quest standalone VR devices,” according to Watts.
Watts thinks that a VR party game should be colorful and attractive to all players without worrying too much about taste or fine detail. “You’re going for the widest common denominator here rather than a specific target gamer type or genre.” There’s a higher chance of the party session being someone’s first time in VR, so the game has to be simple to pick up and play so the new player isn’t frustrated or made to look silly in front of their peers. It also needs to be non-violent. “Nothing kills a party quicker than an MDK or serious gun sim. People are there for a good time, not blood and guts and zombies.” And, lastly, it must be simple to set up. “No one wants to hang around whilst the host fannies about setting up the hardware or the game,” says Watts. Loco Dojo, like most of Make Real’s VR content, was made with Unity and modeled with 3ds Max.
In Keep Talking and Nobody Explodes, one player (with the VR headset) is trapped in a room with a ticking bomb. To defuse it, he needs the help of the other players, who are in the physical room with him but have no devices whatsoever. “The most unique element of Keep Talking and Nobody Explodes is that the focus is on communication between the player in the VR headset and the players outside of VR who are using the manual. Each has imperfect information. So, players must use verbal communication to express visual information,” says Dohmer.
Steel Crate Games created the first game prototype of Keep Talking in January of 2014. Dohmer recalls, “We saw that all of the existing VR experiences for the Oculus DK1 only involved the player inside of VR. Our game was designed to solve this problem by creating an experience that would involve the other people in the room – I guess this would have been one of the first VR party games in this sense.” She adds that the team created the game with Unity, “which was the most fully supported game engine for the Oculus DK1 and DK2 when we started the project. This allowed us to spend less time working out the challenges of getting early VR working and focus on creating an interesting and compelling game design.”
“VR party games are generally played together in one room. One person under the headset, the others on gamepads at a PC/console or on their phones. They are generally meant to be enjoyed together in a setting and to encourage taking turns under the VR headset.”
—Yacine Salmi, Managing Director, Salmi Games
For Late for Work, Salmi Games focused on a “pick-up-and-play experience with as short of a tutorial as possible, a comfortable and intuitive movement system to make it accessible to both new VR players and those sensitive to motion sickness, and a variety of game modes to cater to a wide audience.” Up to five can play the game at once – one with the VR headset and four on a PC.
Salmi explains, “We also made extensive investments in the stability and performance of the game. You have to handle up to six renders of the scene each frame – two images in the headset and up to four players split-screen on PC. You also have to handle varying numbers of controllers connecting and disconnecting. We ran into many issues throughout development. It’s a lot of work to get this relatively stable.” Salmi Games used the Unity engine and both the SteamVR and OculusVR SDKs to support a variety of headsets.
Make Real’s Watts sees the potential for improvement in many areas of VR party games. “For every successful reduction in extra effort in getting VR set up – like standalone devices and inside-out tracking removing the need for PCs, cables and external sensors – new barriers appear, like the ability to easily cast what’s happening in the headset so the audience can see. Currently with mixed reality support and iPads or GoPros, etc. you can run a composited setup that shows the player within the virtual world, bringing more agency and clarity to the audience of what is happening.”
VR party games are benefiting from the steady growth of the VR platform, partly due to its increasing affordability. Watts notes, “Devices are getting cheaper so we’re seeing more likelihood of there being more than one within a household.” In addition, “with cloud streaming becoming a thing for flat-screen gaming, hopefully soon we will have latency for VR cracked to allow libraries of content to be available instantly without everyone having to check whether they own a particular title, have it installed, etc.”
While VR party games are best played with everyone in the same room, they are adaptable when that’s not possible. Toivio notes that during the pandemic Acron became an “incredible connector for friends and families from afar.”
Looking at what will further boost VR party games, Toivio remarks, “While there’s always room for technological advancements, I think the onus is actually now on the developers to create compelling, rich content that will keep players coming back and putting on their headsets. Other than that, we are actively working on integrating elements like spectator mode, mixed reality tools and more to our experiences so more people can be involved in social VR experiences.”
Other popular VR party games include DimnHouse’s Takelings House Party, Ruffian Games’ RADtv (Ruffian Games is now Rockstar Dundee, after being purchased by Rockstar Games in 2020), Pixel Canvas’s scary game Reiko’s Fragments and Sony Interactive Entertainment’s The Playroom VR.
By IAN FAILES
In the world of motion capture, mo-cap suits for body capture and head-mounted cameras for facial capture have perhaps received the most widespread attention and use.
But a growing amount of research and innovation is being made in hand motion capture and finger tracking, spurred on by developments in VR, gesture recognition and haptics, and by the increased need to acquire accurate finger performances for animated characters, without having to keyframe all the minute detail of finger movement.
Indeed, finger tracking, and more correctly, accurate finger and hand tracking, is becoming a mainstay in several related areas where a human is motion captured or needs to control some kind of interface. Finger tracking is now a key part of gaming, VR, virtual training, virtual prototyping, biomechanics, remote interaction, sign language, and in entertainment such as live concerts or live streaming events.
In relation to visual effects and animation, a classic use of finger tracking could be for almost any CG character that has hands and fingers of some kind, using the motion capture to have them give proper expression, hold items or even do much more elaborate things such as play musical instruments. Again, this can be time-consuming to animate via keyframing.
To get a sense of the world of finger tracking at this critical juncture, particularly in relation to visual effects and animation, four leading companies in this space – StretchSense, Manus, Xsens and Rokoko – discuss their latest motion capture glove offerings and their thoughts on the state of play in the industry.
WHAT MAKES FINGER TRACKING IN VFX TRICKY?
For visual effects and animation practitioners, gloves are the predominant method for carrying out hand and finger tracking, partly because they tend to fit into an existing ecosystem of ‘wearable’ body and facial capture hardware used in the VFX and animation workflow at a studio.
However, there are other finger tracking solutions around, too, including optical hand tracking, computer vision-based devices and even neural signal tracking devices (an example of the latter is Facebook Reality Labs’ wrist-based input device, still a research project).
Finger tracking is difficult. Think of all the dexterity in our fingers, the way we can quickly make a whole range of fast or subtle moves, the many self-occlusions that can occur as our hands and fingers cover other fingers, or as we place our hands behind our backs or in pockets or around objects. There’s also the experience of ‘drifting,’ where small changes occur to the tracked location of the fingers over time. These are the things that can make finger motion capture hard, and why several companies are out there trying to solve it for the most accurate finger tracking possible.
“What makes it an absolute extreme challenge is the fact that my hand and your hand are inherently different,” notes Bart Loosman, CEO of Manus, which makes the Prime X series of gloves, and has also partnered with Xsens to offer Xsens Gloves by Manus.
“Having a product that can then calibrate and adjust its measurements to whatever’s going on with the finger length is hard,” adds Loosman. “Every finger can bend three ways; that’s three bending points. So, you might end up having to strategically place the sensors and then just extrapolate what certain parts are doing.”
A further challenge is that innovation has been happening in body capture for many years, while finger tracking is relatively newer. “Customers, when they do hand motion capture, expect a quality on par with full-body motion capture,” comments Rob Löring, Senior Business Director for 3D body motion at Xsens. “There is still always a lot of work going on right now with finger tracking, and that’s one reason we partnered with Manus – we want to see how far we can get in perfecting our product.”
Manus’ finger tracking technology, used in its own Prime X gloves and the Xsens Gloves by Manus, relies on flex sensors and inertial measurement units (IMUs), and includes the capability for haptics. Meanwhile, Rokoko, which started offering its Smartgloves in 2020, also relies on IMU sensors, similar to its Smartsuit Pro motion capture suits. CEO Jakob Balslev comments that venturing into finger tracking forced the team to rethink a lot of the logic they had used in their body tracking solvers.
“Body movements are somewhat predictable,” Balslev says. “A walk is a walk. I can tell you what a walk looks like. A throw is a throw. But try to describe to me what your fingers do while you are walking. Finger movements are of course completely intuitive to the person doing them, but it’s not at all intuitive to predict and describe what you’re doing with your fingers when, say, speaking, or while you’re falling. That’s also why it’s so hard to keyframe animate fingers and why the need for a capture solution is crucial to animators.”
Benjamin O’Brien, the CEO of StretchSense, which makes the StretchSense MoCap Pro gloves, further identifies some inherent technical challenges in finger tracking technologies. He suggests that solutions such as IMUs and optical markers may not always provide the desired accuracy for a motion-captured animated performance, at least not on their own. The StretchSense gloves are somewhat unique in that they incorporate stretch sensors (essentially a capacitive rubber band), while finger tracking with the gloves utilizes a pose detection system coupled with machine learning elements.
“We had a 12-year legacy of stretchable sensor technology that behaves completely differently to anything else in the market that can very accurately measure fingers,” explains O’Brien. “The gloves are based on the idea that you should generate motion capture data that is of such high quality that the clean-up burden in post-production is greatly reduced, making them as economical as possible for a studio or artist.”
FINGER TRACKING: A STATE OF PLAY IN MOTION
Right now, it certainly feels like there is a large amount of change occurring in the area of finger tracking. New technologies are being developed and refined constantly, and companies, including the ones here, are positioning themselves at distinct price points: StretchSense generally sits at the higher end of the market, Manus and Xsens at the mid-level, and Rokoko at the lower end.
Innovation continues in many different ways. For its Smartgloves, Rokoko is further adapting the tech behind the gloves to introduce a hybrid tracking approach to tackle the common finger tracking challenges of drift and occlusion that can occur with IMUs, as Balslev explains.
“We launched the first generation of the gloves based on the tech that’s tied to our suit, which are the IMU sensors. But inside the products that we’ve shipped is already another technology based on electromagnetic fields (EMF). There is a small box at the palm of the hand that creates a frequency field around the hand. It will provide an extra level of accuracy to the movement of fingers. It will be the ‘no occlusion, no drift’ solution we’ve been dreaming of since the beginning. And for everyone who has already received their gloves, it is just a software upgrade away.
“And our further plan is something at room level, a room coil-like device,” adds Balslev. “It will create a larger frequency field, so that you have absolute precision everywhere. There will be no occlusion because it’s frequency-based. You can put your hands in your pocket, put them behind your back, you can pick up a glass, drink it, put it down again, and it will be kept in an accurate space. We want someone to play the piano in VR for half an hour with our gloves and still be hitting the keys accurately for 30 minutes. For digital interaction in VR/AR and virtual production with actors, props and virtual cameras this will be a revolution.”
As far as future development for the Xsens Gloves by Manus goes, the two companies behind that product are focusing on their partnership and learning from what each has been doing in the motion capture space in recent years.
“Our vision is gold standard sensor technology that just gives the perfect data,” states Manus’ Loosman. “We think that Xsens, for example, has a great approach, getting data without the need for post-processing. There are some machine learning aspects that Manus is looking at, but this is more to deal with our current data and make our current data better while we are gearing towards the most accurate capture possible.”
Xsens’ Löring concurs. “We want to make sure that we are really measuring the real moves of people – the full body and also for fingers. Xsens is of course looking within the realm of machine learning to see what we can add for us. It’s very interesting technology, but we have to move carefully there.”
The future for StretchSense’s gloves continues to be in the further development of its pose recognition and pose detection approaches, and in how its existing machine learning tech continues to form part of the solution. In addition, StretchSense’s Benjamin O’Brien says the company has been thinking more about how all the motion-captured pieces fit together.
“Hands, body and face – they’re all person-centric. But it’s the hands, body, face, environment, other characters, other things, other ways of moving and interacting, and controlling, that, to me, is where the real excitement is.
“Of course,” continues O’Brien, “on the hardware side, we want to push towards that perfect one-to-one quality hand capture. We want lots of practical integrations, but where this is all heading towards is full immersion in virtual environments. Everything else is just a stop along the way.”
By TREVOR HOGG
The growing friendship between a malfunctioning robot companion and a socially awkward middle schooler is the emotional core of Ron’s Gone Wrong, and the animated adventure comedy also happens to be the first production for collaborators Locksmith Animation and DNEG Feature Animation.
Serving as a director along with Locksmith Animation Founder Sarah Smith (Arthur Christmas) is former Aardman and Pixar storyboard artist Jean-Philippe Vine (Shaun the Sheep). “It felt a lot like when Gromit is laying the track in front of the train!” Vine says. “It’s two start-ups trying to make a story that is on the level of the companies that we all came from. Locksmith Animation did all of the pre-production while the layout and onwards was handled by DNEG. One of the challenges that we found was that our previs process didn’t quite marry up for technical reasons to the DNEG layout department. We tried to make the handover as seamless as we could.”
Posing a creative and technical hurdle was the lockdown caused by the pandemic. “When COVID-19 hit we should have been taking it out to previews and getting punter reactions, but that only happened in a limited way when Disney helped us out and did a Zoom screening for us,” recalls Vine. “It’s not the same thing. We had to go with our gut.”
The requirements for feature animation rigs differ from visual effects rigs. “What you want for a visual effects rig is to do anything to the nth degree, and speed is not as much of a concern as the functionality to make it as cool as possible, photoreal, and DNEG is fantastic at that,” remarks Animation Director Eric Leighton (Coraline). “Dave Lowry at DNEG is a fantastic champion for character animators and pushed for speed as well as rigging, morphing, facial and sculpting tools.”
Another difference is that animation is not shot driven. “Everything from the storyboards, previs, animation, effects and lighting is done at the sequence level,” states Visual Effects Supervisor Philippe Denis (Trolls). “Audio was also something that we had to push on our side at DNEG, because with visual effects it’s less important, but for us it’s everything. We tried to optimize as much as possible the pipeline to get that back and forth.”
Ron’s Gone Wrong involves a manhunt as high-tech company Bubble attempts to capture its rogue B*Bot. “We felt that this should be a couple of years in the future so it feels like a plausible world,” explains Vine. “Coming-of-age stories, like E.T. the Extra-Terrestrial and Stand by Me, were references. The story is an adventure that takes Barney and Ron into the wilderness, so it was important for us to find a landscape that worked.” Traveling on a train from San Francisco to Portland, Vine discovered his real-life inspiration for Barney’s hometown of Nonsuch. “We sent Production Designer Aurélien Predal [Mune: Guardian of the Moon] to Eugene, Oregon, because I thought, ‘That city looks like it could be the right vibe. Not too big or small and surrounded by this beauty.’”
Predal partnered with Production Designer Nathan Crowley (Dunkirk) on the world-building. “Early on, Nathan did some 3D passes of design of particular locations, like he does when working on a live-action movie,” says Predal. “Nathan took care of Barney’s house, the middle school, Bubble store and Bubble headquarters. He put the Bubble headquarters on the side of a dam, which was such a strong idea. My role was to bring what Nathan did to the animation world.”
A boxy design language was adopted for Nonsuch while the high-tech company of Bubble had shapes that reflected its name. “I stayed in Eugene for two days and took maybe 1,000 photos of the pavement, road signs, bins and street dressing,” adds Predal. “I also went to the forest. I managed to put together a lot of documentation to brief all of the artists. From the script, it was clear that Bubble needed to be round. It’s as if Amazon and Apple came together to create a headquarters. We also wanted a James Bond feel to it.”
An organic aesthetic was adopted for the forest. “The tricky part was conveying the proper size and scale,” remarks Denis. “The grass was technically challenging because you want to make sure that you get the shape you want, with it still looking believable. Initially, when we started to run those simulations with the characters creating angels in the grass, the grass was popping up left and right. It was fairly striking. We had to simplify that to get the intent of the visual without pulling you out of the movie. Barney and Ron jumping in the river came late in the process. At one point you start to manage your world to budget. We still have some white water, but decided to have them jump in where it wasn’t too crazy. We also had to get interaction between the water and the set. That was probably our biggest effects simulation in the movie.”
“J.P. Vine wanted to have graphic and stylized characters,” remarks Predal. “Our main character designer was Julien Bizat. From his 2D designs, another artist, Michel Gillemain, did CG sculpts of the characters. Once we had the characters moving, we realized with DNEG that the art was fighting with the realism of the cloth simulation, so we had to find the middle ground.” An example of this is the bully named Rich. “He has a fairly skinny body with a big jacket,” remarks Denis. “As soon as you push him against walls, or when Ron is chasing him in the park, you end up with weird shapes. You could not run just a simulation on that because you have bad folding and shapes. You want to drive that with some rigging constraints, put it into shape, and run a simulation on top of that to make it physically correct.” Some fun was had with the woolen hat worn by Ron. “I always love to work with an animator,” adds Denis. “I want to make sure that the intent of the shot is there. The hat worked like an exclamation point for his animation. Sometimes it was right on with the physics, while in other cases we added a little bit of simulation on top to make it livelier.”
Conceptualizing Ron provided an opportunity to clash old and new technology. “We always thought from the beginning that Ron should feel like the most stripped-down computer software,” remarks Vine. “I’m sure that you remember MS-DOS. But then as you get to see his versatility and the way that those pixels can move around, you just start to love the guy. We also worked with 2D animators to do some quick tests in Flash to nail down his appeal. From that point on it was about giving the animators some clear rules about how to manipulate his face because they could position it anywhere they wanted.” Played for comedic effect is the fact that Ron is an analog device. Says Vine, “Ron doesn’t have maps downloaded so he needs to use a real paper one!”
It was important to keep track of the emotional state of Ron throughout the story. “Right at the beginning of the show I assigned a broken variable to each Ron sequence, with 10 being the most fouled-up Ron,” explains Leighton. “When we’re first introduced to him, he’s like a seven. Things keep on getting worse until the bully scene where he is attacking human kids, which is the most broken that a robot could possibly be. Ron gradually gets better until things become stressful and his ‘Ronisms’ start to come back. When he is reprogrammed onto new Ron, we feel for Barney because he has lost his friend. Perfect clean Ron is not interesting. We shot the entire movie out of sequence, so being able to track that rhythm of Ron’s brokenness all of the way through helped.”
Defining the characteristics of the other B*Bots are the cliques to which their owners belong. “The whole idea is that Barney and Ron are individuals, so you don’t know what to expect,” explains Leighton. “When you go into the school everybody is trying to belong to their own group, so their behaviors are the same. We had the sports, posh, goths and biker groups. On top of that we would do similar kinds of things for their B*Bots. For example, a tech B*Bot when sending a message might do it like a frisbee throw, whereas a footballer B*Bot might do a big arm throw, or the geek B*Bot might toss it up, give a little bump and send the message off. We did a lot of those kinds of tests to take the same sort of communication behaviors that we knew all of the B*Bots would have to have, but then define them based on which clique their owner belonged to.”
Generic characters populate the big crowd scenes. “In the past, I have worked on things where you create everything and spend your time relating assets, but that doesn’t look good,” observes Denis. “We were recreating the layers in Houdini, and put the generic characters into a crowd context, shot them from different points of view and said, ‘That is a good-looking crowd.’ Or, ‘We have too many with short hair in this.’ Or, ‘That shot is not so good. Let’s kill that.’ This means when you actually set up your world, that you are confident your crowd is going to work and you can actually implement that in the shot. Some shots you can always get rid of one character or switch one character for another. You have all of those controls. The idea was to narrow down the choices so we create just enough to get a good-looking crowd. That’s something we spent quite a bit of time on.”
Further emphasizing the ostracization of Barney is the color palette, as Nonsuch is muted while his home is vibrant. “In the beginning, Barney is embarrassed about bringing kids to his house because the grandma has goats and chickens in the garden, and there is an explosion of vegetables and tapestries on the wall and sausages hanging from the ceiling,” states Vine. “It needed to feel quirky and specific to the family. When our two heroes head out into the woods, that’s where we heightened the palette because it worked with where they are emotionally. We wanted to have some fun with the Kubrickian world of Bubble that has unusual color choices. We could dial up the intensity as the movie’s intensity dials up.”
Extensive motion graphics were needed for the social network devised by Bubble that is displayed on the B*Bots and the monitors in the control room. “Every time a character is touching a B*Bot, we need to have something on the screen,” reveals Predal. “All of those interactions were complex because you couldn’t anticipate an idea coming from animation and you needed to respond to that. We wanted something poppy and colorful. Often, I found that we could get some nice, clean designs, but it was sometimes hard to read because the B*Bots were rotating or the camera was moving. A lot of the design had to be done in the shot to make sure it was readable. It was a large amount of work. We had to create an advertising campaign as well as a toy within the movie.”
“It’s quite a grounded camera language right up until Ron moves into the world. That’s when things start to get a bit more out of control,” observes Vine. “Sarah and I both love choosing a camera style that feels organic for the scene. For example, there is a fight in a playground between Ron and a couple of bullies who are hassling Barney. We decided to shoot reference with actors and handheld cameras to try to get in there and gave that reference to the layout crew to launch them. We tried to pick our camera style to sell the intensity of the scene.”
It has been quite a five-year odyssey. The project went from Paramount to Fox, which was in turn bought by Disney. Then there was the matter of the growing pains associated with two start-ups working together as one production company. Directors changed. The third act was rewritten. And let’s not forget the pandemic. “We joked that by calling this Ron’s Gone Wrong we were setting ourselves up for mishaps,” chuckles Vine. “But we got there in the end.”
“Early on, [Production Designer] Nathan [Crowley] did some 3D passes of design of particular locations, like he does when working on a live-action movie. Nathan took care of Barney’s house, the middle school, Bubble store and Bubble headquarters. He put the Bubble headquarters on the side of a dam, which was such a strong idea. My role was to bring what Nathan did to the animation world.”
—Aurélien Predal, Production Designer
“We felt that this should be a couple of years in the future so it feels like a plausible world. Coming-of-age stories, like E.T. the Extra-Terrestrial and Stand by Me, were references. The story is an adventure that takes Barney and Ron into the wilderness, so it was important for us to find a landscape that worked. We sent Production Designer Aurélien Predal to Eugene, Oregon, because I thought, ‘That city looks like it could be the right vibe. Not too big or small and surrounded by this beauty.’”
—Jean-Philippe Vine, Director
“We always thought from the beginning that Ron should feel like the most stripped-down computer software. I’m sure that you remember MS-DOS. But then as you get to see his versatility and the way that those pixels can move around, you just start to love the guy. We also worked with 2D animators to do some quick tests in Flash to nail down his appeal. From that point on it was about giving the animators some clear rules about how to manipulate his face because they could position it anywhere they wanted.”
—Jean-Philippe Vine, Director
By TREVOR HOGG
Navigating through a series of personal and professional obstacles to prevent a worldwide crisis is a staple of the James Bond franchise, but this time around life was imitating art when making No Time to Die.
Originally, Danny Boyle (Slumdog Millionaire) was supposed to direct the 25th installment, but during pre-production he was replaced by Cary Joji Fukunaga (Jane Eyre), who came aboard with an entirely new script. Then the 007 Stage at Pinewood Studios in England was accidentally blown up by a “controlled explosion,” Bond star Daniel Craig suffered an on-set injury, and the April 2020 release was delayed to October 2021 because of the impact of the pandemic. The fifth and final appearance of Craig as the lethal, martini-loving MI6 operative has him brought out of retirement by the CIA to rescue a scientist from a megalomaniac with a technology that threatens the existence of humanity.
Fukunaga is no stranger to adversity, having battled malaria and lost his camera operator to injury during the first day of shooting Beasts of No Nation. “The walls of the stage came down, which was a bummer because we needed that stage space and Pinewood was completely booked,” he says. “Quite often, when we finished shooting a set – that was it. There was no going back two months later for a pickup shot. That created a complex logistical puzzle for the rest of our shoot. Daniel getting injured meant that we had to shuffle around again when we were already in this complex Twister game.”
There was no additional time added to the pre-production schedule despite the directorial changeover. “We started all over from page one,” Fukunaga elaborates. “What that meant was writing took place all the way through production. I did something similar on Maniac. We knew our ins and outs, but didn’t always have the inside of it completely fleshed out. The communication involved with having five units going at the same time and keeping everyone in continuity was a major task.”
Previs was critical in filling in the gaps of the narrative. “There were several sequences that would get rewritten in the editing,” explains co-editor Tom Cross (Whiplash). “Sometimes they would be shooting some parts at a certain time and weeks or months later shoot other parts of it. In a way, that gave us time to work with previs and postvis and try to mock up a template for how these scenes could be completed.”
Digital augmentation was an important element in achieving a visual cohesiveness. “You know that a lot of the film was going to be shot in all of these different locations at various times, and visual effects was one of the glues that held it all together,” Cross says.
A sound mix was created for the previs by supervising sound editor Oliver Tarney (The Martian). “There were certain things, like when the guy walks up to the car and shoots at Bond, that were to me as much an audio experience as they were a visual experience,” states Fukunaga. “I wanted the audience to be inside that car, to hear what it is like to have bullets hitting you from all sides.”
“Cary is from the school of Christopher Nolan where he went into Bond wanting to get as much practical as possible,” remarks co-editor Elliot Graham (Steve Jobs). “Because of the schedule and Daniel’s injury, we had to rely more on visual effects.”
A total of 1,483 visual effects shots are in the final cut. “It took so many man hours alone just to review shots that it would have been too much to do by myself,” admits Fukunaga. “It required an entire team whose curatorial eye had to be sharp to determine whether a shot was worthy to be brought to the review stage. Some shots were going to be entirely visual effects because those things you can’t do in real life. I found it actually quite liberating because my first films didn’t have much money for visual effects, so I was hesitant to rely on them for anything. I wanted the suspension of disbelief to be bulletproof, which meant I tried to do as much in camera as possible. As budgets have increased and allowed for some of the best craftspeople working on the visual effects, I could relax control and trust that we were going to get spectacular shots.”
Unpredictable weather had an impact on the production, particularly on a scene that takes place in the Scottish Highlands. “It was supposed to be a blue-sky chase, which it was for much of the main unit shoot,” recalls Graham. “However, while we were there with the second unit it rained the entire time and the roads were covered in mud. These were car-flipping stunts, so you have to be careful because people’s lives are much more important. They did the stunt once. We waited four hours, gave up, did it in the rain, and hoped that we could paint out the rain the best we could.
“We really needed a third time,” Graham continues, “because we didn’t get the shot where Bond’s car knocks into this other car. It was just raining too damn much, and by the time it cleared up a few days later we had to move on because of the schedule. That left us in the position of me asking for a plate of Bond’s car pretending to knock into another car. Visual effects could add the second car in later. Those are the practical realities of filmmaking.”
A veteran of the James Bond franchise is Special Effects Supervisor Chris Corbould (Inception). “The great thing was having the classic DB5 back and not just in a cameo role but in full combat mode,” Corbould says. “Combining that with the wonderful town and tower in Matera, Italy was masterful. My department was responsible for putting together all of the gadgets on the cars, making the pod cars where you have a stunt driver on the roof, as well as all of the explosions and bullet effects.”
The roads were like polished stone, Corbould adds. “We had to put on special tires so that we didn’t skid around too much. Only one road goes through Matera and thankfully we were able to shut it down on many occasions.”
LED stages were utilized for close-up shots of actors during the car chases. “We built LED stages very much like we did on First Man,” explains Cinematographer Linus Sandgren (La La Land). “This time, 2.8mm-pitch LED panels were connected to make a 270-degree, 20-foot-radius cylinder surrounding a car, with a ceiling of LEDs as well. At one point, we used a squarish cube setup around some vehicles. For these scenes we shot plates with an array of nine cameras on a sporty vehicle and brought this footage to the stage. This way we could both shoot and light the scene with footage from the actual environment and just an added 5K Tungsten Par on a hydroscope crane working as a sun.”
A special gimbal was produced for the interior of a sinking boat with passengers trapped inside. “It was like a rotisserie that could revolve around and then drop down on one end and completely submerge into a 20-foot-deep tank of water with one of the main actors inside,” remarks Corbould. “It was 50 feet long and could roll around 360 degrees. When we started, the engine was on the bottom of the boat, then as it started sinking the engine revolves around, and add to that copious amounts of compressed air as if air pockets are escaping. It makes for an exciting sequence.”
Then there was the matter of a seaplane. “A seaplane is driving along the ground trying to get enough speed to take off while being chased by gunboats,” describes Corbould. “For that we made a plane with an engine driving its wheels with no propeller on so we could drive up and down to our heart’s content with a driver underneath.”
A high-flying Antonov was, in reality, grounded. “We had to put a mechanism that made it look like the glider was being pulled out from the back of the plane to allow it to drop through the air,” Corbould reveals. “It was easier to do on the ground and safer, especially when you’ve got your main actors in there. In these days of digital effects, you can easily put bluescreen or greenscreen for the sky outside of the back of a transport plane.”
Overseeing the visual effects were Supervisor Charlie Noble (Jason Bourne), Co-Supervisor Jonathan Fawkner (Guardians of the Galaxy), and Producer Mara Bryan (Dark City), with the work divided among Framestore, ILM, DNEG, TPO VFX and Cinesite. “There were four prongs of attack to the action sequences, storyboarded by Steve Forrest-Smith and his team,” explains Noble. “2D animatics were then produced for certain sections using recced location stills/video and animated line drawings by Adrian Spanna at Monkeyshine. Subsections of these were then previs’d by Pawl Fulker at Proof using initial location LiDAR scans and previs or art department models. The stunt department also shot rehearsals of specific action beats. This gave Cary the ability to increase the level of detail as required for shot planning/techvis purposes to disseminate his vision.”
Principal photography on the ice lake, when the disfigured antagonist Safin (Rami Malek) fires an automatic rifle at someone swimming below him, was complicated by the natural elements. “The opening scene in Norway on the ice was shot under varying lighting and weather conditions,” states Noble. “We needed to add snow to the trees and make the ice itself clearer with small dustings of drifted snow. One shot was selected as the hero look for the sequence. Framestore built and lit a CG environment based on that. This then gave us a guide to aim at for each angle, retaining as much of the original photography as possible while laying in our snowy trees, atmosphere and ice surface.”
Encapsulating the variety of visual effects created for No Time to Die is the Norway safehouse escape that goes across moors, through a river and into the woods. “Largely in-camera with a lot of cleanup of tire tracks, stunt ramps and some repositioning of chase vehicles for continuity, which involved removing in-camera action, replacing terrain and adding CG vehicles, bikes, and any dirt or water sprays as required, but leaving the majority of the frame untouched,” says Noble.
Most of the sets and locations required some degree of digital augmentation. “We took a dedicated LiDAR scanning and plate/photo texturing team with us wherever we went and sent them to places where we would need to capture environments to integrate into principal photography,” states Noble.
“For example, for the Cuba street scene we had a huge set built on the Pinewood backlot, which required buildings to be topped up and streets extended. Art Director Andrew Bennett (Skyfall) provided us with an excellent layout for what should go where, and we sent our team out to Havana and Santiago de Cuba to scan and texture the desired building styles for street extensions. Building top-up plans were provided by Andrew and realized using set textures and extrapolated set surveys. At the end of the scene, Bond and Valdo Obruchev (David Dencik) escape using Nomi’s (Lashana Lynch) seaplane, moored in the port of Santiago. The foreground was shot at a small wharf in Port Antonio, Jamaica, the mid-ground port cranes were constructed from plates and surveys of Kingston port, and the background skyline came from our Cuba stills.”
Both greenscreen and bluescreen were deployed depending on the scenario. “Exterior work is always a bit trickier with the wind to contend with,” notes Noble. “We often used telehandlers carrying 20-foot x 20-foot screens outside, either lined up to make a wall or to top up existing fixed screens [on the backlot for example]. It was really handy to be able to dress them in shot-by-shot as opposed to building acres of green for every eventuality. They came into their own on the Cuba streets set – we had six on the perimeter of the set, and they maneuvered themselves into place to give us as much coverage as possible on top of the 30-foot-high fixed screens that we had surrounding the build, which had some complex elements to contend with: overhead power cables, telegraph poles, wet downs, vines, and explosions towards the end of the scene.”
A broad range of simulations needed to be integrated into the live-action photography. “Huge dust/concrete explosions to match real SFX ones, fireballs roiling down corridors, fire, smoke, mist, foliage, masonry hits, crowd, water, ocean, bubbles underwater, shattering glass, clothing and hair,” remarks Noble. “For the most part we were matching to in-camera special effects, surrounding shots, and other takes where we were needing to slip timings or separate reference elements.
“The trawler sinking required some complex simulations that all had to interact with one another,” Noble comments. “While patches of oil burn on the water surface, pockets of air are released from the submerging hull along with foam, bubbles, floating detritus, and sea spray from all the activity. Other simulations needed were volumetric passes for light scattering under the water, in particular for the boat’s lights, as well as interactive simulations on the trawler’s nets as it submerges and all the props such as crates, buoys, life raft, and more oil spewing from the damaged engine room.”
Digital doubles were used sparingly because of the success of the stunt department led by Lee Morrison (The Rhythm Section). “Digital doubles were mainly utilized to massage stunt performer body shapes and posture to match surrounding shots on the cast,” reveals Noble. “Face replacements were also required for the principal cast at various points for either scheduling or safety reasons. Some limbs were replaced where actors were required to wear stunt padding over bare skin. A good example would be the fight scene in the Cuba bar where Paloma (Ana de Armas) takes out a few goons with high kicks and flying kicks. Digital limbs and stilettos were used to replace padding and trainers.”
The usual adversaries made their appearance in post-production. “The challenges of volume and time were greatly eased by bringing in Jonathan Fawkner as Co-Supervisor and we divided the vendors between us,” says Noble. “My profound thanks to Mara Bryan, Jonathan Fawkner and Framestore, Mark Bakowski at ILM, Joel Green at DNEG and Salvador Zalvidea at Cinesite for their lovely work.”
By KEVIN H. MARTIN
After failing to secure interest from 20 book publishers in his epic Dune, novelist Frank Herbert managed to convince the car manual gurus at Chilton Press to get his book into print in 1965. A science fiction novel dealing with concepts ranging from the nature of messiahs to ecological concerns, Dune rapidly burgeoned from underground cult classic to a huge and sustained success worldwide. Even so, the property resisted adaptation to the big screen for some time, despite attempts by Arthur P. Jacobs, Alejandro Jodorowsky and Ridley Scott. Ultimately a theatrical version directed by David Lynch debuted, but proved to be a critical and financial failure, foiling partially-written sequels.
Filmmaker Denis Villeneuve was a fan of Herbert’s landmark tome and when the possibility of a new and decidedly more faithful adaptation of Dune arose, he committed to it, bringing along with him a number of his past collaborators, including Production Designer Patrice Vermette, Editor Joe Walker, Special Effects Supervisor Gerd Nefzer and Visual Effects Supervisor Paul Lambert.
An Oscar-winner for Villeneuve’s Blade Runner 2049, Lambert recalls that even before pre-production began in earnest, an enormous design phase had been undertaken. “Dune is such a massive project, with so many worlds to visualize,” he acknowledges. “So Denis and Patrice spent the better part of a year coming up with concepts. Usually that process just serves as a springboard for the final designs, but Denis was so happy with the initial artwork that it really served as a template for other departments. The work was just so strong, with such defined ideas – there were some tweaks down the line, but really it was very close. Sets were built with these same colors and textures, and visual effects created and executed these images in a similar faithful way.”
While the Lynch film had featured rather grand and ornate production design by Tony Masters, this project reflected the filmmaker’s own tastes. “Having worked with Denis previously, I knew that things had to remain grounded,” confirms Lambert.
“[Director] Denis [Villeneuve] was so happy with the initial artwork that it really served as a template for other departments. The work was just so strong, with such defined ideas – there were some tweaks down the line, but really it was very close. Sets were built with these same colors and textures, and visual effects created and executed these images in a similar faithful way.”
—Paul Lambert, Visual Effects Supervisor
“No wild pullbacks and moves that didn’t have some basis in how cameras are actually moved, and we tried to build as much as possible on most of our sets rather than encasing actors within a blue box. Trying to keep things grounded, we didn’t go for a big showy effect. We tried to avoid anything too FX-y, even when the ships fold space. In fact, the shield effect is based on past and future frames in the same shot, plus some color changes, so it isn’t like we’re adding or inventing, just using what was actually there. While there wound up being around 2,000 visual effects shots, realism was the word throughout.”
While the majority of work was handled at DNEG Vancouver and DNEG Montreal, an in-house crew, overseen by WylieCo, was also formed for the production. “I think it is very valuable to have offices close to the director,” says Lambert, “as it facilitates the fleshing out of ideas, which in turn affects and improves the shooting process. The in-house unit wound up doing hundreds of the simpler shots, plus we did a few shots with Rodeo FX up in Montreal. They have Deak Ferrand, a fantastic artist who is one of the fastest conceptualists I’ve ever seen. He’s also a good friend to Denis, having worked on Arrival and Blade Runner 2049 as well.”
When production began in Budapest, Lambert assigned a scanning crew from OroBlade to LiDAR parts of Wadi Rum, where later location shooting would take place. “That helped with the wides of Arrakeen,” he reports. “Our beautiful wides of the city are actually from helicopter views scanned into the computer. We built that CG world around those real terrain elements.”
For backlot work in Budapest, sand-colored screens were used instead of blue or green, allowing a more naturalistic Arrakis-desert coloration to reflect onto the performers. Natural light was employed whenever possible. “For some massive interiors, we had to come up with creative ways to light the scene,” Lambert reveals. “We connected two separate studios with a tarp across the roof, which had a specific shape cut into it allowing sunlight to come down into part of the set. We could only shoot at a certain time of day with this tarpaulin because the hole permitted sunlight to illuminate the interior just right for a couple hours only, but it really sold the reality of the environment with natural lighting beaming down into our stage sets.”
Close collaboration between VFX, camera and art departments led to a unique approach for interiors requiring set extensions. “Because of the scale involved, the question was often, ‘how high do we build this?’ Traditionally, you would have green or blue up above, but that compromises the lighting for the whole view. I suggested that we consider what the actual tones and shapes would be going up in frame when we did the VFX. So during the shoot, we’d place a particular colored form up there. If there were structural elements, like crossbeams, that were supposed to be up there, we’d represent them simply as well. DP Greig Fraser loved this, because he was able to light it like he would a fully-built set. We’d build a very cheap version, through which he could light as he desired.”
For scenes involving the many aircraft seen in the film, a mix of full-scale vessels and CG craft were created. “Seen from a distance, the ornithopters look like insects,” says Lambert, “but as you get closer in, you can feel the power of these flapping wing aircraft. The Atreides’ dragonfly-like craft is a thing apart from the others. When the Harkonnens strike at Arrakeen, their ships come down looking like they have inflatable wings, and sport a very intricate and detailed look. For some of the shots of full-scale craft shot on location, we might add CG wings, but most of that was actually captured in-camera.”
SFX Supervisor Gerd Nefzer had his teams begin prepping long before shooting in order to have time to transport huge containers holding the ornithopters and supporting equipment to Aqaba. “The containers for Jordan arrived slightly late, so that made it very tough to get everything ready for each shooting day,” Nefzer acknowledges. “It was a huge challenge to get the ornithopters ready. Altogether we had two large ornithopters and a smaller one on location in the desert. The big one weighed maybe 13 tons and was 10 or 12 meters long, so making it look as if it could maneuver took a lot of power. We had one scene with the big ornithopter landing in the desert and then taking off. That needed a 400-ton crane, so a special road had to be built to accommodate transport.”
Nefzer used various gimbals, plus an overhead flying rig hung from a crane. “Our two computer-controlled motion bases had to function on location,” he says, “because our DP needed real sunlight and could not create that look on stage. That meant pouring a concrete pad for the thopter. Then we had to invent a drive file in order to maneuver the ship to follow the path of the sun in the sky for continuity. The large base was rated for 20 tons and the smaller thopter used a six-ton base for sandstorm scenes. We tented the smaller base with wind machines and tons of dust flying around in there, including soft stones, so that the view through the cockpit would really look like it was flying through a sandstorm. It could rotate 360 degrees so that the actors could be upside-down. We had four six-wheel drive trucks from the military so we could mount and drive our big wind machines around into position. As always, Denis tries to get as much as possible in-camera, but on some very wide shots, there just weren’t enough machines. But the part of the shot that we could handle practically – the area around the actors, because you get such a greater sense of believability with performers having to lean into the dust as they move instead of just pretending against greenscreen – was the basis for how VFX went about completing things, matching to the look and density of our sandstorm. Paul Lambert always asked for as much practical as possible to have something to match – ‘even if it is just a few meters’ he said one time – and then he filled out the rest of the shot beautifully. I think this is the right way to make movies these days, instead of just automatically.”
To convey the sense that the mockup was really up in the air, the team used an idea also employed on The Rocketeer, choosing a high-up locale that allowed cameras to shoot down on the airborne object. “We found the tallest hill in Budapest and put a gimbal atop it,” says Lambert. “That gave us a horizon. We surrounded the gimbal with what we affectionately called the dog collar, a 360-degree ramp that went all the way around the gimbal, again sand-screen colored. Rather than extracting the characters and then matting behind them, we were able to just augment the properly-hued backgrounds using some of the tremendous aerial photography done earlier.”
The large backlot in Budapest was used for a crucial moment in the attack on Arrakeen when the palm trees burn. “Denis wanted about 25 palm trees in two rows,” Nefzer states. “With the shoot scheduled over two nights, we had to build the trees and their leaves so they could be burned over and over. We started by taking the bark off actual palm tree trunks, which wasn’t easy, since there are 20 layers. We cut a slot to get our gas/alcohol piping inside, along with fire protection sheets, and then reassembled it with the bark back on in proper order. We laser-cut 280 leaves from sheet metal, then added our gas pipes beneath. We had to bend all of them by hand so they looked real, then color them before connecting them to the trees. Nearly every leaf had its own control, so Denis could get various levels of fire, all controlled by 20 technicians. A pair of 1,500-gallon propane systems vaporized the liquid fuel, fed through five miles of hoses.”
In addition to the spice exported throughout the galaxy that is found only on Arrakis, the planet is best known for its massive native life, the sandworms. “The worms are dinosaur-like creatures that have been cruising through the desert protecting the spice for thousands of years,” says Lambert. “They are not very agile, taking something like a mile to stop and turn, with deeply textured looks. The design showed great articulation, kind of like an accordion. While it isn’t all that animated, the sandworm does have interesting textures, which include solid plates between the fleshy parts.
The worm sifting through sand was difficult to get right. A lot of the time, we were seeing the effect of the worm’s passage more than the worm itself, and there’s not much existing reference for things that big moving through a desert. I did request that production detonate some explosions out there to see how the sand gets disturbed, but given that we were shooting in the Middle East, that kind of thing could be mistaken for an attack and was frowned upon!”
Time-intensive renders for the sims used to produce the organic sand passages were just the nature of the beast for the sandworm scenes. “Even so, there’s often some aspect with simulations that doesn’t look quite right,” allows Lambert. “So occasionally there is paint involved, since otherwise there’d be days and days more waiting to run the simulation again. Fortunately, we started worm development straightaway, so there was time.” To sell the disturbance in live-action scenes, Nefzer built a 12-ft. square plate atop a vibrator. “When it was activated,” Lambert reveals, “you would actually start sinking into the sand. We used that for shots of Paul (Timothée Chalamet) and Gurney (Josh Brolin) when the worm attacks the sand crawler. This largely in-camera solution was what we then copied to extend out beyond the foreground to the rest of the frame, whenever they went wider.”
Conspicuous consumption of the local spice has the side effect of turning eyes blue over time. “The eyes were handled by our in-house unit,” says Lambert. “The attention of a viewer automatically gravitates to the eyes, so when you have something very pronounced, it can become very distracting. We didn’t want super-glowy eyes, and strove to keep something of the original tones of the actors’ actual eyes. So if you had one actor in shot who had brown eyes and another who had lighter-colored eyes, the blue effect would be customized to look different for each of them. It wasn’t about a covering effect from the sclera to the iris either; there was a lot of fine detail, and it took a while before Denis approved an overall look with which we could proceed. Then, on occasion, the matter of going too subtle became an issue, because the look might be minimized depending on the camera angle and how light struck the eye. Denis did point out a few times when he couldn’t actually tell that the character was supposed to have the blue eyes, so we adjusted in those instances. Paul and Jessica (Rebecca Ferguson) are in transition for a large section of the movie, so the color had to change as the spice affected them more and more.”
Looking back on the project, Lambert still marvels at the opportunity. “When I was first contacted, there was no hesitation on my part,” he laughs. “I mean, we’re talking about Dune here! And in the hands of someone like Denis, I have a very good feeling about how audiences will really feel transported to these worlds.”
Ellen Poon grew up in Hong Kong in the 1960s in a film-going family. Her creative aesthetic was highly influenced by the combination of British and Chinese cultures. A world-renowned expert in visual effects and animation, Ellen is a founding member of MPC’s Computer Graphics department. During her tenure at Industrial Light & Magic, she was the first woman to be made Visual Effects Supervisor. Ellen has a passion for Chinese film projects and has won two Hong Kong Film Awards for her work on Hero and Monster Hunt. Her filmography includes Jurassic Park, Star Wars: Episode I – The Phantom Menace, Jumanji, Frozen and Raya and the Last Dragon. Ellen advocates for increased diversity and inclusion as a member of the Academy of Motion Picture Arts and Sciences Executive Committee.
Starting out, I wanted a mentor, but couldn’t find anyone because computer graphics was in its infancy and there was no one in the field like me. What I’ve learned: when you are trying to forge a new path – for the industry and yourself – you need people who are kind and supportive in your corner. Mentors don’t need to be offering something tangible, like a job; they need to offer guidance, especially on how to navigate the microaggressions that women and minorities experience in our industry. To maintain focus, you need to have a very strong compass. Know who you are, what you’re trying to do, and maintain a sense of professionalism.
“Bold leaps are life defining moments; take them when you have the chance.”
Being the first woman to be named VFX Supervisor at Industrial Light & Magic was a great privilege and a huge responsibility. I felt the weight of representing all women, all minority women, and the need to be not just good – but excellent. I’ve seen how implicit bias manifests – the looks of disbelief from crew and clients that I could be a supervisor, or the unknown quantity that many people find hard to accept because there are just so few of us. It is harder to get work as a female supervisor, but the obstacles have increased my resolve to do well in this business and prove them wrong. For people starting out, this environment can be discouraging, so I am committed to sharing what I’ve learned.
As an artist in a position to lead others, it’s important to stay deeply connected to your craft. It takes budgets and pipelines to make our shows run, but to feed my artistry, I need to be creating at my workstation and imagining out loud. Technology changes nonstop, and to be a good supervisor capable of selling it into a project, you must keep your hands dirty.
Do not be afraid to dare greatly and listen to your inner calling. While working at ILM, a lot of great films came out of Hong Kong, China and Asia. I felt like I had a role to play in making these movies, as well as a connection to helping the business go further technologically in my hometown, the place that shaped me. I pursued the chance to work with the best Asian filmmakers on Hero by quitting my job at ILM to follow my passion. Bold leaps are life defining moments; take them when you have the chance.
By TREVOR HOGG
Transplanted from the streets of Wellington, New Zealand to Staten Island, New York, the television adaptation of What We Do in the Shadows returns to FX for a third season. The mockumentary about vampires Nandor (Kayvan Novak), Laszlo (Matt Berry), Nadja (Natasia Demetriou), Colin (Mark Proksch) and their Latino familiar Guillermo de la Cruz (Harvey Guillén) trying to cope with the drudgery of everyday life, as well as with other supernatural beings, has not lost its satirical bite. Supervising the over 1,000 visual effects shots for the 10 episodes, produced by WeFX, Spin VFX and Maverick, is Mohammad Ghorbankarimi (See), who founded Toronto-based WeFX during the pandemic.
“The whole idea of Nadja’s spirit going into a doll that comes to life, talks and has a personality was a hit in Season 2. The writers brought it in more for Season 3, and we’ve made the doll a lot more agile. We had a full CG Nadja doll but only use the portions that change, from mid-nose down to the chin and cheeks. We animate that and track it back onto the actual on-set doll. The eyes, head and neck movements are done practically by puppeteers.”
—Mohammad Ghorbankarimi, Visual Effects Supervisor
Comedic timing had to be kept in mind when creating the visual effects. “I have never done visual effects for comedy before, but the timing is as important as when a stand-up comedian appears onstage and performs a joke,” notes Ghorbankarimi. “Usually, you have to have the effect long enough for the viewer to digest it, but when it’s comedy we’re using the shortest amount to bring the joke and emotion out.
“When I came on board,” continues Ghorbankarimi, “I watched the movie and series at least 10 times to familiarize myself with everything. What I’m trying to do is repeat the same thing but bring a little of my own touch into it. The whole idea of Nadja’s spirit going into a doll that comes to life, talks and has a personality was a hit in Season 2. The writers brought it in more for Season 3, and we’ve made the doll a lot more agile.” Video reference was shot of Natasia Demetriou to get a sense of her movements, facial expressions and timing. “We had a full CG Nadja doll but only use the portions that change, from mid-nose down to the chin and cheeks,” he adds. “We animate that and track it back onto the actual on-set doll. The eyes, head and neck movements are done practically by puppeteers.”
Set extensions were created for the exterior shots. “Half of the main vampire house was built in the parking lot of the studio and the rest was extended in CG,” states Ghorbankarimi. “Because we’re not shooting on location this season, the neighbor house was added in CG. A lot of trees were added. On the other side of the stage there is a car dealership that we had to paint out. There are almost no set extensions for the interiors unless something breaks or doesn’t work as intended. In Episode 302, someone smashes into and demolishes a wall. We also did those kinds of effects for the interiors. The sets are beautiful. The way that DP D.J. Stipsen lights is painterly. So much design, talk and time has gone into every single room and location.” The plate photography is treated differently than the final image. “By the time it is fully graded and is on TV,” he says, “you don’t see some of the details because it’s so dark, but while you’re working, a brighter plate keeps everything vivid and visible to make sure that all of the details are there.”
The Cloak of Duplication allows the wearer to shapeshift into someone else. “It is one of the coolest new effects this season,” laughs Ghorbankarimi. “The body motion of the two actors has to be similar. Usually, we go from a much shorter actor to a much taller actor. We came up with this particle system in Houdini. The cloak creates this particle, the particle grabs the body, the body transforms the particle and rotates around. By the time the cloak is on the shoulders, the taller actor stands up, pushes all of these particles aside and walks out. The fabric is match moved in Maya to generate the particles, and the two bodies inside are match moved to collide with the particles. The particles are getting color from the environment. You’re mapping color from the actor’s outfit into the particle. We used Nuke to composite it all together. It’s very believable how they spin around like a twister and convert from one person to another.”
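To illustrate the transfer idea in the simplest possible terms, here is a rough Python/NumPy sketch of it: matched particles carry color sampled from the source actor’s outfit and blend toward the target actor’s body while swirling. It is only a conceptual outline, not the production Houdini and Maya setup, and every name in it is hypothetical.

```python
# Conceptual sketch only (hypothetical names, not the production Houdini/Maya setup).
# Matched particles carry color from the source actor and blend toward the target
# actor's body while swirling around the vertical axis.
import numpy as np

def transform_particles(src_points, src_colors, dst_points, t):
    """Return particle positions and colors at transformation time t in [0, 1].

    src_points, dst_points: (N, 3) arrays of matched surface samples
    src_colors:             (N, 3) RGB values sampled from the source actor's outfit
    """
    t = float(np.clip(t, 0.0, 1.0))
    # Linear travel from the source body toward the target body...
    pos = (1.0 - t) * src_points + t * dst_points
    # ...with a twister-like spin around the vertical (Y) axis.
    angle = t * 2.0 * np.pi
    rot = np.array([[np.cos(angle), 0.0, np.sin(angle)],
                    [0.0, 1.0, 0.0],
                    [-np.sin(angle), 0.0, np.cos(angle)]])
    pos = pos @ rot.T
    # Colors stay mapped from the source outfit and fade out as the reveal completes.
    col = src_colors * (1.0 - t)
    return pos, col
```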
Kristen Schaal portrays The Guide from the Vampire Council. “The Guide can shapeshift from human to smoke and vice versa,” explains Ghorbankarimi. “She doesn’t become like real smoke that blows in the air and dissipates. We came up with this idea that the smoke is fully controlled so she can bring the smoke back to her body or dissipate it away based on the action that she does. There is a shot in Episode 301 where, as The Guide is climbing a building, she needs to grab onto things. She can bring back the smoke and have it shape into the body part that is needed, like a hand. When she wants to look around, the smoke comes together to make an indistinguishable human shape. Every time The Guide wants to land, she jumps into the location. We animated the CG character that drives the smoke, and the moment that the foot hits the ground, the gravity and dynamics drive the smoke from that point on. It is an elaborate effect.”
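The hand-off he describes – art-directed control that gives way to gravity at the moment of impact – can be pictured as a per-step blend between steering particles toward an animated guide shape and letting them fall freely. The sketch below is a simplified Python illustration of that blend, not the show’s actual simulation setup, and the function names are hypothetical.

```python
# Conceptual sketch only (hypothetical names, not the show's simulation setup).
# While "control" is high, particles are steered toward an animated guide shape;
# as it drops to zero, gravity and free dynamics take over, e.g. at the foot plant.
import numpy as np

def step_smoke(pos, vel, guide_pos, dt, control, gravity=(0.0, -9.81, 0.0)):
    """Advance (N, 3) smoke particle positions and velocities by one step."""
    g = np.asarray(gravity)
    steer = (guide_pos - pos) / max(dt, 1e-6)        # velocity needed to reach the guide
    vel = control * steer + (1.0 - control) * (vel + g * dt)
    pos = pos + vel * dt
    return pos, vel
```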
“I come from a film background and try to come up with ideas that are mostly done practically, because nothing is more real than real things,” remarks Ghorbankarimi. “This season we did use Unreal Engine and LED panels for one of the episodes. The assets were built in advance, and we did real-time playback. It worked phenomenally. There was a lot of interactive light. It made a complicated episode into something that was doable. The Cloak of Duplication effect is so cool and funny, especially when Nandor is transforming into another character and mimics how that person sounds and talks. The big challenge is making sure that you are elevating, not degrading the show. It’s one of the best sets that I’ve been on. Everybody knows what they’re doing and there are no surprises on set. It’s a well-designed machine.”
“I come from a film background and try to come up with ideas that are mostly done practically because nothing is more real than real things. This season we did use Unreal Engine and LED panels for one of the episodes. The assets were built in advance, and we did real-time playback. It worked phenomenally. There was a lot of interactive light. It made a complicated episode into something that was doable.”
—Mohammad Ghorbankarimi, Visual Effects Supervisor
By IAN FAILES
If there’s one thing changing the way that physical production and visual effects interact right now, it’s the rise of LED walls, also known as LED volumes.
LED walls form part of the shift to virtual production and real-time content creation, and have gained prominence by enabling in-camera VFX shots that don’t require additional post-production, offering an alternative to bluescreen or greenscreen shooting, and helping to immerse actors and crews into what will eventually be the final shots.
Several production houses and visual effects studios have built dedicated LED walls and associated workflows. Some rely on bespoke LED wall setups, depending on the needs of the particular project. One company, Zoic Studios, even embarked on a project to install an LED wall inside existing studio space and use it to further the crew’s understanding of virtual production.
How did Zoic do that exactly? Here, Zoic Creative Director and Visual Effects Supervisor Julien Brami breaks down the process they followed and what they took away from this new virtual production experience.
A HISTORY OF VIRTUAL PRODUCTION AT ZOIC
Zoic’s foray into LED walls is actually part of a long history of virtual production solutions at the visual effects company, which has offices in Los Angeles, New York and Vancouver. In particular, some years ago the studio developed a system called ZEUS (Zoic Environmental Unification System) that could be used to track live-action camera movements on, for example, a greenscreen stage and produce real-time composites with previs-like assets. The idea was to provide instant feedback on set via a Simul-cam setup, either a dedicated monitor or smart tablet.
“FTG [Fuse Technical Group] lent us an LED wall for about three months, and it was really exciting because they said, ‘We can give you the panels and we can do the installation, but we can’t really give you anyone who knows how to do it. You have to learn yourself.’ So it was just a few of us here at Zoic doing it. We each tried to figure everything out.”
—Julien Brami, Creative Director and Visual Effects Supervisor, Zoic Studios
When Brami, who has been at Zoic since 2015, began noticing a major shift in the use of game engines for virtual production shoots on more recent shows – The Mandalorian, for example, has of course brought these new methods into the mainstream – he decided to try out the latest techniques himself at home using Epic Games’ Unreal Engine. “I’ve always been extremely attracted by new workflows and new technology,” shares Brami. “I mostly work in the advertising world and usually the deadlines are short and budgets are even shorter, so I’m always looking for new solutions. I’d been looking to employ real-time rendering in a scene as a way to show clients results straight away, for example.
“So,” he adds, “I first started doing all this virtual production stuff in my own room at home using just my LED TV and a couple of screens. What’s amazing with the technology is that the software is free. I just had a Vive controller and was tracking shots with my Sony DSLR. As soon as I saw something working, I called [Zoic Founder and Executive Creative Director] Chris Jones and said, ‘I think we need to do this at scale.’”
“The panels create light, but not strong enough to get shadows. So, you can have an overall global lighting and the value’s just right on the character, but there is no shadow for the character. That’s why we wanted to also have practical lighting. We used the DMX plugin from Unreal, which would send information for correct intensity and color temperature to every single practical light. So, when you move in real-time, the light actually changes, and we can create the real cast shadows on the characters.”
—Julien Brami, Creative Director and Visual Effects Supervisor, Zoic Studios
‘WE NEED TO LEARN IT’
With an Epic MegaGrant behind them, Zoic partnered with Fuse Technical Group (FTG) and Universal Studios to create an LED wall at their studio space in Culver City. It was a big leap for Zoic, which predominantly delivers episodic, film and commercial VFX work. Subsequently, the studio launched a ‘Real Time Group’ to explore more virtual production offerings and use Unreal Engine for visualization, virtual art department deliveries, animation and virtual production itself. “What we said was, ‘We have the time, we have the technology, we need to learn it,’” relates Brami, in terms of the initial push into LED wall filmmaking. “FTG lent us an LED wall for about three months, and it was really exciting because they said, ‘We can give you the panels and we can do the installation, but we can’t really give you anyone who knows how to do it. You have to learn yourself.’ So it was just a few of us here at Zoic doing it. We each tried to figure everything out.”

The LED wall setup at Zoic was used for a range of demos and tests, as well as for a ‘Gundam Battle: Gunpla Warfare’ spot featuring YouTuber Preston Blaine Arsement. The spot sees Preston interact with an animated Gundam character in a futuristic warehouse. The character and warehouse environment were projected on the LED wall. “That was the perfect scenario for this,” says Brami.
The wall was made up of around 220 panels. Zoic employed a Vive tracker setup for camera tracking and as a way to implement follow-focus. The content was pre-made for showing on the LED wall and optimized to run in Unreal Engine in real-time. “We had the LED processor and then two stations that were running on the set,” details Brami. “We also had a signal box that allowed us to have genlock, because that’s what matters the most. Everything has to work in tandem to ensure the refresh rate is right and the video cards can handle the playback.” There were many challenges in making its LED wall setup work, notes Brami, such as dealing with lag on the screen, owing to the real-time tracking of the physical camera. Another challenge involved lighting the subject in front of the LED panels. “The panels create light, but not strong enough to get shadows. So, you can have an overall global lighting and the value’s just right on the character, but there is no shadow for the character. That’s why we wanted to also have practical lighting. We used the DMX plugin from Unreal, which would send information for correct intensity and color temperature to every single practical light. So, when you move in real-time, the light actually changes, and we can create the real cast shadows on the characters.”

Brami also points to the permanence of virtual locations. “When you shoot on location, you may have the best location ever, but you don’t know, if there’s another season next year, whether this location will still be there or will remain untouched. Now, even 10 years from now, you might want a location in Germany, well, we can come back to the same scene built for virtual production. Exactly the same scene, which really is mind blowing.”
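As a rough picture of what driving practicals from the virtual scene involves, the sketch below remaps a virtual light’s intensity and color temperature to 8-bit DMX channel values for a fixture. It is illustrative Python only, not Unreal’s DMX plugin API; the two-channel layout and the send_dmx transport are hypothetical stand-ins for whatever the stage actually uses.

```python
# Illustrative only: not Unreal's DMX plugin API. The two-channel fixture layout
# (dimmer + color temperature) and send_dmx() are hypothetical stand-ins.
def to_dmx(intensity, color_temp_k, cct_min=2800.0, cct_max=10000.0):
    """Map intensity in [0, 1] and color temperature in kelvin to 8-bit DMX values."""
    clamp = lambda x: max(0.0, min(1.0, x))
    dimmer = int(round(clamp(intensity) * 255))
    cct = int(round(clamp((color_temp_k - cct_min) / (cct_max - cct_min)) * 255))
    return dimmer, cct

def update_fixture(send_dmx, universe, base_channel, intensity, color_temp_k):
    """Push the virtual light's current state to one practical fixture."""
    dimmer, cct = to_dmx(intensity, color_temp_k)
    send_dmx(universe, base_channel, dimmer)       # channel 1: dimmer
    send_dmx(universe, base_channel + 1, cct)      # channel 2: color temperature
```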
Arising from his LED wall and virtual production experiences so far, Zoic Studios Creative Director and Visual Effects Supervisor Julien Brami believes there are many advances still to come in this space.
One development Brami predicts is that there will be more widespread operation of LED walls remotely, partly something that became apparent during the pandemic. “All of this technology just works on networks. My vision is that one day I can have an actor somewhere in Texas, an actor somewhere in Germany, I can have the director anywhere else, but they all look at the same scene. As long as it can be all synchronized, we’ll be able to do it. And then you won’t need to all travel to the same location if you can’t do that or if it’s too expensive.”
Another advancement that Brami sees as coming shortly is further development in on-set real-time rendering to blend real and virtual environments. “This is going to be like an XR thing. You shoot with an LED wall, but then you can also add a layer of CG onto the frame as well – that’s still real-time. You can sandwich the live-action through the background that’s Unreal Engine-based on the wall and then add extra imagery over the wall.
“Doing this,” Brami says, “means you can actually create a lot of really cool effects, like particles and atmospherics, and make your sets bigger. It does need more firepower on set, but I think this is really what’s going to blend from an actor in front of a screen to something that’s fully completed. You could put in a creature there, you can put anything your space can sandwich. I’m really excited about this.”
“We know how to create VFX and we know how to create content and assets, we just need to get more involved in preparing this content for real-time on LED walls, since the assets need to run at high frame rates…”
—Julien Brami, Creative Director and Visual Effects Supervisor, Zoic Studios
THE STATE OF PLAY WITH ZOIC AND LED WALLS
It became apparent to Zoic during this test and building phase that a permanent build at their studio space might not be necessary, as Brami explains. “We realized a couple of things. First, this technology was changing so fast. Literally, every three months there was a new development, which meant that buying something now and having it delivered in three months would leave it obsolete.

“Take the LED panels, for example,” adds Brami. “The panels we had at the time were 2.8mm pixel pitch. The pixel pitch is basically the distance between LEDs on a panel, and it defines the resolution you can get from the real estate of the screen, how close you can get to it and the aberrations you will see. When we started shooting, 2.8 was state-of-the-art. But then we’ve seen pixel pitches of 1.0 appearing already. Everybody has seen the potential of this technology, and the manufacturers want to make it even better.”
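To put those pixel pitch figures in perspective, a quick back-of-the-envelope calculation shows how much resolution a finer pitch buys from the same screen real estate. The sketch assumes the common 500mm-square panel form factor, which is not confirmed for Zoic’s particular wall.

```python
# Back-of-the-envelope only, assuming 500 mm square LED panels (a common size,
# not confirmed for this particular wall).
def pixels_per_panel(pitch_mm, panel_mm=500.0):
    """Approximate pixels along one edge of a panel for a given pixel pitch."""
    return int(panel_mm // pitch_mm)

for pitch in (2.8, 1.0):
    px = pixels_per_panel(pitch)
    print(f"{pitch} mm pitch -> about {px} x {px} pixels per 500 mm panel")
```

At 2.8mm that works out to roughly 178 x 178 pixels per panel, against about 500 x 500 at 1.0mm, which is why a finer pitch lets the camera move closer to the wall before individual LEDs or moiré artifacts become visible.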
This meant Zoic decided, instead of buying an LED wall for permanent use, to utilize the loaned wall as much as possible for the three months and understand how it all worked. “Our main focus then became creating really amazing content for LED walls,” states Brami. “We know how to create VFX and we know how to create content and assets, we just need to get more involved in preparing this content for real-time on LED walls, since the assets need to run at high frame rates, etc.”
Already, Zoic has produced such content for a space-related project and also for scenes in the streaming series Sweet Tooth, among other projects. Indeed, Zoic believes it can take the knowledge learned from its LED wall experience and adapt it for individual clients, partnering with panel creators or other virtual production services to produce what the client needs. “So, if they need a really high-end wall, because everything needs to be still in focus, we know where to go and how to create a set for that,” says Brami. “But if they just need, let’s say, an out-of-focus background for a music video, we know where to go for that approach.

“I love where it’s all going, and I’m starting to really love this new technology,” continues Brami. “The past three years have really been the birth of it. The next five years are going to be amazing.”
By IAN FAILES
Aaron Sims Creative (ASC), which crafts both concept designs and final visual effects for film, television and games, bridging the worlds of practical and digital effects, has long delivered content through the ‘traditional’ workflows of crafting creatures and environments. More recently, though, ASC has embraced the latest real-time methods to imagine worlds and create final shots. In fact, a new series of films ASC is developing uses game engines at its core, as well as related virtual production techniques.
The first of these films is called DIVE, a live-action project in which ASC has been capitalizing on Epic Games’ Unreal Engine for previs, set scouting and animation workflows. Meanwhile, the studio has also tackled a number of commercial and demo projects with game engines and with LED volumes, part of a major move into adopting real-time workflows.
So why has ASC jumped head-first into real-time? “It’s exactly in the wording ‘real-time,’” comments ASC CEO Aaron Sims, who started out in the industry in the field of practical special effects makeup with Rick Baker before segueing into character design at Stan Winston Studio, introducing new digital workflows there. “You see everything instantaneously. That’s the benefit right there. With traditional visual effects, you do some work, you add your textures and materials, you look at it, you render it, and then you go, ‘OK, I’ve got to tweak it.’ And then you go and do that all again.
“But with real-time, as you’re tweaking you’re seeing everything happen with zero delay. There’s no delay whatsoever. So, the timeframe of being able to develop things and just be creative is instant.”
“You build it, and you can go with your team to this virtual world and go scout out your world before you go make it – we never really had that luxury in traditional visual effects. We can build just enough that you can actually start to scout it with a basic camera, almost like rough layouts, and then use Unreal’s Sequencer to base your shots, and work out what else you need to build.”
The move to real-time began when ASC started taking on more game-related and VR projects, with Sims noting they have now actually been using Unreal Engine for a number of years. “However,” he says, “it wasn’t until COVID hit that I saw some really incredible things being produced by artists and other visionaries who were coming up with techniques on how to use the engine for things beyond games. That’s what excited me, so I started getting into it and learning Unreal in more detail to help tell some of our stories. Then I realized, ‘OK, wait, this is more powerful than I expected it to actually be.’ On every level, it was like it was doing more than I ever anticipated.”
The other real benefit of real-time for Sims has been speed. “The faster I can get from point A to point B, I know it’s true to my original vision. The more you start muddying it up, the more it becomes something else and the less that you can actually see it in real-time. You’re just guessing until the end. So for me it’s been fascinating to see something that’s available now, especially during COVID when I’ve been stuck at home. It’s made it even more reason to dive into it as much as I can to learn as much.”
Over the past few years, ASC has regularly produced short film projects. Some have been for demos and tests and some as part of what Sims calls the studio’s ‘Sketch-to-Screen’ process, designed to showcase the steps in concept design, layout, asset creation, lookdev, previs, animation, compositing and final rendering. Sims has even had a couple of potential feature films come and go.
DIVE is the first project for which the studio has completely embraced a game engine pipeline, and is intended as the start of a series of ‘survival’ films that Sims and his writing partner, Tyler Winther (Head of Development at ASC), have envisaged.
“The films revolve around different environment-based scenarios,” outlines Sims. “This first one, DIVE, is about a character who’s put in a situation where they have to survive. We’re going to see diving and cave diving in the film, but we wanted to put an everyday person into all these different situations and have the audience go, ‘That could be me.’”
The development of the films has been at an internal level so far (ASC has also received an Epic Games MegaGrant to help fund development), but still involves some ambitious goals. For example, DIVE, of course, contains an underwater aspect to the filmmaking. Water simulation can be tricky enough already with existing visual effects tools, let alone inside a game engine for film-quality photorealistic results.
“It’s very challenging to do water,” admits Sims, although he adds that the technology has progressed dramatically in game engines.
“I thought, ‘Let’s do the most difficult one first, which is water.’ We have other settings in the desert and the woods, which we have seen more of with game engines, but water is hard.”
“Instead of just previs’ing the effects shots, we’re previs’ing the whole thing so we can see how the film plays. It helps the story develop in a way that is harder to just imagine while you’re writing it.”
To help plan out exactly what DIVE will look like – including what physical sets and locations may be necessary – the ASC team has been building virtual sets first and then carrying out virtual set scouting inside them. “You build it, and you can go with your team to this virtual world and go scout out your world before you go make it – we never really had that luxury in traditional visual effects,” notes Sims.
“We can build just enough that you can actually start to scout it with a basic camera, almost like rough layouts, and then use Unreal’s Sequencer to base your shots, and work out what else you need to build.”
That virtual set scout resembles previs, to a degree, continues Sims, while allowing the filmmakers to go much further. “Instead of just previs’ing the effects shots, we’re previs’ing the whole thing so we can see how the film plays. It helps the story develop in a way that is harder to just imagine while you’re writing it.”
This year, ASC helped Epic Games launch early access to Unreal Engine 5. The new version of the game engine incorporates several new technologies, including a ‘virtualized micropolygon geometry’ system called Nanite and a fully dynamic global illumination solution known as Lumen.
ASC’s work on a sample project called Valley of the Ancient, available with this UE5 release, showcased that new tech, along with what could be done with Unreal’s MetaHuman Creator app and what could be achieved in terms of character animation inside Unreal (these include the Animation Motion Warping, Control Rig and Full-Body IK Solver tools, plus the Pose Browser).
“I can’t speak highly enough about the tools,” comments Sims. “It’s become definitely a new medium for me. I haven’t gotten this excited since my days of starting in the ‘80s on makeup effects.
“It wasn’t until COVID hit that I saw some really incredible things being produced by artists and other visionaries who were coming up with techniques on how to use the engine for things beyond games. That’s what excited me, so I started getting into it and learning Unreal in more detail to help tell some of our stories. Then I realized, ‘OK, wait, this is more powerful than I expected it to actually be.’ On every level, it was like it was doing more than I ever anticipated.”
“Unreal is able to take in a lot more geo than you would expect,” adds Sims. “A lot more than you would be able to in Maya or some of these other programs, and it’s able to actually interact with the geo and even animate with the geo at a higher density than you normally would be able to.”
One aspect of the real-time approach Sims is keen to try out with DIVE and the related survival films is in shooting scenes, especially with LED volumes. Typically, volumes have served as essentially in-camera background visual effects environments or set-piece backgrounds, or for interactive lighting. ASC is exploring how this might extend to some kind of creature interaction, since the films will feature a selection of monsters.
“It’s still early days,” says Sims, who notes the different approaches they have considered include pre-animating, live animation and live motion capture. “We’re R&D’ing a lot of this process because, in terms of what’s on the LED walls, we’re still working out, how much can you change it? People are of course used to being able to be on set with a puppeteered creature that interacts with an actor and lets you do improv. Right now, we’re trying to create tools to be able to do that.”
While he is immersed in the world of game engines right now, Sims is well aware that there are many developments that still need to occur with real-time tools. One area is in the ability to provide ray-traced lighting in real-time on LED volumes. Because of the intensive processing required, fully photoreal backgrounds on LED walls are still often achieved with baked-in lighting.
“You’re seeing all the results instantly [in real-time]. So, as you’re tweaking it you’re not accidentally going back and doing it again because you forgot that you did something before because you’re waiting for the rendering process. And that’s just the rendering part. The camera, the lighting, the animation all being real-time is another component that makes it just so much more powerful and exciting as a filmmaker, but also a visual effects artist.”
“This means you’re not getting the exact same interaction that you would if it was ray-traced in real-time,” says Sims. “But with things like Lumen coming, it’s likely we’ll get even more interactive lighting that doesn’t need to be baked.”
Sims also mentions his experience with the effects simulation side of game engines (in Unreal Engine the two main tools here are Niagara for things like particle effects and fluids, and Chaos for physics and destruction). “There’s a lot of great tools in Unreal for creating particle effects and chaos or destruction. But a lot of times, you’re still having to do stuff outside, say in Houdini, and bring it in. I’m hoping that that’s going to change.”
For DIVE’s underwater environments, in particular, ASC has been experimenting with all the different kinds of water required (surfaces, waves and underwater views). “We’re also creating our own tools for bubbles and all that stuff, and how they’re interacting, clumping together and forming the roofs of caves – all in-engine, which is exciting. I think it’s just the getting in and out of the water that’s the most challenging thing.”
Despite the push for real-time, Sims and ASC of course continue to work with traditional design and VFX tools, and they are also maintaining links to a practical effects past for DIVE. Makeup effects are still a key part of the film.
“If there’s a reason to do practical, I say still do practical,” states Sims. “In certain situations, the hybrid approach is usually the most successful because of the fidelity of 4K and everything else that you didn’t have back in the ‘80s. You could get away with grain and all of that stuff back then.”
Still, Sims is adamant that real-time has changed the game for content creation. He says that right now there’s a great deal of momentum behind creating in real-time, from the software developers themselves through to all levels of filmmakers, since the technology is becoming widely accessible and something that can be done almost on an individual level.
“I mean, you’re seeing all the results instantly. So, as you’re tweaking it, you’re not accidentally going back and doing it again because you forgot that you did something before because you’re waiting for the rendering process.”
“And that’s just the rendering part,” notes Sims. “The camera, the lighting, the animation all being real-time is another component that makes it just so much more powerful and exciting as a filmmaker, but also a visual effects artist.”
By TREVOR HOGG
Even though the Coronavirus pandemic confined Doug Chiang to home for most of 2020, for the Vice President and Executive Creative Director, Star Wars at Lucasfilm, there was not a lot of family time. “I still couldn’t talk to them much because I was busy!” This is not surprising as he is the visual gatekeeper of the expanding Star Wars franchise, whether it be films and television shows, games, new media or theme parks.
The space opera did not exist in 1962 when Chiang was born in Taipei, Taiwan. “We moved to Dearborn, Michigan when I was about five years old. I remember the home we grew up in Taiwan, the kitchen and bedrooms. Before getting into bed, we had to wash our feet because of the dirt floors. We actually had pigs in the kitchen.”
Arriving in Michigan during the middle of winter introduced the five-year-old to snow and the realization that he didn’t quite fit in, as Asian families were a rarity in Dearborn and Westland where Chiang attended elementary school and high school.
“Our parents always encouraged us [he has an older brother Sidney and younger sister Lisa] to assimilate as quickly as possible, but as a family we were still culturally Chinese. I was your classic Asian nerd. I didn’t talk a lot, was quiet and looked different. I was picked on a lot, but the saving thing for me was that I quickly developed a reputation for being the class artist. I found that as a wonderful escape where I could create my own worlds and friends, so I drew a lot.”
“Star Wars had a huge impact on my generation. That completely defined my career goals in terms of what I wanted to do. I was starting to learn about stop-motion animation. When I saw The Making of Star Wars with Phil Tippett doing the stop-motion for the chess game on the Millennium Falcon, it all connected together. That’s when I went down to the basement of our home and borrowed my dad’s 8mm camera, tripod and lights, and started to make my own films.”
—Doug Chiang, Vice President and Executive Creative Director, Star Wars, ILM
A passion for filmmaking developed as a 15-year-old upon seeing the film that would launch the franchise that he is now associated with. “Star Wars had a huge impact on my generation. That completely defined my career goals in terms of what I wanted to do. I was starting to learn about stop-motion animation. When I saw The Making of Star Wars with Phil Tippett doing the stop-motion for the chess game on the Millennium Falcon, it all connected together. That’s when I went down to the basement of our home and borrowed my dad’s 8mm camera, tripod and lights, and started to make my own films. All trial and error. You’re discovering things by accident. I love that aspect of it.”
A newspaper ad caused the aspiring cinematic talent to enter the Michigan Student Film Festival. “I won first place and that gave me a lot of encouragement. It gave me connections to one of the founders, John Prusak, who would loan me professional tools.
That became the start of my quasi-filmmaking education. When I came out to UCLA for film school it was a lot more of the same. A lot of it was self-driven. One of my experimental animations was called Mental Block; I made that film in 10 weeks, taking over the little dining room of our dorm. I entered it into the Nissan FOCUS Awards and got first place. Winning a car as a sophomore was great! My goal was to direct films, but I realized that everybody in Los Angeles wants to do that, so there’s virtually no chance. I could do storyboards well, so that was going to be my foot into the industry. One of my first jobs out of film school was doing industrial storyboards for commercials.” A fateful job interview took place at Digital Productions, which was one of three major computer graphics companies in the 1980s. “I was hired on the spot to design and direct computer graphics commercials. That was my first introduction to combining film design and art direction with this new medium called computer graphics.”
Joining ILM was an illuminating experience for Chiang. “I always thought of the design process as one seamless workflow. ILM was different in that we were doing post-production design, so the art department was specifically for that. The films that I worked on, like Ghost, Terminator 2: Judgment Day and Forrest Gump, were all post-production design. While I was at ILM it started to evolve where we could participate in the pre-production design. It wasn’t until I started working with George Lucas when he hired me to head up the art department for the prequels in 1995 that I realized that was the way George had done it all along back with Joe Johnston and Ralph McQuarrie.
“I was one of the first people onboard while George was writing Star Wars: Episode I – The Phantom Menace,” Chiang continues. “I had been learning Joe Johnston and Ralph McQuarrie, and got their style down. When he told me that we were going to set the foundation for all of those designs, and aesthetically it is going to look different, that threw me for a loop because I felt like I was studying for the wrong test. My goal was to give him the spectacle that he wanted without any of the practical limitations. There were enough smart people at ILM like John Knoll to figure all of that out. It was world building and design in their purest form. I remembered it terrified ILM because they hadn’t developed anything of that scale. The Phantom Menace was the biggest film at that time at ILM with miniature sets. There was a huge digital component, but that was mostly for the characters.”
In 2000, Chiang established DC Studios and produced several animated shorts based on the illustrated book Robota, co-created with Orson Scott Card, which takes place on the mysterious planet of Orpheus and was inspired by his robot drawings. He would then co-found Ice Blink Studios in 2004 and carry on his collaboration with another innovative filmmaker, Robert Zemeckis, which resulted in Chiang receiving an Oscar. “Death Becomes Her was fascinating because the story is about the immortality potion, so our main characters can’t die. It was figuring out, ‘How do you achieve that?’ Real prosthetics can only go so far, and we had never achieved realistic skin in computer graphics before. It was a huge risk to try to combine the two. Forrest Gump was all about subtle adjustments to the reality of the world to create powerful dramatic images. Bob’s films can be complete spectacle like The Polar Express, and you have to lean into that sensibility. Bob doesn’t have a specific filmmaking style. He goes with what works best for his storytelling.”
Zemeckis developed such a strong level of trust in Chiang that he hired him to establish ImageMovers Digital, a ground-breaking performance-capture animation studio. “It was almost a perfect collision with all of my experiences, because I had that strong computer graphics experience working with Digital Productions, I had the strong foundation with ILM for post-production design, and then I had the pre-production design with George Lucas. ImageMovers Digital became this wonderful test-case where we had an opportunity to build a new company from scratch and create a new artform. We were taking a huge risk because the difficulty of that challenge equated to bigger budgets because of the sheer amount of time and the number of people that it takes. Literally, tools were being written as we were going. Those four years were a highlight of my career. It was a culmination of all that history of learning, having Bob as the visionary to drive all of that, and the support of Disney. It was sad for me [when it was closed by Disney in 2010] because we were right at the point of breaking through to having the right tools to make that big transition for success.”
Production on The Force Awakens saw Chiang join Lucasfilm to shepherd the expansion of the Star Wars universe. “Working with George Lucas for seven years, we established a logic in terms of how designs evolve in the Star Wars universe. It’s marrying the design history to our actual history. The prequels are in the craftsman era – that’s why the designs from Naboo are elegant and have sleek artforms. The shapes in the original trilogy become more angular and look like they came off of an assembly line. George always considered Star Wars to be like a period film where we do all of this homework and only 5% of it ends up onscreen, but all of that homework informs that 5%.” Believable fantasy designs need to be 80% familiar and 20% alien, Chiang learned. “I learned specific guidelines from George. When you design for the silhouette, draw it as if a kid could draw it. The other one is designing for personality. Certain shapes are very emotive, and if you can design with that in mind, and on top of that put in color and details, it’ll be more successful. The way your brain works is that you’ll see the shape first which will tell you right away, ‘Is it friendly or bad, and what it’s supposed to do.’ In the end, the details don’t inform the design but can make it better.”
Conceptualizing and constructing the Star Wars: Galaxy’s Edge theme park lands situated in Disneyland and Disney World has been a whole other level of design for Chiang. “Film sets exist for weeks and we tear them down, while theme parks exist for years, so the materials have to be real and there is no cheating. On top of that, you layer in the whole health and safety component because you’re going to have people wandering through these environments unsupervised. We’re trying to layer in what makes Star Wars special. For instance, the whole idea of no handrails is a real thing in Star Wars. You obviously can’t do that. The aesthetic part was just as challenging because we wanted to create a design that fits seamlessly with our films. One of the things that I didn’t realize is the sunlight quality in Florida is different than Anaheim. You have to tune it to take into account the cooler light in Florida. On top of that, there are hurricanes in Florida whereas you have earthquakes in Anaheim. The WDI engineering team are amazing artists in their own right.
“There was a period of time where films became CG spectacles because the audience hadn’t seen anything like that before,” observes Chiang. “What I find interesting now is that got boring because it became too much sugar. Now I see the pendulum swinging back to where, what is the best technique for the film? One of the great things about working with Jon Favreau on The Mandalorian is precisely that. I don’t know what the younger generation will think because they have grown up with video games where they’re completely immersed in digital spectacle.”
Virtual production works best with filmmakers who know exactly what they want, Chiang explains. “What’s different about what we’re doing now with StageCraft and the volume is we’re bringing a good percentage of post-production work upfront so it involves all of the department heads collaborating to create a seamless process. Virtual production is transforming filmmaking into a fluid process, which is in some ways what computer graphics did when it first became available as a tool. The world building has to be complete before we actually start photographing the film. But it’s not unlike what I was doing with George. When I think back to what George, Robert Zemeckis and Jon Favreau were doing, they’re of the same mind. We were all trying to create a technique to better tell stories and to make the most efficient process to create visual spectacle onscreen.”