Ellen Poon grew up in Hong Kong in the 1960s in a film-going family. Her creative aesthetic was highly influenced by the combination of British and Chinese cultures. A world-renowned expert in visual effects and animation, Ellen is a founding member of MPC’s Computer Graphics department. During her tenure at Industrial Light & Magic, she was the first woman to be made Visual Effects Supervisor. Ellen has a passion for Chinese film projects and has won two Hong Kong Film Awards for her work on Hero and Monster Hunt. Her filmography includes Jurassic Park, Star Wars: Episode I – The Phantom Menace, Jumanji, Frozen and Raya and the Last Dragon. Ellen advocates for increased diversity and inclusion as a member of the Academy of Motion Picture Arts and Sciences Executive Committee.
Starting out, I wanted a mentor, but couldn’t find anyone because computer graphics was in its infancy and there was no one in the field like me. What I’ve learned: when you are trying to forge a new path – for the industry and yourself – you need people who are kind and supportive in your corner. Mentors don’t need to be offering something tangible, like a job; they need to offer guidance, especially on how to navigate the microaggressions that women and minorities experience in our industry. To maintain focus, you need to have a very strong compass. Know who you are, what you’re trying to do, and maintain a sense of professionalism.
“Bold leaps are life-defining moments; take them when you have the chance.”
Being the first woman to be named VFX Supervisor at Industrial Light & Magic was a great privilege and a huge responsibility. I felt the weight of representing all women, all minority women, and the need to be not just good – but excellent. I’ve seen how implicit bias manifests – the looks of disbelief from crew and clients that I could be a supervisor, or the unknown quantity that many people find hard to accept because there are just so few of us. It is harder to get work as a female supervisor, but the obstacles have increased my resolve to do well in this business and prove them wrong. For people starting out, this environment can be discouraging, so I am committed to sharing what I’ve learned.
As an artist in a position to lead others, it’s important to stay deeply connected to your craft. It takes budgets and pipelines to make our shows run, but to feed my artistry, I need to be creating at my workstation and imagining out loud. Technology changes nonstop, and to be a good supervisor capable of selling it into a project, you must keep your hands dirty.
Do not be afraid to dare greatly and listen to your inner calling. While working at ILM, a lot of great films came out of Hong Kong, China and Asia. I felt like I had a role to play in making these movies, as well as a connection to helping the business go further technologically in my hometown, the place that shaped me. I pursued the chance to work with the best Asian filmmakers on Hero by quitting my job at ILM to follow my passion. Bold leaps are life-defining moments; take them when you have the chance.
By TREVOR HOGG
Transplanted from the streets of Wellington, New Zealand to Staten Island, New York, the television adaptation of What We Do in the Shadows returns to FX for a third season. The mockumentary about vampires Nandor (Kayvan Novak), Laszlo (Matt Berry), Nadja (Natasia Demetriou), Colin (Mark Proksch) and their Latino familiar Guillermo de la Cruz (Harvey Guillén) trying to cope with the drudgery of everyday life as well as other supernatural beings has not lost its satirical bite. Supervising the over 1,000 visual effects shots for the 10 episodes produced by WeFX, Spin VFX and Maverick is Mohammad Ghorbankarimi (See), who founded Toronto-based WeFX during the pandemic.
“The whole idea of Nadja’s spirit going into a doll that comes to life, talks and has a personality was a hit in Season 2. The writers brought it in more for Season 3, and we’ve made the doll a lot more agile. We had a full CG Nadja doll but only use the portions that change, from mid-nose down to the chin and cheeks. We animate that and track it back onto the actual on-set doll. The eyes, head and neck movements are done practically by puppeteers.”
—Mohammad Ghorbankarimi, Visual Effects Supervisor
Comedic timing had to be kept in mind when creating the visual effects. “I have never done visual effects for comedy before, but the timing is as important as when a stand-up comedian appears onstage and performs a joke,” notes Ghorbankarimi. “Usually, you have to have the effect long enough for the viewer to digest it, but when it’s comedy we’re using the shortest amount to bring the joke and emotion out.
“When I came on board,” continues Ghorbankarimi, “I watched the movie and series at least 10 times to familiarize myself with everything. What I’m trying to do is repeat the same thing but bring a little of my own touch into it. The whole idea of Nadja’s spirit going into a doll that comes to life, talks and has a personality was a hit in Season 2. The writers brought it in more for Season 3, and we’ve made the doll a lot more agile.” Video reference was shot of Natasia Demetriou to get a sense of her movements, facial expressions and timing. “We had a full CG Nadja doll but only use the portions that change, from mid-nose down to the chin and cheeks,” he adds. “We animate that and track it back onto the actual on-set doll. The eyes, head and neck movements are done practically by puppeteers.”
Set extensions were created for the exterior shots. “Half of the main vampire house was built in the parking lot of the studio and the rest was extended in CG,” states Ghorbankarimi. “Because we’re not shooting on location this season, the neighbor house was added in CG. A lot of trees were added. On the other side of the stage there is a car dealership that we had to paint out. There are almost no set extensions for the interiors unless something breaks or doesn’t work as intended. In Episode 302, someone smashes into and demolishes a wall. We also did those kinds of effects for the interiors. The sets are beautiful. The way that DP D.J. Stipsen lights is painterly. So much design, talk and time has gone into every single room and location.” The plate photography is treated differently than the final image. “By the time it is fully graded and is on TV,” he says, “you don’t see some of the details because it’s so dark, but when you’re working the details are kept vivid and visible with a brighter plate to make sure that all of the details are there.”
The Cloak of Duplication allows the wearer to shapeshift into someone else. “It is one of the coolest new effects this season,” laughs Ghorbankarimi. “The body motion of the two actors has to be similar. Usually, we go from a much shorter actor to a much taller actor. We came up with this particle system in Houdini. The cloak creates this particle, the particle grabs the body, the body transforms the particle and rotates around. By the time the cloak is on the shoulders, the taller actor stands up, pushes all of these particles aside and walks out. The fabric is match moved in Maya to generate the particles, and the two bodies inside are match moved to collide with the particles. The particles are getting color from the environment. You’re mapping color from the actor’s outfit into the particle. We used Nuke to composite it all together. It’s very believable how they spin around like a twister and convert from one person to another.”
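As a rough illustration of the color-mapping idea Ghorbankarimi describes (and only that idea), the short Python sketch below gives every particle the color of the nearest point on a match-moved body surface, blending from one actor’s outfit to the other’s as the transformation progresses. The point clouds, colors and simple linear blend are invented for the example; the production setup lives in Houdini, Maya and Nuke, not NumPy.

    # Illustrative sketch, not the show's Houdini setup: transfer color from
    # match-moved body surfaces onto a swirling particle cloud.
    import numpy as np

    rng = np.random.default_rng(7)

    # Stand-ins for the two match-moved bodies: sampled surface points with RGB
    # colors assumed to come from each actor's outfit.
    source_points = rng.uniform(-1.0, 1.0, size=(500, 3))        # shorter actor
    source_colors = np.tile([0.8, 0.2, 0.2], (500, 1))           # reddish outfit
    target_points = rng.uniform(-1.0, 1.0, size=(500, 3)) * 1.3  # taller actor
    target_colors = np.tile([0.1, 0.3, 0.7], (500, 1))           # bluish outfit

    # The particle cloud emitted from the cloak.
    particles = rng.uniform(-1.3, 1.3, size=(2000, 3))

    def nearest_color(particle_pos, surface_points, surface_colors):
        """Copy the color of the closest surface sample onto a particle."""
        d2 = np.sum((surface_points - particle_pos) ** 2, axis=1)
        return surface_colors[np.argmin(d2)]

    def shade_particles(t):
        """t in [0, 1]: 0 = fully the source actor's colors, 1 = fully the target's."""
        colors = np.empty((len(particles), 3))
        for i, p in enumerate(particles):
            c_src = nearest_color(p, source_points, source_colors)
            c_dst = nearest_color(p, target_points, target_colors)
            colors[i] = (1.0 - t) * c_src + t * c_dst
        return colors

    if __name__ == "__main__":
        mid_transform = shade_particles(0.5)
        print("mean particle color halfway through the swap:", mid_transform.mean(axis=0))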
Kristen Schaal portrays The Guide from the Vampire Council. “The Guide can shapeshift from human to smoke and vice versa,” explains Ghorbankarimi. “She doesn’t become like real smoke that blows in the air and dissipates. We came up with this idea that the smoke is fully controlled so she can bring the smoke back to her body or dissipate it away based on the action that she does. There is a shot in Episode 301 where, as The Guide is climbing a building, she needs to grab onto things. She can bring back the smoke and have it shape into the body part that is needed like a hand. When she wants to look around, the smoke comes together to make an indistinguishable human shape. Every time The Guide wants to land, she jumps into the location. We animated the CG character that drives the smoke, and the moment that the foot hits the ground, the gravity and dynamics drive the smoke from that point on. It is an elaborate effect.”
“I come from a film background and try to come up with ideas that are mostly done practically because nothing is more real than real things,” remarks Ghorbankarimi. “This season we did use Unreal Engine and LED panels for one of the episodes. The assets were built in advance, and we did real-time playback. It worked phenomenally. There was a lot of interactive light. It made a complicated episode into something that was doable. The Cloak of Duplication effect is so cool and funny, especially when Nandor is transforming into another character and mimics how that person sounds and talks. The big challenge is making sure that you are elevating, not degrading the show. It’s one of the best sets that I’ve been on. Everybody knows what they’re doing and there are no surprises on set. It’s a well-designed machine.”
“I come from a film background and try to come up with ideas that are mostly done practically because nothing is more real than real things. This season we did use Unreal Engine and LED panels for one of the episodes. The assets were built in advance, and we did real-time playback. It worked phenomenally. There was a lot of interactive light. It made a complicated episode into something that was doable.”
—Mohammad Ghorbankarimi, Visual Effects Supervisor
By IAN FAILES
If there’s one thing changing the way that physical production and visual effects interact right now, it’s the rise of LED walls, also known as LED volumes.
LED walls form part of the shift to virtual production and real-time content creation, and have gained prominence by enabling in-camera VFX shots that don’t require additional post-production, offering an alternative to bluescreen or greenscreen shooting, and helping to immerse actors and crews into what will eventually be the final shots.
Several production houses and visual effects studios have built dedicated LED walls and associated workflows. Some rely on bespoke LED wall setups, depending on the needs of the particular project. One company, Zoic Studios, even embarked on a project to install an LED wall inside existing studio space and use it to further the crew’s understanding of virtual production.
How did Zoic do that exactly? Here, Zoic Creative Director and Visual Effects Supervisor Julien Brami breaks down the process they followed and what they took away from this new virtual production experience.
A HISTORY OF VIRTUAL PRODUCTION AT ZOIC
Zoic’s foray into LED walls is actually part of a long history of virtual production solutions at the visual effects company, which has offices in Los Angeles, New York and Vancouver. In particular, some years ago the studio developed a system called ZEUS (Zoic Environmental Unification System) that could be used to track live-action camera movements on, for example, a greenscreen stage and produce real-time composites with previs-like assets. The idea was to provide instant feedback on set via a Simul-cam setup, either a dedicated monitor or smart tablet.
“FTG [Fuse Technical Group] lent us an LED wall for about three months, and it was really exciting because they said, ‘We can give you the panels and we can do the installation, but we can’t really give you anyone who knows how to do it. You have to learn yourself.’ So it was just a few of us here at Zoic doing it. We each tried to figure everything out.”
—Julien Brami, Creative Director and Visual Effects Supervisor, Zoic Studios
When Brami, who has been at Zoic since 2015, began noticing a major shift in the use of game engines for virtual production shoots on more recent shows – The Mandalorian, for example, has of course brought these new methods into the mainstream – he decided to try out the latest techniques himself at home using Epic Games’ Unreal Engine. “I’ve always been extremely attracted by new workflows and new technology,” shares Brami. “I mostly work in the advertising world and usually the deadlines are short and budgets are even shorter, so I’m always looking for new solutions. I’d been looking to employ real-time rendering in a scene as a way to show clients results straight away, for example.
“So,” he adds, “I first started doing all this virtual production stuff in my own room at home using just my LED TV and a couple of screens. What’s amazing with the technology is that the software is free. I just had a Vive controller and was tracking shots with my Sony DSLR. As soon as I saw something working, I called [Zoic Founder and Executive Creative Director] Chris Jones and said, ‘I think we need to do this at scale.’”
“The panels create light, but not strong enough to get shadows. So, you can have an overall global lighting and the value’s just right on the character, but there is no shadow for the character. That’s why we wanted to also have practical lighting. We used the DMX plugin from Unreal, which would send information for correct intensity and color temperature to every single practical light. So, when you move in real-time, the light actually changes, and we can create the real cast shadows on the characters.”
—Julien Brami, Creative Director and Visual Effects Supervisor, Zoic Studios
‘WE NEED TO LEARN IT’
With an Epic MegaGrant behind them, Zoic partnered with Fuse Technical Group (FTG) and Universal Studios to create an LED wall at their studio space in Culver City. It was a big leap for Zoic, which predominantly delivers episodic, film and commercial VFX work. Subsequently, the studio launched a ‘Real Time Group’ to explore more virtual production offerings and use Unreal Engine for visualization, virtual art department deliveries, animation and virtual production itself. “What we said was, ‘We have the time, we have the technology, we need to learn it,’” relates Brami, in terms of the initial push into LED wall filmmaking. “FTG lent us an LED wall for about three months, and it was really exciting because they said, ‘We can give you the panels and we can do the installation, but we can’t really give you anyone who knows how to do it. You have to learn yourself.’ So it was just a few of us here at Zoic doing it. We each tried to figure everything out.”
The LED wall setup at Zoic was used for a range of demos and tests, as well as for a ‘Gundam Battle: Gunpla Warfare’ spot featuring YouTuber Preston Blaine Arsement. The spot sees Preston interact with an animated Gundam character in a futuristic warehouse. The character and warehouse environment were projected on the LED wall. “That was the perfect scenario for this,” says Brami.
The wall was made up of around 220 panels. Zoic employed a Vive tracker setup for camera tracking and as a way to implement follow-focus. The content was pre-made for showing on the LED wall and optimized to run in Unreal Engine in real-time. “We had the LED processor and then two stations that were running on the set,” details Brami. “We also had a signal box that allowed us to have genlock, because that’s what matters the most. Everything has to work in tandem to ensure the refresh rate is right and the video cards can handle the playback.”
There were many challenges in making the LED wall setup work, notes Brami, such as dealing with lag on the screen, owing to the real-time tracking of the physical camera. Another challenge involved lighting the subject in front of the LED panels. “The panels create light, but not strong enough to get shadows. So, you can have an overall global lighting and the value’s just right on the character, but there is no shadow for the character. That’s why we wanted to also have practical lighting. We used the DMX plugin from Unreal, which would send information for correct intensity and color temperature to every single practical light. So, when you move in real-time, the light actually changes, and we can create the real cast shadows on the characters,” says Brami.
Another benefit Brami points to is that a virtual set, unlike a location, can be revisited indefinitely. “When you shoot on location, you may have the best location ever, but you don’t know, if there’s another season next year, whether this location will be there or will remain untouched. Now, even 10 years from now, you might want a location in Germany; well, we can come back to the same scene built for virtual production. Exactly the same scene, which really is mind-blowing.”
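To make the DMX handoff concrete, here is a minimal, self-contained sketch of the kind of data being streamed to set: a virtual light’s intensity and color temperature packed into 8-bit DMX channel values for a simple dimmer-plus-CCT fixture. The three-channel fixture profile and the Kelvin range are assumptions for the example; in practice this mapping is handled by the engine’s DMX plugin and the fixture’s own profile.

    # Illustrative sketch only: pack a virtual light's state into DMX channels.
    # The fixture layout (dimmer, CCT, strobe) and 2700-6500K range are assumed.

    def to_dmx(value, lo, hi):
        """Map a value in [lo, hi] to an 8-bit DMX channel (0-255)."""
        value = max(lo, min(hi, value))
        return round((value - lo) / (hi - lo) * 255)

    def light_state_to_dmx(intensity, kelvin, kelvin_range=(2700, 6500)):
        """Return DMX channel values for a simple dimmer + CCT fixture."""
        return {
            "dimmer": to_dmx(intensity, 0.0, 1.0),   # channel 1
            "cct": to_dmx(kelvin, *kelvin_range),    # channel 2
            "strobe": 0,                             # channel 3, unused here
        }

    if __name__ == "__main__":
        # As the virtual camera moves, the engine re-evaluates scene lighting and
        # streams updated values every frame; here we just print two sample frames.
        print(light_state_to_dmx(intensity=0.35, kelvin=4200))
        print(light_state_to_dmx(intensity=0.80, kelvin=5600))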
Arising from his LED wall and virtual production experiences so far, Zoic Studios Creative Director and Visual Effects Supervisor Julien Brami believes there are many advances still to come in this space.
One development Brami predicts is that there will be more widespread operation of LED walls remotely, partly something that became apparent during the pandemic. “All of this technology just works on networks. My vision is that one day I can have an actor somewhere in Texas, an actor somewhere in Germany, I can have the director anywhere else, but they all look at the same scene. As long as it can be all synchronized, we’ll be able to do it. And then you won’t need to all travel to the same location if you can’t do that or if it’s too expensive.”
Another advancement that Brami sees as coming shortly is further development in on-set real-time rendering to blend real and virtual environments. “This is going to be like an XR thing. You shoot with an LED wall, but then you can also add a layer of CG onto the frame as well – that’s still real-time. You can sandwich the live-action through the background that’s Unreal Engine-based on the wall and then add extra imagery over the wall.
“Doing this,” Brami says, “means you can actually create a lot of really cool effects, like particles and atmospherics, and make your sets bigger. It does need more firepower on set, but I think this is really what’s going to blend from an actor in front of a screen to something that’s fully completed. You could put in a creature there, you can put anything your space can sandwich. I’m really excited about this.”
“We know how to create VFX and we know how to create content and assets, we just need to get more involved in preparing this content for real-time on LED walls, since the assets need to run at high frame rates…”
—Julien Brami, Creative Director and Visual Effects Supervisor, Zoic Studios
THE STATE OF PLAY WITH ZOIC AND LED WALLS
It became apparent to Zoic during this test and building phase that a permanent build at their studio space may not be necessary, as Brami explains. “We realized a couple of things. First, this technology was changing so fast. Literally, every three months there was a new development, which meant buying something now and having it delivered in three months means it would be obsolete.
“Take the LED panels, for example,” adds Brami. “The panels we had at the time were 2.8mm pixel pitch. The pixel pitch is basically the distance between LEDs on a panel, and these define the resolution you can get from the real estate of the screen and how close you can get to it and all the aberrations you can get. When we started shooting, 2.8 was state-of-the-art. But then we’ve seen pixel pitches of 1.0 appearing already. Everybody has seen the potential of this technology, and the manufacturers want to make it even better.”
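A quick worked example shows what those pixel-pitch numbers mean in practice. The 500mm panel size and the rule-of-thumb minimum viewing distance (roughly one meter per millimeter of pitch) are illustrative assumptions, not figures from the Zoic build: resolution is simply panel dimension divided by pitch, so a finer pitch packs more pixels into the same real estate and lets the camera move closer before individual LEDs resolve.

    # Worked example of what pixel pitch means for an LED wall.
    # Panel size and the "1mm of pitch ~= 1m of viewing distance" rule of thumb
    # are illustrative assumptions.

    def panel_resolution(panel_mm, pitch_mm):
        """Pixels along one edge of a square panel = edge length / pixel pitch."""
        return int(panel_mm / pitch_mm)

    for pitch in (2.8, 1.5, 1.0):
        px = panel_resolution(500, pitch)   # a common 500mm x 500mm panel
        wall_px = px * 20                   # e.g. a wall 20 panels wide
        print(f"{pitch}mm pitch: {px}x{px} px per panel, "
              f"{wall_px} px across a 20-panel wall, "
              f"~{pitch:.1f} m comfortable minimum camera distance")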
This meant Zoic decided that, instead of buying an LED wall for permanent use, it would utilize the loaned wall as much as possible over the three months to understand how it all worked. “Our main focus then became creating really amazing content for LED walls,” states Brami. “We know how to create VFX and we know how to create content and assets, we just need to get more involved in preparing this content for real-time on LED walls, since the assets need to run at high frame rates, etc.”
Already, Zoic has produced such content for a space-related project and also for scenes in the streaming series Sweet Tooth, among other projects. Indeed, Zoic believes it can take the knowledge learned from its LED wall experience and adapt it for individual clients, partnering with panel creators or other virtual production services to produce what the client needs. “So, if they need a really high-end wall, because everything needs to be still in focus, we know where to go and how to create a set for that,” says Brami. “But if they just need, let’s say, an out-of-focus background for a music video, we know where to go for that approach.
“I love where it’s all going, and I’m starting to really love this new technology,” continues Brami. “The past three years have really been the birth of it. The next five years are going to be amazing.”
By IAN FAILES
Aaron Sims Creative (ASC), which crafts both concept designs and final visual effects for film, television and games, bridging the worlds of practical and digital effects, has long delivered content relying on the long-standing ‘traditional’ workflows inherent in crafting creatures and environments. But more recently, ASC has embraced the latest real-time methods to imagine worlds and create final shots. In fact, a new series of films ASC is developing is using game engines at its core, as well as related virtual production techniques.
The first of these films is called DIVE, a live-action project in which ASC has been capitalizing on Epic Games’ Unreal Engine for previs, set scouting and animation workflows. Meanwhile, the studio has also tackled a number of commercial and demo projects with game engines and with LED volumes, part of a major move into adopting real-time workflows.
So why has ASC jumped head-first into real-time? “It’s exactly in the wording ‘real-time,’” comments ASC CEO Aaron Sims, who started out in the industry in the field of practical special effects makeup with Rick Baker before segueing into character design at Stan Winston Studio, introducing new digital workflows there. “You see everything instantaneously. That’s the benefit right there. With traditional visual effects, you do some work, you add your textures and materials, you look at it, you render it, and then you go, ‘OK, I’ve got to tweak it.’ And then you go and do that all again.
“You build it, and you can go with your team to this virtual world and go scout out your world before you go make it – we never really had that luxury in traditional visual effects. We can build just enough that you can actually start to scout it with a basic camera, almost like rough layouts, and then use Unreal’s Sequencer to base your shots, and work out what else you need to build.”
“But with real-time, as you’re tweaking you’re seeing everything happen with zero delay. There’s no delay whatsoever. So, the timeframe of being able to develop things and just be creative is instant.”
The move to real-time began when ASC started taking on more game-related and VR projects, with Sims noting they have now actually been using Unreal Engine for a number of years. “However,” he says, “it wasn’t until COVID hit that I saw some really incredible things being produced by artists and other visionaries who were coming up with techniques on how to use the engine for things beyond games. That’s what excited me, so I started getting into it and learning Unreal in more detail to help tell some of our stories. Then I realized, ‘OK, wait, this is more powerful than I expected it to actually be.’ On every level, it was like it was doing more than I ever anticipated.”
The other real benefit of real-time for Sims has been speed. “The faster I can get from point A to point B, I know it’s true to my original vision. The more you start muddying it up, the more it becomes something else and the less that you can actually see it in real-time. You’re just guessing until the end. So for me it’s been fascinating to see something that’s available now, especially during COVID when I’ve been stuck at home. It’s been even more reason to dive into it as much as I can and learn as much as possible.”
Over the past few years, ASC has regularly produced short film projects. Some have been for demos and tests and some as part of what Sims calls the studio’s ‘Sketch-to-Screen’ process, designed to showcase the steps in concept design, layout, asset creation, lookdev, previs, animation, compositing and final rendering. Sims has even had a couple of potential feature films come and go.
DIVE is the first project for which the studio has completely embraced a game engine pipeline, and is intended as the start of a series of ‘survival’ films that Sims and his writing partner, Tyler Winther (Head of Development at ASC), have envisaged.
“The films revolve around different environment-based scenarios,” outlines Sims. “This first one, DIVE, is about a character who’s put in a situation where they have to survive. We’re going to see diving and cave diving in the film, but we wanted to put an everyday person into all these different situations and have the audience go, ‘That could be me.’”
The development of the films has been at an internal level so far (ASC has also received an Epic Games MegaGrant to help fund development), but still involves some ambitious goals. For example, DIVE, of course, contains an underwater aspect to the filmmaking. Water simulation can be tricky enough already with existing visual effects tools, let alone inside a game engine for film-quality photorealistic results.
“It’s very challenging to do water,” admits Sims, although he adds that the technology has progressed dramatically in game engines.
“Instead of just previs’ing the effects shots, we’re previs’ing the whole thing so we can see how the film plays. It helps the story develop in a way that is harder to just imagine while you’re writing it.”
“I thought, ‘Let’s do the most difficult one first, which is water.’ We have other settings in the desert and the woods, which we have seen more of with game engines, but water is hard.”
To help plan out exactly what DIVE will look like – including what physical sets and locations may be necessary – the ASC team has been building virtual sets first and then carrying out virtual set scouting inside them. “You build it, and you can go with your team to this virtual world and go scout out your world before you go make it – we never really had that luxury in traditional visual effects,” notes Sims.
“We can build just enough that you can actually start to scout it with a basic camera, almost like rough layouts, and then use Unreal’s Sequencer to base your shots, and work out what else you need to build.”
That virtual set scout resembles previs, to a degree, continues Sims, while allowing the filmmakers to go much further. “Instead of just previs’ing the effects shots, we’re previs’ing the whole thing so we can see how the film plays. It helps the story develop in a way that is harder to just imagine while you’re writing it.”
This year, ASC helped Epic Games launch its announcement of early access to Unreal Engine 5. The new version of the game engine incorporates several new technologies, including a ‘virtualized micropolygon geometry’ system called Nanite and a fully dynamic global illumination solution known as Lumen.
ASC’s work on a sample project called Valley of the Ancient, available with this UE5 release, showcased that new tech, along with what could be done with Unreal’s MetaHuman Creator app and what could be achieved in terms of character animation inside Unreal (these include the Animation Motion Warping, Control Rig and Full-Body IK Solver tools, plus the Pose Browser).
“I can’t speak highly enough about the tools,” comments Sims. “It’s become definitely a new medium for me. I haven’t gotten this excited since my days of starting in the ’80s on makeup effects.”
“It wasn’t until COVID hit that I saw some really incredible things being produced by artists and other visionaries who were coming up with techniques on how to use the engine for things beyond games. That’s what excited me, so I started getting into it and learning Unreal in more detail to help tell some of our stories. Then I realized, ‘OK, wait, this is more powerful than I expected it to actually be.’ On every level, it was like it was doing more than I ever anticipated.”
“Unreal is able to take in a lot more geo than you would expect,” adds Sims. “A lot more than you would be able to in Maya or some of these other programs, and it’s able to actually interact with the geo and even animate with the geo at a higher density than you normally would be able to.”
One aspect of the real-time approach Sims is keen to try out with DIVE and the related survival films is in shooting scenes, especially with LED volumes. Typically, volumes have served as essentially in-camera background visual effects environments or set-piece backgrounds, or for interactive lighting. ASC is exploring how this might extend to some kind of creature interaction, since the films will feature a selection of monsters.
“It’s still early days,” says Sims, who notes the different approaches they have considered include pre-animating, live animation and live motion capture. “We’re R&D’ing a lot of this process because, in terms of what’s on the LED walls, we’re still working out, how much can you change it? People are of course used to being able to be on set with a puppeteered creature that interacts with an actor and lets you do improv. Right now, we’re trying to create tools to be able to do that.”
While he is immersed in the world of game engines right now, Sims is well aware that there are many developments that still need to occur with real-time tools. One area is in the ability to provide ray-traced lighting in real-time on LED volumes. Because of the intensive processing required, fully photoreal backgrounds on LED walls are still often achieved with baked-in lighting.
“You’re seeing all the results instantly [in real-time]. So, as you’re tweaking it you’re not accidentally going back and doing it again because you forgot that you did something before because you’re waiting for the rendering process. And that’s just the rendering part. The camera, the lighting, the animation all being real-time is another component that makes it just so much more powerful and exciting as a filmmaker, but also a visual effects artist.”
“This means you’re not getting the exact same interaction that you would if it was ray-traced in real-time,” says Sims. “But with things like Lumen coming, it’s likely we’ll get even more interactive lighting that doesn’t need to be baked.”
Sims also mentions his experience with the effects simulation side of game engines (in Unreal Engine the two main tools here are Niagara for things like particle effects and fluids, and Chaos for physics and destruction). “There’s a lot of great tools in Unreal for creating particle effects and chaos or destruction. But a lot of times, you’re still having to do stuff outside, say in Houdini, and bring it in. I’m hoping that that’s going to change.”
For DIVE’s underwater environments, in particular, ASC has been experimenting with all the different kinds of water required (surfaces, waves and underwater views). “We’re also creating our own tools for bubbles and all that stuff, and how they’re interacting, clumping together and forming the roofs of caves – all in-engine, which is exciting. I think it’s just the getting in and out of the water that’s the most challenging thing.”
Despite the push for real-time, Sims and ASC of course continue to work with traditional design and VFX tools, and they are also maintaining links to a practical effects past for DIVE. Makeup effects are still a key part of the film.
“If there’s a reason to do practical, I say still do practical,” states Sims. “In certain situations, the hybrid approach is usually the most successful because of the fidelity of 4K and everything else that you didn’t have back in the ‘80s. You could get away with grain and all of that stuff back then.”
Still, Sims is adamant that real-time has changed the game for content creation. He says that right now there’s a great deal of momentum behind creating in real-time, from the software developers themselves through to all levels of filmmakers, since the technology is becoming highly accessible, and something that can be done almost on an individual level.
“I mean, you’re seeing all the results instantly. So, as you’re tweaking it, you’re not accidentally going back and doing it again because you forgot that you did something before because you’re waiting for the rendering process.”
“And that’s just the rendering part,” notes Sims. “The camera, the lighting, the animation all being real-time is another component that makes it just so much more powerful and exciting as a filmmaker, but also a visual effects artist.”
By TREVOR HOGG
Even though the Coronavirus pandemic confined Doug Chiang to home for most of 2020, for the Vice President and Executive Creative Director, Star Wars at Lucasfilm, there was not a lot of family time. “I still couldn’t talk to them much because I was busy!” This is not surprising as he is the visual gatekeeper of the expanding Star Wars franchise, whether it be films and television shows, games, new media or theme parks.
The space opera did not exist in 1962 when Chiang was born in Taipei, Taiwan. “We moved to Dearborn, Michigan when I was about five years old. I remember the home we grew up in Taiwan, the kitchen and bedrooms. Before getting into bed, we had to wash our feet because of the dirt floors. We actually had pigs in the kitchen.”
Arriving in Michigan during the middle of winter introduced the five-year-old to snow and the realization that he didn’t quite fit in, as Asian families were a rarity in Dearborn and Westland where Chiang attended elementary school and high school.
“Our parents always encouraged us [he has an older brother Sidney and younger sister Lisa] to assimilate as quickly as possible, but as a family we were still culturally Chinese. I was your classic Asian nerd. I didn’t talk a lot, was quiet and looked different. I was picked on a lot, but the saving thing for me was that I quickly developed a reputation for being the class artist. I found that as a wonderful escape where I could create my own worlds and friends, so I drew a lot.”
“Star Wars had a huge impact on my generation. That completely defined my career goals in terms of what I wanted to do. I was starting to learn about stop-motion animation. When I saw The Making of Star Wars with Phil Tippett doing the stop-motion for the chess game on the Millennium Falcon, it all connected together. That’s when I went down to the basement of our home and borrowed my dad’s 8mm camera, tripod and lights, and started to make my own films.”
—Doug Chiang, Vice President and Executive Creative Director, Star Wars, ILM
A passion for filmmaking developed as a 15-year-old upon seeing the film that would launch the franchise that he is now associated with. “Star Wars had a huge impact on my generation. That completely defined my career goals in terms of what I wanted to do. I was starting to learn about stop-motion animation. When I saw The Making of Star Wars with Phil Tippett doing the stop-motion for the chess game on the Millennium Falcon, it all connected together. That’s when I went down to the basement of our home and borrowed my dad’s 8mm camera, tripod and lights, and started to make my own films. All trial and error. You’re discovering things by accident. I love that aspect of it.”
A newspaper ad caused the aspiring cinematic talent to enter the Michigan Student Film Festival. “I won first place and that gave me a lot of encouragement. It gave me connections to one of the founders, John Prusak, who would loan me professional tools.
That became the start of my quasi-filmmaking education. When I came out to UCLA for film school it was a lot more of the same. A lot of it was self-driven. I made one of my experimental animations, called Mental Block, in 10 weeks, taking over the little dining room of our dorm. I entered it into the Nissan FOCUS Awards and got first place. Winning a car as a sophomore was great! My goal was to direct films, but I realized that everybody in Los Angeles wants to do that, so there’s virtually no chance. I could do storyboards well so that was going to be my foot into the industry. One of my first jobs out of film school was doing industrial storyboards for commercials.”
A fateful job interview took place at Digital Productions, which was one of three major computer graphics companies in the 1980s. “I was hired on the spot to design and direct computer graphics commercials. That was my first introduction to combining film design and art direction with this new medium called computer graphics.”
Joining ILM was an illuminating experience for Chiang. “I always thought of the design process as one seamless workflow. ILM was different in that we were doing post-production design, so the art department was specifically for that. The films that I worked on, like Ghost, Terminator 2: Judgment Day and Forrest Gump, were all post-production design. While I was at ILM it started to evolve where we could participate in the pre-production design. It wasn’t until I started working with George Lucas when he hired me to head up the art department for the prequels in 1995 that I realized that was the way George had done it all along back with Joe Johnston and Ralph McQuarrie.
“I was one of the first people onboard while George was writing Star Wars: Episode I – The Phantom Menace,” Chiang continues. “I had been studying Joe Johnston and Ralph McQuarrie, and got their style down. When he told me that we were going to set the foundation for all of those designs, and aesthetically it is going to look different, that threw me for a loop because I felt like I was studying for the wrong test. My goal was to give him the spectacle that he wanted without any of the practical limitations. There were enough smart people at ILM like John Knoll to figure all of that out. It was world building and design in their purest form. I remember it terrified ILM because they hadn’t developed anything of that scale. The Phantom Menace was the biggest film at that time at ILM with miniature sets. There was a huge digital component, but that was mostly for the characters.”
In 2000, Chiang established DC Studios and produced several animated shorts based on the illustrated book Robota, co-created with Orson Scott Card, which takes place on the mysterious planet of Orpheus and was inspired by his robot drawings. He would then co-found Ice Blink Studios in 2004 and carry on his collaboration with another innovative filmmaker, Robert Zemeckis, whose Death Becomes Her had earlier earned Chiang an Oscar. “Death Becomes Her was fascinating because the story is about the immortality potion, so our main characters can’t die. It was figuring out, ‘How do you achieve that?’ Real prosthetics can only go so far, and we had never achieved realistic skin in computer graphics before. It was a huge risk to try to combine the two. Forrest Gump was all about subtle adjustments to the reality of the world to create powerful dramatic images. Bob’s films can be complete spectacle like The Polar Express, and you have to lean into that sensibility. Bob doesn’t have a specific filmmaking style. He goes with what works best for his storytelling.”
Zemeckis developed such a strong level of trust in Chiang that he hired him to establish ImageMovers Digital, a ground-breaking performance-capture animation studio. “It was almost a perfect collision with all of my experiences, because I had that strong computer graphics experience working with Digital Productions, I had the strong foundation with ILM for post-production design, and then I had the pre-production design with George Lucas. ImageMovers Digital became this wonderful test-case where we had an opportunity to build a new company from scratch and create a new artform. We were taking a huge risk because the difficulty of that challenge equated to bigger budgets because of the sheer amount of time and the number of people that it takes. Literally, tools were being written as we were going. Those four years were a highlight of my career. It was a culmination of all that history of learning, having Bob as the visionary to drive all of that, and the support of Disney. It was sad for me [when it was closed by Disney in 2010] because we were right at the point of breaking through to having the right tools to make that big transition for success.”
Production on The Force Awakens saw Chiang join Lucasfilm to shepherd the expansion of the Star Wars universe. “Working with George Lucas for seven years, we established a logic in terms of how designs evolve in the Star Wars universe. It’s marrying the design history to our actual history. The prequels are in the craftsman era – that’s why the designs from Naboo are elegant and have sleek artforms. The shapes in the original trilogy become more angular and look like they came off of an assembly line. George always considered Star Wars to be like a period film where we do all of this homework and only 5% of it ends up onscreen, but all of that homework informs that 5%.” Believable fantasy designs need to be 80% familiar and 20% alien, Chiang learned. “I learned specific guidelines from George. When you design for the silhouette, draw it as if a kid could draw it. The other one is designing for personality. Certain shapes are very emotive, and if you can design with that in mind, and on top of that put in color and details, it’ll be more successful. The way your brain works is that you’ll see the shape first which will tell you right away, ‘Is it friendly or bad, and what it’s supposed to do.’ In the end, the details don’t inform the design but can make it better.”
Conceptualizing and constructing the Star Wars: Galaxy’s Edge theme parks situated in Disneyland and Disney World has been a whole other level of design for Chiang. “Film sets exist for weeks and we tear them down while theme parks exist for years, so the materials have to be real and there is no cheating. On top of that, you layer in the whole health and safety component because you’re going to have people wandering through these environments unsupervised. We’re trying to layer in what makes Star Wars special. For instance, the whole idea of no handrails is a real thing in Star Wars. You obviously can’t do that. The aesthetic part was just as challenging because we wanted to create a design that fits seamlessly with our films. One of the things that I didn’t realize is the sunlight quality in Florida is different than Anaheim. You have to tune it to take into account the cooler light in Florida. On top of that, there are hurricanes in Florida whereas you have earthquakes in Anaheim. The WDI engineering team are amazing artists in their own right.
“There was a period of time where films became CG spectacles because the audience hadn’t seen anything like that before,” observes Chiang. “What I find interesting now is that got boring because it became too much sugar. Now I see the pendulum swinging back to where, what is the best technique for the film? One of the great things about working with Jon Favreau on The Mandalorian is precisely that. I don’t know what the younger generation will think because they have grown up with video games where they’re completely immersed in digital spectacle.”
Virtual production works best with filmmakers who know exactly what they want, Chiang explains. “What’s different about what we’re doing now with StageCraft and the volume is we’re bringing a good percentage of post-production work upfront so it involves all of the department heads collaborating to create a seamless process. Virtual production is transforming filmmaking into a fluid process, which is in some ways what computer graphics did when it first became available as a tool. The world building has to be complete before we actually start photographing the film. But it’s not unlike what I was doing with George. When I think back to what George, Robert Zemeckis and Jon Favreau were doing, they’re of the same mind. We were all trying to create a technique to better tell stories and to make the most efficient process to create visual spectacle onscreen.”
By TREVOR HOGG
No strangers to the DC Universe on the small screen are Encore Hollywood Creative Director & Visual Effects Supervisor Armen Kevorkian and Encore Hollywood Visual Effects Coordinator Gregory Pierce, who have worked together on Supergirl, DC’s Legends of Tomorrow, The Flash, Titans and now the third season of Doom Patrol. A group of misfits who have acquired superpowers through tragic circumstances are brought together by Dr. Niles Caulder (Timothy Dalton) to form a dysfunctional family that battles the forces of evil with unconventional methods. Each of the 10 episodes has approximately 100 to 200 visual effects shots, created by Encore VFX and divided among facilities in Burbank, Vancouver and Atlanta based on scenes and expertise. The second season was shortened to nine episodes because of the coronavirus pandemic, which had a ripple effect: the narrative carried over into Season 3, as did the need to work remotely. Kevorkian notes, “It’s a different world for how we shoot things, but it was nice to get back out there.”
Outrageous storylines and characters are a trademark of the action-adventure-comedy. “One of the things that make Doom Patrol different is that it’s creative in terms of how it’s written, how the characters are developed and the situations that they find themselves in,” remarks Pierce. “All of the effects work that we have to do has to be photoreal because this is a photoreal show, but at the same time there is a lot of surrealness to it. A lot of times with our visual effects work, we don’t know exactly what this will look like because it never happened in real life. But we still have to ground it within the scene. Some of the scenes are clearly written with an idea in mind as to how they should look, and those are related to us during the prep meetings before shooting. Other things we have more freedom to experiment with and try new things.”
“All of the effects work that we have to do has to be photoreal because this is a photoreal show, but at the same time there is a lot of surrealness to it. A lot of times with our visual effects work, we don’t know exactly what this will look like because it never happened in real life. But we still have to ground it within the scene. Some of the scenes are clearly written with an idea in mind as to how they should look, and those are related to us during the prep meetings before shooting. Other things we have more freedom to experiment with and try new things.”
—Gregory Pierce, Visual Effects Coordinator, Encore Hollywood
Ideas from the comic books are reimagined to fit into the real world. “The writers come up with the kookiest ideas where you go, ‘Holy shit, we’re going to do that!’” laughs Kevorkian. “Jeremy Carver [Showrunner] and Chris Dingess [Executive Producer] are great to collaborate with to bring those things to life.”
Over the three seasons, there are not many recurring visual effects, reveals Kevorkian. “One of the things that has remained the same since Season 1 is the negative spirit, the entity that lives inside of Larry Trainor [Matt Bomer] and comes out, because it tells the story that it needs to, and looks good visually,” he states. “We do have the shield that Cyborg [Joivan Wade] brings up, and the arm cannon when it reveals itself and goes away. Once you do something like that it’s the same methodology but shot in a different way. There are digital doubles for all of the characters. When we can’t do something practically for Robotman [Riley Shanahan] then we’ll do a CG version.”
Encore VFX is currently using a pipeline that utilizes Houdini and RenderMan. “We’ve landed on different and new technologies, especially for our CG characters,” notes Pierce. “We’re doing a lot more simulations on them. We’re trying to push the boundaries of what we can do to add some dynamics to these characters. During the end of last season and into this season we’ve been doing more motion capture, in particular for [new character] Monsieur Mallah as it helped to give him more of a presence and ground him in the scene.”
April Bowlby portrays actress Rita Farr, otherwise known as Elasti-Girl. “We still do face drooping when she feels nervous and is not in control of her power,” remarks Kevorkian. “We’ve done a few gags where she is able to stretch out to confront someone or grab something from a distance. The general idea of what is happening to Rita is the same – it’s just putting her into different situations. Some of the stuff that she will be doing this season like manifesting into her blob self is a completely different methodology.
“In Season 1, we did butt monsters, which are butts with arms, and they’re making a comeback this year in a bigger sequence. I’m excited to bring them to life because that’s not something you get to do every day. The butt monsters speak to the comedic factor of the show along with being terrifying as well. It’s the most fun that we had animating and coming up with different things that they’ll be doing.”
—Armen Kevorkian, Creative Director & Visual Effects Supervisor, Encore Hollywood
“It was challenging to make it seem seamless, but was also fun to do something different with that character. None of it is procedural. If she turns into a blob in a container, she comes out and forms into a person, which takes a lot of blend shapes and morphing. It’s figuring out how to go from CG cloth simulations to real cloth. The biggest challenge has always been Rita doing something that we have to reimagine how we’re going to do it. If I send back shots at all, they’re usually Rita shots.”
“Dorothy Spinner [Abigail Shapiro], the daughter of Dr. Niles Caulder, has these imaginary characters that she manifests which have to exist in the real world, such as the Candlemaker [Lex Lang],” explains Kevorkian. “He premiered last season and there is a little bit of him in Season 3. In Season 1, we did butt monsters, which are butts with arms, and they’re making a comeback this year in a bigger sequence. I’m excited to bring them to life because that’s not something you get to do every day. The butt monsters speak to the comedic factor of the show along with being terrifying as well. It’s the most fun that we had animating and coming up with different things that they’ll be doing. There was one moment where the Doom Patrol interacts with one or two of them. We sent our model to props which did a 3D print of one so that the actors could touch it.”
Introduced is the Sisterhood of Dada, which consists of five bizarre supervillains with the ability to act as chaotically as the Dadaism art movement. “There is a lot that we’re creating for that storyline, for example, Dada birds,” states Kevorkian. “This season we also have a gorilla character from the comics called Monsieur Mallah that wears a beret and has a machine gun. There are a few images out there of him. He actually speaks and appears in a few episodes. We started from scratch to avoid any similarities with Grodd [from The Flash]. We built a model and whole new muscle and fur systems for him. The final ADR determines the animation. The facial anatomy doesn’t translate one-to-one with a human, so we have to cheat certain things to make it visually correct. I was never worried about Monsieur Mallah – he looks great.”
Research is guided by the scripts. “If something gets mentioned in a script, like a character, I will do research to see if it’s something that really exists,” remarks Kevorkian. “I will always use that as my starting point. I will go ahead and conceptualize something and send it over to Jeremy, who knows exactly what he wants and is very specific. It makes our process easier knowing that Jeremy has a vision of what we’re creating. If we’re in sync, sometimes we get first-pass approvals. There are times when he chimes in and says, ‘You need to adjust this to tell the story.’ We’ll have one or two rounds of notes and get there fairly quickly. That adds to the quality, because we’re not going back and forth on things that slow you down.”
“We’ve landed on different and new technologies, especially for our CG characters. We’re doing a lot more simulations on them. We’re trying to push the boundaries of what we can do to add some dynamics to these characters. During the end of last season and into this season we’ve been doing more motion capture, in particular for Monsieur Mallah as it helped to give him more of a presence and ground him in the scene.”
—Gregory Pierce, Visual Effects Coordinator, Encore Hollywood
A majority of the storyboards, previs and postvis are created by Encore VFX. “They do have storyboard artists in Atlanta to work with directors on sequences,” states Kevorkian. “Encore VFX does all necessary previs and concept art of creatures. At times, some of the concepts that we have to match come from the art department because it’s something they’re going to build practically. Postvis is done with stunts.” An emphasis is placed upon location shooting. “The greenscreen is used for environments that do not exist and need to be created,” adds Kevorkian. “Most of the world building involves digital augmentation. For example, last year we did a scene where Dorothy is on a lunar surface. The art department built part of a set for the lunar surface that had crystals which popped out of the ground. We built our own version based on that for the extensions that we did.”
Visual effects have a close partnership with stunts and special effects. “Stunt Coordinator Thom Khoury Williams always has great ideas and works with our ideas,” remarks Kevorkian. “All of those sequences usually come out the way that we prepped them so there are no surprises when we get into editorial.” Doom Patrol definitely has its own visual style, he observes. “Maybe it is a bit retro with some of the equipment that you see on set. The look of Robotman is from the comic books. He’s not your shiny 21st century robot, but something you could have built in your garage. The show definitely has an older feel, so we try to stay within that same world.” Kevorkian found everything was challenging in an exciting way. “I’m excited for people to see Monsieur Mallah and the butt monsters that come back in Episode 304. Each episode is unique, so it will be different things for different people when they watch this season. Doom Patrol is not repetitive. Every season is its own thing.”
By TREVOR HOGG
The line between animation and live-action has blurred so much, with visual effects achieving such a high fidelity of photorealism, that cinema has become a seamless hybrid of the two mediums. This is an achievement that British filmmaker Terry Gilliam (12 Monkeys) finds to be more problematic than creatively liberating. “If you’re watching Johnny Depp on the yardarm of a four-masted pirate ship sword fighting somebody else, there’s no gravity involved! Tom and Jerry and the Roadrunner understood gravity. Modern filmmaking is so artificial now. It’s not real people in real situations.”
Gilliam began his career creating animated segments for Monty Python’s Flying Circus. “I was limited by the amount of time and money that I had to do what I did, so it became cut-outs.” Animation at its best is showcased in Disney classics Pinocchio and Snow White and the Seven Dwarfs, he says. “I love the animation done by the Nine Old Men [Walt Disney Productions’ core animators, some of whom later became directors] because it was such hard work and required an incredible understanding of people and animals.
“The problem is that live-action and animation are becoming one and the same,” observes Gilliam. “As George Lucas got more successful and had more money, it became more elaborate. Rather than have three spaceships flying around you could have a thousand. It becomes abstract at that point. You don’t have what you had when two people are fighting to the death.” Live-action is where Gilliam will remain. “People keep asking me why don’t I make another animated film, but I don’t want to. I like working with actors who bring their own view of the world to the work. It’s not like working with other animators who you are directing to do this and that the way you want it. I want other people to be part of the process and take me out of my limitations and show me different ways of looking at how a scene or a character could be played.”
“Animation is, ‘Here’s the shot, let me animate within it and make that work.’ It literally only works for that angle whereas mine works for all angles, but I pick the one that looks the coolest.”
—Robert Legato, ASC, Visual Effects Supervisor
Even after making Deadpool and Terminator: Dark Fate, Executive Producer Tim Miller and his visual effects company, Blur Studio, have continued to produce and create animated shorts, in particular for the Netflix anthology Love, Death + Robots. “When you’re in the animation business you are a student of motion. You’re always trying to carve away at the artificiality of what you’re doing to find the reality that feels natural. I do the same thing in film,” says Miller. “I just use people instead of digital characters. I’m conscious of poses and silhouettes, the kinds of things that animators would think about. All of the time I would walk on set during Terminator: Dark Fate and say to Mackenzie Davis, ‘I need you to drop the shoulder and chin because it looks tougher. Stand a little contrapposto or three quarter because it makes a better silhouette.’ Coming from animation helps with making the transition to a film set because it’s working with groups of artists and having them not hate you.
“I rely heavily on previs and storyboards which allow me to figure out mistakes before they get expensive or impossible to correct,” explains Miller. “Every filmmaker will tell you that there is that special panic that you have on the day [you shoot] because you know no matter what your budget is you’re never going back to this place where you are. In animation you have this luxury of being able to return to any location to redo any shot you want, no matter where in the process you are. That’s a hard limitation to get used to if you’re making the transition from animation to live-action.”
“The problem is that live-action and animation are becoming one and the same. As George Lucas got more successful and had more money, it became more elaborate. Rather than have three spaceships flying around you could have a thousand. It becomes abstract at that point. You don’t have what you had when two people are fighting to the death.”
—Terry Gilliam, Filmmaker
Miller references some things that need to be kept in mind when making the transition from live-action to animation. “You have to be able to recognize enough about your intentions, because it’s a while before you get to see the final shot. On the flipside, you’re not stuck in the continuity that you shot.” All of the high-end animation and visual effects companies have some level of customized commercial software like Houdini, Maya and 3ds Max, he points out. “Then you have Blender, which is free and open source, and the whole animation community can contribute to it, and that is a game-changer.”
Collaborating with Brad Bird on both animated and live-action projects such as The Incredibles and Mission: Impossible – Ghost Protocol is Rick Sayre, Supervising Technical Director at Pixar Animation Studios. “Brad is a special case because he had some live-action commercial experience, but is also keenly aware of cinematography and production design, all of the practical production principles. The other big thing that Brad had going for him is he’s a writer-director, and that can translate quite nicely into the live-action side.” Something that a live-action filmmaker has to get used to when transitioning to animation is that everything has to be intentionally planned out, says Sayre. “The happy accident or natural interplay on set, you have to create in animation. In traditional animation it is obvious what you’re producing is a frame. But in live-action you have to be conscious of that. The audience only sees what the camera sees.”
Previs is an animation process that has been adopted by live-action. “A lot of previs in live-action has become an enriched storyboard where you’re finding angles and figuring out what is the most exciting way to shoot this action,” remarks Sayre. “Almost every giant-budget genre picture now will have scenes that are entirely animated. In that sense you have these so-called live-action films that are actually animated movies, like Gravity.” The blending of two mediums dates back to Fleischer Studios having animated characters walk through live-action sets. “Pixilation didn’t used to mean giant pixels. It meant that you were moving something a frame at a time. That might be an actor on set appearing and disappearing with a cut or doing animation with a miniature. There is such a rich history of filmmaking being a hybrid art.”
“The reason I do previs is that I can’t draw,” admits Oscar-winning Visual Effects Supervisor Robert Legato, ASC (Hugo). “On the last film that I did with Michael Bay, I ended up doing car accident simulations. I don’t simulate for what would actually happen, but for what I want it to do. Then I figure out a clever way of shooting it as opposed to animate and plan every moment. After you piece the scene together, the shortest bridge is what you force the animation into being. People who are fluent in animation would rather animate the whole thing. I find it doesn’t work well for me because I’m steeped in live-action work. Even if the animation is probably correct, I question it because it wasn’t done with science, gravity, weight and mass, all of the various things that I factor in when I set up a gag or shoot. Animation is, ‘Here’s the shot, let me animate within it and make that work.’ It literally only works for that angle whereas mine works for all angles, but I pick the one that looks the coolest.
“What we did which I liked in The Lion King, and what Andy Jones [Animation Supervisor] did, was only make the animals do what they can do within the confines of the scene,” notes Legato. “The animals can jump and leap but couldn’t do it 50 times more than what was actually possible. You couldn’t stretch and squash the animation. You had to work within the idea that if you could train an animal to do that, it would do that. If you get them to move their mouth with the same cadence of speech you could almost shoot it live. That was the sensibility behind it. Sometimes we wanted to invent our own version of the movie as opposed to being a direct homage. When we ran into some problems, we went back to the old one, and it turned out that they had the same problem and solved it.”
The most widely known animation principles are the 12 devised by Disney’s Nine Old Men during the 1930s and later published in Disney Animation: The Illusion of Life in 1981. Focusing on five in particular that apply to live-action is Ross Burgess, Head of Animation, Episodic at MPC. “Within feature animation, the timing of a character is easier to manage as it’s fully CG and you can re-time your camera to fit the performance. In live-action, often you must counter the animation to the timing of the plate that has been pre-shot. It’s easier to exaggerate the character’s emotions or movements in feature animation. You are not bound to the ‘reality’ of a real-life environment. In visual effects, we use exaggeration slightly differently in the way that we animate our characters or anthropomorphize our animals. It’s all about the subtlety of a character and knowing when you have broken the ‘reality.’
“Drawing has become as important in live-action features as it is in CG animated features,” continues Burgess. “We use custom software that allows the animator to draw over work in dailies to make sure that a character or creature remains ‘on model.’ Animation principles [arcs, squash and stretch] are as important in live-action, and sometimes it’s easier and quicker to draw over the work and show the animator than to trial and error with Maya. A good pose is as important in visual effects. At the end of the day, we are all trying to tell the story in the easiest and clearest way possible. Clarity of pose is the most important ingredient when you’re trying to tell a story in under two hours. Appeal is everything, isn’t it? How much you can identify or love a character is steeped in how appealing the character is.”
“When you’re in the animation business you are a student of motion. You’re always trying to carve away at the artificiality of what you’re doing to find the reality that feels natural. I do the same thing in film — I just use people instead of digital characters. I’m conscious of poses and silhouettes, the kinds of things that animators would think about. … Coming from animation helps with making the transition to a film set because it’s working with groups of artists and having them not hate you.”
—Tim Miller, Visual Effects Supervisor/ Owner, Blur Studio
“A lot of the principles of animation and filmmaking boil down to having an aesthetic sensibility and a clear idea on how to best tell a story using sound and vision,” believes Michael Eames, Global Director of Animation at Framestore. “Animators often use exaggerated poses or actions in a performance to help put a particular story point across. Whether the look is stylized and perhaps extreme, or realistic and more subtle, the principle is the same – just more a matter of degree as to how you apply it. Thinking about stylized animation working in the context of live action, we recently did Tom and Jerry, which is a hybrid. We wanted to find a balance between a typically stylized, almost flat-looking classic 2D cartoon and an environment that is real. One technique we used to help marry the two worlds was to apply a directional rim light that connected to the direction of light in the real plate.”
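To make that rim-light idea concrete, here is a minimal sketch in Python of the kind of calculation involved: a stylized rim term that brightens the character’s silhouette only on the side facing the plate’s key light. It is an illustration under simple assumptions (a single directional key light, per-point normals), not Framestore’s actual shader; the function names and constants are invented for the example.

```python
import math

def normalize(v):
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(c / mag for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rim_light(normal, view_dir, plate_light_dir, power=3.0, strength=0.6):
    """Stylized rim term for a cartoon character composited over a live plate.

    The rim brightens silhouette edges (where the surface turns away from
    camera) but only on the side facing the plate's key light, so the CG
    character appears lit by the same source as the real environment.
    """
    n = normalize(normal)
    v = normalize(view_dir)
    l = normalize(plate_light_dir)

    # Silhouette falloff: strongest where the normal is perpendicular to the view.
    silhouette = (1.0 - max(0.0, dot(n, v))) ** power

    # Directional mask: only rim the side of the character facing the plate light.
    facing_light = max(0.0, dot(n, l))

    return strength * silhouette * facing_light

# Example: a point on the character's screen-left edge, key light from screen left.
print(rim_light(normal=(-1.0, 0.2, 0.1),
                view_dir=(0.0, 0.0, 1.0),
                plate_light_dir=(-1.0, 0.5, 0.2)))
```

Raising the `power` exponent tightens the rim to the very edge of the silhouette, which reads as more graphic and cartoon-like against the live plate.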
For Eames, one of the best collaborative experiences with a filmmaker occurred during the making of Where the Wild Things Are with Spike Jonze. “They started out with a group of actors voicing the parts in the space that represented the environment. A given scene would play out in the film. Spike then selected those tracks and played them on set to the costumed performers, to drive their physical performance,” explains Eames. “The bit that didn’t work was the facial performance because it needed to be visceral and delicate. Animation came in and listened to voice recordings, watched the actors in suit, and added to the facial performance by replacing the entire face in CG. That was a perfect combination of three different things that led to one final result. When people complain about CG animation stealing away their thunder, it’s not that. It’s the evolution of filmmaking. Whatever it takes, as long as we’re serving the story in the best possible way.”
By TREVOR HOGG
A poem left in the typewriter belonging to Cornell University classmate Peter Yarrow, who would in turn compose accompanying music, had a huge impact on Lenny Lipton, as the song royalties for “Puff the Magic Dragon” have provided a lifetime of financial security. “It’s a weird story but true!” notes Lipton, who spent a decade-long odyssey researching and writing his homage to cinematic innovators, titled The Cinema in Flux: The Evolution of Motion Picture Technology from the Magic Lantern to the Digital Era. Lipton is a kindred spirit, being a pioneer of stereography and founding the StereoGraphics Corporation in 1980, which two decades later was acquired by Real D Cinema. The physics graduate developed the first electronic stereoscopic visualization eyewear known as CrystalEyes, which has been used for molecular modeling, aerial mapping, and to remotely drive the Mars Rovers. “I saw 3D movies and comic books as a kid in the early 1950s, which got me interested in the stereographic medium.
“When I approached Springer, I thought I had died and gone to heaven,” remarks Lipton. “I wanted a low-cost book that was one volume and had a lot of color. They sent me samples of their books that were beautiful. I found a great editor in Sam Harrison and we worked on the book together. I thought I was done, but you see the book differently when you’re starting to lay it out. No matter how smart you are there are things that you can’t envision. I kept finding things that I could fix and illustrations to be improved. During the 18 months of working with Sam and the production department, I made 9,000 changes to the manuscript that were big and little. I’m a much better copy editor than a proofreader!”
“The Cinema in Flux calls upon my years of doing what I did. I became a self-taught filmmaker and an entrepreneur. I raised a lot of money for my company; I ran it, registered numerous patents, and had a lot of experience in product development. It was almost like I was training to write this. I do empathize with the inventors and their struggles. It is a book about inventors. I don’t know how anybody becomes an inventor. You’re probably wired that way at birth.”
—Lenny Lipton, Author of The Cinema in Flux
Appropriately, a Hollywood icon known for developing and producing technology to improve the theatrical experience wrote the foreword. “I’m closer to Douglas Trumbull than most people in the world in terms of our careers. I wished that he lived in Los Angeles, but we do see each other often. We are a lot alike.” There is a personal connection to the subject matter. “That’s the beautiful thing about it. The Cinema in Flux calls upon my years of doing what I did. I became a self-taught filmmaker and an entrepreneur. I raised a lot of money for my company; I ran it, registered numerous patents, and had a lot of experience in product development. It was almost like I was training to write this. I do empathize with the inventors and their struggles. It is a book about inventors. I don’t know how anybody becomes an inventor. You’re probably wired that way at birth.”
Part of the motive to write The Cinema in Flux for Lipton was to correct a misconception about the history of cinema. “There has been a tendency in the past for film scholars to think that everything before Thomas Edison’s camera and the Lumières’ Cinématographe is prehistory. But I didn’t view it that way. The real start of what today we consider to be movies occurred in 1659 with [Dutch physicist] Christiaan Huygens’ invention of the magic lantern. Very rapidly people learned how to produce movies and do shows. I would define movies as the projection of moving images on a screen.” The origin of the book occurred during the week of Christmas in 2009. “I got invited to do a talk on stereoscopic cinema at La Cinémathèque française in Paris and was lucky enough to arrive at a moment where they were having an exhibit in their museum about the magic lantern and had demonstrations of the technology. At that time, I didn’t know I was going to write about it.
“I had no idea what the outcome would be,” admits Lipton. “The text is about 400,000 words and the book is 800 pages. I didn’t have an outline. I just headed down the road.” The historical narrative is divided into three eras: Glass Cinema, Celluloid Cinema, and Television and the Digital Cinema. “It didn’t occur to me to use that classification system until I had been working on the book for five years. I had a single Word file that was gigantic. It became a good way to write a book like this because I was able to easily search the whole file to avoid redundancies and put things in a proper order. I had one slim notebook with notes in it. I bought about 400 books. I can’t say that I understood the subject well. I needed time to digest it. I had to keep making mistakes and correcting them. I went to the Margaret Herrick Library, which is part of the Academy of Motion Picture Arts and Sciences, and was fortunate to have access to the Society of Motion Picture and Television Engineers digital library, in particular the journals that they began publishing in 1916. It’s a primary source, and every source has a bibliography where you get to more sources. The Library of Congress has about 50 to 60 years of motion picture periodicals. I also read about glass and celluloid.
“There has been a tendency in the past for film scholars to think that everything before Thomas Edison’s camera and the Lumières’ Cinématographe is prehistory. But I didn’t view it that way. The real start of what today we consider to be movies occurred in 1659 with [Dutch physicist] Christiaan Huygens’ invention of the magic lantern. Very rapidly people learned how to produce movies and do shows.”
—Lenny Lipton, Author of The Cinema in Flux
“I understand this technology and tried to explain it in my own words to the reader,” notes Lipton. “The vast majority of quotes in the book have a historical importance that gives some insight into the inventor’s process and state of mind. Even when the inventor was lying, I thought that was interesting too.” The project provided an opportunity to recognize underappreciated contributions. “For the most part my account celebrates the best-known inventors who were correctly credited by prior scholars.” There are certain controversial figures like Thomas Edison. “The problem with Edison is by inventing the phonograph, an electrical distribution system, a very good lightbulb, and the first motion picture camera, he invented modern entertainment, and there is no one who can take it away from him. The guy who is most overlooked is Theodore Case who essentially invented optical sound on film as it was used for almost a century. His technology was licensed to Western Electric which took the lion’s share of the credit.”
Some of the significant inventions were not intended to be applied to cinema. “The guys who were working on celluloid for film were thinking about photography and snapshots,” notes Lipton. “The most interesting example of a serendipitous invention that profoundly influenced cinema is Joseph Plateau and the phenakistoscope. It is like a zoetrope. It is a contraption that you can spin and has slits. You look into a slit, then into a mirror and see a moving image. That was invented in about 1832, and it had nothing to do with cinema and the projection of images. But for the next 50 years, inventors applied that discovery of apparent motion and the phenakistoscope technology to the magic lantern. Celluloid cinema, which was with us for most of the recent history of cinema, is a hybrid of the phenakistoscope and magic lantern. You can produce the illusion of moving images by properly projecting or presenting frames that are incrementally different. Another curious aspect was that Joseph Plateau was going blind when he made the discovery.”
The Cinema in Flux is an overview of the technological advancements in cinema. “It’s a sprawling subject,” observes Lipton. “Possibly any one of those chapters could have been turned into a book of this length, but I couldn’t do that. What we called cinema from the get-go included the projection of motion with sound and color.
“The vast majority of quotes in the book have a historical importance that gives some insight into the inventor’s process and state of mind. Even when the inventor was lying, I thought that was interesting too. For the most part my account celebrates the best-known inventors who were correctly credited by prior scholars.”
—Lenny Lipton, Author of The Cinema in Flux
By 1790, reasonably bright and decent-sized painted color slides were being projected that had narrators, musicians and sound-effects people. Without getting into esoteric disciplines like costume design, makeup and visual effects, I thought that the broad characteristics of cinema that existed from inception were motion, color and sound. Therefore, I needed to explore the evolution of those technologies through 350 years.” The advances in technology have caused the cinematic language to evolve.
“The most dramatic thing that I can think of is the advances in digital cinema, which has had a profound effect on motion picture production and exhibition.” The next major stage for cinema may well be the shift from audience members being passive to active participants, much like video games and virtual reality. “In the long run [that] technology will become feasible. In the short run I don’t know.”
As to whether inventors and their inventions are a product of the period in which they live, Lipton responds, “There is a current that runs through these inventors and maybe all inventors. There is an ornery tenaciousness to many inventors. Often when people invent something, their vision of how it will be deployed is different from the rest of the world. Edison gets credit for inventing the research lab, which is maybe one of his greatest inventions. The damnedest thing is after the research lab became part of corporate entities, they started calling inventors ‘engineers.’ In some cases, if a guy was highly qualified, he’d be called a scientist. Corporations would never call an employee an inventor, and there is a good reason for that. The term inventor implies ownership. Corporations don’t want inventors to have that. The idea of science as a separate discipline doesn’t occur until the mid-19th century. Most of the people I write about in the early days, and until relatively recently, I describe as autodidacts, people who taught themselves, or polymaths, or both. Polymaths are masters of many disciplines. The idea of a specialty is a new idea. There aren’t many maverick independent inventors in the world.
“I have to tell you that if I wasn’t emotional and didn’t have strong feelings [about the subject matter], I couldn’t have written this book,” reflects Lipton. “I had to focus on the chemistry and the science because if you don’t have any of it then you don’t have movies. A lot of us in the motion picture industry don’t know the history of the technology, so I hope that the book will be a good read for them. It is a small contribution to humanity and human intelligence to be able to provide a book like this because I’m thankful for the people who wrote the books, patents and articles; I believe in that tradition. I can only thank God that I had the wherewithal to do it. If we don’t support and help eccentric people who are working on strange projects then we will be less human.”
By KEVIN H. MARTIN
The hero who wakes up to realize his existence is limited to a virtual or controlled-from-outside reality has become something of a staple in genre filmmaking, popularized by The Matrix and The Truman Show, to name just two such entries. While Free Guy treads similar ground, its protagonist Guy (Ryan Reynolds) is no world-class hacker or unwitting reality-show star, but just a bank teller, cheerfully plodding through a ritualized daily routine that involves his Free City surroundings – the setting for a video game of the same name – suffering massive devastation and violence. Once awakened to the situation, Guy must step up his own game to stop the program’s publisher from shutting down Free City permanently.
Director Shawn Levy has a long history with VFX-heavy projects ranging from the first two Night at the Museum installments to Real Steel and several episodes of the Stranger Things series, along with a remake of Starman in the works, plus he intends to re-team with Reynolds on a time-travel project for Netflix. To look after the VFX end of things, he chose Visual Effects Supervisor Swen Gillberg, whose past experience spans both real-world (Flags of Our Fathers, A Beautiful Mind) and fantasy environs (Jack the Giant Slayer and a trio of recent MCU features).
Gillberg joined Free Guy’s production team in December 2018, after the arrival of Production Designer Ethan Tobman. “Ethan had already done some design work on Free City,” Gillberg recalls. “But there was real evolution to things and, design-wise, it was a group effort going well into the next year to develop how things looked in this world. When our DP George Richmond came on, he had a great idea: shoot the real world on 4-perf anamorphic, but handle the gaming section with ARRI 65 and big, sharp spherical lenses to give them a very different look.”
“There’s ‘Reality,’ which is present-day, supposedly set in Seattle. Then, inside Free City, you see our gaming world – a photoreal emulation of a real-world game – which is where most of the movie and most of the VFX take place. But there are also views of a lower-res Free City, as seen on monitor screens when real-world people, including the villain [Taika Waititi], who works in a gaming company, play Free City.”
—Swen Gillberg, Visual Effects Supervisor
Live-action was accomplished in Boston between May and August of 2019, with Gillberg’s old friend Ron Underdahl acting as lead for the bevy of data wranglers. “A company that I’ve used many times in the past, Direct Dimensions, LiDared pretty much everything,” he notes. “There were HDRs for every setup and lots and lots of panels to be done, so still photography was a big aspect throughout. Special Effects Supervisor Dan Sudick was handling floor effects, so the in-camera work was very solid.” Gillberg elected to work with a total of 11 vendors: Scanline VFX, Digital Domain and ILM Singapore were primaries, aided by Halon (which shared previs duties with DD), BOT VFX, capital T, Lola Visual Effects, Mammal Studios and Raynault VFX.
Unlike Tron and most other films exploring this dual-worlds concept, Free Guy actually required three levels of reality. “There’s ‘Reality,’ which is present-day, supposedly set in Seattle,” says Gillberg. “Then, inside Free City, you see our gaming world – a photoreal emulation of a real-world game – which is where most of the movie and most of the VFX take place. But there are also views of a lower-res Free City, as seen on monitor screens when real-world people, including the villain [Taika Waititi], who works in a gaming company, play Free City.
“There was a ton of work involved in making that video game version of the gaming world,” Gillberg relates, “and it had to really hold up when the camera pushes in past the real-world characters to the screen where we see versions of our Free City characters, before transitioning into Ryan and the other actors. It was a fine line for us to walk. While the game versions of their characters did have to be immediately recognizable and compelling, we didn’t want the video game versions of Ryan and the others to look too real, which meant maintaining some contrast between the video game versions and the characters themselves. So, we started that process by figuring out what the video game itself should look like. Did we want something looking more like Warcraft, or other games that are even more realistic?”
Levy’s input was often sought throughout look development. “Shawn wanted the game versions to be recognizable enough that the audience could empathize, even when we’re only seeing them on this gaming screen,” reports Gillberg. “I showed Shawn a range of video games, and based on that we settled on a look that was kind of a cinematic version of Grand Theft Auto. All of the movement of the characters would be at a full mocap level rather than canned animation.
“There was a ton of work involved in making that video game version of the gaming world, and it had to really hold up when the camera pushes in past the real-world characters to the screen where we see versions of our Free City characters, before transitioning into Ryan and the other actors. It was a fine line for us to walk. While the game versions of their characters did have to be immediately recognizable and compelling, we didn’t want the video game versions of Ryan and the others to look too real, which meant maintaining some contrast between the video game versions and the characters themselves.”
—Swen Gillberg, Visual Effects Supervisor
“This look brought with it more definition in the emoting, talking faces. At first we were going to limit the talking to a simple hinged jaw, but that reduced emotional engagement. To keep a more compelling look, we enhanced the face rigs on the game. There isn’t much talking going on, so instead of head cams we did witness cams. The facial rigs were essentially keyframed, while the hand and body motion was motion-captured.”
In addition to handling all shots of the video game version of Free City, Digital Domain also took on some ‘actual’ Free City scenes. “VFX Supervisor Nikos Kalaitzidis oversaw all that,” notes Gillberg, “which included the opening oner plus a construction site. The real-world game programmers keep reprogramming the Free City environment to obstruct Guy’s passage, so he has to deal with obstacles like stairs moving around.”
Another impediment faced by Guy as he attempts to save the day was handled by Scanline VFX Supervisor Bryan Grill. “The villain decides to stop Guy’s orange car by squeezing the street in on both sides to crush the vehicle between buildings,” says Gillberg. Scanline was also responsible for all of the building destruction. “We went through a development stage to determine the right look for destruction in the video game world, which meant figuring out whether these collapsing buildings were just shells, or was there an armature? It took tons of artwork and iterations before finally winding up with something that has all the gravitas and real emotional impact of a genuine building collapse, but still kept a video game look to things. Pieces fall in a very naturalistic way, but then as they fall, they rotate around their own center of gravity and then kind of suck into themselves. That look for the debris, which also included a wire-frame kind of blue glow, kept it well away from 9/11 photorealism, but there was still enough weight to it that audiences would care when seeing this world being destroyed.”
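A minimal sketch of that stylized debris behavior, purely for illustration: each piece falls ballistically, spins around its own center of gravity, and after a short delay scales toward its centroid so it appears to “suck into itself.” This is not Scanline’s solver; the class, parameters and rates below are hypothetical.

```python
from dataclasses import dataclass

GRAVITY = -9.8  # m/s^2, simple ballistic fall

@dataclass
class DebrisPiece:
    position: list          # centroid in world space [x, y, z]
    velocity: list          # [vx, vy, vz]
    spin_rate: float        # radians/sec around the piece's own center of gravity
    angle: float = 0.0
    scale: float = 1.0      # 1.0 = full size, 0.0 = fully "sucked in"
    age: float = 0.0

def step(piece: DebrisPiece, dt: float, suck_delay=1.0, suck_rate=0.8):
    """Advance one stylized debris piece by dt seconds.

    Pieces fall naturalistically under gravity, rotate around their own
    center of gravity, and after a short delay shrink toward their centroid
    ("suck into themselves") instead of ever piling up like real rubble.
    """
    piece.age += dt

    # Naturalistic fall: plain ballistic motion of the centroid.
    piece.velocity[1] += GRAVITY * dt
    for axis in range(3):
        piece.position[axis] += piece.velocity[axis] * dt

    # Rotation about the piece's own center of gravity.
    piece.angle += piece.spin_rate * dt

    # Stylized collapse: scale toward the centroid once the delay has passed.
    if piece.age > suck_delay:
        piece.scale = max(0.0, piece.scale - suck_rate * dt)

piece = DebrisPiece(position=[0.0, 50.0, 0.0], velocity=[2.0, 0.0, 0.0], spin_rate=3.0)
for _ in range(60):           # one second at 60 fps
    step(piece, dt=1.0 / 60.0)
print(piece.position, piece.angle, piece.scale)
```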
One early design concept that stuck postulated a ‘box’ outside of the gaming world. “This box was all blue skies and white clouds,” Gillberg notes, “so when the buildings blow up, you can see this blue sky behind them – even when the camera is up high looking down at the ground!”
“After we finished shooting, there were multiple huge pushes through the end of 2019 to produce really good-looking temps,” Gillberg says. “Shawn really wanted the test screenings to have decent visual effects in them, so we completed the screening VFX at a higher-[than-usual] level, but then would abandon those and go back to another approach for the real finals. DD was originally assigned to do previs for the construction site sequence, but at that time lacked bandwidth, so they rendered all of their previs through the Unreal game engine. This let us quickly evaluate the effects and how they impacted the story, so the screening process definitely led to a mix of lost and gained shots between the tests and when we completed our finals.”
Since post had not fully ramped up when COVID hit, there was a scarcity of finals as of February 2020. “That meant the bulk of the work had to get done during lockdown,” admits Gillberg. “But there were a couple of weeks when it looked like everybody would be going home, as there was a fair amount of pressure to shut the whole thing down. I lobbied heavily against that, feeling there was a good body of talented individuals already on board and that it would be a shame to lose them, so we didn’t ever completely stop. First off, it was a matter of coming up with different protocols in order to work from home. That involved many VPN tunnels into our server on the FOX lot, to establish encrypted links. We set up a screening room at Shawn’s house, which I could drive remotely. Being able to work straight through the heart of the lockdown pleased me greatly, as it let us all stay focused while still having a lot of fun, trying out lots of new ideas.”
While the live-action cinematography had relied upon a single real-world LUT and another pair for the virtual gaming world, the looks were refined significantly during digital grading. “We’d re-balance sequences while retaining the general tone,” says Gillberg. “In our final grade, I worked with George and colorist Skip Kimball, and we let the real world go toward more of a cool-blue feel while pushing the gaming world look to become even more colorful.”
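As a toy illustration of those two grading directions (not the production’s actual LUTs or DI pipeline), the sketch below nudges a “Reality” pixel toward a cool-blue feel and pushes a Free City pixel toward higher saturation around its Rec. 709 luma; the amounts are arbitrary.

```python
def grade_real_world(rgb, cool=0.08):
    """Nudge a pixel toward the cool-blue feel described for 'Reality'."""
    r, g, b = rgb
    return (max(0.0, r - cool), g, min(1.0, b + cool))

def grade_free_city(rgb, saturation=1.3):
    """Push a Free City pixel toward a more colorful, saturated look."""
    r, g, b = rgb
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b           # Rec. 709 weights
    sat = lambda c: min(1.0, max(0.0, luma + (c - luma) * saturation))
    return (sat(r), sat(g), sat(b))

pixel = (0.55, 0.40, 0.35)
print(grade_real_world(pixel))
print(grade_free_city(pixel))
```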
Being free from the constraints of traditional reality for the Free City scenes proved to be a strange attraction for Gillberg and his collaborators. “We had this premise of being in a video game world where anything goes,” observes Gillberg, “and that was a ton of fun for us to play with. I’ll give you an example: we’d be doing reviews with Shawn and somebody would ask, ‘What does this scene need?’ And somebody else might say, ‘This scene needs a dinosaur!’ And the thing here is, we can add that without worrying about the character not reacting to it, because for this clueless bank teller, it is normal to see that crazy stuff happen constantly in this violent world. We got to put in all kinds of video game vehicles and characters from other real games for people to recognize.” That creative rush did have limits, however. “We pushed until I added a TIE fighter, at which point I got told, ‘Stop!’”
By TREVOR HOGG
What are the prospects for virtual production in the post-pandemic world when global travel and working in closer proximity to one another will become acceptable once again? Will it fade away or become an integral part of the filmmaking toolkit? There is no guarantee when it comes to predictions, especially when trying to envision what the technological landscape is going to look like five years from now. The only absolute is that what we are using today will become antiquated. In order to get a sense of what the industry norm will be, a variety of professionals involved with different aspects of virtual production share their unique perspectives.
Geoffrey Kater, Owner/Designer/Director, S4 Studios
“My estimate is that LED walls have a good run for the next five years, but after that AR is going to take over. AI will paint light on people and things using light field technology and different types of tech. Virtual production is here to stay but it’s going to evolve with AI being a big part of that. We decided to go after car processing work and up the ante by using Unreal Engine, which gives you great control over the lighting and art direction in real-time, to create our own system that would have the same amount of adjustability. We have the creative work which is building all of the different cityscapes and cars, software development, and working with a stage and LED company to build out what the volume would look like. If you’re going to have a city, you have to figure out stuff like traffic and pedestrians walking around. What we ended up generating was a city that never sleeps. Typically, car driving footage is two minutes long. We figured out how the computer could build a road in front of us that we never see being built, but now you can drive infinitely on the day so directors and DPs don’t have to cut.”
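The “road that builds itself ahead of the car” idea can be sketched as a simple treadmill spawner: keep a fixed lookahead of segments in front of the vehicle, create new ones before they come into view, and retire the ones far behind. This is an illustrative Python sketch under those assumptions, not S4 Studios’ Unreal Engine system; the segment sizes and counts are made up.

```python
from collections import deque

SEGMENT_LENGTH = 50.0   # metres of road per spawned block
LOOKAHEAD = 4           # segments kept ahead of the car
KEEP_BEHIND = 2         # segments kept behind before despawning

class EndlessRoad:
    """Treadmill-style road: spawn blocks ahead of the car, despawn behind it.

    The driver never sees a segment being built because new geometry always
    appears beyond the lookahead distance, so a take can run indefinitely.
    """
    def __init__(self):
        self.segments = deque()           # (start_distance, description)
        self.next_start = 0.0
        for _ in range(LOOKAHEAD):
            self._spawn()

    def _spawn(self):
        # In a real system this would instantiate city blocks, traffic and
        # pedestrians; here it is just a labelled stretch of road.
        self.segments.append((self.next_start, f"block starting at {self.next_start:.0f} m"))
        self.next_start += SEGMENT_LENGTH

    def update(self, car_distance: float):
        # Spawn ahead so the horizon is always populated.
        while self.next_start - car_distance < LOOKAHEAD * SEGMENT_LENGTH:
            self._spawn()
        # Despawn far behind the car to keep memory bounded.
        while self.segments and car_distance - self.segments[0][0] > KEEP_BEHIND * SEGMENT_LENGTH:
            self.segments.popleft()

road = EndlessRoad()
for metre in range(0, 1000, 10):      # drive one kilometre
    road.update(float(metre))
print(len(road.segments), "segments live,", road.next_start, "m of road generated so far")
```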
James Knight, Virtual Production Director, AMD
“Our CPUs are what everybody is using to power these LED walls, whether it be Ryzen or Threadripper, because of the setup time. I don’t think that anyone has totally figured out virtual production. It is interesting when you develop hardware because there are unintended use cases. You get college students right out of film school who aren’t jaded like the rest of us and don’t know that you shouldn’t do a certain thing, and in that beautiful messiness they end up discovering something that we didn’t realize was possible. There is still a huge element of discovery and that’s the beautiful nature of filmmaking. The media entertainment business is responsible for almost all innovation in CG that trickles to real-time medical imaging and architecture. Imagine a groundbreaking ceremony where you shovel in the dirt, but the media that has come can look through a virtual camera and see the buildings there in photorealistic CG in real-time. If you make the CPU beefy and fast enough for the filmmaking business, it will be able to handle anything that fashion and real-time medical imaging will throw at it.”
Philip Galler, Co-CEO, Lux Machina Consulting
“People will always want to tell big stories and in many cases big stories need big sets that require big spaces to put them in and light. I don’t think that the studio stage is going to go away. But what we’re going to see is that smaller studios will be able to find more affordable approaches to these technologies, whether it be pro versions of Vive tracking solutions that are in the $10,000 range, instead of $50,000 to $150,000 for most camera tracking, or more affordable and seamless OLED displays that people can buy at the consumer level to bring home. As more of these virtual production studios come online, there will be more competition in the market on price for the services and the price will drop at the low end of the market in terms of feature set and usability. One of the things that people overlook all the time is the human resources that are needed. It’s not just spending the money on the LED. It’s being able to afford the training and putting the right team together.”
“My estimate is that LED walls have a good run for the next five years, but after that AR is going to take over. AI will paint light on people and things using light field technology and different types of tech. Virtual production is here to stay but it’s going to evolve with AI being a big part of that.”
—Geoffrey Kater, Owner/Designer/Director, S4 Studios
Jeroen Hallaert, Vice President, Production Services, PRG
“The design of an XR Stage is driven by what the content needs to be and where it’s going to end up. Sports broadcasting has been using camera tracking for 10 years where you saw people standing on the pitch and players popping up behind them and scoreboards. Then you have the live-music industry that was using media servers and software like Notch to make sure that the content became interactive with movement and music onstage. Bringing those together in early 2018 made it possible for us to do the first steps of virtual production. Within the year, things are going to be shot within an LED volume and the camera will be looking at the LED as the final pixel image.
“Because PRG was active in corporate events, live music and Broadway, we have a lot of skills available, and it’s more of a matter of retraining people than starting from scratch. We now have people running these stages who were previously directing a Metallica show. We’ve got the video engineer from U2 sitting downstairs in the control room. Our gear is set up in a way that video engineers, whether they have been working in broadcasting or live music, can serve these stages as part of hybrid teams. In virtual production there are jobs and job titles that haven’t existed before, but that’s the glue between all of the different elements.”
Adam Valdez, Visual Effects Supervisor, MPC
“I think of virtual production as simply bringing CG and physical production steps together. We used to think of motion capture as one thing, making previs another, and on-set Simulcam yet another thing. Now it’s clearly a continuum, and the 3D content flows between all the steps. On the large-scale shows MPC works on, it usually means visualization for live-action filmmakers. Our mission is to take what we know about final-quality computer graphics, and apply it to early design and visualization steps via real-time technologies.
“I have been using virtual production extensively for the last year from my home office. I use our toolset to scout environments in VR, block out where action should happen, and finally shoot all the coverage – all in game engine and using an iPad. With regard to technology, it’s all about greasing the gears. The metaverse trend and standards from the technology companies that now hold the reins are a big piece. On industry, we need live-action filmmakers, showrunners and companies like Technicolor to keep a strong educational effort going so we all understand what’s realistic, possible and affordable. If those two ends keep working toward the middle, the virtual production ecosystem should thrive, and then we all win.”
Tim Webber, CCO, Framestore
“We view the VP toolkit as an expansion of what we do as a company, not a standalone item. Everything is about finding solutions for clients and creatives, and the pandemic has basically ratcheted up the need for VP and changed the way we work with filmmakers. In just two short years we’ve seen Framestore Pre-production Services [FPS] establish itself across every facet of our creative offer, with clients from commercials through to feature films and into AAA gaming and theme park rides all wanting to discuss how it can benefit their projects.
“There’s a lot that we can do in terms of explaining to industry just what the techniques and tools are, and where they can best be applied. Virtual production extends far beyond just LED volume and encompasses virtual scouting and virtual camera. Unreal Engine is used in previs and on-set visualisation, along with ICVFX. The beauty of this is being able to offer these as standalones, in combination or as a full suite of creative services that include final visual effects. It is something that we’re employing for our FUSE R&D project.”
“Technology will become less of a limitation, but a vehicle to unleash new ways of creative storytelling. These are truly exciting times, and it is fantastic to see our industry evolving from ‘let’s fix it in post’ to ‘let’s go see the VP team’ to discuss a new creative solution. We are on a great path to becoming even stronger creative partners in the filmmaking industry.”
—Christian Kaestner, Visual Effects Supervisor, Framestore
Christian Kaestner, Visual Effects Supervisor, Framestore
“Each project will be unique in its artistic requirements, which usually inform which virtual production elements are required and the wider technical setup for the project. Virtual production only makes sense if you can effectively offset or reduce some of your location or visual effects costs, and while it gives you more flexibility in some areas, it can limit you in others. Ultimately, only strategic planning and a clear idea of the required assets will help you reduce costs.
“With the advancements in real-time technology, machine learning, tracking systems and LED panel technology, soon we will be able to explore more and more creativity without boundaries. Our industry has been pushing these boundaries for the last few decades in photorealistic rendering, and soon we will see those results in real-time. Technology will become less of a limitation, but a vehicle to unleash new ways of creative storytelling. These are truly exciting times, and it is fantastic to see our industry evolving from ‘let’s fix it in post’ to ‘let’s go see the VP team’ to discuss a new creative solution. We are on a great path to becoming even stronger creative partners in the filmmaking industry.”
Robert Legato, ASC, Visual Effects Supervisor
“Right now, they ascribe more magic to virtual production than there is. You still need to know what you’re doing. What it really is… is prep. There is now an expression, ‘Fix it in prep.’ Yeah. That’s homework. That’s write a good script. Vet the locations. Pick the right actors. Pick the right costumes. When you show up on the shoot day it’s execution phase because you have vetted all of these things. Working out an action sequence in previs means that I’m vetting it in prep. I’m seeing it work or not work or stretching something I haven’t seen before. I need to lay down the foundation editorially to see if that elicits a response that I want. George Lucas did it when he got dogfight footage from World War II and said, ‘Make it look like that.’ That was previs. Previs is fast iterative work.
“Because I had never shot a plane crash before, I had to practice at it so I didn’t embarrass myself on my first Martin Scorsese movie. So, I used the same pan and tilt wheels, animated in MotionBuilder, shot it live because that’s my way of working, and edited it all together. Before I spent money I shot it 25 times, re-jigged the pieces, found an editorial flow that worked, and that is now the script. I went out and executed that script in seven days. That’s the way to work, because when I’m done and reassemble it, as good as the previs was, it’s going to be 50 times better because the foundation and vocabulary are already there.”
“There is now an expression, ‘Fix it in prep.’ Yeah. That’s homework. … Working out an action sequence in previs means that I’m vetting it in prep. I’m seeing it work or not work or stretching something I haven’t seen before. I need to lay down the foundation editorially to see if that elicits a response that I want. George Lucas did it when he got dogfight footage from World War II and said, ‘Make it look like that.’ That was previs. Previs is fast iterative work.”
—Robert Legato, ASC, Visual Effects Supervisor
Johnson Thomasson, Real-Time Developer, The Third Floor
“Visualization can have multiple functions in a virtual production workflow. It can be used to plan the work that will be attempted, and it can be used as part of execution when actual filmed elements are captured. The Third Floor has been leveraging visualization in multiple modes including previs, virtual scouting, motion control techvis, Vcam and on-set visualization on quite a number of virtual productions to date. With LED wall production, the trend is toward more visual effects content being finaled during production. That means a huge shift in decision-making on design, blocking and camera coverage.
“Traditionally in visual effects, the turnaround between notes and the next iteration is measured in days if not weeks. Real-time is immediate. That opens the door to exploration and creativity. It empowers the key creatives to riff and for good ideas to snowball. In terms of visualization, the increase in rendering fidelity allows those ideas to be captured in an accurate visual form. This often results in a cost savings because down the line there’s less room for misinterpretation. Virtual scouting in VR also has significant cost implications because it can be a replacement for costly trips for large crews to real-world locations, or in terms of set builds. The keys can walk through the virtual set prior to construction, which again, gives a chance to make changes that would be much more costly later.”
Nic Hatch, Co-Founder & CEO, Ncam
“There are two sides to Ncam: real-time visual realization and data collection and reuse. The real-time is not our technology. It’s akin to existing platforms such as Unreal Engine. It allows our end users and the visual effects vendors to create better looking images in real-time, which it has to be if you want to finish in camera. The data side is hugely important, and I feel that’s going to be a game-changer. At the moment, data collection on set is minimal. To some extent machine learning will help. It’s not going to be one technology on its own. It’s going to be everything together. Ncam’s technology is based on computer vision with a bit of deep learning.
“If you look at the quality of real-time gaming over the past five to 10 years, it has had leaps and bounds in terms of high fidelity and realism – that’s only going to get better. The more that we can do real-time, the more that we can do visual effects through the lens. There will be all kinds of technology coming out over the next few years that will help us to visualize things better. Reading and calculating light in real-time and the depth analysis that we require, to deep compositing in real-time, all of this will be coming – this is just the start. I’ve never seen the industry embrace it as much as they have done now. Ultimately, there was always change coming.”
Hugh Macdonald, Chief Technology Innovation Officer, NVIZ
“Up until a couple of years ago all of our previs was done in Maya. We wanted to push to doing it in Unreal Engine for various reasons, including being able to do virtual camera sessions as part of previs, but also for on-set Simulcam and more virtual production setups. It involved bringing a lot of new technologies into this world, and Epic Games has been fantastic. An interesting question to come out of this is, ‘Should previs be the department creating the assets for on set?’ A lot of productions these days have virtual art departments. If I could snap my fingers and determine how it would be, I’d have previs focus on designing the shot and not having to generate that full quality, and get the visual effects companies involved from pre-production building these assets, and allow them to make the final quality which is what they’re good at. Twenty or 30 years ago, visual effects was a bit of a luxury for a few big films, and now it is in every film. We’ll see the same with virtual production.”
Kris Wright, CEO, NVIZ
“What is exciting about virtual production is being able to turn up on the day [you shoot] camera-ready, because it has made visual effects part of the filmmaking process. Often Janek Lender [Head of Visualization] and Hugh Macdonald are not just working in a visual effects capacity, but with stunts, production designers and DPs. It is great to see how this tool is able to work across many disciplines and start to work in a hub. Virtual production is making it a lot more collaborative. On Solo: A Star Wars Story we wrote a rendering tool which enabled everything that they shot with a Simulcam to go in as a low-footprint postvis, and that was staying in their cut in certain cases for quite a long time. This has become a downflow from virtual production into post and editorial.
“What is exciting is this idea that you can have these tools that are helping to accelerate the process whether it would be final pixel, which is the Holy Grail, or now you can start to build quick edits from dailies that were captured through a Simulcam process. If we can keep in step with how filmmaking works and not make it feel like we’re reinventing the wheel, or keep things low footprint but accessible, that’s where it’s successful. What has been the big advancement is the real-time engine, but in a lot of ways we’re still using the same methodologies for filmmaking.”