This is part one of a four-part blog series that will take a closer look at visual effects (VFX). We begin here with a brief history of this innovative field, and then discuss VFX workflows and review common terms.
Digital production as we know it today, however, is a relatively new field that began in the early 1990s with movies such as Terminator 2 and Jurassic Park, which broke new ground in what could be done with visual effects.
Then in 1995, Pixar Animation Studios released Toy Story, the first full-length digitally generated animated feature.
Toy Story was a massive undertaking for the studio’s animation/VFX team, with each of the movie’s frames taking from 45 minutes up to 30 hours to render. The film required 800,000 machine hours for 77 minutes of animation spread across 1,561 shots.
Since then, the use of visual effects in movies has grown significantly, and the nature of VFX work has grown in both size and complexity.
The Democratization of VFX Leads to Increasingly Complex Animation Jobs
The democratization of visual effects began in the early 1990s, as open source tools and commercial software packages replaced the proprietary in-house tools that had been available only to industry pioneers.
Also, standards such as file formats began to be established, which allowed the emerging ecosystem of digital production tools to be linked together into flexible processing pipelines.
The new openness led to a proliferation of visual effects studios. There are now thousands of VFX studios worldwide.
With access to configurable pipelines, studios began focusing on more creative aspects, such as creating characters with increasingly complex physical characteristics.
In the animated film Monsters, Inc., released in 2001, the character Sulley had about 1 million hairs. In the movie’s sequel, the same character had 5.5 million hairs.
In general, the driver behind the growth of VFX is that the technology has become better, cheaper, and faster, enabling animation and special effects to grow increasingly ambitious.
For example, Disney’s 2016 animated film Moana has almost double the number of complex simulation shots compared to Big Hero 6, a movie made just two years earlier in 2014. Most of the effects in Moana are water, known in the industry as one of the hardest things to animate.
The effects team admitted that the movie could not have been made five years earlier; technologically, it just wasn’t possible.
The VFX Workflow and Pipeline
Now let’s take a look at the VFX workflow and how it fits into the movie production process. The VFX pipeline is similar to an assembly line: each worker performs a task before handing off the result to the next person on the line. The difference is that, in VFX, each worker is an artist, and the pipeline lets the entire team see the work as it evolves so it can be evaluated and, if necessary, adjusted for better results.

Pipelines are like snowflakes; no two are alike. They differ from production to production, and sometimes even within the same show. A typical VFX pipeline is also very complex: some steps depend on assets produced by previous steps before work can continue, so it is really more like multiple assembly lines that criss-cross each other.
While constant iteration is a great way to achieve high-quality results, it is time consuming and expensive.
To reduce wasted effort, most pipelines use placeholder assets or lower-resolution proxies. These are lower-quality (and smaller) assets that stand in for the finished full-resolution versions during the numerous iterations.
Using these proxies, artists can defer the most time-consuming tasks until later in production, when major changes are less likely, and speed up look development without having to render at final quality every time.
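As a rough illustration, proxy swapping often comes down to a naming or path convention that resolves to either the lightweight or the full-resolution version of an asset. The directory layout, file names, and `.usd` extension below are hypothetical; this is just a sketch of the idea, not any studio’s actual convention.

```python
from pathlib import Path

def resolve_asset(name: str, final: bool = False) -> Path:
    """Return the full-resolution asset for final-quality renders,
    otherwise the lightweight proxy used during iteration."""
    suffix = "final" if final else "proxy"
    return Path("assets") / name / f"{name}_{suffix}.usd"

# Artists iterate against the small proxy...
proxy = resolve_asset("hero_creature")
# ...and only the final render pulls in the heavy version.
final = resolve_asset("hero_creature", final=True)
print(proxy.name, final.name)  # hero_creature_proxy.usd hero_creature_final.usd
```

In practice the switch is usually handled by the scene-description layer rather than by hand, but the principle is the same: downstream tools ask for an asset by name, and the pipeline decides which quality level to serve.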
A VFX production pipeline is complex, with many different processes and many interdependencies among them. The biggest challenge is managing the sheer volume of information required to produce photorealistic imagery. A single creature might be composed of hundreds, if not thousands of digital assets. It is often necessary to assemble terabytes of data that must be passed through to the renderer and ultimately a compositor.
Stages of Production
In film, the main stages of production are pre-production, production, and post-production.
For VFX, pre-production applies to all the activities that happen before the camera starts rolling. For artists, common tasks include visual research, concept design of characters and environments, and defining the color palettes used in the film.
The look development and pre-visualization phases help the director understand how various sequences of the film could look.
In the production stage, the focus shifts from planning to building. In CG animation, where everything on screen is computer generated, this means creating all the digital assets.
In the world of visual effects, production typically refers to the live shoot and the collection of on-set data. The digital assets themselves can be created during production.
In feature animation, post-production refers only to those tasks that occur once the final images have been created: editing, dimensionalization, digitizing footage to create a digital intermediate, color correction, and printing the digital images to film for distribution to cinemas.
For live action films, post-production refers to anything that comes after the live shoot, including the bulk of the visual effects work.
Example of a VFX workflow:
Common VFX Terms
Here are some common terms related to visual effects.
LIDAR stands for Light Detection and Ranging. LIDAR is a method for measuring distances with laser light. In the VFX process, LIDAR scans use a laser to measure the distance from a fixed location to points on surrounding rigid surfaces such as buildings. The data is stored as a “point cloud,” which is a series of disconnected points in 3D space. These points can then be converted into a 3D model of the set, later used for CGI (computer generated imagery).
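To make the point-cloud idea concrete, here is a minimal sketch (not any particular LIDAR toolchain) that treats a scan as a list of (x, y, z) points and computes its axis-aligned bounding box, a typical first step before fitting geometry to a scanned set:

```python
def bounding_box(points):
    """Return the axis-aligned bounding box of a point cloud
    as (min_corner, max_corner)."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# A toy "scan" of three points in 3D space.
scan = [(0.0, 0.0, 0.0), (4.2, 1.0, 2.5), (1.1, 3.0, 0.4)]
lo, hi = bounding_box(scan)
print(lo, hi)  # (0.0, 0.0, 0.0) (4.2, 3.0, 2.5)
```

Real scans contain millions of points, so production tools stream and spatially index this data rather than holding it in plain lists, but the underlying representation is the same.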
Modeling creates the digital 3D geometry used in animation and visual effects, referred to as “models” or “meshes.” Typically, the modeler only recreates the overall three-dimensional form of an object. Surface properties, including local color and small irregularities such as bumps and scratches, are represented by texture maps, displacement maps, and shaders.
Shaders and Texture Maps
Shaders mimic the way in which materials react to light. They create mathematical representations of properties such as surface color and surface reflectivity. Texture maps are 2D images that represent the value of a material property at each point on the surface of a model.
Common map types include diffuse (the surface color of the model), bump (used to mimic fine surface details, such as bumps and wrinkles), and specular (used to control the form of the highlights on the object’s surface).
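The idea that a texture map stores the value of a material property at each point on the surface can be sketched as a simple UV lookup. The tiny grid and nearest-neighbour sampling below are illustrative only; production renderers use filtered lookups on far larger maps.

```python
def sample_texture(texture, u, v):
    """Nearest-neighbour lookup of a material property at UV
    coordinates in [0, 1). `texture` is a 2D grid of values."""
    h = len(texture)
    w = len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

# A tiny 2x2 "diffuse map": each entry is an RGB colour.
diffuse = [[(255, 0, 0), (0, 255, 0)],
           [(0, 0, 255), (255, 255, 255)]]
print(sample_texture(diffuse, 0.9, 0.1))  # (0, 255, 0)
```

A bump or specular map works the same way; only the meaning of the stored value changes (a height offset or a reflectivity weight instead of a colour).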
Rendering is the process where the data in the 3D scene is converted into a series of 2D images, seen from the digital camera’s point of view. In film work, these calculations are done offline in batches, typically taking anywhere from minutes to days to process an individual frame.
It is common to render a scene in a series of layers or passes, with each layer showing only a subset of the assets in the scene. For example, a layer might include only the background, or only the foreground characters.
The individual assets that make up a shot must be combined into a single 3D scene, ready for rendering. This is known as “shot layout” or “shot assembly.” Generally, layout determines the file names of almost all the 3D assets.
Compositing refers to the combining of two or more images to make a single image. In VFX, compositing blends all of the rendered and live-action elements that make up a shot into a series of finished images.
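At its core, compositing is per-pixel arithmetic. The classic operation is the Porter-Duff “over” operator, shown here for a single premultiplied-alpha RGBA pixel as a minimal sketch:

```python
def over(fg, bg):
    """Porter-Duff 'over': composite a premultiplied-alpha
    foreground RGBA pixel on top of a background pixel."""
    inv = 1.0 - fg[3]  # how much of the background shows through
    return tuple(f + b * inv for f, b in zip(fg, bg))

# Half-transparent red over opaque blue (premultiplied values):
print(over((0.5, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)))
# (0.5, 0.0, 0.5, 1.0)
```

Compositing packages apply this same operation (and many variants of it) across millions of pixels per frame, layer by layer, to assemble the rendered passes and live-action plates into the finished image.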
Effects and Simulations
Effects (FX) is a generic term used to describe the dynamic elements in a shot calculated through mathematical procedures rather than animated by hand. Common examples include fire, smoke, water, hair, cloth, crowds, and destruction effects.
FX tools represent significant challenges to pipeline developers. Some generate huge amounts of data; others generate data that requires conversions or additional processing before it can be consumed by downstream tools. Examples of the former include fluid or liquid simulations. These have significant memory, computation, and storage footprints. Examples of the latter include the surfaces commonly used in fluid and rigid body simulations.
Volumetric effects are typically clouds, dust, water, fire and gaseous fluids. This type of data is challenging for pipeline developers because of its large footprint and because it often requires conversion to other formats before it can be used by other tools.
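A quick back-of-the-envelope calculation shows why volumetric data strains pipelines: a dense voxel grid grows cubically with resolution, which is why sparse formats (such as OpenVDB) and format conversions are so common. The sizes below assume a naive dense grid with one 4-byte float per voxel.

```python
def dense_volume_bytes(resolution, bytes_per_voxel=4):
    """Storage for a naive dense cube of resolution^3 voxels."""
    return resolution ** 3 * bytes_per_voxel

for res in (128, 512, 1024):
    gib = dense_volume_bytes(res) / 2**30
    print(f"{res}^3 grid: {gib:.2f} GiB per channel, per frame")
```

At 1024³ that is 4 GiB for a single channel of a single frame; a smoke simulation with several channels (density, temperature, velocity) over hundreds of frames quickly reaches the terabyte scale mentioned above.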
Qumulo has several helpful resources for learning more about VFX and how our file data platform helps speed VFX production pipelines. Check out our data sheets on VFX, Animation, Media and Entertainment and learn how we’ve helped companies like FuseFX and Cinesite Studios to bring movies and animation programming to audiences faster!
Special thanks to Matt Ashton and Kristi Whitman for their contributions to this blog series. Stay tuned for Part Two of our VFX 101, which will focus on the rendering process, offering counsel for digital artists when considering the cloud to manage complex, compute-intensive rendering jobs.