TL;DR
- A look at how motion capture data is edited, how that data is connected to a virtual character, and how mocap can be used within a virtual production.
- Capturing the finest details of motion is also one of mocap’s downsides: where that data must be recorded and potentially modified, the unprocessed data quickly proves difficult to edit, though there are solutions.
For performers, one of the creative advantages of virtual production is seeing the virtual environment in which they are performing. Motion capture techniques extend this by recording the performers’ own movements to drive CGI characters. New technologies are rapidly transforming the creative freedom this brings.
The BroadcastBridge brings us up to speed on this as part of a wider 12-part article series on virtual production. It looks at how motion capture data is edited, how that data is connected to a virtual character and how mocap can be used within a virtual production.
In the tutorial, Phil Rhodes explains how, in conventional animation, the motion of an object is usually described using only two positions, or waypoints, separated in time. Changing the speed of the object’s motion simply means adjusting the time it takes to move between the two points.
However, motion capture data records a large number of waypoints representing the exact position of an object at discrete intervals.
“It’s often recommended that motion data should be captured at least twice as frequently as the frame rate of the final project, so that a 24fps cinema project should capture at least 48 times per second,” the BroadcastBridge advises.
“That’s well within the capabilities of most systems, but it does complicate the process of editing motion data. It’s impractical to manually alter dozens of recorded positions per second and achieve a result that looks realistic.”
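To put that contrast in concrete terms, here is a minimal sketch in Python: two interpolated waypoints on one hand, and a second’s worth of samples recorded at 48 per second on the other. The function names and figures are illustrative only, not data from any particular system.

```python
# Illustrative sketch only: contrasts two-waypoint keyframe animation with
# densely sampled mocap data. Names and figures are hypothetical.

def interpolate_keyframes(p0, p1, t0, t1, t):
    """Linear interpolation between two waypoints: p0 at time t0, p1 at time t1."""
    alpha = (t - t0) / (t1 - t0)
    return p0 + alpha * (p1 - p0)

# Conventional animation: two waypoints fully describe the move.
# Halving the duration (t1 - t0) simply doubles the apparent speed.
start, end = 0.0, 2.0
print(interpolate_keyframes(start, end, t0=0.0, t1=1.0, t=0.5))  # -> 1.0

# Motion capture: positions are recorded at discrete intervals instead.
# At 48 samples per second, a 24fps project gets two samples per frame,
# so even one second of a single channel is 48 explicit values to edit.
capture_rate = 48
mocap_samples = [0.0] * capture_rate  # one second of recorded positions
print(len(mocap_samples))             # -> 48 values, versus 2 waypoints
```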
Editing Mocap Data
Tools have been developed to facilitate motion capture data editing. Some of them rely on modifying groups of recorded positions using various proportional editing tools; a sort of warping. Others try to reduce the number of recorded positions, often by finding sequences of them which can be closely approximated with a mathematical curve.
This can make motion capture data more editable, but too aggressive a reduction of points can also rob it of the realism of a live performance, risking a more mechanical, artificial look which is exactly what motion capture is intended to avoid.
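To see why reduction helps, and where it can go wrong, here is a deliberately naive sketch of the idea: keep only the samples that straight-line interpolation between their neighbours cannot reproduce within a tolerance. It is purely illustrative, with made-up data and tolerance values; commercial tools typically fit smoother mathematical curves and offer proportional editing rather than this simple decimation.

```python
# Hypothetical, simplified keyframe-reduction sketch (not any specific tool's
# algorithm): drop samples that a straight line between neighbouring points
# can already approximate within a tolerance.

def reduce_samples(times, values, tolerance=0.01):
    """Greedy reduction: keep a sample only if interpolating across it
    from the last kept sample to the next sample exceeds the tolerance."""
    kept = [0]  # always keep the first sample
    for i in range(1, len(values) - 1):
        t0, v0 = times[kept[-1]], values[kept[-1]]
        t1, v1 = times[i + 1], values[i + 1]
        # Predict the current sample by interpolating between its neighbours.
        alpha = (times[i] - t0) / (t1 - t0)
        predicted = v0 + alpha * (v1 - v0)
        if abs(predicted - values[i]) > tolerance:
            kept.append(i)        # the sample carries real detail, keep it
    kept.append(len(values) - 1)  # always keep the last sample
    return kept

# One second of made-up captured positions at 48 samples per second.
times = [i / 48 for i in range(49)]
values = [t * t for t in times]   # a smooth, accelerating move
indices = reduce_samples(times, values, tolerance=0.005)
print(f"{len(values)} samples reduced to {len(indices)} keyframes")
```

The trade-off described above is visible in the tolerance: raise it and more samples are discarded, at the cost of the small irregularities that make a captured performance read as human.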
Motion capture used live, with a performer working alongside a virtual production stage, often won’t be recorded, so there won’t be any need or opportunity to edit it. Problems such as intermittent failures to recognise tracking markers can still cause positioning glitches of the kind that would usually be edited out.
“Working live, a retake might be necessary, although well-configured systems are surprisingly resistant to — for instance — markers being obscured by parts of the performer’s body.”
Rigging and Scale
Connecting motion capture data to a virtual character requires that character model to be designed and rigged for animation. Where the character is substantially humanoid, this may not present too many conceptual problems, although the varying proportions of different people can still sometimes cause awkwardness when there’s a mismatch between the physique of the performer and the virtual character concerned.
“Very often, the character will be one which looks something other than human. It may be of a substantially different shape, scale or even configuration of limbs to the human performer whose movements will drive the virtual character,” writes Rhodes.
Various software offers different solutions to these considerations, allowing the performer’s motions to be scaled, remapped and generally altered to suit the animated character, although this has limits.
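As a rough illustration of what that remapping can involve, the sketch below scales a performer’s captured joint positions by the ratio between the virtual character’s bone lengths and the performer’s, while reusing the captured rotations. Every name and figure here is hypothetical, and real retargeting tools account for offsets, constraints and differing joint hierarchies that this toy version ignores.

```python
# Hypothetical retargeting sketch: scale captured joint translations by the
# ratio between the virtual character's bone lengths and the performer's,
# and reuse the captured joint rotations unchanged.

from dataclasses import dataclass

@dataclass
class JointSample:
    name: str
    position: tuple   # captured position relative to the parent joint
    rotation: tuple   # captured rotation (e.g. Euler angles), reused as-is

def retarget(samples, performer_bone_lengths, character_bone_lengths):
    """Remap one frame of captured joints onto a differently proportioned rig."""
    retargeted = []
    for joint in samples:
        scale = (character_bone_lengths[joint.name]
                 / performer_bone_lengths[joint.name])
        scaled_position = tuple(axis * scale for axis in joint.position)
        retargeted.append(JointSample(joint.name, scaled_position, joint.rotation))
    return retargeted

# Made-up example: a character with much longer forearms than the performer.
frame = [JointSample("forearm", (0.26, 0.0, 0.0), (0.0, 15.0, 5.0))]
performer = {"forearm": 0.26}   # metres
character = {"forearm": 0.40}   # metres
print(retarget(frame, performer, character)[0].position)  # -> roughly (0.4, 0.0, 0.0)
```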
“Although motion capture technicians will typically strive to avoid imposing requirements on the performer, the performer might need to spend time working out how to perform in a manner which suits the virtual character. This is an approach which can make a wide variety of virtual characters possible.”
The article describes how mocap systems require at least some calibration, which might be as simple as moving around the capture volume with a specially designed test target. Some of the most common systems, using spherical reflective markers, may require some calibration for each performer, especially if the performer removes or disturbs the markers.
Virtual Production and Mocap
Many virtual production setups rely on motion tracking to locate the camera, even when motion capture is not being used to animate a virtual character.
As such, almost any virtual production stage is likely to involve at least some calibration work, though how often it is done varies; performance capture spaces might calibrate twice daily, taking a few minutes each time.
“As with many of the technologies associated with virtual production, motion capture, where it’s used, is likely to be the responsibility of a team provided by the studio itself,” the BroadcastBridge reports. “Most of the work required of the production will be associated with the design of the virtual character which will be controlled with motion capture.”
The report concludes, “The technical work of connecting that character’s motion to the capture system is an item of preparation to be carefully planned and tested before the day. With those requirements fulfilled, using an actor’s performance to control a virtual character can provide an unprecedented degree of immediacy.”
While it certainly adds another layer of technology to the already very technology-dependent environment of virtual production, it creates a level of interactivity which was never possible with post-production VFX.