A Tutorial On Motion Capture Driven Character Animation
Baihua Li
Loughborough University
2. Character Animation

2.1 Character modeling

Low-polygon models, composed of triangles or quadrilaterals, are widely adopted because current graphics hardware supports them well for efficient rendering. Meshes can be generated interactively through a graphical interface using various primitives and modifiers, such as extrusion, symmetry, tweaking, subdivision (NURMS in 3ds Max), and transformations. Joint meshes are special segments for attaching joints to an underlying skeleton. They also prevent the mesh collisions caused by deformation of adjacent segments.

2.2 MoCap data pre-processing

MoCap marker data are usually noisy [6, 7]. Missing data can be caused by occlusion. "Ghost" points may be introduced from the workspace or by image-analysis algorithms due to imaging ambiguity. The data points obtained from 3D reconstruction are unlabeled; to infer the underlying physical structure, an identification procedure is often required to determine which point in the massive MoCap data corresponds to which marker attached to the subject. Even with the most sophisticated systems [8, 9], auto-labeling may fail for complex movements [10]. In addition, motion blur and measurement error can generate jagged marker trajectories, resulting in wobbly motion in the animation. It is cumbersome to manipulate and very difficult to alter noisy MoCap data, especially when the key "essence" of the motion is not distinguished from the large amount of potentially irrelevant detail and noise. In practice, we need to re-capture a movement until an acceptable quality is achieved. For the MoCap data used in this project, identification errors were corrected manually or using the algorithm developed in [10]. Missing data gaps were filled by interpolation or splines; marker trajectories were smoothed by various low-pass filters.

…offset data obtained from real MoCap. Though rescaling can be done with the aid of "Talent Figure Mode" in the "Motion Capture" rollout of 3ds Max, we found it very time-consuming, and it could generate unexpected mesh distortion in the rigging process, as shown in Fig. 2. To solve this problem, the BIP format was adopted. The BIP file stores limb rotation data, which are less dependent on physical body measurements.

To preserve the fidelity of human movement in animation, MoCap data are usually captured at 60~120 Hz. However, animation generated from MoCap on a frame-by-frame basis at such a high sampling rate can cause very slow performance and may be unnecessary in many applications. An alternative is to reduce the MoCap data to sparse key-frames. We found this method makes the animation compact, effective, and much easier to edit and personalize. In 3ds Max, "Key Reduction" in the Motion Capture Conversion Parameters menu can be used to detect MoCap keys based on motion intensity and to interpolate the in-betweens. However, key reduction may cause artifacts such as jittering movements and "foot-slide". Jittering movements can be improved by tweaking (see Section 2.4). Foot-slide can be solved via "Footstep Extraction", which extrapolates footsteps so that the keys making up foot placement and movement are turned into footsteps, maintaining correct foot-to-ground contact.

2.3 MoCap rigging

To animate the character mesh, a complete bone hierarchy, or "skeleton", with full simulation of Inverse Kinematics is required. Most commercial animation tools provide pre-built skeletons or third-party plug-ins, such as Biped in 3ds Max.
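The gap-filling and smoothing steps of Section 2.2 are tool-agnostic and can be sketched in a few lines. The fragment below is our own illustration, not part of the original pipeline: the 120 Hz rate, 6 Hz cutoff, filter order and toy trajectory are all assumptions chosen for the example.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import butter, filtfilt

def fill_gaps(t, x):
    """Fill NaN gaps in one marker coordinate with a cubic spline
    fitted to the valid samples (a common occlusion repair)."""
    valid = ~np.isnan(x)
    spline = CubicSpline(t[valid], x[valid])
    out = x.copy()
    out[~valid] = spline(t[~valid])
    return out

def smooth(x, fs=120.0, cutoff=6.0, order=4):
    """Zero-phase low-pass Butterworth filter: suppresses jagged
    trajectories without introducing phase lag in the motion."""
    b, a = butter(order, cutoff / (0.5 * fs))
    return filtfilt(b, a, x)

# Toy 1 Hz marker trajectory sampled at 120 Hz, with noise and a
# simulated occlusion drop-out (all values invented for the demo).
fs = 120.0
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * t) + 0.02 * np.random.default_rng(0).normal(size=t.size)
x[100:115] = np.nan                 # occluded marker samples
x = smooth(fill_gaps(t, x), fs=fs)  # repaired, smoothed trajectory
```

In practice each marker yields three such coordinate streams, and the cutoff is chosen below the fastest expected limb motion so the filter removes jitter rather than content.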
…Manually Vertex Weighting and good flexibility over "Envelope" control. The main drawback of the Skin modifier is the lack of deformation functions for its rigging Envelopes; it therefore cannot provide realistic mesh deformations around joints. The Physique modifier is designed particularly for rigging the Biped to a humanoid character. It allows deformation models to be applied to the rigging Envelopes.

3. Facial animation

3.1 Face modeling from range data

A neutral face scan was obtained using a structured-light range scanner manufactured by InSpeck [15], as shown in Fig. 5 left. The scan data contain measurement errors and holes caused by light reflections during the scanning process. The data were filtered and small holes were fixed using software tools embedded in the InSpeck system. Distorted vertices were fixed manually with the aid of "Soft Selection" in 3ds Max. The original scan was also very large, almost 50,000 polygons; this amount of detail is unmanageable for real-time facial animation. We used the "MultiRes" modifier in 3ds Max to reduce the polygon count to about 2,200, as shown in Fig. 5 right.
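The internals of 3ds Max's MultiRes reduction are proprietary, but the idea of trading mesh detail for real-time performance can be illustrated with a minimal grid-based vertex-clustering decimator. This is a sketch only, not the MultiRes algorithm; the cell size and the toy planar mesh are invented for the example.

```python
import numpy as np

def cluster_decimate(vertices, faces, cell=5.0):
    """Snap all vertices falling in the same grid cell to their
    centroid, then drop faces that collapse to fewer than 3 corners.
    'cell' is the cluster size in the mesh's own units."""
    keys = np.floor(vertices / cell).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    n = int(inv.max()) + 1
    centroids = np.zeros((n, 3))
    counts = np.bincount(inv, minlength=n).astype(float)
    for d in range(3):  # per-axis centroid of each cluster
        centroids[:, d] = np.bincount(inv, weights=vertices[:, d],
                                      minlength=n) / counts
    new_faces = inv[faces]
    # keep only triangles whose corners map to three distinct clusters
    keep = ((new_faces[:, 0] != new_faces[:, 1]) &
            (new_faces[:, 1] != new_faces[:, 2]) &
            (new_faces[:, 0] != new_faces[:, 2]))
    return centroids, new_faces[keep]

# Demo: a dense 20x20 planar grid, triangulated into 722 faces.
g = 20
xs, ys = np.meshgrid(np.arange(g, dtype=float), np.arange(g, dtype=float))
verts = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(g * g)])
quads = [(i * g + j, i * g + j + 1, (i + 1) * g + j + 1, (i + 1) * g + j)
         for i in range(g - 1) for j in range(g - 1)]
tris = np.array([t for a, b, c, d in quads for t in ((a, b, c), (a, c, d))])
v2, f2 = cluster_decimate(verts, tris, cell=5.0)  # far fewer vertices/faces
```

Production decimators (edge-collapse with quadric error metrics, as MultiRes-style tools typically use) preserve features much better than this clustering sketch, but the input/output contract is the same.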
…bones via orientation constraints. Fig. 7 shows the skull (red), neck (black), upper jaw (purple), lower jaw (yellow) and eyelid (grey) bones, and the circle controllers used to control the rotation of these bones.

…shown in Fig. 9. Then, with the aid of the 'Morpher' tool, we blended multiple morphing targets to create novel faces.

We attempted to use morphing to animate blinking eyelids. However, it caused serious mesh distortion. Therefore, we used bones and the Skin modifier to animate the eyelids, which produced a much better result.
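Morpher-style blending follows the standard linear blendshape model, v = v0 + sum_i w_i (t_i - v0): each target contributes its weighted offset from the neutral face. The sketch below illustrates that model only; the three-vertex "face" and the 'smile' target are hypothetical data, not 3ds Max's implementation.

```python
import numpy as np

def blend_morphs(neutral, targets, weights):
    """Linear blendshape mix: neutral vertices (V, 3) plus the
    weighted delta of each morph target."""
    out = neutral.astype(float).copy()
    for w, tgt in zip(weights, targets):
        out += w * (tgt - neutral)
    return out

# Hypothetical 3-vertex face; the 'smile' target lifts two mouth corners.
neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, -0.5, 0.0]])
smile   = np.array([[0.0, 0.2, 0.0], [1.0, 0.2, 0.0], [0.5, -0.5, 0.0]])
half_smile = blend_morphs(neutral, [smile], [0.5])  # 50% smile weight
```

Because the model is linear, several targets (smile, brow raise, jaw open, ...) blend by simple weight addition, which is also why it distorts badly for rotational motion such as eyelid blinks, where the bone-based approach above works better.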
…create hair animation without using procedural techniques, and 3) simulate non-rigid motion for cloth.
…in the animation, the character's legs still pass through the dress due to ineffective collision detection, which would require further fine tuning to rectify.

Body motion: The animation was created by blending walking and dance MoCap clips, similar to the Male Dancer. Keyframing and tweaking techniques were applied to the MoCap trials to remove MoCap errors and to incorporate some personalized features. A rendered image from the animation is shown in Fig. 13 left.

Another important topic of MoCap-driven animation is lip-syncing facial animation, which has not been addressed in this project. We will work on it in the future.

4.4 Environmental modeling

The ground was modeled using simple flat polygons and textured with a diffuse map. The 'Humanoid Robot' trial differed from the other two: a fog atmospheric effect was generated on the surface of the ground plane. This was achieved using a thin layer of mist, which both adds depth to the surface and complements the rest of the scene. The sky was constructed from a hollow sphere, UVW-mapped with a sky texture. The lighting was provided by an omni-directional light source. To make the light appear more like a sun, a lens flare effect was applied to the camera.

5. Conclusion

MoCap has proven an effective way to generate expressive character animation, and the technique is widely recognized by animators. Handy toolkits for MoCap handling have been integrated into animation software by the developers. However, many issues remain. Currently, most MoCap systems are very expensive and not affordable for average users. MoCap data standardization (e.g. marker attachment protocol, file format) and usability need to be addressed. Comprehensive MoCap databases covering a wide range of motion categories and configurations are still very limited. Motion synthesis from a restricted number of MoCap samples is highly desirable to extend the applications of MoCap. Great strides in the development of new and improved methods for capturing and processing MoCap data, motion retargeting, tweaking, deformation handling and real-time rendering are needed to make MoCap a more viable tool for animation production. In the future, …

Acknowledgement

Some MoCap data used in this tutorial were captured by an optical MoCap system, a Vicon 512, installed at the Dept. of Computer Science, Univ. of Wales, Aberystwyth.

References

[1] C. Bregler, Motion capture technology for entertainment, IEEE Signal Processing Magazine, 24(6), 2007, 158-160.
[2] D. Sturman, A brief history of motion capture for computer character animation, SIGGRAPH 94: Course 9.
[3] M. Gleicher & N. Ferrier, Evaluating video-based motion capture, Proc. of Computer Animation, 2002, 75-80.
[4] T. Moeslund, A. Hilton & V. Kruger, A survey of advances in vision-based human motion capture and analysis, Computer Vision and Image Understanding, 104(2), 2006, 90-126.
[5] L. Wang, W. Hu & T. Tan, Recent developments in human motion analysis, Pattern Recognition, 36(3), 2003, 585-601.
[6] J. Richards, The measurement of human motion: a comparison of commercially available systems, Human Movement Science, 18(5), 1999, 589-602.
[7] D. Washburn, The quest for pure motion capture, Game Developer, 8(12), 2001, 24-31.
[8] Vicon Motion System, www.vicon.com.
[9] Motion Analysis Corporation, www.motionanalysis.com.
[10] B. Li, Q. Meng & H. Holstein, Articulated pose identification with sparse point features, IEEE Trans. Systems, Man and Cybernetics, 34(3), 2004, 1412-1422.
[11] A. Kirk, J. O'Brien & D. Forsyth, Skeletal parameter estimation from optical motion capture data, IEEE Conf. on Computer Vision and Pattern Recognition, 2005, 782-788.
[12] H. Shin, J. Lee, M. Gleicher & S. Shin, Computer puppetry: an importance-based approach, ACM Transactions on Graphics, 20(2), 2001, 67-94.
[13] J. Noh & U. Neumann, A survey of facial modelling and animation techniques, Technical report, Univ. of Southern California, 1998.
[14] Z. Deng & J. Noh, Computer facial animation: a survey, in Data-Driven 3D Facial Animation, Springer London, 2007.
[15] InSpeck Inc, http://www.inspeck.com.
[16] Universidade Lusófona MovLab, http://movlab.ulusofona.pt/FilesMocap.php?categoria=1
[17] CMU Graphics Lab Motion Capture Database, mocap.cs.cmu.edu.
[18] Highend3d, "Character Rig With Facials and Morphs", http://www.highend3d.com/3dsmax/downloads/character_rigs.
[19] 3dtotal, http://www.3dtotal.com.
[20] Ordix Interactive Tutorial, Transparency Mapping, www.ordix.com/pfolio/tutorial/hair/transpmap01.ht