INTERACTIVE VIRTUAL HAIR-DRESSING ROOM
Nadia Magnenat-Thalmann, Melanie Montagnol, Rajeev Gupta, and Pascal Volino
MIRALab - University of Geneva
(thalmann, montagnol, gupta, volino)@miralab.unige.ch
ABSTRACT
Hair designing is one of the crucial components of hair simulation tasks. The efficiency of hair modeling is very much determined by the interactivity and ease of use of the designing tools within an application. This paper presents a unified framework that uses the various key techniques developed for specific tasks in hair simulation to realize the ultimate goal of a 'virtual hair-dressing room' that is simple to use yet quite effective for quickly generating hairstyles. Successful attempts have been made to handle the different challenging issues involved in simulating hair at interactive rates. Effort has been put into developing methodologies for hair shape modeling, hair dynamics, and hair rendering. A user-friendly interface controlled by a haptic device facilitates the designer's interaction with the hairstyles. Furthermore, the designer's visualization is enhanced by real-time animation and interactive rendering. Animation is done using a modified Free Form Deformation (FFD) technique that has been effectively adapted to various hairstyles. Hair rendering is performed using an efficient scattering-based technique, displaying hair with its various optical effects.
Keywords: Hair Simulation, Interactive Hair Modeling, Force Feedback Interaction, Volume Deformation, Real-Time Rendering
1. INTRODUCTION
Hair is essential for characterizing a virtual human, and so is the need to quickly design a suitable and realistic-looking hairstyle for it. Interactivity plays an important role in determining the efficiency of the design application. The process is further facilitated by an interface that provides the user with simple but essential tools for performing interactions such as handling, cutting, and brushing hair that displays dynamic and optical behavior at interactive rates. Usually this is a complex task, and the techniques developed have to compromise between interactivity and realistic appearance.
The difficulties with human hair simulation arise from the number of hairs, their structure, and the interactions between them. On a scalp, human hairs typically number 100,000 to 120,000. At this count, even simplified simulation takes a vast amount of computing resources. Moreover, the complicated structure and interaction between hairs present challenges in every aspect of the simulation. Addressing these issues to simulate hair accurately and realistically in real time makes virtual hair simulation a technically demanding process.
Figure 1: Various hairstyles created, animated and rendered using our interactive "virtual hair-dressing room". (a) is the initial hairstyle; (b), (c), and (d) are hairstyles created using different interactive tools (cutting, curling and brushing).
Computer-Aided Design & Applications, Vol. 3, Nos. 1-4, 2006, pp xxx-yyy
Geometrically, hair strands are long, thin, curved cylinders. Strands can have any degree of waviness, from straight to curly. Thus, a modeling method should provide effective control over a large group of hair strands while still allowing control over detailed variations. The structure (or style) of hairs as a group is often more meaningful than the properties of individual strands, and the interactions between hairs play an important role in generating these structures. Therefore, a modeling method should take into account the interactions as well as the individual properties. Furthermore, from a designer's perspective, all these features should be available in an interactive system for fast prototyping of various hairstyles. The sense of immersion is also increased if the designer receives force feedback while performing different interactions with the available tools. This requires integrating a haptic interface, which provides the user with more degrees of freedom than a mouse and considerably increases comfort, performance, and productivity.
Real-time simulation is another important feature that increases the sense of involvement during hair designing. It is obviously more realistic to see dynamic changes in the hairstyle as the user performs interactions. However, the complexity of hairstyles makes animating them on virtual characters at real-time rates a challenge. Each hair strand is highly flexible and bends easily under external forces such as gravity and collisions, while strongly resisting linear stretching. The difficulty is to build a mechanical model of the hairstyle that is sufficiently fast for real-time performance while preserving the particular behavior of the hair medium and maintaining sufficient versatility to simulate any kind of complex hairstyle. Hair motion is affected by many subtle factors, such as wind forces, air drag, gravity, head motion, and wetness. More interestingly, static charge and friction between neighboring hairs often cause a group of hairs to form a cluster that moves coherently. Collisions between such clusters are important; otherwise hair interpenetration becomes unnatural and distracting, affecting the overall realism of the simulation. This behavior is highly dependent on the type of hairstyle (straight or curly, long or short) and its condition (stiff or soft, dense or sparse, dry or wet).
The hairstyle also plays an important role in how light interacts with the hair, which is essential to simulate because self-shadowing within hair gives important cues about the hairstyle itself. Rendering human hair often requires painstaking labor as well as demanding computational power. Optically, hair exhibits many interesting phenomena such as anisotropic reflection, strong forward scattering, and translucency. A correct reflectance model is thus needed to compute reflectance off the individual hair geometry. While a local shading model approximates the coloring of an individual hair strand, interactions between hairs require efficient methods to compute global illumination effects. In particular, the shadows between hairs provide essential visual cues for the structure of hair as a group. Hair is inherently volumetric, and its internal structures can be correctly illustrated only with proper shadows. Furthermore, most current hair rendering simulations lack a physical basis. All the opacity-based approaches consider a transmittance function that computes the attenuation of light only toward the hair strand for which shadow is being computed. This is not physically correct, as light is also attenuated when it is scattered from the hair strands toward the viewer, and this must be incorporated into the final hair color. Moreover, variations in shading due to animated hair also need to be considered for realistic results.
Our work is motivated by the need for a unified model that facilitates hairstyling by effectively utilizing various methodologies, namely interactive modeling, simplified but efficient animation, and optimized yet realistic rendering, all at real-time performance. The specific contributions of this paper are:
• An interactive modeling system that gives the user easy and flexible modeling capability to design any hairstyle, with added features for fine-tuning. The system performs all hair manipulation tasks (e.g. cutting, curling) at interactive rates.
• A mechanical model that efficiently simulates the motion of individual hair strands as well as the interactions between hairs themselves and between hairs and other objects.
• A scattering-based illumination model that produces realistically rendered hair in various colors, showing optical effects like shadows and specular highlights.
• Importantly, all these simulations are performed in real time.
The rest of this paper is organized as follows: in the next section we give a brief overview of various hair simulation techniques. In Section 3 we present various considerations and the ideas of our approach for styling, animating and rendering. In Section 4 we present the different interaction tools developed for designing hairstyles. Section 5 is dedicated to our lattice-based mechanical model, highlighting its key features and benefits. Section 6 describes our scattering-based illumination model, discussing the various optimizations made for rendering at interactive rates. We demonstrate the results and discuss the performance of the unified hair styling application in Section 7. We conclude with a look at avenues for future work in Section 8.
2. PREVIOUS WORK
Researchers have devoted much time and effort to producing visually convincing hair, giving more believability to virtual humans. Though implementing a unified framework for creating hairstyles using dynamic models along with realistic rendering has not been actively considered, a number of algorithms have been developed to resolve most of the challenges in the three main hair simulation tasks: hair shape modeling, hair animation and hair rendering. Specific advancements have been made in each of these tasks.
2.1 Interactive Hair Styling
Different techniques have been proposed in the literature to provide tools for interactively creating new hairstyles. Most of these techniques involve designing an interactive 3D user interface that gives the user control to place hair on any part of the 3D model. The hairstyling process is in most cases closely linked to how the hair is modeled. The use of clusters or wisps for representing hairstyles is a common approach in most systems. Chen et al. [6] present a model that represents each cluster with a trigonal prism, with the hair strands within a cluster modeled by a distribution map defined on the cross-sections of the prism. Similarly, [34] uses a wisp model, but represents each wisp as a 2D patch projected on a cylinder as its boundary. The developed tool allows the user to choose the number of patches, position them, and then individually or collectively modify parameters such as length, stiffness, and blending for the patches. More approaches using different wisp models have been discussed in [24][26][28] and [39]. Lee et al. [24] use a new hardware rotor platform to perform different styling operations in order to increase the interactivity in 3D. Parameters like hair length, clump-degree, wisp size and clump-rate can be modified to obtain precise control over the hairstyle. In [26], the user defines the region on the scalp for hair growth and a silhouette of the target hairstyle, which is used to automatically generate a "silhouette surface". This is followed by the automatic creation of representative curves of the clusters, later used for cluster polygon generation. A similar sketch-based hairstyling approach is presented by Malik et al. [28]. Recently, a more user-friendly interactive tool called Hair Paint has been introduced by Hernandez et al. [12]. The system is based on a 2D paint program interface and color-scale images to specify hair characteristics. Some researchers have also presented systems exploiting fluid flow [10] and vector fields [41] for explicit hair modeling. Other techniques [16][37] use hybrid models combining the benefits of wisp and strand models. In general, all these systems can create nice static hairstyles, but are too slow to allow interaction with hairstyles once dynamic behavior is added. Also, utilizing force feedback devices for more interactive control of designing tools, as applied by Otaduy et al. [31], is still to be explored for hair.
2.2 Hair Animation
Usual simulation techniques may adopt any compromise between simulating each strand individually and simulating a volume medium representing the complete hairstyle. For efficient simulation, reducing this complexity and the number of degrees of freedom of the model is essential for fast computation. The first idea is to consider that hairs at neighboring locations share similar shapes and mechanical behaviors. In many models, such hairs are grouped into wisps, sometimes also called clusters. This technique was defined by Watanabe et al. [38], and has been frequently used since with many variations [5][6][32][40]. Another evolution is to replace the hairs by approximate surfaces (strips or patches), as done by Koh et al. [17][19], or even volumes that can easily be modeled as polygonal meshes, such as the thin shell approach of Kim et al. [14]. Animation is done by specific mechanical models associated with these representations, which also include specific methods for handling collisions efficiently, as described by Lee et al. [23]. Combining various approaches can lead to level-of-detail methods where the representation is chosen depending on the current requirement of accuracy and the available computational resources. A good example developed by Bertails et al. [3] is based on a wisp tree, and a similar approach from Kim et al. [16] is based on multiresolution clusters. Advanced level-of-detail methods also combine strands, clusters and strips modeled with subdivision schemes and animated using advanced collision techniques, as developed by Ward et al. [36]. In the specific context of fast real-time applications, cluster and strip models are good candidates. However, they still suffer from various problems, such as the necessity of collision detection between hairs and their inability to represent some mechanical properties efficiently, such as bending stiffness. Furthermore, these simulation techniques have not yet been considered for efficient use during styling. Specific approaches are oriented toward real-time applications, such as the loosely connected particles approach described by Bando et al. [2] and the real-time short hair model from Guang et al. [8], but they suffer from scalability problems and hairstyle design constraints because their underlying mechanical models suit only specific hairstyles, usually long and straight hair.
2.3 Hair Rendering
Hair rendering involves immense problems ranging from large data to complex optical behavior, and has thus been a challenging yet captivating research topic for quite some time. The research started with the limited problem of fur rendering by Kajiya et al. [13], who also derived a local anisotropic lighting model for hair that has been widely adopted and extended in later illumination models [1][7][22]. Recently, [29] defined a new physical scattering model for hair, giving a theoretical model for the scattering of light from hair fibers and predicting effects invisible in previous models. Though impressive results are obtained, this approach involves a lot of mathematical computation, making it unsuitable for interactive purposes. Simulating self-shadows in hair has been an important contribution of [15][22][25]. Although these techniques produce excellent results, they need large memory storage and processing time, making them unsuitable for real time. The recent giant leap in the performance of graphics cards has made faster and more realistic results possible. Koster et al. [20] intensively utilize the GPU for shadow map computation, though the approach doesn't take animation of the hair into consideration. In [30] the issue of shadows in dynamic hair is efficiently dealt with by computing a 3D density field and storing the samples of the hair density function in a compact way. The use of density clustering gives the advantage of not explicitly computing the entire transmittance function for time-varying hair data, resulting in better rendering speed and quality. More recently, [4] presented an animated hair rendering model based on a 3D light-oriented density map. Unlike the previous interactive approaches, this algorithm computes accurate self-shadows on a standard CPU, independent of any GPU, at interactive rates. The paper combines a volume representation of density with light-oriented sampling for self-shadows.
3. CONSIDERATIONS AND APPROACH OVERVIEW
The hairstyling scheme can be divided into two parts: firstly, defining a set of general characteristics of a hairstyle, and secondly, varying the set of characteristics of an individual hair. We choose to model hair utilizing the data representation of the animation model [35]. This hair modeling technique allows modifying the geometry of the hair directly. We develop these parameters in two phases. First, we propose a set of static parameters for hair styling (geometric modifications), and then we animate using these parameters (update simulation).
In order to increase the interactivity we have chosen to use a haptic interface. With this device it is now possible to see the virtual hair and other objects, to move them, and finally to touch them in the virtual environment. The use of haptics enables interaction with hair in real time. We present a system that provides the user with interactive and easy-to-use tools for hairstyling. Even though we have utilized a mechanical model in our simulation, the system is fast and the user can make modifications at interactive rates. In addition, the dynamic behavior results in a more accurate and realistic simulation, as it includes the influence on the styles of gravity and other external forces.
Figure 2: Framework for Hair Dressing Room
For animation, the system should be compatible with most approaches used for hairstyle representation and real-time rendering, and offer the designer the possibility of creating any kind of hairstyle that can robustly be simulated in any animation context. We intend to achieve this through the design of a hair animation model based on volume deformations, with the aim of animating any complex hairstyle design in real time. The idea of our method is the following: rather than building a complex mechanical model directly related to the structure of the hair strands, we take advantage of a volume free-form deformation scheme. We detail the construction of an efficient lattice mechanical deformation model that represents the volume behavior of the hair strands. The lattice is deformed as a particle system using state-of-the-art numerical methods, and animates the hairs using quadratic B-Spline interpolation. Another issue that needs to be considered during hair animation is collision. Since hair strands are deformed through the FFD, there is no real need to detect collisions between the hair strands, as they follow the FFD's volume of deformation, which is rarely self-intersecting. However, we need to take into account collisions between the hairstyle and the skull. The hairstyle reacts to the body skin through collisions with a metaball-based approximation. The metaballs are represented as polynomial forces that are integrated into the mechanical model. The model is highly scalable and allows hairstyles of any complexity to be simulated in any rendering context with the appropriate tradeoff between accuracy and computation speed, fitting the needs of level-of-detail optimization schemes.
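The paper does not give the exact polynomial used for the metaball forces, but a minimal sketch of the idea, assuming a simple quadratic penetration falloff, could look as follows:

```python
import numpy as np

def metaball_force(p, center, radius, stiffness):
    """Polynomial repulsion force of a single metaball acting on point p.

    The force is zero outside the metaball radius and grows smoothly
    (quadratically here, an assumed falloff) as the point penetrates
    toward the center, pushing it back out along the radial direction.
    """
    d = np.asarray(p, float) - np.asarray(center, float)
    dist = np.linalg.norm(d)
    if dist >= radius or dist == 0.0:
        return np.zeros(3)
    # Penetration depth in [0, 1]; the polynomial gives a C1 force field.
    penetration = 1.0 - dist / radius
    return stiffness * penetration ** 2 * (d / dist)
```

Because the force is an analytic function of position, it can be summed over all metaballs approximating the skin and added directly into the particle-system integration, with no explicit collision detection step.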
We also aim to have an optimized technique that decreases the complexities involved in hair rendering and simulates optical effects while incorporating hair animation at interactive rates. Our rendering data representation for the hair volume is quite similar to that of the animation system [35]. Utilizing a similar representation also gives us the advantage of fast access to a lot of information from the underlying animation model for rendering computations. Based on these considerations, we build our illumination model as follows: the local illumination of hair is defined by a Gaussian function adapted to our strip representation of hair. The self-shadow computation for hair is similar to volume rendering, as we perform voxelization of the hairstyle into a hair volume for opacity computation. For animated hair, the new vertex positions are calculated and, based on the displacement of the hair vertices, a refinement map is computed. This accounts for the hair density variations within the rendering lattice and is then used for 'refining' the shadows in animated hair. Our method's efficiency in achieving fast updates is largely credited to the division between the pre-processing and run-time tasks of the simulation. The model has been optimized to perform expensive computations, mainly involving physics, during pre-processing, which contributes to the speedy performance of our illumination model without any significant degradation in quality.
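The voxelization step above can be sketched as follows; this is a minimal illustration (grid bounds and resolution are assumed parameters, not values from the paper) of accumulating hair vertices into a density volume for later opacity lookups:

```python
import numpy as np

def voxelize_density(vertices, grid_min, grid_max, resolution):
    """Accumulate hair vertices into a coarse 3D density grid.

    Each vertex increments the cell it falls into; the resulting grid is
    the kind of hair-density volume used for opacity/self-shadow lookups.
    """
    vertices = np.asarray(vertices, dtype=float)
    gmin = np.asarray(grid_min, dtype=float)
    extent = np.asarray(grid_max, dtype=float) - gmin
    grid = np.zeros((resolution, resolution, resolution))
    # Map each vertex to integer cell indices, clamped to the grid bounds.
    idx = ((vertices - gmin) / extent * resolution).astype(int)
    idx = np.clip(idx, 0, resolution - 1)
    for i, j, k in idx:
        grid[i, j, k] += 1.0
    return grid
```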
4. USER INTERACTIONS
We use a PHANToM for the virtual manipulation of hair. Essentially it is a force feedback device, and we have used it efficiently as a simple tool allowing interactions with virtual 3D objects. This haptic device replaces the mouse and extends its functionality. The force feedback is used to detect collisions of hair with the head, as well as to interact with the cells in the lattice.
The system gives the user the freedom to choose tools to modify hair collectively (using guide hairs) or individually. The user can choose hair via a 2D scalp map or make a more precise selection directly in 3D. Using the buttons provided with the PHANToM, the user can choose the tool to apply to the selection. While building a new hairstyle, the user can modify the camera position with the left hand for better visualization, and with the right hand select and apply a set of tools developed to modify the hairstyle. Integrating these tools with our mechanical model produces a realistic simulation that enhances the overall visualization during interaction. To realize this effect efficiently, we create a lattice for the selected hair, animating it during interaction with the tools using geometric and physics-based methods.
4.1 Selecting Hair
The first step in designing a hairstyle is to select a set of hair. Different selection modes are offered to the user, with the interactivity performed via the haptic device. We propose a point-based method for selection that involves the implementation of 3-DOF force feedback. The selection is determined by the position of the stylus of the haptic device. If the stylus intersects a cell in the lattice, all the hair in that cell is selected. This selection mode is linked to the discretization of the lattice, and does not allow selecting a set of hair spanning two cells. More generally, however, the hairdresser can select a set of hair, or hairs one by one, via projection of the stylus position in the direction of the camera. After this selection step the user can "add" a set of hair using a copy-and-paste tool, and can change the position and rotation angles (via the haptic device) of the duplicated set of hair.
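The cell-based selection mode can be sketched as below; the mapping from strand to cell is an assumed data structure for illustration, not the paper's actual implementation:

```python
def select_hairs_in_cell(stylus_pos, lattice_origin, cell_size, hair_cells):
    """Return the hair strands whose lattice cell contains the stylus tip.

    hair_cells maps a strand id to the (i, j, k) cell it occupies;
    selecting by cell picks the whole group inside that cell at once.
    """
    # Integer cell coordinates of the stylus tip in the uniform lattice.
    cell = tuple(int((p - o) // cell_size)
                 for p, o in zip(stylus_pos, lattice_origin))
    return [strand for strand, c in hair_cells.items() if c == cell]
```

This also makes the limitation stated above concrete: a set of hair straddling two cells cannot be captured by a single cell query.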
For animating the hairstyling operators, we limit the animation to the specific parts where modifications have to be made. Thus, we create a lattice just for the selected set of hair, and only this set is later animated, as shown in green in Figure 3. Limiting the lattice to a specific section of hair decreases the computational time and allows the geometric methods to be used efficiently. These geometric modifications don't change the form of the lattice. This technique also avoids computing self-collision detection between strips and provides good results in simulation.
Figure 3: Selecting and Holding a group of Hair
4.2 Virtual Scissors
The most important tool for a hairdresser is probably the scissors. In our technique we first choose a cut plane for modifying the hair. Three planes are proposed: a horizontal plane, an inclined plane, and a curved plane such as a sine surface. These planes were chosen because with them all types of cuts can be realized. A mass-spring representation is used for all strands in order to simulate the cut hair dropping during the cutting process. This modeling is easy to implement and gives good results in a short time. In order to increase the interactivity, it is possible to modify the position and orientation of the cut plane by a line-based method (using 6 DOF) via the haptic device. The user can thus choose the cutting mode, either using scissors (generally for a small set of hair) or a cutting plane.
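For the flat cut planes, the per-strand operation reduces to a half-space test; a minimal sketch (the split point is taken at the first crossing vertex, a simplification of whatever interpolation the tool actually performs):

```python
import numpy as np

def cut_strand(strand, plane_point, plane_normal):
    """Split a strand (list of 3D points, root first) at a cut plane.

    Vertices on the root side of the plane are kept; the remainder is
    returned separately so it can be handed to the mass-spring system
    that animates the falling clippings.
    """
    strand = np.asarray(strand, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    pp = np.asarray(plane_point, dtype=float)
    # Signed distance of every vertex to the plane.
    side = (strand - pp) @ n
    keep = side <= 0.0
    if keep.all():
        return strand, strand[:0]
    # Cut at the first vertex that crosses to the positive side.
    first_cut = int(np.argmax(~keep))
    return strand[:first_cut], strand[first_cut:]
```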
Figure 4: A sequence of hair cutting in real time
4.3 Virtual Brushing and Curling
The virtual brushing and waviness tool involves a one-parameter function and allows modifying the "curve" of strands without using the 3D positions of the points in the strand. In curling, we modify the hair beginning from the tip. Three parameters can be modified: the radius, the length of the curve, and the direction of the brushing. Based on these parameters, a mathematical function is applied to the last node of all the selected hairs, resulting in curved hair. The choice of the mathematical function is based on the action to be performed or the tool required. When simulating waviness, the visual effect is similar to a compressed spring moving from top to bottom. When brushing, the lattice nodes move away from the head, following the direction of the brush. The use of these techniques spares us from computing collisions between hair and an object (tools, a comb for example), while visual realism is maintained because the simulation appears realistic.
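The exact curling function is not given in the paper; as one plausible sketch, a helical displacement whose amplitude ramps up toward the tip (so curling visibly starts from the tip, as described above) could be:

```python
import numpy as np

def curl_strand(strand, radius, turns):
    """Displace strand vertices into a helix of growing amplitude toward the tip.

    The root stays fixed and the sinusoidal offset ramps up linearly along
    the strand, so the curling effect starts from the tip.
    """
    strand = np.asarray(strand, dtype=float).copy()
    n = len(strand)
    for i in range(n):
        t = i / (n - 1)                     # 0 at root, 1 at tip
        phase = 2.0 * np.pi * turns * t
        # Offset in the plane orthogonal to the (assumed) strand axis y.
        strand[i, 0] += radius * t * np.cos(phase)
        strand[i, 2] += radius * t * np.sin(phase)
    return strand
```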
5. LATTICE-BASED HAIR ANIMATION
Realistic animation of hair requires advanced models that deal with the complex strand structure of human hairstyles. The challenge is to build a mechanical model of the hairstyle that is sufficiently fast for real-time performance while preserving the particular behavior of the hair medium and maintaining sufficient versatility to simulate any kind of complex hairstyle. We overcome these difficulties through the design of a hair animation model based on volume deformations [35], with the aim of animating any complex hairstyle design in real time.
5.1 The Lattice Model
We construct a 3D lattice around the head that includes all the hair. This lattice is defined by the initial positions of all its nodes. During animation, the lattice is moved according to the head motion. The resulting rigid-motion positions and velocities of the lattice nodes define the rigid motion pattern of the hair, in which the hair follows the head motion without any deformation.
In our approach, however, the lattice is furthermore deformed by mechanical computation. The actual positions and speeds of the lattice nodes are ruled by mechanical computation iterations that use the rigid motion states as equilibrium states. This mechanical model, aimed at providing the lattice with the volumic behavior of the hair contained in it, is constructed during preprocessing.
5.2 The Mechanical Model
Different approaches are possible for designing this mechanical lattice deformation model. However, spring-mass approaches still seem to be the best candidates for designing really fast mechanical models. We have created a particular kind of interaction force that uses as attachment points any arbitrary points of the lattice, defined by their lattice coordinates. Hence, each attachment point corresponds to a weighted sum of a given number of lattice nodes, and the mechanical model of the hair is defined as a sum of linear viscoelastic lattice springs relating the elasticity of each individual hair strand segment in the lattice model. Their viscoelastic parameters correspond to those of the modeled hair. The attachments of the hair extremities to the skull are modeled by stiff viscoelastic lattice attachments, positioned exactly at the end of each hair. In order to reduce the complexity of the model, we resample each hair during the model construction as segments defined by the intersection points of the hair line with the lattice boundaries. We furthermore simplify the model by decimating the redundant lattice stiffeners (hair springs as well as skull attachments) until the expected number of stiffeners remains (Figure 5).
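A linear viscoelastic spring between two attachment points can be sketched as follows (a textbook spring-damper form, assuming the stated linear behavior; in the full model each endpoint would be a weighted sum of lattice nodes):

```python
import numpy as np

def lattice_spring_force(x_a, x_b, v_a, v_b, rest_len, k, d):
    """Linear viscoelastic spring force acting on attachment point a.

    x_*, v_* are positions/velocities of the two attachment points;
    k is the stiffness and d the damping coefficient.
    """
    delta = np.asarray(x_b, float) - np.asarray(x_a, float)
    length = np.linalg.norm(delta)
    if length == 0.0:
        return np.zeros(3)
    u = delta / length
    # Elastic term pulls toward rest length; viscous term damps the
    # relative velocity projected on the spring axis.
    f_elastic = k * (length - rest_len)
    f_viscous = d * np.dot(np.asarray(v_b, float) - np.asarray(v_a, float), u)
    return (f_elastic + f_viscous) * u
```

The force on point b is the negation of this value, and the skull attachments correspond to the same formula with one endpoint driven rigidly by the head motion.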
5.3 Lattice Interpolation
As the lattice is deformed during animation, another issue is to recompute the current position of each hair feature for each frame. Depending on the selected rendering techniques and optimizations, these features may be either individual hair segments or larger primitives such as hair strips or groups. For the best compromise between continuity and computation speed, we have selected quadratic B-Spline curves as interpolating shape functions, which offer second-order interpolation continuity.
We can take advantage of the interpolation to enhance the attachment of the hair to the skull through rigid motion. For each interpolated feature, a deformation coefficient is defined which is a blending coefficient between the motion defined by the rigid head motion and the motion defined by the interpolated lattice position. Hence, progressively animating the root of the hair through rigid head motion rather than mechanics allows simulating the bending stiffness of hair near its root, and also removes the artifacts resulting from imperfections in the mechanical attachment of the hair roots to the moving skull.
Figure 5: The decimation process: from the strands of the hairstyle (up-left) the initial model is constructed (up-right). The model can then be decimated at will (down-left, down-right) for better performance.
6. SCATTERING-BASED HAIR RENDERING
Simulating fast and realistic complex light effects within hair is one of the most challenging problems in the field of virtual humans, both in terms of research and of development. The two main optical effects that contribute most to realistic hair are anisotropic reflection and self-shadows. We have developed a fast and efficient scattering-based algorithm that handles both the local specular highlights and the global self-shadows in animated hair at interactive rates. We use a fast refinement technique [9] for incorporating illumination variations in animated hair from the encoded hair density changes, considering the spatial coherency in the hair data. Our method's efficiency in achieving fast updates is largely credited to the division between the pre-processing and run-time tasks of the simulation. At pre-processing we perform initializations that give us precise rendering information for static hair; we then perform an optimized update of the optical variations in hair based on the dynamics of the guide hairs, taking into account the coherency of the hair data. The various optimizations implemented result in an interactive simulation that maintains the aesthetic visual appearance of the hair.
6.1 Scattering-based Local Illumination
One of the features of our local illumination model is that it takes into account the Fresnel reflection and the orientation of the cuticles on the hair, which gives control over the anisotropy of the hair. In addition, it involves a Gaussian function, which not only provides a physical basis to our model but, with modifications, also becomes quite effective for our hair strips. We consider the Gaussian function taking into account slope variations both along the tangent and the bi-normal of the strip vertices. The model handles both the diffuse and the specular reflectance. Various parameters control the width and the shift of the specular highlights as well as the sharpness and the anisotropy displayed by the strips.
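As a concrete illustration, the diffuse and Gaussian specular terms described above can be sketched in the style of Kajiya-Kay strand shading. This is a hedged simplification: it considers slope variation along the tangent only (our full model also uses the bi-normal), and the function names and parameters are our own assumptions, not the paper's API.

```cpp
#include <algorithm>
#include <cmath>

// Simplified sketch of strand shading with a Gaussian specular lobe.
// dotTL = dot(tangent, light), dotTV = dot(tangent, view), for unit
// vectors. 'shift' offsets the highlight along the fiber (modeling the
// tilted cuticle scales); 'width' controls the highlight's spread.
float gaussianSpecular(float dotTL, float dotTV, float shift, float width) {
    float thetaL = std::asin(dotTL);    // inclination of L against the strand
    float thetaV = std::asin(dotTV);    // inclination of V against the strand
    float d = thetaL + thetaV - shift;  // deviation from the shifted mirror lobe
    return std::exp(-(d * d) / (2.0f * width * width));
}

float shadeStrand(float dotTL, float dotTV,
                  float kd, float ks, float shift, float width) {
    // Diffuse term for a thin cylinder: proportional to sin(T, L).
    float diffuse = std::sqrt(std::max(0.0f, 1.0f - dotTL * dotTL));
    return kd * diffuse + ks * gaussianSpecular(dotTL, dotTV, shift, width);
}
```

With light and view perpendicular to an unshifted fiber (dotTL = dotTV = 0, shift = 0), the specular lobe peaks at its maximum of 1.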
6.2 Scattering-based Self-shadow in Hair
Our scattering-based approach considers two functions for the shadow contribution. One function gives a measure of the absorption of light reaching a hair vertex, and the second considers the scattering from the hair. Since our shadowing model considers hair densities within the cells rather than explicit hair geometry, both functions result in an overall reduced intensity of the hair vertex as visible to the viewer, which gives the shadowing effect within the hair. The two terms (absorption and scattering) included in our shadow computation can be visualized in Figure 6. Analytically, the absorption term generates self-shadows due to its geometric and translucent nature, while the scattering term is an additional component contributing to hair shading due to its material property. It is the collective effect of the two components computed for the hair that gives the light's intensity as perceived by the viewer; it is correlated to the hair's shadow color, giving it a natural look.
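A minimal sketch of this two-term attenuation could accumulate cell densities along the path to the light, attenuate by absorption, and add back a scattered fraction. The exponential form and the coefficients below are our own assumptions for illustration, not the paper's exact equations.

```cpp
#include <cmath>
#include <vector>

// Sketch (assumed form): light reaching a hair vertex is attenuated by the
// hair density accumulated in the lattice cells between the light and the
// vertex (absorption term), while a fraction of the blocked light is added
// back as forward-scattered light (scattering term).
float shadowAttenuation(const std::vector<float>& densitiesTowardLight,
                        float sigmaAbsorb, float sigmaScatter) {
    float accumulated = 0.0f;
    for (float d : densitiesTowardLight)
        accumulated += d;                          // density along the light path
    float transmittance = std::exp(-sigmaAbsorb * accumulated);  // absorption
    float scattered = sigmaScatter * (1.0f - transmittance);     // scattering
    return transmittance + scattered;  // fraction of light seen at the vertex
}
```

An unoccluded vertex (no density between it and the light) receives the full light intensity; dense occlusion reduces the result towards the scattered floor.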
6.3 Shadow Refinement in Animated Hair
During animation, visual artifacts like irregular shadow regions can pop up as a hair vertex moves from one cell to another if the density variation within the cell is not properly dealt with. Our approach for incorporating variations in shadow in animated hair is based on the assumption that the hair vertices within a cell at initialization always stay within one cell after displacement. This assumption is valid not only from the physical aspect of the FFD-based animation model but also from real-hair animation considerations, where there is coherence among neighboring strands when hair is moving. Our idea is to encode the displacement of the particle during animation to give a measure of the variation in hair density in the cells of the rendering lattice.
Figure 6: Variations in the local specular highlights and the global self-shadows can be seen as the hair is animated and the light position is changed.
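This displacement encoding could be sketched as a re-binning of each vertex's density contribution between rendering-lattice cells. The uniform grid layout and all names below are our own illustrative assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sketch (our reconstruction): a uniform rendering lattice storing per-cell
// hair density. When animation displaces a guide vertex, its encoded
// displacement tells us whether its density contribution moves to a
// different cell; shadows then only need refreshing where a move occurred.
struct DensityGrid {
    int nx, ny, nz;
    float cellSize;
    std::vector<float> density;

    DensityGrid(int x, int y, int z, float s)
        : nx(x), ny(y), nz(z), cellSize(s), density(x * y * z, 0.0f) {}

    int cellOf(float x, float y, float z) const {
        int i = std::min(nx - 1, std::max(0, (int)std::floor(x / cellSize)));
        int j = std::min(ny - 1, std::max(0, (int)std::floor(y / cellSize)));
        int k = std::min(nz - 1, std::max(0, (int)std::floor(z / cellSize)));
        return (k * ny + j) * nx + i;
    }

    // Re-bin one vertex's density using its displacement (dx, dy, dz)
    // since initialization.
    void applyDisplacement(float x, float y, float z,
                           float dx, float dy, float dz, float amount) {
        int from = cellOf(x, y, z);
        int to   = cellOf(x + dx, y + dy, z + dz);
        if (from != to) {               // only a cell change affects shadows
            density[from] -= amount;
            density[to]   += amount;
        }
    }
};
```

When the displacement keeps a vertex within its original cell, no density moves and the cached shadow values remain valid, which is what makes the run-time update cheap.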
7. RESULTS AND DISCUSSION
We have tested our "virtual hair-dressing tool" for creating various hairstyles, as shown in Figure 1 and Figure 8. The implementation runs on a workstation with an Intel Pentium 4 processor at 3.4 GHz, 1 GB of main memory, and a GeForce 6800 graphics card. Our computation algorithm is written in standard C++ and rendering is done using the OpenGL API. The initial hairstyle has been created using our in-house hairstyler based on fluid dynamics [10].
Once the initial hairstyle is loaded as a hybrid model, computation is performed to prepare the initial mechanical and rendering models. We ran our algorithm to create short, curly, and brushed hairstyles using our unified framework. We have tested the hairstyles with various complexities of the mechanical representation and have found that a mechanical model containing 100 attachments and 400 springs built on 343 lattice nodes is fairly accurate while remaining fast to simulate. The model reacts to collisions with the head and the shoulders using 7 metaballs.
The com putations for local illum ination are done at run-tim e for each vertex using the graphics hardware. For
shadow computation we found that, for all the hairstyles, a lattice size of 128 × 128 × 32 was sufficient to provide nice results at interactive speed. In the whole simulation process, rendering is the most time-consuming task. The initial hairstyle in Figure 1 consists of around 40 K segments. Rendering of static hair under a single dynamic light on the GPU runs at around 14 fps. Simulation of rendered animated hair runs at 9-10 fps.
While performing the user interactions, the average performance is better because the simulation is limited to only the section of the hair being modified. As interactions on the hairstyle are made, the lattices for both animation and rendering are dynamically adapted to the modified data. During cutting, the cells containing the cut hair are deleted from the simulation, further accelerating the process; when brushing and curling, however, shadows must be recomputed from the modified locations of the hair segments, which slows the process. For creating the hairstyles in Figure 1, the average frame rate was around 10 fps.
8. CONCLUSION AND FUTURE WORK
Our "interactive hair-dressing room" highlights the various developments made towards an easily usable framework for hair styling. The use of a haptic interface is very advantageous during the designing process, as it provides the user with more interactivity. The interface uses a 3D mouse for aligning the head with the hairstyle and the haptic device for managing the set of tools that modify the hairstyle. The overall performance is also increased by the fact that users can effectively utilize both hands to create new hairstyles by applying special virtual tools like the cut plane. Implementing techniques with accurate force feedback during all the hair-styling operations (combing, curling, cutting, and the effects of cosmetic styling products) will lend our hair-dressing tool more realism. Though such devices are usually costly and take time to adapt to, once a user gets used to them they increase comfort and save a lot of time.
Figure 7: With the easy-to-use application, designers can efficiently create hairstyles from images very quickly
Also, by taking advantage of the underlying FFD-based mechanical model for dynamic simulation and the optimized scattering-based rendering model for optical effects, the interface provides an enhanced visualization of the hairstyling procedure. The FFD animation approach allows a clear distinction between the mechanical model that animates the lattice and the actual objects that are deformed by the lattice during rendering. This makes the model highly scalable and versatile, giving total freedom in hairstyle design: hairs can be long or short, curled or straight, and can even contain specific features such as ponytails. It also facilitates designing level-of-detail schemes that act independently on the mechanical simulation aspect and on the rendering aspect, without any particular adaptation of the model. Similarly, the rendering model is based on volume rendering and is independent of the geometry used, which makes it suitable for simulating self-shadows in any hairstyle. The various optimizations and the extensive use of graphics hardware speed up the rendering process, allowing interactive hair manipulation. For handling variations in rendering due to animation, we introduce a refinement technique encoded from the hair displacement vectors. These models have been unified to successfully simulate some common user interactions with hair, such as picking, cutting, and brushing it, and to create realistic hairstyles.
With the initial results of our unified framework being quite promising, we look forward to adding more features for better interactivity and enhanced visualization. The use of a haptic rendering algorithm greatly increases the virtual interactivity. Currently, we implement only rigid interactions within the different cells of the lattice, but we are working on interactions with deformable objects. The problem with these interactions is the computational time. We have already reduced this time by modeling the hair as a lattice, but the feedback forces are linked to the boxes rather than to the hair "touch" itself. If the cells become deformable (like our hair), the feedback forces will correspond more closely to reality. We are working on making our mechanical model more efficient and faster by performing the related computations on hardware, especially for interpolation. We also plan to model and simulate effects in hairstyles, both dynamic and optical, under the influence of water and styling products.
Acknowledgement
This project is funded by the Swiss National Research Foundation.
9. REFERENCES
[1] ANJYO K., USAMI Y., KURIHARA T.: A simple method for extracting the natural beauty of hair. In Proc. SIGGRAPH '92 (July 1992), vol. 26, pp. 111-120.
[2] BANDO Y., CHEN B.-Y., NISHITA T.: Animating hair with loosely connected particles. Computer Graphics Forum (Proc. of Eurographics 2003), Blackwell, 22(3), 2003, pp. 411-418.
[3] BERTAILS F., KIM T.-Y., CANI M.-P., NEUMANN U.: Adaptive wisp tree - a multiresolution control structure for simulating dynamic clustering in hair motion. In Symposium on Computer Animation '03 (July 2003), pp. 207-377.
[4] BERTAILS F., MENIER C., CANI M.-P.: A practical self-shadowing algorithm for interactive hair animation. In Graphics Interface (May 2005), pp. 71-78.
[5] CHANG J., JIN J., YU Y.: A practical model for mutual hair interactions. In Proc. of Symposium on Computer Animation 2002 (July 2002), pp. 51-58.
[6] CHEN L., SAEYOR S., DOHI H., ISHIZUKA M.: A system of 3D hairstyle synthesis based on the wisp model. The Visual Computer (1999), pp. 159-170.
[7] DALDEGAN A., MAGNENAT-THALMANN N., KURIHARA T., THALMANN D.: An integrated system for modeling, animating and rendering hair. In Proc. of Eurographics '93 (July 1993), pp. 211-221.
[8] GUANG Y., ZHIYONG H.: A method for human short hair modeling and real-time animation. In Proc. of Pacific Conference on Computer Graphics and Applications, IEEE Computer Press, 2002, pp. 435-438.
[9] GUPTA R., MAGNENAT-THALMANN N.: Scattering-based interactive hair rendering. In IEEE 9th International Conference on Computer Aided Design and Computer Graphics '05 (Dec 2005), pp. 489-494.
[10] HADAP S., MAGNENAT-THALMANN N.: Interactive hair styler based on fluid flow. In Eurographics Workshop on Computer Animation and Simulation '00 (Aug 2000), pp. 87-99.
[11] HADAP S., MAGNENAT-THALMANN N.: Modeling dynamic hair as a continuum. In Proc. of Eurographics '01 (Sept 2001), Vol. 20, Issue 3, pp. 329-338.
[12] HERNANDEZ B., RUDOMIN I.: Hair paint. In IEEE Proc. of Computer Graphics International 2004 (CGI), Crete, Greece, June 16-19, 2004, pp. 578-581.
[13] KAJIYA J. T., KAY T. L.: Rendering fur with three dimensional textures. In Proc. SIGGRAPH '89 (July 1989), vol. 23, pp. 271-280.
[14] KIM T.-Y., NEUMANN U.: A thin shell volume for modeling human hair. In IEEE Proc. of Computer Animation '00 (2000), pp. 104-111.
[15] KIM T.-Y., NEUMANN U.: Opacity shadow maps. In Proc. of the Eurographics Rendering Workshop '01 (2001), pp. 177-182.
[16] KIM T.-Y., NEUMANN U.: Interactive multiresolution hair modeling and editing. In Proc. of SIGGRAPH 2002, pp. 287-294.
[17] KOH C. K., HUANG Z.: Real-time human animation of hair modeled in strips. In Eurographics Workshop on Computer Animation and Simulation, Springer-Verlag, 2000, pp. 101-110.
[18] KOH C. K., HUANG Z.: Real-time human hair modeling and animation. SIGGRAPH 2000 Sketches and Applications, 2000.
[19] KOH C. K., HUANG Z.: A simple physics model to animate human hair modeled in 2D strips in real-time. In Eurographics Workshop on Computer Animation and Simulation, 2001, pp. 127-138.
[20] KOSTER M., HABER J., SEIDEL H.-P.: Real-time rendering of human hair using programmable graphics hardware. In Proc. of Computer Graphics International (CGI '04) (2004), pp. 248-256.
[21] KURIHARA T., ANJYO K., THALMANN D.: Hair animation with collision detection. In Proc. of Computer Animation 1993, IEEE Computer Society Press, pp. 128-138.
[22] LEBLANC A., TURNER R., THALMANN D.: Rendering hair using pixel blending and shadow buffers. Journal of Visualization and Computer Animation (1991), vol. 2, pp. 92-97.
[23] LEE D. W., KO H. S.: Natural hairstyle modeling and animation. Graphical Models, 63(2), 2001, pp. 67-85.
[24] LEE C.-Y., CHEN W.-R., LEU E., OUHYOUNG M.: A rotor platform assisted system for 3D hairstyles. In Proc. WSCG 2002 (10th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision '02), Plzen, Czech Republic, February 2002.
[25] LOKOVIC T., VEACH E.: Deep shadow maps. In Proc. of SIGGRAPH '00 (2000), pp. 385-392.
[26] MAO X., IMAMIYA A., ANJYO K.: Sketch interface based expressive hairstyle modeling and rendering. In IEEE Proc. of Computer Graphics International (CGI '04).
[27] MAGNENAT-THALMANN N., HADAP S., KALRA P.: State of the art in hair simulation. In International Workshop on Human Modeling and Animation '00 (June 2000).
[28] MALIK S.: A sketching interface for modeling and editing hairstyles. In Eurographics Workshop on Sketch-based Interfaces and Modeling, 2005.
[29] MARSCHNER S. R., JENSEN H. W., CAMMARANO M., WORLEY S., HANRAHAN P.: Light scattering from human hair fibers. In Proc. SIGGRAPH '03 (July 2003), vol. 22, pp. 780-791.
[30] MERTENS T., KAUTZ J., BEKAERT P., REETH F. V.: A self-shadow algorithm for dynamic hair using clustered densities. In Proc. of the Eurographics Symposium on Rendering '04 (June 2004), pp. 173-178.
[31] OTADUY M. A., JAIN N., SUD A., LIN M. C.: Haptic display of interaction between textured models. In Proc. of IEEE Visualization Conference, 2004, pp. 297-304.
[32] PLANTE E., CANI M.-P., POULIN P.: A layered wisp model for simulating interactions inside long hair. In Proc. of Eurographics (Sept 2001), pp. 139-148.
[33] ROSENBLUM R., CARLSON W., TRIPP E.: Simulating the structure and dynamics of human hair: modeling, rendering and animation. The Journal of Visualization and Computer Animation, J. Wiley, 2(4), 1991, pp. 141-148.
[34] SCHMITH C., KOSTER M., HABER J., SEIDEL H.-P.: Modeling hair using a wisp hair model. Research Report MPI-I-2004-4-001, 2004. http://domino.mpi-sb.mpg.de/internet/reports.nsf/0/d40109e7b2b4bd12c1256e45004ac335T
[35] VOLINO P., MAGNENAT-THALMANN N.: Animating complex hairstyles in real-time. In Proc. of ACM Symposium on Virtual Reality Software and Technology (VRST '04) (2004), pp. 41-48.
[36] WARD K., LIN M. C.: Adaptive grouping and subdivision for simulating hair dynamics. In Pacific Graphics Conference on Computer Graphics and Applications 2003 (2003), pp. 234-243.
[37] WARD K., LIN M. C., LEE J., FISHER S., MACRI D.: Modeling hair using level-of-detail representations. In Proc. of Computer Animation and Social Agents (CASA) 2003 (2003), pp. 41-47.
[38] WATANABE Y., SUENAGA Y.: Drawing human hair using the wisp model. In Special issue on Computer Graphics International '89 (CGI '89) (May 1989), vol. 7, pp. 97-103.
[39] XUE Z., YANG X. D.: V-HairStudio: an interactive tool for hair design. IEEE Computer Graphics and Applications, 21(3), 2001, pp. 36-42.
[40] YANG X. D., XU Z., YANG J., WANG T.: The cluster hair model. Graphical Models, Elsevier, 62(2), 2000, pp. 85-103.
[41] YU Y.: Modeling realistic virtual hairstyles. In Proc. of Pacific Graphics 2001 (2001), pp. 295-304.