Ray Tracing Maya Hair and Fur: Examensarbete
HENRIK RYDGÅRD
Master's thesis
Computer Science and Engineering programme
CHALMERS TEKNISKA HÖGSKOLA
Institutionen för data- och informationsteknik
Avdelningen för datorteknik
Göteborg 2007
Abstract
Rendering good-looking hair is regarded as a hard problem in realistic computer
graphics. Turtle, a commercial renderer written and sold by Illuminate Labs, is a
fast ray tracer with powerful support for global illumination that, until now, has
completely lacked the ability to render hair and fur. This thesis presents the work
that was done to implement rendering of Maya Hair and Maya Fur in Turtle,
discusses the various methods and tradeoffs that were used, and gives a number
of suggestions for future work. The result is a working method for rendering
Maya Hair and most aspects of Maya Fur. Performance is decent, although
memory usage is on the high side.
Summary
Rendering hair and fur is considered a relatively hard problem in realistic computer
graphics. Turtle, Illuminate Labs' commercial renderer, is a fast ray tracer with
advanced support for global illumination, which until now has lacked the ability
to render hair and fur. This report presents the work that was done to implement
rendering of Maya's hair and fur systems in Turtle, discusses various conceivable
methods and the tradeoffs that were made, and gives some suggestions for future
extensions. The result is a working method for rendering Maya Hair and most
aspects of Maya Fur. Performance is quite good, but memory consumption is
higher than desired.
Preface
This report presents the implementation and results of a master's thesis at the
Computer Science and Engineering program at Chalmers University of Technology
in Gothenburg. The work was performed at Illuminate Labs, Gothenburg.
I’d like to thank David Larson for inviting me to do the project, and the rest
of Illuminate Labs for all the support. I’d also like to thank Ulf Assarsson, the
examiner, for the feedback.
Contents

Preface
1 Introduction
 1.1 Organization of the report
 1.2 3D rendering
  1.2.1 Rasterization (Scanline rendering)
  1.2.2 Ray tracing
  1.2.3 The eternal debate
 1.3 Maya
  1.3.1 Maya renderers
  1.3.2 Maya Hair
  1.3.3 Maya Fur
 1.4 Problem statement
 1.5 Previous Work
  1.5.1 Categorisation
  1.5.2 Implementations
  1.5.3 More academic papers, and custom movie industry work
  1.5.4 Renderers that do Maya Hair and Fur
  1.5.5 Classification of this paper
2 Implementation
 2.1 Design
  2.1.1 Wish list
  2.1.2 Programming language
  2.1.3 Modeling hair
  2.1.4 Getting access to the hair and fur data from the 3D scene
  2.1.5 The fur reimplementation
   Fur placement
   Missing features
   Features that are different
  2.1.6 Intersecting hair
  2.1.7 Hair data structure
  2.1.8 Shading model
   Hair
   Fur
  2.1.9 Global illumination
 2.2 Problems
  2.2.1 Geometry aliasing
  2.2.2 The LOD catch-22
   A first attempt
  2.2.3 Pruning
   Making it reasonably viable
   Other proposed acceleration structures
  2.2.4 Memory optimization
  2.2.5 Implementation problems
  2.2.6 Theoretical performance
  2.2.7 An antialiasing hack
  2.2.8 An idea for faster transparency
A Ray differentials
Chapter 1
Introduction
1.1 Organization of the report
The report is organized as follows:
• Introduction
Provides some background, in the form of a quick introduction to the
differences between ray tracing and rasterization, a look at the various
renderers for Maya, and some discussion on previous work in the area.
• Implementation
Discusses various possible implementations and their problems
• Results
Presents some example images using the code added to Turtle in this thesis
work
• Conclusions
Quick summary of what was done
• Future work
Discusses the results and proposes further areas of work
1.2 3D rendering
Rendering is the process of taking numerical information describing a three-
dimensional world, and turning it into (ideally) realistic images. The two main
ways of doing this are rasterization (scanline rendering) and ray tracing.
Figure 1.1: Scanline rendering (rasterization)
Ray tracing generates images by shooting imaginary rays into the scene and
computing where they hit (figure 1.2), then using derived information such as
the surface normal and the locations of light sources to compute the color of
the surface at the hit points. If the user is willing to spend more time generating
images, ray tracing makes it easy to add secondary effects such as reflections,
shadows and global illumination by simply shooting more rays in smart ways.
These effects are much harder, or even impossible, to do correctly in a scanline
renderer.
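As a minimal, self-contained illustration of the principle (a toy sketch, not Turtle code: one hard-coded sphere, one primary ray per pixel center, ASCII output; all names are invented for the example):

#include <cmath>
#include <cstdio>

struct Vec { double x, y, z; };
static Vec sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Ray-sphere intersection: returns the closest positive hit distance, or -1.
// dir is assumed to be normalized, so the quadratic coefficient a == 1.
static double intersectSphere(Vec orig, Vec dir, Vec center, double radius) {
    Vec oc = sub(orig, center);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * c;
    if (disc < 0.0) return -1.0;
    double t = (-b - std::sqrt(disc)) / 2.0;
    return t > 0.0 ? t : -1.0;
}

int main() {
    const int W = 32, H = 32;
    Vec center = {0, 0, -3};
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            // Shoot one primary ray through the center of each pixel.
            Vec dir = {(x + 0.5) / W - 0.5, (y + 0.5) / H - 0.5, -1};
            double len = std::sqrt(dot(dir, dir));
            dir = {dir.x / len, dir.y / len, dir.z / len};
            double t = intersectSphere({0, 0, 0}, dir, center, 1.0);
            // "Shading": just mark hit vs. miss in ASCII.
            std::putchar(t > 0.0 ? '#' : '.');
        }
        std::putchar('\n');
    }
}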
Ray tracers generally achieve much lower levels of memory coherency than
rasterizers do, because neighbouring rays often spawn sub-rays that take com-
pletely different paths through the scene, and objects are generally not textured
and shaded in order, leading to complexity and cache thrashing. Also, a general
ray tracer must keep the entire scene in memory (there are ways around this,
but they are less than elegant and have a large performance cost).
Figure 1.2: Ray tracing
Since ray tracing speed, with proper acceleration structures, is O(log n) in the
number of primitives in the scene, while rasterization is generally O(n), ray
tracing proponents have long said that it will eventually overtake rasterization
as the fastest and most popular rendering algorithm: not only because of its
inherent computational complexity advantage, but also because of the ease with
which complicated optical effects can be implemented in ray tracers.
However, the incredible advances of the graphics hardware industry over the
last few years have prevented this from happening. Rasterization is still several
orders of magnitude faster for game-type interactive scenes, with ray tracers
only beating rasterizers in extreme cases of static high-polygon scenes, such as
rendering a 350 million polygon model of an entire Boeing 777 airplane [2],
where the O(n) performance of rasterizers really hurts compared to the O(log n)
performance of ray tracers.
The transition to ray tracing, or to various combinations of rasterization and
ray tracing, is already happening in the offline rendering scene, where real-time
performance is not necessary and rendering quality is the most important goal.
Rasterization is still in widespread use there, however.
The ray tracing proponents who emphasize the complexity advantage of ray
tracing also often forget one important thing: the idea that ray tracing is
O(log n) only applies when rendering a static scene, since all ray tracing accel-
eration structures are by definition at least O(n) to build (they must contain all
the geometry of the scene). Some structures do support incremental updates,
however, making many kinds of dynamic scenes more viable.
Recently, there has been some research into implementing hardware to ac-
celerate ray tracing, such as [21]. This hardware is of course also affected by
the problem of acceleration structure building.
Figure 1.3: The Maya main window
1.3 Maya
Alias Systems' Maya (figure 1.3) is one of the most popular 3D modelling and
rendering software packages. It is used by many big special-effects houses,
game development studios, advertising companies, and more. Many special
effects in blockbuster feature films are designed and rendered using Maya;
some examples of films where Maya has been used are The Fifth Element, The
Matrix, Shrek and Final Fantasy: The Spirits Within.
and the results of these features are what this report is about. A proof-of-concept
implementation of getting particle data out of Maya and then rendering purple
placeholder spheres was also done, but is outside the scope of this report.
1.5.1 Categorisation
Previous approaches to rendering hair and fur can be classified into a few dif-
ferent categories:
Figure 1.4: Shells + fins fur rendering in Shadow of the Colossus on the PS2. Nowhere
near movie-quality rendering, but thanks to good artistry it easily looks good enough for
a game, and it runs at real-time frame rates.
1.5.2 Implementations
Most papers focus either on hair or on fur. Since fur can be regarded as short
uniform hair, in many cases decent fur can in theory be rendered by many of
these methods.
Almost all previous implementations that belong to the third category actu-
ally render fur by rasterizing various varieties of paint strokes or layers. Usually,
hairs are splines that are split up into short straight-line segments for easy ren-
dering.
[13] is a combination of categories 2 and 3. They render individual hairs up
close, "clumps" when less detail is needed, and wide "strips" as the lowest
level of detail, used for example at large distances. They motivate the choice
to "clump" with hair's natural tendency to do just that in the presence of static
electricity, and due to the properties of the oils naturally present on hair strands.
Hoppe et al. [9] proposed a fur rendering algorithm that simply uses a large
number of transparent texture layers containing dots, extending out from the sur-
face of the model, called "shells". They also added "fins" extending out from
the edges between triangles, to improve the appearance from grazing view an-
gles. The result is a very fast way to blast low-quality but decent-looking short
fur onto a model using graphics hardware, an ideal scenario for games. This
method clearly belongs to category 2. It has been successfully used in a num-
ber of games; it can be spotted in for example Halo 2, Conker: Live
and Reloaded and Shadow of the Colossus (figure 1.4). This technique requires
nearly no extra geometry storage on hardware that supports "vertex shading",
like all modern consoles except the Nintendo Gamecube and Wii, but consumes
a huge amount of texture-mapping fill rate. Fortunately, many modern game con-
soles and all new PC graphics cards have enormous fill rate, often a number of
gigapixels per second.
Joe Alter’s Shave and a Haircut is a commercial fur and hair generator, which
has its own renderer that is meant to be used as a post-process, but can also
integrate into Mental Ray. The rendering techniques it uses are basically the
same as what Maya Software and Mental Ray do with Maya Fur, but it appears
to be highly regarded for its intuitive hair sculpting and combing tools. Maya
Fur also has such tools, but they appear to be quite clumsy in comparison.
1.5.5 Classification of this paper
Turtle is fundamentally a ray tracer, and as such a method for ray tracing hair
and fur is required for this project. It would certainly be feasible to add raster-
ized hair as a post-process, like Maya Software does, but there is no infrastruc-
ture in Turtle for doing this at present, and it would also negate one of Turtle's
main strengths: Turtle is usually used where realistic rendering of secondary
lighting (see 2.1.9) is required. Therefore, ray tracing was chosen, unlike in
most other papers.
The methods described in this paper mostly belong to the third category
(1.5.1). A variable-width cylinder segment primitive was integrated into Turtle,
and it is used to intersect and render the segments of both hair and fur strands.
The shading is not yet very sophisticated; it is essentially a variation of the
anisotropic shading method first introduced by Kajiya in [11]. "Level of detail"
through stochastic pruning, as introduced by Pixar in [17], is used to reduce
aliasing and shimmering, giving the method somewhat of a category 2 flavor.
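As a sketch of the underlying intersection math, the following intersects a ray with a fixed-radius cylinder segment, in the spirit of the classic ray/cylinder test [10]. Turtle's actual primitive varies the radius along the segment, which adds a linear radius term to the same quadratic; the code below is a simplified illustration and all names are invented for the example.

#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Intersect the ray orig + t*dir with a cylinder of radius r around the
// segment A..B. Returns true and the nearest hit distance in tHit. Rays
// starting inside the cylinder are not handled (only the near root is
// examined), which is enough for rays arriving from outside the strand.
bool intersectCylinderSegment(Vec3 orig, Vec3 dir, Vec3 A, Vec3 B,
                              double r, double &tHit)
{
    Vec3 axis = sub(B, A);
    double axisLen = std::sqrt(dot(axis, axis));
    Vec3 d = scale(axis, 1.0 / axisLen);       // unit axis direction

    // Strip the components parallel to the axis; what remains is a 2D
    // ray/circle intersection in the plane orthogonal to d.
    Vec3 oc = sub(orig, A);
    Vec3 vPerp = sub(dir, scale(d, dot(dir, d)));
    Vec3 oPerp = sub(oc,  scale(d, dot(oc,  d)));

    double a = dot(vPerp, vPerp);
    double b = 2.0 * dot(oPerp, vPerp);
    double c = dot(oPerp, oPerp) - r * r;
    double disc = b * b - 4.0 * a * c;
    if (a == 0.0 || disc < 0.0) return false;  // parallel to axis, or miss

    double t = (-b - std::sqrt(disc)) / (2.0 * a);
    if (t < 0.0) return false;

    // Reject hits beyond the end caps of the segment.
    double s = dot(sub(add(orig, scale(dir, t)), A), d);
    if (s < 0.0 || s > axisLen) return false;

    tHit = t;
    return true;
}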
Chapter 2
Implementation
2.1 Design
2.1.1 Wish list
• Ability to render all scenes with hair and fur that Maya Software and
Mental Ray can handle
• Preferably do so with better speed and/or quality than Maya Software and
Mental Ray.
2.1.4 Getting access to the hair and fur data from the 3D
scene
Fortunately, Maya Hair has a public API for accessing data like coordinates and
colors of each simulated hair.
There is, however, no publicly available API for accessing the individual hairs
of Maya Fur. But through some complicated, and unfortunately somewhat risky,
Maya API usage, it is possible to access all the parameters and attribute maps,
and reimplement the fur generator from scratch. This is what was done. The
method that was designed for intersecting Maya Hair works just as well for fur
hairs, and a similar lighting equation was used.
Fur placement
When generating fur, we need a way to place the hairs on the surface of the
model. Fortunately, it is pretty easy to place large numbers of hairs reasonably
uniformly across the surface of a triangle mesh, as the sketch below illustrates:
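One common approach, sketched here under the assumption of area-weighted sampling (all names are invented for the example): pick a triangle with probability proportional to its area, then pick a uniform barycentric point inside it.

#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

struct V3 { double x, y, z; };

static V3 lerp3(V3 a, V3 b, V3 c, double u, double v) {
    double w = 1.0 - u - v;
    return {w*a.x + u*b.x + v*c.x, w*a.y + u*b.y + v*c.y, w*a.z + u*b.z + v*c.z};
}

static double triArea(V3 a, V3 b, V3 c) {
    V3 e1 = {b.x-a.x, b.y-a.y, b.z-a.z}, e2 = {c.x-a.x, c.y-a.y, c.z-a.z};
    V3 n = {e1.y*e2.z - e1.z*e2.y, e1.z*e2.x - e1.x*e2.z, e1.x*e2.y - e1.y*e2.x};
    return 0.5 * std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
}

// Place 'count' hair roots uniformly (by area) over a triangle mesh.
std::vector<V3> placeHairRoots(const std::vector<V3> &verts,
                               const std::vector<int> &tris, // 3 indices/tri
                               int count, unsigned seed) {
    // Build a cumulative area table so triangles can be picked with
    // probability proportional to their area.
    std::vector<double> cumArea;
    double total = 0.0;
    for (size_t i = 0; i + 2 < tris.size(); i += 3) {
        total += triArea(verts[tris[i]], verts[tris[i+1]], verts[tris[i+2]]);
        cumArea.push_back(total);
    }

    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::vector<V3> roots;
    for (int i = 0; i < count; i++) {
        size_t t = std::lower_bound(cumArea.begin(), cumArea.end(),
                                    uni(rng) * total) - cumArea.begin();
        // Uniform barycentric sample: fold the unit square onto the triangle.
        double u = uni(rng), v = uni(rng);
        if (u + v > 1.0) { u = 1.0 - u; v = 1.0 - v; }
        roots.push_back(lerp3(verts[tris[3*t]], verts[tris[3*t+1]],
                              verts[tris[3*t+2]], u, v));
    }
    return roots;
}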
Missing features
Features that are different
(a) Maya SW (b) Turtle
Figure 2.1: Close-up of some hair. Note the uniform shading across the width of the
hairs, and the less-than-perfect shapes, caused by approximations in the intersection
formulas or, in the case of Maya Software, by a rasterization algorithm that cheats.
the comparatively high speed of computation of modern CPUs, reducing the
memory consumption can directly improve performance, more than covering
the cost of the additional computation. This means that in many cases, what at
first looks like a speed/size tradeoff is in fact a win/win situation. However, this
reasoning ignores the additional development costs, of course.
Figure 2.2: Computing the hair vertex tangents used for shading
Hair
Fur
It is not actually correct to pass the hair tangent into Shade, but it delivers
okay-looking results: for many types of lights the normal is mostly irrelevant in
this step, and a wrong but continuous normal is better than no normal or a
random one.
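For reference, a minimal sketch of the Kajiya-style anisotropic strand shading that the implementation's model is a variation of [11]. The function and constant names are illustrative, not Turtle's Shade interface.

#include <algorithm>
#include <cmath>

struct V { double x, y, z; };
static double dot(V a, V b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Kajiya-style anisotropic shading for a hair strand. t is the (unit)
// hair tangent, l the direction to the light, e the direction to the eye.
// No surface normal is needed: the strand is treated as a thin cylinder,
// so only the angles to the tangent matter.
double shadeHair(V t, V l, V e, double kd, double ks, double power) {
    double tl = dot(t, l);
    double te = dot(t, e);
    double sinTL = std::sqrt(std::max(0.0, 1.0 - tl * tl));
    double sinTE = std::sqrt(std::max(0.0, 1.0 - te * te));

    // Diffuse term: brightest when the light is perpendicular to the strand.
    double diffuse = kd * sinTL;
    // Specular term: a cone of reflected directions around the tangent.
    double specular = ks * std::pow(std::max(0.0, tl * te + sinTL * sinTE),
                                    power);
    return diffuse + specular;
}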
It is quite slow, though, as is to be expected with all the detail. Also, the hemi-
sphere sampling, which currently biases directions near the normal as more
important, is wrong for hair. There was not enough time to find a way to
integrate a different final gather sampling method.
2.2 Problems
2.2.1 Geometry aliasing
As mentioned in Shading Model above, hair is composed of thousands of small
primitives. Any high-detail geometry is prone to geometry aliasing: the point
sampling that ray tracing amounts to will often miss primitives, because the
distance between two neighbouring rays may be larger than the entire width of
a strand of hair, as shown in figure 2.3. Adaptive supersampling can help to
reduce, but not completely work around, the problem.
There are other possible ways to work around this problem. Pixar calls their
method Stochastic Pruning [17]: when the primitives are small enough on
screen, replace every N of them with a single primitive, upscaled N times.
This is the solution that was tried. Pixar also proposes reducing the contrast
of the primitive colors; the motivation for this can be found in their paper. That
worked well for their bush models with highly varying shades of green, but the
shades of the hairs generated by Maya tend to be more uniform, and I'm not
sure how much of a quality benefit it would produce. There is never enough
time, and other features were prioritized.
Figure 2.3: With standard regular grid sampling, rays are infinitely thin and only fired
through pixel centers, so entire objects can be missed completely. Note that the pattern
of dark dots does not resemble the underlying geometry at all. This phenomenon, as
predicted by the Nyquist-Shannon sampling theorem, is called aliasing, and is caused by
the underlying data containing frequencies exceeding half the sample rate (in this case,
the ray density).
Figure 2.4: Stepwise ray shooting vs bounding volume ray shooting
A first attempt
The first approach was to build eight grids of the scene, each with a lower LOD
than the previous. This alone doesn't help, as we still need a method for choosing
which grid to shoot each ray into. Fortunately, Turtle supports the concept of ray
differentials, and by using those it is possible to compute how "thick" a ray is at
a certain distance. When shooting rays in Turtle, it is also possible to limit the
length of the ray.
Combining these two features, we can do the following (sketched below):
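Roughly, as a sketch under stated assumptions (an eight-level grid chain, a factor-2 width step per level, and a ray footprint that grows linearly with distance; the Grid interface and all names are invented for the example, not Turtle's actual code):

// Sketch of stepwise LOD ray shooting. Grid k stores hairs of width
// baseWidth * 2^k; each grid is traced only over the distance interval
// where the ray footprint matches that grid's hair width.

struct Hit { bool found; double t; };

// Stand-in for one of the LOD grids (assumed interface: a ray can be
// traced over a limited [tMin, tMax] interval).
struct Grid {
    Hit trace(double tMin, double tMax) {
        (void)tMin; (void)tMax;
        return {false, 0.0};
    }
};

Hit shootLodRay(Grid grids[], int levels,
                double baseWidth, // hair width at the finest LOD
                double spread)    // footprint growth per unit distance,
                                  // obtained from the ray differentials
{
    double tMin = 0.0;
    for (int k = 0; k < levels; k++) {
        // Distance at which the ray grows thicker than this level's hairs;
        // past it, switch to the next, coarser grid.
        double tMax = baseWidth * double(1 << k) / spread;
        if (k == levels - 1) tMax = 1e30; // coarsest grid: no length limit
        Hit h = grids[k].trace(tMin, tMax);
        if (h.found) return h;
        tMin = tMax;
    }
    return {false, 0.0};
}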
The obvious problem with this solution is that you cannot expect "correct" LOD
selection when there are several different hair systems, each with a different
hair thickness, in the same scene. This problem is basically unavoidable when
using a straightforward nested grid, as Turtle does in its faster small-scene
mode.
Figure 2.5 shows a color visualization of the switching of LOD levels.
(a) Normal rendering (b) LOD color visualization
Figure 2.5: LOD level switching by distance. Note that the green hairs are just as thick
as the yellow hairs in image space, but there are more of them. Without the colorization,
it’s quite difficult to spot this.
2.2.3 Pruning
Now that we have a working, although costly, system for storing and ray tracing
different levels of detail of the same scene, we have to decide when it is time
to switch to a lower level of detail. An intuitive measure would be to switch
when the width of a hair drops below about a pixel. This is also the result we
get if we switch to a lower level of detail (with thicker hairs) when the ray
grows thicker than a hair at the current level of detail. See figure 2.6.
To generate a LOD using the pruning method, the model has to be decimated.
One simple approach is to double the hair width for each LOD, and halve the
hair count by randomly removing hairs. Since we use 8 levels, this means that
2^8 = 256 hairs are replaced with a single, massively thick one at the lowest
level. This will generally only happen when viewing a head from a large
distance, shrinking it to a few pixels in screen space, where any detail is useless
anyway. One could also reduce the number of linear segments each hair is
made up of at each LOD level; this has not been implemented.
On the other hand, in many scenes such a large range of pruning may not be
wanted or necessary, so a softer slope could be used, making the transitions
less noticeable. We could call the approach above the d = 2.0 method, where
d is the decimation coefficient. d = 1.5 would also be possible, and is the
default value for this parameter in the implementation.
Also, it is not necessarily correct to replace, say, 5 hairs with a single 5 times
thicker one. The 5 hairs will often overlap, reducing the screen-space area
they cover. Replacing them with a single thick hair may therefore increase
the perceived density of the hair, and decrease the amount of light that passes
through cracks. This effect is visible in figure 2.6. A potential solution, sketched
below, would be to add a configurable option specifying a percentage of
"thickness compensation" (such as 80%, for a 4 times thicker hair instead of 5 times).
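As a sketch of how one pruned level could be generated from the previous one, with the decimation coefficient d and a thickness-compensation factor (the strand type and all names are invented for the example):

#include <algorithm>
#include <random>
#include <vector>

struct HairStrand { /* control points, color, ... */ double width; };

// Build one pruned LOD level: randomly keep 1/d of the strands and make
// the survivors d times thicker, damped by a thickness-compensation
// factor (e.g. 0.8) to account for overlap among the removed strands.
std::vector<HairStrand> pruneLevel(std::vector<HairStrand> strands,
                                   double d, double compensation,
                                   unsigned seed) {
    std::mt19937 rng(seed);
    std::shuffle(strands.begin(), strands.end(), rng); // random survivors
    size_t keep = std::max<size_t>(1, size_t(strands.size() / d));
    strands.resize(keep);
    for (HairStrand &s : strands)
        s.width *= d * compensation; // e.g. 4x instead of 5x thicker
    return strands;
}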
The pruning of hair can be viewed as a generalization of the multiple rep-
resentations proposed in [13]. It shares many of the same issues, like visible
transitions between levels of detail. However, due to the way the LOD switch
happens at an exact distance for each ray sample rendered, not on the model
(a) Maximum pruning (b) Less pruned
Figure 2.6: Different pruned LOD levels, rendered at the same scale. At a distance, they
look almost identical, with the most pruned one the least aliased.
Making it reasonably viable
Having to build 8 full grids of the entire scene, even though the only difference
is the hair detail, is prohibitively expensive. A solution that makes the cost less
outrageous is to keep the hair in a separate chain of grids, shoot rays both into
the hair grid chain and into the normal scene grid, and always select the closest
hit. Although there is a performance cost, this method works well and was
implemented.
Turtle has another rendering mode called Large-Scene, where every object is
enclosed in its own grid. In the future, the hair and fur code could be extended
to support this system. That would also solve the problem that only one hair
thickness per scene currently works correctly with the LOD scheme, because
every hair system would get its own set of LOD grids.
Other proposed acceleration structures
A group at the University of Texas has developed [6] a lazily built multireso-
lution kD-tree [3] acceleration structure. However, it depends on the scene
geometry consisting mostly of displacement-mapped low-poly models, which is
certainly not true for the majority of scenes. Something similar could be
implemented in Turtle, though, in another project.
An alternative system, known as R-LOD [20], has been proposed. It reduces
complex triangle meshes to node-filling planes in a kD-tree. This work is
very interesting for triangle-mesh LOD, but unfortunately hair cannot really be
approximated well using only planes. At a distance it is not inconceivable,
but actually performing the reduction would not be easy, and it would only work
somewhat decently for dense, uniform hair. Curly hair would completely break
such a system.
A standard kD-tree could also be tried, to see how well it would compete
with Turtle’s grids. However, there was not enough time to implement and
benchmark this.
Disassembling and analyzing the Maya Fur plugin binary is another possible
approach, but doing so would be extremely time-consuming due to its large
size, and likely illegal in many countries.
The basics of the MEL scripting language [7] had to be studied in order to
integrate some UI for configuring the hair rendering into Maya. Using pre-
viously existing code for other Turtle parameters as a guide, it was not a very
difficult undertaking.
(from the cumulative inverse transparency so far, and whatever else is
necessary) and keep walking the acceleration structure, until the ray hits an
opaque object.
• Sort the list by depth (or assume that hits will arrive roughly in depth
order)
• Throw away all but the N closest hits
• Reweight them to a sum of 1
Since this would avoid restarting the ray all the time, which has considerable
overhead in Turtle, it would likely be much faster than casting new rays at every
transparent hit.
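As a sketch of how the collected hits might then be composited (the hit record layout, the cap N = maxHits and all names are invented for the example, not Turtle code):

#include <algorithm>
#include <vector>

struct Color { double r, g, b; };

struct TransparentHit {
    double t;     // distance along the ray
    double alpha; // opacity of the hair at this hit
    Color color;  // shaded color at this hit
};

// Composite at most maxHits transparent hair hits front to back,
// renormalizing the weights so they sum to one, as in the list above.
Color compositeHits(std::vector<TransparentHit> hits, size_t maxHits,
                    Color background) {
    std::sort(hits.begin(), hits.end(),
              [](const TransparentHit &a, const TransparentHit &b) {
                  return a.t < b.t;
              });
    if (hits.size() > maxHits)
        hits.resize(maxHits); // throw away all but the N closest hits

    // Front-to-back weights: each hit is seen through the accumulated
    // transparency of everything in front of it.
    std::vector<double> w(hits.size());
    double transmit = 1.0, sum = 0.0;
    for (size_t i = 0; i < hits.size(); i++) {
        w[i] = transmit * hits[i].alpha;
        transmit *= 1.0 - hits[i].alpha;
        sum += w[i];
    }
    if (sum <= 0.0) return background; // nothing visible was hit

    Color out = {0, 0, 0};
    for (size_t i = 0; i < hits.size(); i++) {
        double k = w[i] / sum; // reweight to a sum of 1
        out.r += k * hits[i].color.r;
        out.g += k * hits[i].color.g;
        out.b += k * hits[i].color.b;
    }
    return out;
}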
Chapter 3
3.1 Results
3.1.1 Comparisons with Mental Ray and Maya Software
In some corner cases, Turtle augmented with the new hair and fur code was
able to outperform both Maya Software and Mental Ray. Most of the time,
however, Maya Software would be fastest, Turtle second and Mental Ray last,
at comparable antialiasing quality. Maya Software has a huge advantage in
that it does not ray trace the hairs, but simply rasterizes them. It appears
that Mental Ray has some of this advantage too, as it does not appear to render
fur or hair in reflections! However, fur and hair do seem to affect global
illumination (although hair is not lit correctly by it), so Mental Ray's fur
rendering could be a weird hybrid of some sort.
In the comparisons, Turtle has an unfair advantage over Mental Ray when
rendering fur: it wasn't possible to implement transparency efficiently in Turtle
(see 2.2.7), so that effect is normally simply skipped, while Mental Ray imple-
ments it with reasonable quality. This is also part of the reason that Mental
Ray's fur has a somewhat softer look than Turtle's.
All in all, it’s very hard to compare performance numbers of the three render-
ers since they all implement different feature sets with different quality trade-
offs.
It bears repeating that the main focus of this work was not to create
innovative new hair rendering techniques, but to find reasonable ways to render
Maya Hair and Fur in Turtle.
Figure 3.1: Maya UI customization for hair
3.2 Performance
All performance statistics are from a single-core Athlon 64 3000+, running 32-
bit Windows XP and Maya 7. Keep Turtle’s inherent advantage mentioned in
3.1.1 in mind when evaluating the fur rendering numbers.
(a) Maya SW (scanline renderer): 8s (b) Turtle (raytraced): 20s (The white sparkles
are specular highlights; these need another viewing angle to be visible in the other two
renderers.)
3.3 Conclusions
Maya Hair and Maya Fur can now be rendered decently with Turtle, with a few
missing features and somewhat different lighting. Large amounts of fur still
consume far too much memory, although there is a lot of room for reducing the
memory consumption.
There are still many possible improvements left to make, as outlined in 3.4.
(a) Maya SW (scanline renderer): 14s (b) Turtle (raytraced): 39s
Figure 3.4: The Rings of Fur (Maya Fur). From left to right: Duckling, Squirrel, Llama
fur presets
3.4 Future work
The missing features mentioned in 2.1.5 could be implemented.
How transparency should be handled is open to discussion. LeBlanc et al. [1]
note that layer upon layer of transparent hair will "saturate" and not let any
light through, except possibly very strong HDR-style lights. (The rest of their
paper details a conventional rasterization approach.)
The shading model is still substantially different from Maya's. It can be
argued which one is more correct, but Maya's fur shading model usually
looks better and more varied. On the other hand, it appears to approximate all
kinds of lights with standard directional lights. For the hair shading model, the
differences mostly manifest themselves as differently positioned highlights. A
more advanced direct shading model, like the one described in [19], could also
be implemented. Alternatively, a more expensive path tracing method such as
[15], simulating the multiple scattering of light in hair, would provide still more
realistic results, at high performance and complexity costs.
A softer look could be achieved with transparency, by using the method
described in 2.2.8.
Shadowing is a problem that hasn't been investigated very deeply. Using
ray-traced shadows works, but they will be hard (sharp), which doesn't look very
good on hair; real hair transmits quite a lot of light and generally casts
fairly fuzzy shadows. Many other renderers use things like deep shadow maps
instead. There are also some possible approaches based on voxel grids that track
the hair density in regions of space, such as [16], and a similar method in [4],
although some rather large architectural changes to Turtle might be necessary to
integrate such techniques.
Recently, integrating simple texture-space ray tracing into the pixel
processing of rasterizers has become a popular method for adding fake displacement
mapping to textures in games. It is possible that Kajiya's old algorithm [12] could
be resurrected to run on the newest graphics cards.
Figure 3.5: Hair, a mirror, and fur rendered with global illumination (final gather).
Render time: 30 min.
Bibliography
[1] André M. LeBlanc, Russell Turner, Daniel Thalmann. Rendering Hair using
Pixel Blending and Shadow Buffers. Computer Graphics Laboratory, Swiss
Federal Institute of Technology, 1991.
[2] Andreas Dietrich, Ingo Wald, Philipp Slusallek. Interactive visualization of
exceptionally complex industrial CAD datasets. In Proceedings of ACM
SIGGRAPH 2004, 2004.
[3] Jon Louis Bentley. Multidimensional binary search trees used for associative
searching. ACM Press, 1975.
[4] Florence Bertails, Clément Ménier, Marie-Paule Cani. A Practical
Self-Shadowing Algorithm for Interactive Hair Animation. GRAVIR-
IMAG/INRIA, Grenoble, France, 2005.
[5] Dan B. Goldman. Fake fur rendering. http://www.cs.washington.edu/
homes/dgoldman/fakefur/, 1997.
[6] Gordon Stoll, William R. Mark, Peter Djeu, Rui Wang, Ikrima Elhassan. Ra-
zor: An architecture for dynamic multiresolution ray tracing. Technical
Report 06, 2006.
[7] David A. D. Gould. Complete Maya Programming. Morgan Kaufmann Pub-
lishers/Elsevier Science, 2003.
[8] Homan Igehy. Tracing ray differentials. In Proceedings of ACM SIGGRAPH
1999, 1999.
[9] J. Lengyel, E. Praun, A. Finkelstein, H. Hoppe. Real-Time Fur over Arbitrary
Surfaces. In ACM Symposium on Interactive 3D Graphics 2001, pp. 227-232,
2001.
[10] Joseph M. Cychosz, Warren N. Waggenspack, Jr. Intersecting a ray with a
cylinder. In Graphics Gems IV, Academic Press, 1994.
[11] James Kajiya. Anisotropic reflection models. In Proceedings of ACM SIG-
GRAPH 1985, 1985.
[12] James T. Kajiya, Timothy L. Kay. Rendering fur with three dimensional
textures. In Proceedings of ACM SIGGRAPH 89, Addison-Wesley, 1989.
[13] Kelly Ward, Ming C. Lin, Joohi Lee, Susan Fisher, Dean Macri. Model-
ing hair using level-of-detail representations. In Computer Animation and
Social Agents, 2003.
[14] Martin Koster, Jörg Haber, Hans-Peter Seidel. Real-Time Rendering of
Human Hair using Programmable Graphics Hardware. MPI Informatik,
Saarbrücken, Germany.
[15] Jonathan T. Moon, Stephen R. Marschner. Simulating multiple scat-
tering in hair using a photon mapping approach. In Proceedings of ACM
SIGGRAPH 2006, 2006.
[16] R. Gupta, N. Magnenat-Thalmann. Scattering-based interactive hair render-
ing. http://www.miralab.unige.ch/papers/376.pdf, 2005.
[17] Robert L. Cook, John Halstead. Stochastic Pruning. Pixar Animation Stu-
dios, 2006.
[18] Thorsten Scheuermann. Hair Rendering and Shading (GDC presentation).
ATI Research, Inc., 2004.
[19] Stephen R. Marschner, Henrik Wann Jensen, Mike Cammarano, Steve Worley,
Pat Hanrahan. Light scattering from human hair fibers. http://
graphics.stanford.edu/papers/hair/, 2003.
[20] Sung-Eui Yoon, Christian Lauterbach, Dinesh Manocha. R-LODs: Fast LOD-
based ray tracing of massive models. http://gamma.cs.unc.edu/RAY/,
2006.
[21] Sven Woop, Jörg Schmittler, Philipp Slusallek. RPU: A programmable
ray processing unit for realtime ray tracing. In Proceedings of ACM SIG-
GRAPH 2005, July 2005.
Appendix A
Ray differentials
Figure A.1: Primary rays crossing the image plane, with the ray x direction differential
drawn
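As a rough illustration of the idea (in the spirit of Igehy [8]; not Turtle's implementation, and all names are invented for the example), the footprint of a primary ray at distance t can be estimated from the offset ray through the neighbouring pixel:

#include <cmath>

struct Vec3d { double x, y, z; };
static Vec3d sub(Vec3d a, Vec3d b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double length(Vec3d v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

// A ray carrying its x-direction differential: 'dir' is the direction
// through the pixel center and 'dirDx' the direction through the same
// point shifted by one pixel in x. The footprint ("thickness") of the
// ray at distance t is approximately the distance between the two rays
// there, which for primary rays grows roughly linearly with t.
struct DiffRay {
    Vec3d orig, dir, dirDx;
};

double footprintWidth(const DiffRay &r, double t) {
    Vec3d p  = {r.orig.x + t*r.dir.x,   r.orig.y + t*r.dir.y,   r.orig.z + t*r.dir.z};
    Vec3d px = {r.orig.x + t*r.dirDx.x, r.orig.y + t*r.dirDx.y, r.orig.z + t*r.dirDx.z};
    return length(sub(px, p));
}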