Volume Modelling: Representations and Advanced Operations
V.V. Savchenko 1), A.A. Pasko 1), A.I. Sourin 2), T.L. Kunii 3)
1) The University of Aizu, Aizu-Wakamatsu City, Fukushima, 965-80, Japan
2) Nanyang Technological University, Singapore
3) Hosei University, Tokyo, Japan
E-mail: [email protected]
Abstract
This paper presents our approach to volume modelling
which combines volume representations by voxel data and
by continuous real functions. We discuss the main differences between direct volume visualization and modelling
with voxel data, and questions of conversion between the two representations, including volume reconstruction from contour data. We illustrate the approach by several advanced
operations on a volumetric object: set-theoretic operations, sweeping, hypertexturing, feature-based sculpting,
splitting, spatial and temporal transformations.
1. Introduction
The rapidly developing field of volume graphics provides tools for the visualization and manipulation of voxel data represented by scalar values at the nodes of a regular 3D grid. Its long-term goal is to replace "polygon pushers" by rendering hardware based on a volume buffer. In addition to rendering, volume graphics provides only a few operations on volumetric objects, namely Boolean operations and metamorphosis in discrete space. However, "very little has yet been done on the topic of volume modelling" [1]. Among the open problems pointed out by Kaufman et al. [2] are:
• Sculpturing in discrete space.
• Feature mapping and warping.
• Morphing and changing of the model.
• Intermixing volumetric and analytically defined geometric objects.
These topics are being intensively investigated in geometric modelling. The above-mentioned operations, and many others such as blending, sweeping and hypertexturing, have found quite general solutions for geometric solids represented by continuous real functions as f(x,y,z) ≥ 0. This representation is usually called an implicit model or a function representation. Set-theoretic solids have been successfully included in this representation with the application of R-functions and their modifications [3-11].
The paper addresses the following questions:
• How to include a volumetric object as a primitive
when using or designing a solid modeller?
• How to visualize a functionally defined object using
volume visualization?
• Why should intermixing voxel objects and analytically defined geometric objects require voxelization of the latter, with the resulting loss of accuracy?
• How to apply well-known and recently proposed advanced geometric operations to volumetric objects?
This paper mainly presents the current state of our volume modelling project. However, we also describe new results on function-to-voxel conversion, sweeping by volumetric objects, and an improved hair modelling technique. In the following sections, we discuss the main differences between direct volume visualization and modelling with voxel data, describe methods of voxel-to-function and function-to-voxel conversion and related problems, and illustrate our approach by some advanced operations on a volumetric object.
2. Volume visualization and modelling
Volume visualization is one of the most interesting and
fast-growing areas in computer graphics today. In terms of
its impact on modern computer graphics, it is sometimes
compared to "raster graphics revolution" that happened in
the early 1980s. Today's volume visualization system
creating high-quality images from scalar and vector multidimensional data are actually using the model that was
classified as spatial occupancy enumeration more than 15
years ago [12]. Most volume visualization techniques are
based on one of about five foundation algorithms [13].
Some graphics workstations recently arrived in the market
can perform high-speed volume rendering that could be
considered as a step toward replacing "polygon pushers"
with "voxel pushers".
Comparing raster graphics with its "next of kin", volume visualization, we cannot avoid identifying a problem common to both of them: the limited set of operations that can be directly applied to pixels and voxels. To perform some complex geometric operation, one has to raise the level of abstraction and consider other geometric models that can then be visualized with the raster/voxel model.
Volume visualization can be thought of as mapping 3D
objects onto 2D images. The scope of modelling is quite
different. It starts with some 3D objects, manipulates them
and results in new 3D objects. A generated object model
can be used further in different ways:
• Visualization.
• Calculation of integral properties (volume, center of inertia, and so on).
• Rapid prototyping.
• Manufacturing.
• Solving application problems.
There is a trend in volume graphics to distinguish between modelling and visualization. The research in
modelling includes scattered data interpolation [19],
transformation from one voxel data structure to another by
manual 3D painting [14,15], Boolean operations and linear transformations [16,17], and metamorphosis [18].
We believe that spatial occupancy enumeration requires unification with another model that would support more sophisticated operations and the ability to use the voxel model together with other solid models. This higher-level geometric abstraction would allow one to use all the abilities of solid modelling together with the advantages of volume visualization.
Such a fruitful alliance can be achieved by combining spatial occupancy enumeration with the function representation of solid objects. Here, a real function f(x, y, z, vol) ≥ 0 can be introduced, where vol is a voxel data parameter and x, y, z are Cartesian coordinates. This function has positive values at internal points, is equal to zero at boundary points, and is negative at points outside the solid.
Elsewhere [4], we show how this function representation (F-rep) can be used in solid modelling. In particular, we showed that solid models such as Constructive Solid Geometry (CSG), sweeping, and skeleton-based implicits can be unified on the platform of F-rep. When modelling CSG objects, we define solid primitives with the inequalities f(x,y,z) ≥ 0 and use an exact analytic representation of the set-theoretic operations, either in min/max form or in a more complex form with R-functions [3,4]. The latter form provides any desired continuity of the resulting function. Different solid object metamorphoses and other sophisticated operations can be defined as function superpositions whose result is in turn defined by a real function f(x,y,z) ≥ 0. This approach allowed us to attack, and often to find quite interesting solutions to, the problems of nonlinear metamorphosis, sculpting, sweeping by an arbitrary CSG solid, NC machining, and hypertexturing.
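As a minimal sketch (not taken from the authors' system), the fragment below shows the defining-function convention and the set-theoretic operations in both the min/max form and a commonly used R-function form; the sphere primitive is included only as an example.

#include <math.h>

/* Defining function convention: f >= 0 inside the solid, f = 0 on its
   boundary, f < 0 outside. */

/* Set-theoretic operations in the simplest min/max form. */
double intersection_min(double f1, double f2) { return fmin(f1, f2); }
double union_max(double f1, double f2)        { return fmax(f1, f2); }

/* The same operations written with R-functions, which give a result that
   is smooth everywhere except where f1 = f2 = 0. */
double r_intersection(double f1, double f2)
{
    return f1 + f2 - sqrt(f1 * f1 + f2 * f2);
}
double r_union(double f1, double f2)
{
    return f1 + f2 + sqrt(f1 * f1 + f2 * f2);
}
double r_subtraction(double f1, double f2)    /* f1 \ f2 */
{
    return r_intersection(f1, -f2);
}

/* Example primitive: a sphere of radius r centred at (cx, cy, cz). */
double sphere(double x, double y, double z,
              double cx, double cy, double cz, double r)
{
    double dx = x - cx, dy = y - cy, dz = z - cz;
    return r * r - (dx * dx + dy * dy + dz * dz);
}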
Using F-rep together with voxel data allows us to raise the level of abstraction from the direct rendering used in volume visualization to modelling with voxel data. We are able to transform existing voxel data and create new volumes by applying geometric operations. In the following sections, we combine our F-rep model with the spatial occupancy enumeration technique and implement some advanced operations.
3. Conversions between object representations
In this section, we discuss conversions between two
representations:
• Voxel-to-function conversion is useful when applying
advanced operations to the object, especially while
intermixing voxel data with functionally defined objects.
• Function-to-voxel conversion can be applied to reduce the time of function evaluations and to use volume visualization in applications.
3.1 Voxel-to-function conversion
In this section, we survey methods of converting voxel data to a function and discuss related problems. Our concept is to treat volumetric objects like any other functionally represented primitives when applying advanced operations. The difference between traditional primitives (sphere, torus, blobby objects) and voxel objects is that the latter are defined by an unknown function of three variables sampled at the nodes of a regular grid, or even at scattered points, with values that are usually positive integers.
To provide function continuity, one can apply the following well-known procedures to voxel data:
Interpolation and smoothing. This can include simple trilinear interpolation for a regular grid (a sketch is given after this list of procedures). In the case of scattered data, one can apply volume splines, multiquadrics and other methods [19]. Here, the problem of choosing an adequate function for local or global interpolation arises. To perform any further operations, the modeller has to keep the raw data.
Filtering techniques. The raw data are processed by different kinds of filters, and the modeller has to keep the raw data for function evaluation. If an approximate description of the data is known and the error has a high-frequency distribution, filtering by convolution can be applied. These algorithms are simple and fast.
Spectral algorithms. Voxel data can be replaced by analytical descriptions with the use of the Fourier transform or wavelets [20]. This frees the modeller from keeping the raw data, but the calculations become much slower.
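A minimal sketch of the first procedure (trilinear interpolation on a regular grid followed by level-value subtraction, as also used in section 4) is given below; the data layout and the restriction to the unit cube are illustrative assumptions, not the authors' implementation.

#include <stddef.h>

/* Voxel data: values on a regular nx x ny x nz grid (each >= 2) over the
   unit cube, stored in a flat array. */
typedef struct {
    const double *v;
    size_t nx, ny, nz;
    double level;          /* iso-level to subtract */
} Volume;

static double sample(const Volume *vol, size_t i, size_t j, size_t k)
{
    return vol->v[(i * vol->ny + j) * vol->nz + k];
}

/* Continuous defining function obtained by trilinear interpolation of the
   grid values followed by subtraction of the level value, so that
   f > 0 inside, f = 0 on the chosen isosurface, f < 0 outside. */
double volume_function(const Volume *vol, double x, double y, double z)
{
    /* map (x,y,z) in [0,1]^3 to cell indices and local coordinates */
    double gx = x * (vol->nx - 1), gy = y * (vol->ny - 1), gz = z * (vol->nz - 1);
    size_t i = (size_t)gx, j = (size_t)gy, k = (size_t)gz;
    if (i >= vol->nx - 1) i = vol->nx - 2;
    if (j >= vol->ny - 1) j = vol->ny - 2;
    if (k >= vol->nz - 1) k = vol->nz - 2;
    double tx = gx - i, ty = gy - j, tz = gz - k;

    double c00 = sample(vol,i,j,k)     * (1-tx) + sample(vol,i+1,j,k)     * tx;
    double c10 = sample(vol,i,j+1,k)   * (1-tx) + sample(vol,i+1,j+1,k)   * tx;
    double c01 = sample(vol,i,j,k+1)   * (1-tx) + sample(vol,i+1,j,k+1)   * tx;
    double c11 = sample(vol,i,j+1,k+1) * (1-tx) + sample(vol,i+1,j+1,k+1) * tx;
    double c0 = c00 * (1-ty) + c10 * ty;
    double c1 = c01 * (1-ty) + c11 * ty;
    return c0 * (1-tz) + c1 * tz - vol->level;
}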
After obtaining a continuous positive function, a given level value is subtracted to make the defining function change its sign on the chosen isosurface. After these procedures are done, the defining function can be used in set-theoretic and other operations. However, several conceptual problems exist:
• Discontinuous density function. In practice, discrete voxel data in some applications represent the object's density. The density function of a 3D object is discontinuous on the object boundary. If we are given the exact density function (the emergence of such models is predicted in [21]), some special measures will be necessary to convert it to a continuous defining function.
• Distance property. Some operations, for example blending, require a defining function to have a distance property, which means a monotonic decrease with the distance from the object. To provide this property outside the volume, one can intersect the object with a bounding volume using R-functions. However, to provide this property everywhere outside the object, the bounding volume has to be as close to the surface as possible, at least in the area of interest. Note that in the conversion from a function to voxel data, it is preferable to generate integer rather than binary voxel data, to preserve the distance property to some extent.
• Internal zeroes. The voxel-to-function conversion can result in nested disjoint objects. Moreover, isolated points, lines and surfaces with zero function value can appear inside the object. This is known as the "internal zeroes" problem, which can cause incorrect point membership classification. For example, if only the function value is used for the classification, an "internal zero" point can be classified as a boundary point. Usually, internal zeroes are avoided when constructing defining functions, and special techniques (segmentation or space filling) have to be applied for this in the voxel-to-function conversion.
3.2 Function-to-voxel conversion: case study of reconstruction
It may be necessary to convert a functionally represented object to voxel data. The simplest way to do this is to generate binary voxel data by assigning the value 1 to points where the defining function is non-negative and the value 0 otherwise. This leads to so-called object-space aliasing and requires a filtering technique to be applied to improve the surface quality [22]. On the other hand, having a real continuous function, one can try to use it to improve the resulting data. If the range [Fmin, Fmax] of the function values is known, the following scaling function gives one-byte positive integers for the voxelization:
I = 255 (F − Fmin) / (Fmax − Fmin)
The level value on the object surface corresponds to F = 0.
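A minimal sketch of this scaling applied over a regular grid is given below; the grid layout, the clamping to [Fmin, Fmax] and the returned level value are illustrative assumptions, not the authors' implementation.

#include <stdlib.h>

/* Voxelize a defining function F into one-byte integers using
   I = 255 (F - Fmin) / (Fmax - Fmin); the isosurface F = 0 maps to the
   level value 255 * (-Fmin) / (Fmax - Fmin). */
unsigned char *voxelize(double (*F)(double, double, double),
                        size_t n,              /* n x n x n grid over [0,1]^3 */
                        double Fmin, double Fmax,
                        unsigned char *level)  /* out: voxel value of F = 0 */
{
    unsigned char *vox = malloc(n * n * n);
    if (!vox) return NULL;
    double scale = 255.0 / (Fmax - Fmin);

    for (size_t i = 0; i < n; ++i)
        for (size_t j = 0; j < n; ++j)
            for (size_t k = 0; k < n; ++k) {
                double f = F(i / (double)(n - 1),
                             j / (double)(n - 1),
                             k / (double)(n - 1));
                if (f < Fmin) f = Fmin;        /* clamp to the known range */
                if (f > Fmax) f = Fmax;
                vox[(i * n + j) * n + k] = (unsigned char)(scale * (f - Fmin));
            }
    *level = (unsigned char)(scale * (0.0 - Fmin));
    return vox;
}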
In this section, we describe our case study: reconstruction of an object from cross-section contour points and its further voxelization for the manipulation and rendering of reconstructed objects stored in a volume raster. Here, the reconstruction includes the following steps:
1. Retrieving and editing contours represented by 2D point coordinates;
2. Calculation of the parameters of a defining function for each contour;
3. Function evaluation on a sparse grid;
4. Sample-based approximation of function values on the given raster;
5. Generation of voxel data.
Traditionally, CSG uses simple geometric objects for a base model, which can be further changed by applying operations such as set-theoretic operations, blending or offsetting. We consider the problem of reconstructing solids from cross-sectional contours as an example of creating a complex base model. In addition, the combination of real cross-sectional contours extracted from CT data with some additional, say hand-drawn or user-edited, contours looks promising and can bring benefits in practical applications. For the reconstruction/voxelization, we use a combination of contour points from femur CT data and extra points simulating the interior boundary of the femur to construct its real defining function and then voxelize the object. Further, we demonstrate some operations on this voxel object. For the visualization of voxel objects, we use a simple ray-marching algorithm; however, widely used volume-rendering tools can also be exploited. Wang and Kaufman noticed in [22] that, due to the discrete nature of the model representation, volume graphics suffers from some disadvantages. In particular, if not properly synthesized, voxelized volume models are imprecise and generate jagged surfaces. In our experiments, we confirmed by visual inspection that the 2D sample-based interpolation helps to avoid object-space aliasing.
We examine here two of our reconstruction algorithms:
- reconstruction using a monotone formula (see 3.2.1);
- reconstruction using volume splines (see 3.2.2).
Since the evaluation of defining functions is time-consuming for both algorithms, we also discuss, in 3.2.3, accelerated voxelization using a sample-based interpolation scheme.
A number of researchers have investigated the problem of reconstructing surfaces from cross-sectional contours; a brief review can be found in [5]. The main difficulty in these algorithms is that the data are presented as polygonal contours, and connecting them into surface polygons is a very difficult combinatorial problem. The surface generated by the triangulation method depends heavily on the choice of the nodes for each contour. The main problem of this approach is the case where different numbers of contours appear in different cross-sections.
A generalized model, called the homotopy model, was
presented by Shinagawa and Kunii [23] to reconstruct
surfaces from cross-sectional data of objects. Homotopy is
used for the reconstruction of parametric surfaces with the
help of the toroidal graph representation.
Intuitively, it is clear that transition and scaling between the cross-sectional shapes can be defined using interpolation or a homotopy map between functions. If we apply one-dimensional interpolation along the z-axis, the object can be described as
F(x,y,z) = (1-g(z))F1(x,y) + g(z)F2(x,y),
where g(z) is a blending function with range [0,1]. Note that this method produces a connected surface if the areas F1(x,y) ≥ 0 and F2(x,y) ≥ 0 have a nonempty intersection. The proposed algorithm can generate highly concave and branching objects. In the case of two or more branches, the cross-section is represented as a union of several regions separately converted to their defining functions. This method can also be applied to the reconstruction from nonparallel cross-sections [5].

Figure 1b shows an object reconstructed from three polygonal cross-sections. Two triangular contours and one square contour are used (Figure 1a).

Figure 1. Reconstruction of a solid from three parallel polygonal cross-sections.

3.2.1 Reconstruction with a monotone formula

The reconstruction from cross-sections requires a defining function for each individual cross-section. This function can be expressed by R-functions based on a monotone formula for the polygon in a cross-section.

Let a two-dimensional simple polygon be defined by a finite set of segments. The segments are the edges, and their endpoints are the vertices of the polygon. A polygon is simple if no pair of nonadjacent edges shares a point, and convex if its interior is a convex set. We consider the problem of representing polygons by continuous real functions F(x,y) taking zero values at the polygon edges. The algorithm has to satisfy the following requirements:
- it should provide an exact polygon description as the zero set of a real function;
- no points with zero function value should appear inside or outside the polygon;
- it should allow processing of an arbitrary simple polygon without any additional information.

Rvachev [26] and Peterson [27] independently proposed to represent a concave polygon by a set-theoretic formula in which each of the supporting half-planes appears exactly once and no additional half-plane is used. This approach does not generate any internal or external zeroes when R-functions are applied.

Figure 2. A concave polygon (a) and a tree (b) representing its monotone Boolean formula.
The monotone formula for the defining function of the polygon in Figure 2a results in the tree structure shown in Figure 2b. To evaluate the defining function at a given point, the algorithm traverses the tree from the leaves to the root, evaluates the defining functions of the half-planes, and applies the corresponding R-functions to them. The details of the algorithm are given elsewhere [6].
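As a minimal sketch (not the implementation of [6]), the fragment below evaluates such a monotone formula stored as a binary tree whose leaves are half-planes and whose internal nodes are unions or intersections; for brevity, the simplest min/max form of the R-functions is used.

#include <math.h>

/* Half-plane a*x + b*y + c >= 0 as a defining function. */
typedef struct { double a, b, c; } HalfPlane;

typedef enum { LEAF, UNION, INTERSECTION } NodeKind;

/* Node of the monotone-formula tree: a half-plane leaf or a set operation. */
typedef struct Node {
    NodeKind kind;
    HalfPlane hp;              /* used when kind == LEAF */
    struct Node *left, *right; /* used for UNION / INTERSECTION */
} Node;

/* Evaluate the polygon's defining function F(x,y) by tracing the tree from
   the leaves to the root.  min/max is the simplest R-function pair; other
   R-functions (e.g. f1 + f2 +/- sqrt(f1*f1 + f2*f2)) can be substituted
   for higher continuity. */
double eval_monotone(const Node *n, double x, double y)
{
    if (n->kind == LEAF)
        return n->hp.a * x + n->hp.b * y + n->hp.c;

    double f1 = eval_monotone(n->left, x, y);
    double f2 = eval_monotone(n->right, x, y);
    return (n->kind == UNION) ? fmax(f1, f2) : fmin(f1, f2);
}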
Figure 3. Femur reconstructed from 53 polygonal slices.

As an illustration of the monotone formula discussed above, Figure 3 presents the reconstruction of a femur from a set of 53 parallel cross-sections. We apply linear interpolation between the real functions generated for the cross-sections by our algorithm. Note the non-smooth femur surface resulting from the polygonal nature of the cross-sectional data. The next approach lets us provide a smoother shape for an individual cross-section and, as a result, a smoother overall shape.
3.2.2 Reconstruction with a volume spline
Consider the formulation of the reconstruction problem. Let Ω be an n-dimensional domain of arbitrary shape that contains a set of points Pi = (xi1, xi2, ..., xin), i = 1,2,...,N. We assume that the points Pi are distinct and lie on some surface. The goal of the reconstruction is to find a smooth function f(x1, x2, ..., xn) that approximately describes the surface. In the two-dimensional case, a contour is represented by the equation f(x,y) = 0.
The general idea of our algorithm [5] is to introduce a carrier solid with a defining function fc and to construct a volume spline U interpolating the values of the function fc at the points Pi. The algebraic difference between U and fc describes the reconstructed solid.
The algorithm consists of two steps. In the first step, we introduce a carrier solid, which is an initial approximation of the object being sought; in the simplest case, it can be a ball. Then the data set r = {ri = fc(Pi): i = 1,2,...,N} is calculated at the given points. In the second step, these values are approximated by the volume spline derived for random or unorganized points [28].
The problem is to construct an interpolating spline function U ∈ W2m(Ω), where W2m is the set of all functions whose squared derivatives of order ≤ m are integrable over Rn, such that U(Pi) = ri, i = 1,2,...,N, and U has minimum energy among all functions that interpolate the values ri. This conforms to the following minimum condition, which defines the operator T:

∫Ω Σ|α|=m (m!/α!) (D^α u)^2 dΩ → min,

where m is a parameter of the variational functional. The zero set of the function

f(x,y) = U(x,y) - fc(x,y)

for n = 2 approximates the unknown contour.
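A minimal sketch of the resulting defining function f = U − fc is given below. It assumes, for illustration only, a ball as the carrier solid and the |r|^3 radial kernel with a linear polynomial part for the volume spline; the spline of [28] and the authors' implementation may differ, and the coefficients are assumed to have been obtained by solving the interpolation system U(Pi) = fc(Pi) (not shown).

#include <math.h>
#include <stddef.h>

/* Surface points P_i, spline coefficients lambda_i and the linear
   polynomial part p0 + p1 x + p2 y + p3 z (all assumed precomputed). */
typedef struct {
    size_t n;
    const double *px, *py, *pz;   /* point coordinates   */
    const double *lambda;         /* spline coefficients */
    double p0, p1, p2, p3;        /* polynomial part     */
} VolumeSpline;

/* Carrier solid: a ball of radius R centred at (cx, cy, cz). */
static double carrier_ball(double x, double y, double z,
                           double cx, double cy, double cz, double R)
{
    double dx = x - cx, dy = y - cy, dz = z - cz;
    return R * R - (dx * dx + dy * dy + dz * dz);
}

/* Volume spline with the |r|^3 radial kernel. */
static double spline_U(const VolumeSpline *s, double x, double y, double z)
{
    double u = s->p0 + s->p1 * x + s->p2 * y + s->p3 * z;
    for (size_t i = 0; i < s->n; ++i) {
        double dx = x - s->px[i], dy = y - s->py[i], dz = z - s->pz[i];
        double r = sqrt(dx * dx + dy * dy + dz * dz);
        u += s->lambda[i] * r * r * r;
    }
    return u;
}

/* Defining function of the reconstructed solid: f = U - fc. */
double reconstructed(const VolumeSpline *s, double x, double y, double z,
                     double cx, double cy, double cz, double R)
{
    return spline_U(s, x, y, z) - carrier_ball(x, y, z, cx, cy, cz, R);
}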
3.2.3 Sample-based voxelization
In the function-to-voxel conversion, we face the problem of evaluating the function at the grid points. The work required is proportional to the number of grid nodes and to the number of scattered points in the cross-section. The amount of computation becomes significant, even for a moderate number of nodes, when the number of scattered points in the cross-section increases. Special methods to reduce the processing time have been developed for thin-plate splines [25]. However, we need a more general acceleration scheme that can be applied to scattered points with values of different functions (see 3.2.1 and 3.2.2). A possible way to reduce the computational cost is to use a finite element method (FEM) approximation. We do not consider this problem in detail, but only sketch the basic idea of the variational scheme and describe our software algorithm.
The problem of constructing interpolating functions by FEM over a 2D domain Ω is stated as follows: the given data points Pi(x,y), i = 1,2,...,N, are scattered in the sense that there are no assumptions about the disposition of the independent data, and a data set r is associated with the points. In our case, we solve the general problem of smooth approximation of the scattered data (see [28]). We have to construct a smooth function σh(x,y) which takes the value ri at the points Pi(x,y) ∈ Ω, if that is possible, or satisfies the condition:

||A σh - r||^2 = min,

where A is some linear bounded operator.
Along with this condition, the function σh(x,y) has minimum energy among all functions that interpolate the values ri. This conforms to the following minimum condition:

∫Ω [(∂σh/∂x)^2 + (∂σh/∂y)^2] dΩ = min,

where the integral is taken over the two-dimensional domain Ω.
We use a piecewise linear approximation over a rectangular mesh. The bilinear basis function ωij(x,y) = ωi(x) ωj(y) corresponds to each node of the rectangular mesh. In fact, it is possible to use smoother basis functions and more complicated minimum conditions. The solution of the data approximation problem can be found in the form:

σh(x,y) = Σi,j σij ωij(x,y).
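A minimal sketch of evaluating σh with bilinear "hat" basis functions on a regular mesh over the unit square is shown below (the mesh layout and names are illustrative, not the authors' data structures); only the four basis functions whose supports contain (x,y) contribute to the sum.

#include <stddef.h>

/* Node values sigma[i][j] of the piecewise bilinear approximation on an
   nx x ny rectangular mesh (each >= 2) over [0,1] x [0,1]. */
typedef struct {
    const double *sigma;
    size_t nx, ny;
} FemGrid;

/* sigma_h(x,y) = sum_{i,j} sigma_ij * w_i(x) * w_j(y), where w_i and w_j
   are one-dimensional hat functions. */
double sigma_h(const FemGrid *g, double x, double y)
{
    double gx = x * (g->nx - 1), gy = y * (g->ny - 1);
    size_t i = (size_t)gx, j = (size_t)gy;
    if (i >= g->nx - 1) i = g->nx - 2;
    if (j >= g->ny - 1) j = g->ny - 2;
    double tx = gx - i, ty = gy - j;

    const double *s = g->sigma;
    size_t ny = g->ny;
    return s[i * ny + j]           * (1 - tx) * (1 - ty)
         + s[(i + 1) * ny + j]     * tx       * (1 - ty)
         + s[i * ny + j + 1]       * (1 - tx) * ty
         + s[(i + 1) * ny + j + 1] * tx       * ty;
}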
The voxelization algorithm consists of the following
steps:
1. Generation of the original scattered data. The number of function evaluations in our experiments is 2601 per cross-section (slice). The number of cross-sections is 53. The size of the voxel data is 150x150x150.
2. Sorting the data. At this stage, we put the calculated data on a grid of size 50x50 according to the chosen network pattern. For instance, a 3x3 pixel area, as applied in halftone dithering, can be used. A one-point network pattern scheme was employed in the presented experiments to fill the 3x3 areas and to construct the logical structure of the FEM matrix.
During the summation for calculating the matrix, all points whose coordinates do not belong to the intersection of the supports of the basis functions should be discarded. To avoid repeated verification of point membership in this collection, it is reasonable to sort the initial data set, that is, to place the points according to the cell numbering. Such sorting allows us to efficiently calculate the components of the right-hand side of the system of linear equations and to calculate the function values for each point in the slice.
3. Numerical assembly. This step calculates the values of all elements of the FEM matrix according to the logical structure.
4. Cholesky factorization. The problem of approximating the data comes down to the solution of a system of linear algebraic equations Ax = b, where A is a band matrix of coefficients, x is the vector of unknown node values and b is the right-hand side vector. Cholesky decomposition constructs a lower triangular matrix L whose transpose Lt serves as the upper triangular factor (a sketch of this step is given after the list).
5. Calculating the right-hand side of the resulting linear algebraic system and solving the system by back-substitution.
6. Interpolation over the mesh. The techniques presented thus far assume that the array being calculated is smaller than the voxel array being displayed. Hence, the problem is to calculate the function values for each point in the slice.
7. Quadratic interpolation over the voxel space. The object shape between cross-sections is defined by quadratic interpolation of the grid values over three neighbouring slices.
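A minimal sketch of steps 4 and 5 is given below. For brevity, a dense Cholesky factorization is shown, although the FEM matrix described above is banded and a production code would exploit the band structure.

#include <math.h>
#include <stddef.h>

/* In-place Cholesky factorization A = L * L^T of a symmetric positive
   definite n x n matrix stored row-major; only the lower triangle of the
   result is used afterwards.  Returns 0 on success, -1 if the matrix is
   not positive definite. */
int cholesky(double *a, size_t n)
{
    for (size_t j = 0; j < n; ++j) {
        double d = a[j * n + j];
        for (size_t k = 0; k < j; ++k)
            d -= a[j * n + k] * a[j * n + k];
        if (d <= 0.0) return -1;
        a[j * n + j] = sqrt(d);
        for (size_t i = j + 1; i < n; ++i) {
            double s = a[i * n + j];
            for (size_t k = 0; k < j; ++k)
                s -= a[i * n + k] * a[j * n + k];
            a[i * n + j] = s / a[j * n + j];
        }
    }
    return 0;
}

/* Solve A x = b using the factor L: forward substitution L y = b,
   then back substitution L^T x = y.  x and b may be the same array. */
void cholesky_solve(const double *l, size_t n, const double *b, double *x)
{
    for (size_t i = 0; i < n; ++i) {             /* L y = b   */
        double s = b[i];
        for (size_t k = 0; k < i; ++k)
            s -= l[i * n + k] * x[k];
        x[i] = s / l[i * n + i];
    }
    for (size_t i = n; i-- > 0; ) {              /* L^T x = y */
        double s = x[i];
        for (size_t k = i + 1; k < n; ++k)
            s -= l[k * n + i] * x[k];
        x[i] = s / l[i * n + i];
    }
}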
Table 1. Processing time.

Step                                          Time (s)
Retrieval of contour data & construction
  of volume splines                              11.14
Sorting                                           2.18
Approximation of FEM matrix                       1.09
Factorization                                     1.91
Function evaluation in 53 slices                 84.73
Voxel approximation for 150 voxel layers         46.23
All the described parts of our software tool have been implemented in the C language with double precision. The processing times measured on a Silicon Graphics Indigo2 workstation for the above-mentioned steps are presented in Table 1; times are given in seconds.
We see from Table 1 that the most expensive part is evaluating the functions at the chosen network pattern points. The function evaluation includes constructing the right-hand sides and solving the systems of linear algebraic equations; note that this fraction of the processing time is constant for each slice, and these calculations take 0.21 seconds per slice. In fact, as mentioned above, the function evaluation depends on the number of nodes in a cross-section, and the amount of computation can be significant. This algorithm allows us to decrease the number of expensive function evaluations approximately by a factor of nine with acceptable accuracy.
Figure 3. (a) 150x150x150 voxelization of a reconstructed femur with the one-point network pattern scheme; (b) splitting of the reconstructed femur; (c) blending between the split parts.
Figure 3a shows a femur reconstructed using volume splines (3.2.2) and voxelized with the sample-based interpolation described above. Figure 3b presents two separated parts of the reconstructed femur. Figure 3c demonstrates the result of blending [7] between the two separated parts. It shows that the proposed voxelization procedure preserves the distance property of the defining functions that is required for further operations. We have to underline that with binary voxelized objects, blending and several other operations do not produce the expected results.
In [5], we have investigated the question of numerical error estimation. Our experiments let us conclude that the error of the reconstruction using volume splines and linear interpolation between slices is about two percent. The FEM interpolation gives an error of approximately 0.7 percent in the worst case of interpolating extreme points of the voxel space. We believe that this is sufficient for the voxelization of free-form shapes in practical applications.
4. Advanced operations on volumetric
objects
We describe and illustrate the following operations on
volumetric objects: set-theoretic operations, metamorphosis and sweeping by a volumetric object, hypertexturing,
feature-based sculpting and splitting.
A volumetric object is converted to a continuous real function by trilinear interpolation and level value subtraction. All operations result in a new procedurally defined real function of three variables.
4.1 Set-theoretic operations and metamorphosis
In Figure 4, we create a complex object made of a volumetric head and simple CSG primitives. Set-theoretic operations and affine transformations are applied to the head to make a drawer that is filled with CSG primitives.

Figure 4. Set-theoretic operations applied to a volumetric head, which is unified with CSG primitives.

Temporal transformation, or metamorphosis, can be useful in different applications, for instance artistic animation or recognition tasks. Figure 5 illustrates shapes constructed by the linear interpolation of two shapes using a linear blending function that specifies the relative contribution of each shape to the resulting blended shape (examples of using nonlinear blending functions are given, for example, in [29]); a sketch of this interpolation is given at the end of this subsection. More complicated transformations will be considered later.

Figure 5. Metamorphosis between a volumetric head and a union of three analytically defined objects.

In [8], we study sweeping defined with real functions. We managed to solve several problems that could not find a general solution within one model before: we were able to define a complex swept solid with an arbitrary CSG object moving along any complex trajectory, including trajectories with self-intersections. In this paper, we apply the same model to volume data. In Figure 6, we cut the volumetric head with a CSG cutter, implemented as the subtraction of a swept solid, created by the moving cutter, from the head. We used trilinear interpolation to get a real function defining the head and did not identify any specific problems here compared to the previously studied set-theoretic operations over CSG and swept objects.

Figure 6. Volumetric head cut with a moving CSG cutter.
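As a minimal sketch (assuming both shapes are already given by defining functions and that the simplest linear blending function b(t) = t is used), the metamorphosis above reduces to a pointwise blend of the two functions:

/* Linear metamorphosis between two defining functions F1 and F2.
   The blending function b(t) specifies the relative contribution of each
   shape; b(t) = t is used here, and nonlinear blending functions can be
   substituted (cf. [29]). */
double metamorphosis(double (*F1)(double, double, double),
                     double (*F2)(double, double, double),
                     double x, double y, double z, double t)
{
    double b = t;                        /* linear blending function */
    return (1.0 - b) * F1(x, y, z) + b * F2(x, y, z);
}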
4.2 Sweeping by a volumetric object
Continuing our research on sweeping by an arbitrary functionally defined solid, we extend our consideration to sweeping by a volumetric object. To achieve this goal, we define the generator, which is a volumetric object, with real functions as described above. Here we met a problem that lies in the nature of the volumetric data used: these data normally do not have the distance property that is very important for the implementation of our sweeping method. To achieve it, we set up the distance property artificially, so that the values of the function outside the object obey a simple rule of quadric behavior defined as follows:
New_value = Old_value*(1-x*x-y*y-z*z).
This rule, applied after the trilinear interpolation, does provide a distance property for any point outside the volume. After that, the method described in [8] is applied. The result of sweeping by a volumetric head is shown in Figure 7.

Figure 7. Sweeping by a volumetric head.
4.3 Hypertexturing
Elsewhere [9, 10], we present our approach to hypertexturing. We define the hypertexture with a real function f(x,y,z) ≥ 0 and then unify it with the functionally defined object to which the texture should be applied. With this method, we could define moss, corrosion, snow, fur, and hair. We have extended this approach to volume data and present here an improved hair modelling technique that allows us to grow and style solid hair on a volumetric head.
The hair is thought of as a set of functionally defined generalized solid cylinders. Their union gives the hair, which can then be unified with any other solid, as is done in CSG.
The process of hair growing and styling is as follows. We select a point in 3D that serves as the origin of the local coordinate system associated with the hair. Then, for any arbitrary point, we define its direction cosines α, β, γ. Next, we find the direction cosines of the single hair strand nearest to this point. The hair strands are defined as long solid cylinders passing through the nodes of a pseudo-random spherical parametric grid. After the direction cosines of the nearest cylinder-strand are found, we simply check whether the tested point belongs to this cylinder. The radius of the hair strands and the angular distance between the strands are defined by the user.
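A minimal sketch of this idea is given below; it is not the authors' implementation. For clarity, it forms the hair as a brute-force union (max) over all strands instead of the nearest-strand search described above, and it assumes that the strand direction cosines, taken from the nodes of the pseudo-random spherical grid, are supplied by the caller.

#include <math.h>

/* One hair strand: a long solid cylinder of radius R passing through the
   origin of the hair's local coordinate system along the unit direction
   (a, b, c) given by its direction cosines. */
double strand(double x, double y, double z,
              double a, double b, double c, double R)
{
    double t  = x * a + y * b + z * c;            /* projection on the axis */
    double dx = x - t * a, dy = y - t * b, dz = z - t * c;
    return R * R - (dx * dx + dy * dy + dz * dz); /* >= 0 inside the strand */
}

/* The hair as the union of n_strands cylinders (simple max form);
   dir holds the direction cosines of strand i at dir[3*i..3*i+2]. */
double hair(double x, double y, double z,
            const double *dir, int n_strands, double R)
{
    double f = -1e30;
    for (int i = 0; i < n_strands; ++i) {
        double fi = strand(x, y, z,
                           dir[3 * i], dir[3 * i + 1], dir[3 * i + 2], R);
        if (fi > f) f = fi;
    }
    return f;
}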
Styling and cutting the hair are achieved with nonlinear coordinate transformations and with cutting operations done by subtraction of different CSG solids or swept volumes. For the hair behavior, we used pseudo-physical laws based on different formulae defining distance properties for the given head. In Figure 8, the hair before and after styling is shown.

Figure 8. Hair styling. Hair strands are unified with a volumetric head.
Figure 9. Volumetric head and hair unified
with a pillar.
Figure 9 presents another example of intermixing volumetric and analytically defined geometric objects to form a solid object with curved surfaces. In this example, we use the volume data of a voxel head with simulated hair in a synthetic carving on a pillar.
4.4 Feature-based sculpting and separation
Feature-based sculpting provides deformation defined by displacements of control points linked to the object features. Two sets of points, Q and D, are given. The points Q are defined for the non-deformed object; the points D result from Q when the deformation is applied to the initial object. Control points can be freely chosen by the user, since the algorithm does not use any regular grid. It can be thought of as a nonlinear 3D space mapping defined on a finite set of points [11]. The problem is to find a smooth mapping function that approximately describes the spatial transformation. We define the inverse mapping needed to transform our object as
Q = U(D)+D,
where the volume spline U(D) generates displacements of
the initial points Q. We use a volume spline interpolating
scattered data as mentioned above.
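As a minimal sketch (with the displacement spline U passed in as an abstract callback, an assumption made for illustration), the deformed object can be evaluated through the inverse mapping as follows:

/* Displacement field U(D): the volume spline interpolating the control
   point displacements Q_i - D_i (its construction is not shown here). */
typedef void (*DisplacementField)(double x, double y, double z,
                                  double out[3]);

/* Defining function of the deformed object: a point D of the deformed
   space is mapped back to Q = U(D) + D, where the original defining
   function F is evaluated. */
double deformed(double (*F)(double, double, double),
                DisplacementField U,
                double x, double y, double z)
{
    double u[3];
    U(x, y, z, u);                              /* U(D)              */
    return F(x + u[0], y + u[1], z + u[2]);     /* F(Q), Q = U(D)+D  */
}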
Figure 10. Volumetric head deformed using
space mapping.
Figure 10 shows the volumetric head deformed using
space mapping. Only four control points have been used
for the deformation.
Figure 11. Combined mapping in 3D space:
spatial and temporal transformations.
The previously considered example (see Figure 5) of primitive metamorphosis between objects of different genus shows that it is difficult to achieve a visually smooth transition between the shapes, because they are not homeomorphic. In Figure 11, we illustrate a 3D application of combined mapping, that is, spatial and temporal transformations. The problem is to obtain a visually smooth transformation between the initial ellipsoidal shape (genus 0) and the final shape (genus 1) in the presence of obstacles. A defining function for the final shape, a block with a hole, was obtained using set-theoretic operations on 3D primitives. To solve the problem, we use a combination of the linear interpolation, or temporal transformation, of the two shapes and the nonlinear space mapping discussed above. We define the space mapping by displacement of control points in the local coordinate system of the moving object.
The same nonlinear space mapping approach can be applied to design a curved half-space separating two parts of a volumetric object. Control points for the partition or separation of a 3D geometric object are placed on or near the surface, and the following scheme is used:
- specify a defining function for the object;
- define one-to-one projections of control points onto a reference half-space, for example a plane;
- calculate a mapping function and apply it to the reference half-space;
- perform a set-theoretic subtraction of the deformed reference half-space from the initial object.

Figure 12. Separating the volumetric head with a curved half-space.

The separated 3D object is shown in Figure 12.
5. Conclusions

An approach to volume modelling combining representations by voxel data and by real functions of three variables has been presented. Questions of conversion between the two representations have been discussed. We have illustrated our approach by several advanced operations on a volumetric object.
Traditional areas such as solid modelling and volume graphics can also benefit from this approach:
Solid modelling. A new source of shapes appears. Instead of voxelizing exactly represented objects such as spheres, tori and others (as discussed in [24]), a solid modeller keeps the accuracy as high as possible while dealing with volumetric objects. Research results can be examined in new application fields such as surgery planning, prosthesis design and even virtual hairdressing.
Volume graphics. The rich set of operations can be used for modelling volumes. However complex the operations applied to a volume, the resulting voxel data can be generated.
Acknowledgments
This research was supported in part by the MOVE consortium.
References
[1] G.M. Nielson, Visualization takes its place in the scientific
community, IEEE Transactions on Visualization and Computer Graphics, 1(2):97-98, 1995.
[2] A. Kaufman, K.H. Hohne, W. Kruger, L. Rosenblum and P.
Shroder, Research issues in volume visualization, IEEE
Computer Graphics and Applications, 14(2):63-67, 1993.
[3] V. Shapiro, Real functions for representation of rigid solids,
Computer Aided Geometric Design, 11(2):153-175, 1994.
[4] A. Pasko, V. Adzhiev, A. Sourin, V. Savchenko, Function
representation in geometric modeling: concepts, implementation and applications, The Visual Computer, 11(6):429-446,
1995.
[5] V. Savchenko, A. Pasko, O. Okunev and T. Kunii, Function representation of solids reconstructed from scattered surface points and contours, Computer Graphics Forum, 14(4):181-188, 1995.
[6] A. Pasko, A. Savchenko, V. Savchenko, Polygon-to-function
conversion for sweeping, Eurographics Workshop Implicit
Surfaces ‘96, J. Hart and K. van Overveld (Eds.), Eindhoven
University of Technology:163-171, 1996.
[7] A.A. Pasko and V.V. Savchenko, Blending operations for the functionally based constructive geometry, Set-theoretic Solid
Modelling: Techniques and Applications, CSG 94 Conference Proceedings, Information Geometers, Winchester,
UK:151-161, 1994.
[8] A. Sourin, A. Pasko, Function representation for sweeping by
a moving solid, IEEE Transactions on Visualization and
Computer Graphics, 2(1):11-18, 1996.
[9] A. Pasko, V. Savchenko, Solid noise in the Constructive Solid Geometry, Proc. of Compugraphics'93, 5-10 December 1993, Alvor, Algarve, Portugal:351-357, 1993.
[10] A. Sourin, A. Pasko, V. Savchenko, Using real functions with application to hair modelling, Computers & Graphics, 20(1):11-19, 1996.
[11] V.V. Savchenko, A.A. Pasko, T.L. Kunii, A.V. Savchenko,
Feature based sculpting of functionally defined 3D geometric
objects, Multimedia Modeling. Towards Information Superhighway, T.S.Chua, H.K.Pung and T.L.Kunii (Eds.), World
Scientific, Singapore:341-348, 1995.
[12] A.A.G. Requicha, Representations of rigid solids: theory,
methods, and systems, Computing Surveys, 12(4):437-464,
1980.
[13] T. Todd Elvins, A survey of algorithms for volume visualization, Computer Graphics, 26(3):194-201, 1992.
[14] T.A. Galyean, J.F. Hughes, Sculpting: an interactive volumetric modeling technique, SIGGRAPH'91, Computer
Graphics Proceedings, 25(4):267-274, 1991.
[15] D.R. Ney, E.K. Fishman, Editing tools for 3D medical imaging, IEEE Computer Graphics and Applications, 11(6):63-71, 1991.
[16] K.J. Udupa, D. Odhner, Fast visualization, manipulation,
and analysis of binary volumetric objects, IEEE Computer
Graphics and Applications, 11(6):53-62, 1991.
[17] V. Chandru, S. Manohar, C.E. Prakash, Voxel-based
modeling for layered manufacturing, IEEE Computer
Graphics and Applications, 15(6):42-47, 1995.
[18] A. Lerios, C.D. Garfinkle, M. Levoy, Feature-based volume
metamorphosis, SIGGRAPH'95, Computer Graphics Proceedings:449-456, 1995.
[19] G.M. Nielson, T.A. Foley, B. Hamann, D. Lane, Visualizing and modeling scattered multivariate data, IEEE Computer Graphics and Applications, 11(3):47-55, 1991.
[20] S. Muraki, Multiscale volume representation by a DOG wavelet, IEEE Transactions on Visualization and Computer Graphics, 1(2):109-116, 1995.
[21] G.M. Nielson, P. Brunet, M. Gross, H. Hagen, S.V. Klimenko, Research issues in data modeling for scientific
visualization, IEEE Computer Graphics and Applications,
14(2):70-73, 1994.
[22] S.W. Wang, A.E. Kaufman, Volume-sampled 3D modeling,
IEEE Computer Graphics and Applications, 14(5):26-32,
1994.
[23] Y. Shinagawa and T.L. Kunii, The Homotopy Model: a
generalized Model for Smooth Surface Generation from
Cross Sectional Data, The Visual Computer, 7(2-3):72-86,
1991.
[24] N. Shareef and R. Yagel, Rapid previewing via volume-based solid modeling, Third Symposium on Solid Modeling and Applications, C. Hoffmann and J. Rossignac (Eds.), Salt Lake City, Utah, USA (May 17-19, 1995), ACM Press:281-291, 1995.
[25] R.K. Beatson and W.A. Light, Fast evaluation of radial
basis functions: Methods for 2-D polyharmonic splines,
Mathematics department, Univ. of Canterbury, Christchurch,
New Zealand, Tech. Rep. 119, Dec. 1994.
[26] V.L. Rvachev, Methods of Logic Algebra in Mathematical
Physics, Naukova Dumka, Kiev, 1974.
[27] D. Peterson, Halfspace representation of extrusions, solids of revolution, and pyramids, SANDIA Report SAND84-0572, Sandia National Laboratories, Albuquerque, NM, 1984.
[28] V.A. Vasilenko, Spline-functions: Theory, Algorithms,
Programs, Nauka Publishers, Novosibirsk, 1983.
[29] D. DeCarlo and D. Metaxas, Blended deformable models, IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(4):443-448, 1996.