CG-6-Three-Dimensional CG


MCA 4.3: Computer Graphics Dr. Ravindra S. Hegadi

6 – Three dimensional Computer Graphics

Methods for geometric transformations and object modeling in three dimensions are
extended from two-dimensional methods by including considerations for the z coordinate.
We now translate an object by specifying a three-dimensional translation vector, which
determines how much the object is to be moved in each of the three coordinate directions.
Similarly, we scale an object with three coordinate scaling factors. The extension for
three-dimensional rotation is less straightforward. When we discussed two-dimensional
rotations in the xy plane, we needed to consider only rotations about axes that were
perpendicular to the xy plane. In three-dimensional space, we can now select any spatial
orientation for the rotation axis. Most graphics packages handle three-dimensional
rotation as a composite of three rotations, one for each of the three Cartesian axes.
Alternatively, a user can easily set up a general rotation matrix, given the orientation of
the axis and the required rotation angle. As in the two-dimensional case, we express
geometric transformations in matrix form. Any sequence of transformations is then
represented as a single matrix, formed by concatenating the matrices for the individual
transformations in the sequence.

Translation:
In a three-dimensional homogeneous coordinate representation, a point is translated (Fig.
6-1) from position P = (x, y, z) to position P' = (x', y', z') with the matrix operation
\[
\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & t_x \\
0 & 1 & 0 & t_y \\
0 & 0 & 1 & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\cdot
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
\tag{6-1}
\]
or
P′ = T · P (6-2)

Fig. 6-1: Translating a point with translation vector T = (tx, ty, tz)
Fig. 6-2: Translating an object with translation vector T

Parameters tx, ty, and tz, specifying translation distances for the coordinate directions x, y,
and z, are assigned any real values. The matrix representation in Eq. 6-1 is equivalent to
the three equations


x' = x + tx, y' = y + ty, z' = z + tz


An object is translated in three dimensions by transforming each of the defining points of
the object. For an object represented as a set of polygon surfaces, we translate each vertex
of each surface (Fig. 6-2) and redraw the polygon facets in the new position.
We obtain the inverse of the translation matrix in Eq. 6-1 by negating the translation
distances tx, ty, and tz. This produces a translation in the opposite direction, and the
product of a translation matrix and its inverse produces the identity matrix.
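As a minimal sketch of Eq. 6-1 in plain Python (the helper names are illustrative, not from the text), a translation matrix can be built and applied to a homogeneous point:

```python
def translation_matrix(tx, ty, tz):
    """4x4 homogeneous translation matrix T(tx, ty, tz) from Eq. 6-1."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a homogeneous column vector."""
    return [sum(m[r][k] * v[k] for k in range(4)) for r in range(4)]

p = [2, 5, 1, 1]                       # point (2, 5, 1) in homogeneous form
p_new = mat_vec(translation_matrix(3, -2, 4), p)
# p_new is [5, 3, 5, 1], i.e. x' = x + tx, y' = y + ty, z' = z + tz
```

Negating the distances gives the inverse translation, so T(-3, 2, -4) applied to p_new recovers the original point.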

Rotation:
To generate a rotation transformation for an object, we must designate an axis of rotation
(about which the object is to be rotated) and the amount of angular rotation. Unlike two-
dimensional applications, where all transformations are carried out in the xy plane, a
three-dimensional rotation can be specified around any line in space. The easiest rotation
axes to handle are those that are parallel to the coordinate axes. Also, we can use
combinations of coordinate-axis rotations (along with appropriate translations) to specify
any general rotation.
By convention, positive rotation angles produce counterclockwise rotations about a
coordinate axis, if we are looking along the positive half of the axis toward the coordinate
origin (Fig. 6-3). This agrees with our earlier discussion of rotation in two dimensions,
where positive rotations in the xy plane are counter-clockwise about axes parallel to the z
axis.

Co-ordinate axis rotation:

Fig. 6-3: Positive rotation directions about the coordinate axes are counterclockwise, when looking toward the origin
from a positive coordinate position on each axis

xˊ = x cosθ - y sinθ
yˊ = x sinθ + y cosθ (6-3)
zˊ = z
Parameter θ specifies the rotation angle. In homogeneous coordinate form, the three-
dimensional z-axis rotation equations are expressed as


\[
\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix}
=
\begin{bmatrix}
\cos\theta & -\sin\theta & 0 & 0 \\
\sin\theta & \cos\theta & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\cdot
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
\]
which we can write more compactly as
Pˊ = Rz(θ) • P
The figure below illustrates rotation of an object about the z axis.

Fig. 6-4: Rotation of an object about z axis
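A minimal Python sketch of the z-axis rotation matrix (helper names are illustrative, not from the text):

```python
import math

def rotation_z(theta):
    """Homogeneous z-axis rotation matrix Rz(theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a homogeneous column vector."""
    return [sum(m[r][k] * v[k] for k in range(4)) for r in range(4)]

# A 90-degree rotation carries the x-axis unit point onto the y axis;
# the z coordinate is unchanged.
p90 = mat_vec(rotation_z(math.pi / 2), [1, 0, 2, 1])
```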

Transformation equations for rotations about the other two coordinate axes can be
obtained with a cyclic permutation of the coordinate parameters x, y, and z in Eqs. 6-3.
That is, we use the replacements
x → y → z → x
Substituting the above permutations in Eqs. 6-3, we get the equations for an x-axis rotation:
yˊ = y cosθ - z sinθ
zˊ = y sinθ + z cosθ (6-4)
xˊ = x
which can be written in the homogeneous coordinate form
\[
\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & \cos\theta & -\sin\theta & 0 \\
0 & \sin\theta & \cos\theta & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\cdot
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
\]
or
P′ = Rx(θ) • P
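The x-axis rotation can be sketched the same way (helper names are illustrative, not from the text):

```python
import math

def rotation_x(theta):
    """Homogeneous x-axis rotation matrix Rx(theta) from Eqs. 6-4."""
    c, s = math.cos(theta), math.sin(theta)
    return [[1, 0,  0, 0],
            [0, c, -s, 0],
            [0, s,  c, 0],
            [0, 0,  0, 1]]

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a homogeneous column vector."""
    return [sum(m[r][k] * v[k] for k in range(4)) for r in range(4)]

# Rotating the y-axis unit point 90 degrees about x carries it onto
# the z axis; the x coordinate is unchanged.
p = mat_vec(rotation_x(math.pi / 2), [0, 1, 0, 1])
```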


Fig. 6-5: Cyclic permutation of the Cartesian-coordinate axes to produce the three sets of coordinate-axis rotation
equations.

Cyclically permuting coordinates in Eqs. 6-4 gives us the transformation equations for a
y-axis rotation:
zˊ = z cosθ - x sinθ
xˊ = z sinθ + x cosθ
yˊ = y
The matrix representation for y-axis rotation is
\[
\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix}
=
\begin{bmatrix}
\cos\theta & 0 & \sin\theta & 0 \\
0 & 1 & 0 & 0 \\
-\sin\theta & 0 & \cos\theta & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\cdot
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
\]
or
P′ = Ry(θ) • P
An inverse rotation matrix is formed by replacing the rotation angle θ by -θ. Negative
values for rotation angles generate rotations in a clockwise direction, so the identity
matrix is produced when any rotation matrix is multiplied by its inverse. Since only the
sine function is affected by the change in sign of the rotation angle, the inverse matrix
can also be obtained by interchanging rows and columns. That is, we can calculate the
inverse of any rotation matrix R by evaluating its transpose (R⁻¹ = Rᵀ). This method for
obtaining an inverse matrix holds also for any composite rotation matrix.
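The transpose-as-inverse property can be checked numerically with a short sketch (helper names are illustrative, not from the text):

```python
import math

def rotation_z(theta):
    """Homogeneous z-axis rotation matrix Rz(theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

def transpose(m):
    return [[m[c][r] for c in range(4)] for r in range(4)]

def mat_mul(a, b):
    """4x4 matrix product a · b."""
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

R = rotation_z(0.7)
product = mat_mul(R, transpose(R))   # R · Rᵀ should be (numerically) the identity
```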

Scaling:
The matrix expression for the scaling transformation of a position P = (x, y, z) relative to
the coordinate origin can be written as
\[
\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix}
=
\begin{bmatrix}
s_x & 0 & 0 & 0 \\
0 & s_y & 0 & 0 \\
0 & 0 & s_z & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\cdot
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
\tag{6-5}
\]
or
P′ = S • P


Fig. 6-6: Doubling the size of an object with transformation Eq. 6-5 also moves the object farther from the origin

where scaling parameters sx, sy, and sz are assigned any positive values. Explicit
expressions for the coordinate transformations for scaling relative to the origin are
x' = x • sx, y' = y • sy, z' = z • sz
Scaling an object with transformation Eqn. 6-5 changes the size of the object and
repositions the object relative to the coordinate origin. Also, if the transformation
parameters are not all equal, relative dimensions in the object are changed. We preserve
the original shape of an object with a uniform scaling (sx = sy = sz). The result of scaling
an object uniformly with each scaling parameter set to 2 is shown in Fig. 6-6.
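Scaling relative to the origin (Eq. 6-5) can be sketched in plain Python (helper names are illustrative, not from the text):

```python
def scaling_matrix(sx, sy, sz):
    """Homogeneous scaling matrix S relative to the origin (Eq. 6-5)."""
    return [[sx, 0, 0, 0],
            [0, sy, 0, 0],
            [0, 0, sz, 0],
            [0, 0, 0, 1]]

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a homogeneous column vector."""
    return [sum(m[r][k] * v[k] for k in range(4)) for r in range(4)]

# Uniform scaling by 2 doubles each coordinate, which also moves the
# point farther from the origin, as described above.
p_new = mat_vec(scaling_matrix(2, 2, 2), [1, 2, 3, 1])
# p_new is [2, 4, 6, 1]
```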

Fig. 6-7: Scaling an object relative to a selected fixed point is equivalent to the sequence of transformation shown

Scaling with respect to a selected fixed position (xf, yf, zf) can be represented with the
following transformation sequence:


1. Translate the fixed point to the origin.
2. Scale the object relative to the coordinate origin using Eq. 6-5.
3. Translate the fixed point back to its original position.

This sequence of transformations is demonstrated in Fig. 6-7. The matrix representation for an arbitrary fixed-point scaling can then be expressed as the concatenation of these translate-scale-translate transformations:
\[
T(x_f, y_f, z_f) \cdot S(s_x, s_y, s_z) \cdot T(-x_f, -y_f, -z_f)
=
\begin{bmatrix}
s_x & 0 & 0 & (1 - s_x)\, x_f \\
0 & s_y & 0 & (1 - s_y)\, y_f \\
0 & 0 & s_z & (1 - s_z)\, z_f \\
0 & 0 & 0 & 1
\end{bmatrix}
\tag{6-6}
\]
We form the inverse scaling matrix for either Eq. 6-5 or Eq. 6-6 by replacing the scaling
parameters sx, sy and sz, with their reciprocals. The inverse matrix generates an opposite
scaling transformation, so the concatenation of any scaling matrix and its inverse
produces the identity matrix.
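The translate-scale-translate sequence can be composed directly in Python (helper names are illustrative, not from the text); note that the fixed point itself is left unchanged by the composite:

```python
def translation_matrix(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def scaling_matrix(sx, sy, sz):
    return [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, sz, 0], [0, 0, 0, 1]]

def mat_mul(a, b):
    """4x4 matrix product a · b."""
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def mat_vec(m, v):
    return [sum(m[r][k] * v[k] for k in range(4)) for r in range(4)]

def fixed_point_scaling(sx, sy, sz, fx, fy, fz):
    """Translate the fixed point to the origin, scale, translate back."""
    return mat_mul(translation_matrix(fx, fy, fz),
                   mat_mul(scaling_matrix(sx, sy, sz),
                           translation_matrix(-fx, -fy, -fz)))

M = fixed_point_scaling(2, 3, 4, 1, 1, 1)
p_new = mat_vec(M, [2, 2, 2, 1])   # scaled about the fixed point (1, 1, 1)
# The fixed point itself maps to itself: mat_vec(M, [1, 1, 1, 1]) == [1, 1, 1, 1]
```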

Hidden line removal


We can clarify depth relationships in a wireframe display by identifying visible lines
in some way. The simplest method is to highlight the visible lines or to display them in a
different color. Another technique, commonly used for engineering drawings, is to
display the hidden lines as dashed lines. Another approach is to simply remove the
hidden lines, as in Figs. 6-8(b) and 6-8(c). But removing the hidden lines also removes
information about the shape of the back surfaces of an object. These visible-line methods also identify the visible surfaces of objects. When objects are to be displayed with color or shaded surfaces, we apply surface-rendering procedures to the visible surfaces so that the hidden surfaces are obscured. Some visible-surface algorithms establish visibility pixel by pixel across the viewing plane; other algorithms determine visibility for object surfaces as a whole.

Fig. 6-8: The wireframe representation of the pyramid in (a) contains no depth information to indicate whether the viewing direction is (b) downward from a position above the apex or (c) upward from a position below the base.
When only the outline of an object is to be displayed, visibility tests are applied to
surface edges. Visible edge sections are displayed, and hidden edge sections can either be
eliminated or displayed differently from the visible edges. For example, hidden edges
could be drawn as dashed lines, or we could use depth cueing to decrease the intensity of
the lines as a linear function of distance from the view plane. Procedures for determining
visibility of object edges are referred to as wireframe-visibility methods. They are also

called visible-line detection methods or hidden-line detection methods. Special wireframe-visibility procedures have been developed, but some of the visible-surface methods can also be used to test for edge visibility.
A direct approach to identifying the visible lines in a scene is to compare each line to
each surface. The process involved here is similar to clipping lines against arbitrary
window shapes, except that we now want to determine which sections of the lines are
hidden by surfaces. For each line, depth values are compared to the surfaces to determine
which line sections are not visible. We can use coherence methods to identify hidden line
segments without actually testing each coordinate position. If both line intersections with
the projection of a surface boundary have greater depth than the surface at those points,
the line segment between the intersections is completely hidden, as in Fig. 6-9(a). This is
the usual situation in a scene, but it is also possible to have lines and surfaces intersecting
each other. When a line has greater depth at one boundary intersection and less depth
than the surface at the other boundary intersection, the line must penetrate the surface
interior, as in Fig. 6-9(b). In this case, we calculate the intersection point of the line with the surface using the plane equation and display only the visible sections.

Fig. 6-9: Hidden-line sections (dashed) for a line that (a) passes behind a surface and (b) penetrates a surface.

Some visible-surface methods are readily adapted to wireframe visibility testing. Using a back-face method, we could identify all the back surfaces of an object and display only the boundaries for the visible surfaces.
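The line-surface intersection mentioned above can be sketched with the plane equation; this is a minimal illustration, with the helper name and the parametric form chosen for this sketch rather than taken from the text:

```python
def line_plane_intersection(p0, p1, plane):
    """Return the parameter t in [0, 1] at which the segment p0 -> p1
    crosses the plane a*x + b*y + c*z + d = 0, or None if it does not.
    The crossing point is p0 + t * (p1 - p0)."""
    a, b, c, d = plane
    f0 = a * p0[0] + b * p0[1] + c * p0[2] + d   # plane equation at p0
    f1 = a * p1[0] + b * p1[1] + c * p1[2] + d   # plane equation at p1
    if f0 == f1:             # segment parallel to (or lying in) the plane
        return None
    t = f0 / (f0 - f1)
    return t if 0.0 <= t <= 1.0 else None

# A segment crossing the plane z = 0 midway between its endpoints:
t = line_plane_intersection((0, 0, -1), (0, 0, 1), (0, 0, 1, 0))   # t = 0.5
```

A renderer would then split the segment at t and apply the visibility test to each piece separately.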
With depth sorting, surfaces can be painted into the refresh buffer so that surface interiors are in the background color, while boundaries are in the foreground color. By processing
the surfaces from back to front, hidden lines are erased by the nearer surfaces. An area-
subdivision method can be adapted to hidden-line removal by displaying only the
boundaries of visible surfaces. Scan-line methods can be used to display visible lines by
setting points along the scan line that coincide with boundaries of visible surfaces. Any
visible-surface method that uses scan conversion can be modified to an edge-visibility
detection method in a similar way.
Often, three-dimensional graphics packages accommodate several visible-surface
detection procedures, particularly the back-face and depth-buffer methods. A particular
function can then be invoked with the procedure name, such as backFace or depthBuffer.
In general programming standards, such as GKS and PHIGS, visibility methods are
implementation-dependent. A table of available methods is listed at each installation, and
a particular visibility-detection method is selected with the hidden-line-hidden-surface-
removal (HLHSR) function:
setHLHSRidentifier (visibilityFunctionIndex)
Parameter visibilityFunctionIndex is assigned an integer code to identify the visibility
method that is to be applied to subsequently specified output primitives.
