Three-Dimensional Viewing
4.1 Overview of Three-Dimensional Viewing Concepts
When we model a three-dimensional scene, each object in the scene is typically defined
with a set of surfaces that form a closed boundary around the object interior.
In addition to procedures that generate views of the surface features of an object, graphics
packages sometimes provide routines for displaying internal components or cross-
sectional views of a solid object.
Many processes in three-dimensional viewing, such as the clipping routines, are similar
to those in the two-dimensional viewing pipeline.
But three-dimensional viewing involves some tasks that are not present in two-dimensional viewing.
This coordinate reference defines the position and orientation for a view plane (or projection plane) that corresponds to a camera film plane, as shown in the figure below.
Three parallel-projection views of an object, showing relative proportions from different viewing
positions
A perspective projection causes objects farther from the viewing position to be displayed smaller than objects of the same size that are nearer to the viewing position.
Depth Cueing
Depth information is important in a three-dimensional scene so that we can easily
identify, for a particular viewing direction, which is the front and which is the back of
each displayed object.
There are several ways in which we can include depth information in the two-
dimensional representation of solid objects.
A simple method for indicating depth with wire-frame displays is to vary the brightness of line segments according to their distances from the viewing position, a technique termed depth cueing.
The lines closest to the viewing position are displayed with the highest intensity, and
lines farther away are displayed with decreasing intensities.
Depth cueing is applied by choosing a maximum and a minimum intensity value and a
range of distances over which the intensity is to vary.
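As a minimal sketch of this interpolation in C (the function name and parameters here are illustrative, not part of any graphics API):

/* Linear depth cueing: blend between a maximum and a minimum intensity
   scale over a chosen range of distances from the viewing position. */
float depthCueIntensity (float intensity, float dist,
                         float distMin, float distMax,
                         float intensMin, float intensMax)
{
    float t;
    if (dist <= distMin) return intensity * intensMax;  /* nearest: brightest */
    if (dist >= distMax) return intensity * intensMin;  /* farthest: dimmest  */
    t = (dist - distMin) / (distMax - distMin);          /* 0.0 at distMin, 1.0 at distMax */
    return intensity * (intensMax + t * (intensMin - intensMax));
}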
Another application of depth cueing is modeling the effect of the atmosphere on the perceived intensity of objects.
We can also indicate which parts of a wire-frame display are hidden by showing the nonvisible lines as dashed lines, or we could remove the nonvisible lines from the display entirely.
Surface Rendering
We set the lighting conditions by specifying the color and location of the light sources,
and we can also set background illumination effects.
Surface properties of objects include whether a surface is transparent or opaque and
whether the surface is smooth or rough.
We set values for parameters to model surfaces such as glass, plastic, wood-grain
patterns, and the bumpy appearance of an orange.
The figure above shows the general processing steps for creating and transforming a three-dimensional scene to device coordinates.
Once the scene has been modeled in world coordinates, a viewing-coordinate system is
selected and the description of the scene is converted to viewing coordinates
A right-handed viewing-coordinate system, with axes xview, yview, and zview, relative to a right-handed world-coordinate frame.
An additional scalar parameter is used to set the position of the view plane at some coordinate value zvp along the zview axis.
This parameter value is usually specified as a distance from the viewing origin along the
direction of viewing, which is often taken to be in the negative zview direction.
Vector N can be specified in various ways. In some graphics systems, the direction for N
is defined to be along the line from the world-coordinate origin to a selected point
position.
Other systems take N to be in the direction from a reference point Pref to the viewing origin P0.
Specifying the view-plane normal vector N as the direction from a selected reference point Pref to the viewing-
coordinate origin P0.
Because the view-plane normal vector N defines the direction for the zview axis, vector V
should be perpendicular to N.
But, in general, it can be difficult to determine a direction for V that is precisely
perpendicular to N.
Therefore, viewing routines typically adjust the user-defined orientation of vector V so that V is projected onto a plane that is perpendicular to N.
With a left-handed system, increasing zview values are interpreted as being farther from
the viewing position along the line of sight.
But right-handed viewing systems are more common, because they have the same
orientation as the world-reference frame.
Because the view-plane normal N defines the direction for the zview axis and the view-up
vector V is used to obtain the direction for the yview axis, we need only determine the
direction for the xview axis.
Using the input values for N and V, we can compute a third vector, U, that is perpendicular to both N and V.
Vector U then defines the direction for the positive xview axis.
We determine the correct direction for U by taking the vector cross product of V and N
so as to form a right-handed viewing frame.
The vector cross product of N and U also produces the adjusted value for V,
perpendicular to both N and U, along the positive yview axis.
Following these procedures, we obtain the following set of unit axis vectors for a right-
handed viewing coordinate system.
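$$\mathbf{n} = \frac{\mathbf{N}}{|\mathbf{N}|} = (n_x, n_y, n_z), \qquad \mathbf{u} = \frac{\mathbf{V} \times \mathbf{n}}{|\mathbf{V} \times \mathbf{n}|} = (u_x, u_y, u_z), \qquad \mathbf{v} = \mathbf{n} \times \mathbf{u} = (v_x, v_y, v_z)$$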
The coordinate system formed with these unit vectors is often described as a uvn
viewing-coordinate reference frame
For the rotation transformation, we can use the unit vectors u, v, and n to form the
composite rotation matrix that superimposes the viewing axes onto the world frame. This
transformation matrix is
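$$\mathbf{R} = \begin{bmatrix} u_x & u_y & u_z & 0 \\ v_x & v_y & v_z & 0 \\ n_x & n_y & n_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$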
where the elements of matrix R are the components of the uvn axis vectors.
The coordinate transformation matrix is then obtained as the product of the preceding
translation and rotation matrices:
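$$\mathbf{M}_{WC,VC} = \mathbf{R} \cdot \mathbf{T} = \begin{bmatrix} u_x & u_y & u_z & -\mathbf{u} \cdot \mathbf{P}_0 \\ v_x & v_y & v_z & -\mathbf{v} \cdot \mathbf{P}_0 \\ n_x & n_y & n_z & -\mathbf{n} \cdot \mathbf{P}_0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$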
Translation factors in this matrix are calculated as the vector dot product of each of the u,
v, and n unit vectors with P0, which represents a vector from the world origin to the
viewing origin.
Front, side, and rear orthogonal projections of an object are called elevations, and a top orthogonal projection is called a plan view.
The edges of the clipping window specify the x and y limits for the part of the scene that
we want to display.
These limits are used to form the top, bottom, and two sides of a clipping region called
the orthogonal-projection view volume.
Because projection lines are perpendicular to the view plane, these four boundaries are planes that are also perpendicular to the view plane and that pass through the edges of the clipping window to form an infinite clipping region, as in the figure below.
These two planes are called the near and far clipping planes, or the front and back clipping planes.
The near and far planes allow us to exclude objects that are in front of or behind the part
of the scene that we want to display.
When the near and far planes are specified, we obtain a finite orthogonal view volume that is a rectangular parallelepiped, as shown in the figure below, along with one possible placement for the view plane.
Objects are then displayed with foreshortening effects, and projections of distant objects
are smaller than the projections of objects of the same size that are closer to the view
plane
The projection line intersects the view plane at the coordinate position (xp, yp, zvp), where
zvp is some selected position for the view plane on the zview axis.
We can write equations describing coordinate positions along this perspective-projection
line in parametric form as
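$$x' = x + (x_{prp} - x)u, \qquad y' = y + (y_{prp} - y)u, \qquad z' = z + (z_{prp} - z)u, \qquad 0 \le u \le 1$$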
On the view plane, z’ = zvp and we can solve the z’ equation for parameter u at this
position along the projection line:
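$$u = \frac{z_{vp} - z}{z_{prp} - z}$$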
Substituting this value of u into the equations for x’ and y’, we obtain the general
perspective-transformation equations
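$$x_p = x\left(\frac{z_{prp} - z_{vp}}{z_{prp} - z}\right) + x_{prp}\left(\frac{z_{vp} - z}{z_{prp} - z}\right), \qquad y_p = y\left(\frac{z_{prp} - z_{vp}}{z_{prp} - z}\right) + y_{prp}\left(\frac{z_{vp} - z}{z_{prp} - z}\right)$$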
Case 2:
Sometimes the projection reference point is fixed at the coordinate origin, and
(xprp, yprp, zprp) = (0, 0, 0) :
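$$x_p = x\left(\frac{z_{vp}}{z}\right), \qquad y_p = y\left(\frac{z_{vp}}{z}\right)$$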
Case 3:
If the view plane is the uv plane and there are no restrictions on the placement of the
projection reference point, then we have
zvp = 0:
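$$x_p = x\left(\frac{z_{prp}}{z_{prp} - z}\right) - x_{prp}\left(\frac{z}{z_{prp} - z}\right), \qquad y_p = y\left(\frac{z_{prp}}{z_{prp} - z}\right) - y_{prp}\left(\frac{z}{z_{prp} - z}\right)$$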
Case 4:
With the uv plane as the view plane and the projection reference point on the zview axis,
the perspective equations are
xprp = yprp = zvp = 0:
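$$x_p = x\left(\frac{z_{prp}}{z_{prp} - z}\right) = \frac{x}{1 - z/z_{prp}}, \qquad y_p = y\left(\frac{z_{prp}}{z_{prp} - z}\right) = \frac{y}{1 - z/z_{prp}}$$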
The view plane is usually placed between the projection reference point and the scene, but, in general, the view plane could be placed anywhere except at the projection reference point.
If the projection reference point is between the view plane and the scene, objects are
inverted on the view plane (refer below figure)
Perspective effects also depend on the distance between the projection reference point
and the view plane, as illustrated in Figure below.
If the projection reference point is close to the view plane, perspective effects are emphasized; that is, closer objects will appear much larger than more distant objects of the same size.
Similarly, as the projection reference point moves farther from the view plane, the
difference in the size of near and far objects decreases
The displayed view of a scene includes only those objects within the pyramid, just as we
cannot see objects beyond our peripheral vision, which are outside the cone of vision.
By adding near and far clipping planes that are perpendicular to the zview axis (and parallel to the view plane), we chop off parts of the infinite, perspective-projection view volume to form a truncated pyramid, or frustum, view volume.
But with a perspective projection, we could also use the near clipping plane to take out
large objects close to the view plane that could project into unrecognizable shapes within
the clipping window.
Similarly, the far clipping plane could be used to cut out objects far from the projection
reference point that might project to small blots on the view plane.
The perspective transformation is performed in two steps. First, coordinate positions are converted to homogeneous form, Ph = Mpers · P, where
Ph is the column-matrix representation of the homogeneous point (xh, yh, zh, h) and
P is the column-matrix representation of the coordinate position (x, y, z, 1).
Second, after other processes have been applied, such as the normalization transformation
and clipping routines, homogeneous coordinates are divided by parameter h to obtain the
true transformation-coordinate positions.
The following matrix gives one possible way to formulate a perspective-projection
matrix.
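$$\mathbf{M}_{pers} = \begin{bmatrix} z_{prp} - z_{vp} & 0 & -x_{prp} & x_{prp}\, z_{vp} \\ 0 & z_{prp} - z_{vp} & -y_{prp} & y_{prp}\, z_{vp} \\ 0 & 0 & s_z & t_z \\ 0 & 0 & -1 & z_{prp} \end{bmatrix}$$

The fourth row produces the homogeneous parameter h = zprp − z, so dividing xh and yh by h reproduces the general perspective-transformation equations given earlier.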
Parameters sz and tz are the scaling and translation factors for normalizing the projected
values of z-coordinates.
Specific values for sz and tz depend on the normalization range we select.
Because the frustum centerline intersects the view plane at the coordinate location (xprp,
yprp, zvp), we can express the corner positions for the clipping window in terms of the
window dimensions:
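$$xw_{min} = x_{prp} - \frac{width}{2}, \qquad xw_{max} = x_{prp} + \frac{width}{2}$$
$$yw_{min} = y_{prp} - \frac{height}{2}, \qquad yw_{max} = y_{prp} + \frac{height}{2}$$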
For a given projection reference point and view-plane position, the field-of-view angle determines the height of the clipping window. From the right triangles in the diagram of the figure below, we see that
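$$\tan\left(\frac{\theta}{2}\right) = \frac{height/2}{z_{prp} - z_{vp}}$$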
Therefore, the diagonal elements with the value zprp −zvp could be replaced by either of
the following two expressions
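Using the relation above (and noting that the window height is ywmax − ywmin), these are

$$z_{prp} - z_{vp} = \frac{height}{2}\cot\left(\frac{\theta}{2}\right) \qquad \text{or} \qquad z_{prp} - z_{vp} = \frac{yw_{max} - yw_{min}}{2}\cot\left(\frac{\theta}{2}\right)$$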
In this case, we can first transform the view volume to a symmetric frustum and then to a
normalized view volume.
An oblique perspective-projection view volume can be converted to a symmetric frustum by applying a z-axis shearing-transformation matrix.
This transformation shifts all positions on any plane that is perpendicular to the z axis by an amount that is proportional to the distance of the plane from a specified z-axis reference position.
The computations for the shearing transformation, as well as for the perspective and
normalization transformations, are greatly reduced if we take the projection reference
point to be the viewing-coordinate origin.
Taking the projection reference point as (xprp, yprp, zprp) = (0, 0, 0), we obtain the elements
of the required shearing matrix as
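A consistent reconstruction, obtained by requiring the shear to map the frustum centerline (which passes through the origin and the clipping-window center on the view plane at z = zvp) onto the zview axis, is

$$sh_{zx} = -\frac{xw_{min} + xw_{max}}{2\, z_{vp}}, \qquad sh_{zy} = -\frac{yw_{min} + yw_{max}}{2\, z_{vp}}$$

so that the shearing matrix is

$$\mathbf{M}_{z\,shear} = \begin{bmatrix} 1 & 0 & sh_{zx} & 0 \\ 0 & 1 & sh_{zy} & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$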
Similarly, with the projection reference point at the viewing-coordinate origin and with
the near clipping plane as the view plane, the perspective-projection matrix is simplified
to
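Substituting xprp = yprp = zprp = 0 and zvp = znear (the z-coordinate of the near plane) into the perspective-projection matrix given earlier yields

$$\mathbf{M}_{pers} = \begin{bmatrix} -z_{near} & 0 & 0 & 0 \\ 0 & -z_{near} & 0 & 0 \\ 0 & 0 & s_z & t_z \\ 0 & 0 & -1 & 0 \end{bmatrix}$$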
Concatenating the simplified perspective-projection matrix with the shear matrix, we have
Because the centerline of the rectangular parallelepiped view volume is now the zview
axis, no translation is needed in the x and y normalization transformations: We require
only the x and y scaling parameters relative to the coordinate origin.
The scaling matrix for accomplishing the xy normalization is
And the elements of the normalized transformation matrix for a general perspective-
projection are
In normalized coordinates, the znorm = −1 face of the symmetric cube corresponds to the
clipping-window area. And this face of the normalized cube is mapped to the rectangular
viewport, which is now referenced at zscreen = 0.
Thus, the lower-left corner of the viewport screen area is at position (xvmin, yvmin, 0) and
the upper-right corner is at position (xvmax, yvmax, 0).
To set up the viewing transformation, we first designate the modelview mode with the statement
glMatrixMode (GL_MODELVIEW);
A viewing matrix is then formed and concatenated with the current modelview matrix.
Viewing parameters are specified with the GLU function
gluLookAt (x0, y0, z0, xref, yref, zref, Vx, Vy, Vz);
This function designates the origin of the viewing reference frame as the world-
coordinate position P0 = (x0, y0, z0), the reference position as Pref =(xref, yref, zref), and the
view-up vector as V = (Vx, Vy, Vz).
If we do not invoke the gluLookAt function, the default OpenGL viewing parameters are
P0 = (0, 0, 0)
Pref = (0, 0, −1)
V = (0, 1, 0)
#include <GL/glut.h>
GLint winWidth = 600, winHeight = 600; // Initial display-window size.
GLfloat x0 = 100.0, y0 = 50.0, z0 = 50.0; // Viewing-coordinate origin.
GLfloat xref = 50.0, yref = 50.0, zref = 0.0; // Look-at point.
GLfloat Vx = 0.0, Vy = 1.0, Vz = 0.0; // View-up vector.
/* Set coordinate limits for the clipping window: */
GLfloat xwMin = -40.0, ywMin = -60.0, xwMax = 40.0, ywMax = 60.0;
/* Set positions for near and far clipping planes: */
GLfloat dnear = 25.0, dfar = 125.0;
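A minimal sketch of how these parameters might be used in an initialization routine (following the usual GLUT program structure; the init name and clear color are illustrative):

void init (void)
{
    glClearColor (1.0, 1.0, 1.0, 0.0);  // White display window.

    /* Establish the viewing transformation. */
    glMatrixMode (GL_MODELVIEW);
    gluLookAt (x0, y0, z0, xref, yref, zref, Vx, Vy, Vz);

    /* Define an orthogonal projection using the clipping-window
       limits and the near and far clipping-plane distances. */
    glMatrixMode (GL_PROJECTION);
    glOrtho (xwMin, xwMax, ywMin, ywMax, dnear, dfar);
}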
We can simplify the back-face test by considering the direction of the normal vector N for a polygon surface. If Vview is a vector in the viewing direction from our camera position, as shown in the figure below, then a polygon is a back face if
Vview · N > 0
In a right-handed viewing system with the viewing direction along the negative zv axis
(Figure below), a polygon is a back face if the z component, C, of its normal vector N
satisfies C < 0.
Also, we cannot see any face whose normal has z component C = 0, because our viewing
direction is grazing that polygon. Thus, in general, we can label any polygon as a back
face if its normal vector has a z component value that satisfies the inequality
C ≤ 0
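As a sketch of these two tests in C (the function names are illustrative; the normal components are assumed to be expressed in viewing coordinates):

/* General test: a polygon is a back face if the viewing vector
   and the surface normal point in similar directions. */
int isBackFaceDot (const float Vview[3], const float N[3])
{
    float dot = Vview[0]*N[0] + Vview[1]*N[1] + Vview[2]*N[2];
    return dot > 0.0f;
}

/* Right-handed viewing system with the view direction along -zview:
   only the z component C of the normal needs to be examined. */
int isBackFaceC (float C)
{
    return C <= 0.0f;
}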
Similar methods can be used in packages that employ a left-handed viewing system. In
these packages, plane parameters A, B, C, and D can be calculated from polygon vertex
coordinates specified in a clockwise direction.
Inequality 1 then remains a valid test for points behind the polygon.
By examining parameter C for the different plane surfaces describing an object, we can
immediately identify all the back faces.
For other objects, such as the concave polyhedron in Figure below, more tests must be
carried out to determine whether there are additional faces that are totally or partially
obscured by other faces
In general, back-face removal can be expected to eliminate about half of the polygon
surfaces in a scene from further visibility tests.
The figure above shows three surfaces at varying distances along the orthographic projection line from position (x, y) on a view plane.
Depth-Buffer Algorithm
1. Initialize the depth buffer and frame buffer so that for all buffer positions (x, y),
depthBuff (x, y) = 1.0, frameBuff (x, y) = backgndColor
2. Process each polygon in a scene, one at a time, as follows:
• For each projected (x, y) pixel position of a polygon, calculate the depth z (if not already
known).
• If z < depthBuff (x, y), compute the surface color at that position and set
depthBuff (x, y) = z, frameBuff (x, y) = surfColor (x, y)
After all surfaces have been processed, the depth buffer contains depth values for the visible
surfaces and the frame buffer contains the corresponding color values for those surfaces.
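A minimal sketch of these two steps in C (the buffer dimensions, the Color type, and the function names are illustrative assumptions):

#define WIDTH  600
#define HEIGHT 600

typedef struct { float r, g, b; } Color;

float depthBuff[HEIGHT][WIDTH];   /* depths normalized to [0.0, 1.0] */
Color frameBuff[HEIGHT][WIDTH];

/* Step 1: initialize both buffers. */
void clearBuffers (Color backgndColor)
{
    int x, y;
    for (y = 0; y < HEIGHT; y++)
        for (x = 0; x < WIDTH; x++) {
            depthBuff[y][x] = 1.0f;          /* maximum depth */
            frameBuff[y][x] = backgndColor;
        }
}

/* Step 2, per-pixel test: keep the surface closest to the view plane. */
void testPixel (int x, int y, float z, Color surfColor)
{
    if (z < depthBuff[y][x]) {
        depthBuff[y][x] = z;
        frameBuff[y][x] = surfColor;
    }
}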
Given the depth values for the vertex positions of any polygon in a scene, we can
calculate the depth at any other point on the plane containing the polygon.
At surface position (x, y), the depth is calculated from the plane equation as
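$$z = \frac{-Ax - By - D}{C}$$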
If the depth of position (x, y) has been determined to be z, then the depth z’ of the next
position (x + 1, y) along the scan line is obtained as
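$$z' = \frac{-A(x + 1) - By - D}{C} = z - \frac{A}{C}$$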
The ratio −A/C is constant for each surface, so succeeding depth values across a scan line
are obtained from preceding values with a single addition.
We can implement the depth-buffer algorithm by starting at a top vertex of the polygon.
Then, we could recursively calculate the x-coordinate values down a left edge of the
polygon.
The x value for the beginning position on each scan line can be calculated from the
beginning (edge) x value of the previous scan line as
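$$x' = x - \frac{1}{m}$$

where m is the slope of the edge.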
If we are processing down a vertical edge, the slope is infinite and the recursive
calculations reduce to
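$$x' = x$$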
One slight complication with this approach is that while pixel positions are at integer (x,
y) coordinates, the actual point of intersection of a scan line with the edge of a polygon
may not be.
As a result, it may be necessary to adjust the intersection point by rounding its fractional
part up or down, as is done in scan-line polygon fill algorithms.
An alternative approach is to use a midpoint method or Bresenham-type algorithm for
determining the starting x values along edges for each scan line.
The method can be applied to curved surfaces by determining depth and color values at
each surface projection point.
In addition, the basic depth-buffer algorithm often performs needless calculations.
Objects are processed in an arbitrary order, so that a color can be computed for a surface
point that is later replaced by a closer surface.
We can also apply depth-buffer visibility testing using some other initial value for the maximum depth, and this initial value is chosen with the OpenGL function:
glClearDepth (maxDepth);
Parameter maxDepth can be set to any value between 0.0 and 1.0.
Projection coordinates in OpenGL are normalized to the range from −1.0
to 1.0, and the depth values between the near and far clipping planes are
further normalized to the range from 0.0 to 1.0.
As an option, we can adjust these normalization values with
glDepthRange (nearNormDepth, farNormDepth);
By default, nearNormDepth = 0.0 and farNormDepth = 1.0.
But with the glDepthRange function, we can set these two parameters to
any values within the range from 0.0 to 1.0, including nearNormDepth >
farNormDepth
Another option available in OpenGL is the test condition that is to be used for the depth-buffer routines. We specify a test condition with the following function:
glDepthFunc (testCondition);
o Parameter testCondition can be assigned any one of the following eight symbolic
constants: GL_LESS, GL_GREATER, GL_EQUAL, GL_NOTEQUAL,
GL_LEQUAL, GL_GEQUAL, GL_NEVER (no points are processed), and
GL_ALWAYS.
o The default value for parameter testCondition is GL_LESS.
We can also set the status of the depth buffer so that it is in a read-only state or in a read-
write state. This is accomplished with
glDepthMask (writeStatus);
o When writeStatus = GL_TRUE (the default value), we can both read from and
write to the depth buffer.
o With writeStatus = GL_FALSE, the write mode for the depth buffer is disabled
and we can retrieve values only for comparison in depth testing.
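Putting these depth-buffer settings together, a typical configuration might look like the following (the specific argument values are illustrative; all are the OpenGL defaults):

glEnable (GL_DEPTH_TEST);   // Activate the depth-buffer routines.
glClearDepth (1.0);         // Initialize the depth buffer to maximum depth.
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDepthRange (0.0, 1.0);    // Default normalized depth range.
glDepthFunc (GL_LESS);      // Default test: accept surfaces closer to the viewer.
glDepthMask (GL_TRUE);      // Depth buffer in read-write state.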