
CHAPTER 3

Graphics Output Primitives

A scene from the wolfman video. The animated figure of this primitive lycanthrope is modeled with 61 bones and eight layers of fur. Each frame of the computer animation contains 100,000 surface polygons. (Courtesy of the NVIDIA Corporation.)


3-1 Coordinate Reference Frames
3-2 Specifying a Two-Dimensional World-Coordinate Reference Frame in OpenGL
3-3 OpenGL Point Functions
3-4 OpenGL Line Functions
3-5 Line-Drawing Algorithms
3-6 Parallel Line Algorithms
3-7 Setting Frame-Buffer Values
3-8 OpenGL Curve Functions
3-9 Circle-Generating Algorithms
3-10 Ellipse-Generating Algorithms
3-11 Other Curves
3-12 Parallel Curve Algorithms
3-13 Pixel Addressing and Object Geometry
3-14 Fill-Area Primitives
3-15 Polygon Fill Areas
3-16 OpenGL Polygon Fill-Area Functions
3-17 OpenGL Vertex Arrays
3-18 Pixel-Array Primitives
3-19 OpenGL Pixel-Array Functions
3-20 Character Primitives
3-21 OpenGL Character Functions
3-22 Picture Partitioning
3-23 OpenGL Display Lists
3-24 OpenGL Display-Window Reshape Function
3-25 Summary

A general software package for graphics applications, sometimes referred to as a computer-graphics application programming interface (CG API), provides a library of functions that we can use within a programming language such as C++ to create pictures. As we noted in Section 2-8, the set of library functions can be subdivided into several categories.

One of the first things we need to do when creating a picture is to describe the component parts of the scene to be displayed. Picture components could be trees and terrain, furniture and walls, storefronts and street scenes, automobiles and billboards, atoms and molecules, or stars and galaxies. For each type of scene, we need to describe the structure of the individual objects and their coordinate locations within the scene. Those functions in a graphics package that we use to describe the various picture components are called the graphics output primitives, or simply primitives. The output primitives describing the geometry of objects are typically referred to as geometric primitives. Point positions and straight-line segments are the simplest geometric primitives. Additional geometric primitives that can be available in a graphics package include circles and other conic sections, quadric surfaces, spline curves and surfaces, and polygon color areas. And most graphics systems provide some functions for displaying character strings. After the geometry of a picture has been specified within a selected coordinate reference frame, the output primitives are projected to a two-dimensional plane, corresponding to the display area of an output device, and scan converted into integer pixel positions within the frame buffer.

In this chapter, we introduce the output primitives available in OpenGL, and we also discuss the device-level algorithms for implementing the primitives. Exploring the implementation algorithms for a graphics library will give us valuable insight into the capabilities of these packages. It will also provide us with an understanding of how the functions work, perhaps how they could be improved, and how we might implement graphics routines ourselves for some special application. Research in computer graphics is continually discovering new and improved implementation techniques to provide us with methods for special applications, such as Internet graphics, and for developing faster and more realistic graphics displays in general.

3-1

COORDINATE REFERENCE FRAMES

To describe a picture, we first decide upon a convenient Cartesian coordinate system, called the world-coordinate reference frame, which could be either two-dimensional or three-dimensional. We then describe the objects in our picture by giving their geometric specifications in terms of positions in world coordinates. For instance, we define a straight-line segment with two endpoint positions, and a polygon is specified with a set of positions for its vertices. These coordinate positions are stored in the scene description along with other information about the objects, such as their color and their coordinate extents, which are the minimum and maximum x, y, and z values for each object. A set of coordinate extents is also described as a bounding box for an object. For a two-dimensional figure, the coordinate extents are sometimes called an object's bounding rectangle.

Objects are then displayed by passing the scene information to the viewing routines, which identify visible surfaces and ultimately map the objects to positions on the video monitor. The scan-conversion process stores information about the scene, such as color values, at the appropriate locations in the frame buffer, and the objects in the scene are displayed on the output device.

Screen Coordinates
Locations on a video monitor are referenced in integer screen coordinates, which correspond to the pixel positions in the frame buffer. Pixel coordinate values give the scan line number (the y value) and the column number (the x value along a scan line). Hardware processes, such as screen refreshing, typically address pixel positions with respect to the top-left corner of the screen. Scan lines are then referenced from 0, at the top of the screen, to some integer value, ymax, at the bottom of the screen, and pixel positions along each scan line are numbered from 0 to xmax, left to right. However, with software commands, we can set up any convenient reference frame for screen positions. For example, we could specify an integer range for screen positions with the coordinate origin at the lower-left of a screen area (Fig. 3-1), or we could use noninteger Cartesian values for a picture description. The coordinate values we use to describe the geometry of a scene are then converted by the viewing routines to integer pixel positions within the frame buffer.

FIGURE 3-1  Pixel positions referenced with respect to the lower-left corner of a screen area.

Scan-line algorithms for the graphics primitives use the defining coordinate descriptions to determine the locations of pixels that are to be displayed. For example, given the endpoint coordinates for a line segment, a display algorithm must calculate the positions for those pixels that lie along the line path between the endpoints. Since a pixel position occupies a finite area of the screen, the finite size of a pixel must be taken into account by the implementation algorithms. For the present, we assume that each integer screen position references the center of a pixel area. (In Section 3-13, we consider alternative pixel-addressing schemes.)

Once pixel positions have been identified for an object, the appropriate color values must be stored in the frame buffer. For this purpose, we will assume that we have available a low-level procedure of the form
setPixel (x, y);

This procedure stores the current color setting into the frame buffer at integer position (x , y), relative to the selected position of the screen-coordinate origin. We sometimes also will want to be able to retrieve the current frame-buffer setting for a pixel location. So we will assume that we have the following low-level function for obtaining a frame-buffer color value.
getPixel (x, y, color);

In this function, parameter color receives an integer value corresponding to the combined RGB bit codes stored for the specified pixel at position (x, y). Although we need only specify color values at (x, y) positions for a two-dimensional picture, additional screen-coordinate information is needed for three-dimensional scenes. In this case, screen coordinates are stored as three-dimensional values, where the third dimension references the depth of object positions relative to a viewing position. For a two-dimensional scene, all depth values are 0.

Absolute and Relative Coordinate Specifications

So far, the coordinate references that we have discussed are stated as absolute coordinate values. This means that the values specified are the actual positions within the coordinate system in use. However, some graphics packages also allow positions to be specified using relative coordinates. This method is useful for various graphics applications, such as producing drawings with pen plotters, artists' drawing and painting systems, and graphics packages for publishing and printing applications. Taking this approach, we can specify a coordinate position as an offset from the last position that was referenced (called the current position). For example, if location (3, 8) is the last position that has been referenced in an application program, a relative coordinate specification of (2, -1) corresponds to an absolute position of (5, 7). An additional function is then used to set a current position before any coordinates for primitive functions are specified. To describe an object, such as a series of connected line segments, we then need to give only a sequence of relative coordinates (offsets), once a starting position has been established. Options can be provided in a graphics system to allow the specification of locations using either relative or absolute coordinates. In the following discussions, we will assume that all coordinates are specified as absolute references unless explicitly stated otherwise.
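As a concrete illustration of the relative-coordinate idea, the following small C++ sketch (ours, not part of any graphics library) converts a list of offsets into absolute positions by accumulating them onto a current position:

#include <cstddef>
#include <utility>
#include <vector>

typedef std::pair<float, float> Pt2;

/* Convert relative offsets into absolute positions, starting from the
   given current position.  Each offset is added to the running position. */
std::vector<Pt2> toAbsolute (Pt2 current, const std::vector<Pt2>& offsets)
{
   std::vector<Pt2> absolute;
   for (std::size_t k = 0; k < offsets.size ( ); k++) {
      current.first  += offsets [k].first;    // accumulate x offset
      current.second += offsets [k].second;   // accumulate y offset
      absolute.push_back (current);
   }
   return absolute;
}

Starting from the current position (3, 8), for instance, the single offset (2, -1) produces the absolute position (5, 7), matching the example above.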

3-2

SPECIFYING A TWO-DIMENSIONAL WORLD-COORDINATE REFERENCE FRAME IN OpenGL

In our first example program (Section 2-9), we introduced the gluOrtho2D command, which is a function we can use to set up any two-dimensional Cartesian reference frame. The arguments for this function are the four values defining the x and y coordinate limits for the picture we want to display. Since the gluOrtho2D function specifies an orthogonal projection, we need also to be sure that the coordinate values are placed in the OpenGL projection matrix. In addition, we could assign the identity matrix as the projection matrix before defining the world-coordinate range. This would ensure that the coordinate values were not accumulated with any values we may have previously set for the projection matrix. Thus, for our initial two-dimensional examples, we can define the coordinate frame for the screen display window with the following statements.
glMatrixMode (GL_PROJECTION);
glLoadIdentity ( );
gluOrtho2D (xmin, xmax, ymin, ymax);

The display window will then be referenced by coordinates (xmin, ymin) at the lower-left corner and by coordinates (xmax, ymax) at the upper-right corner, as shown in Fig. 3-2. We can then designate one or more graphics primitives for display using the coordinate reference specified in the gluOrtho2D statement. If the coordinate extents of a primitive are within the coordinate range of the display window, all of the primitive will be displayed. Otherwise, only those parts of the primitive within the display-window coordinate limits will be shown. Also, when we set up the geometry describing a picture, all positions for the OpenGL primitives must be given in absolute coordinates, with respect to the reference frame defined in the gluOrtho2D function.

FIGURE 3-2  World-coordinate limits for a display window, as specified in the gluOrtho2D function.

3-3

OpenGL POINT FUNCTIONS

To specify the geometry of a point, we simply give a coordinate position in the world reference frame. Then this coordinate position, along with other geometric descriptions we may have in our scene, is passed to the viewing routines. Unless we specify other attribute values, OpenGL primitives are displayed with a default size and color. The default color for primitives is white and the default point size is equal to the size of one screen pixel.


We use the following OpenGL function to state the coordinate values for a single position
glVertex* ( );

where the asterisk (*) indicates that suffix codes are required for this function. These suffix codes are used to identify the spatial dimension, the numerical data type to be used for the coordinate values, and a possible vector form for the coordinate specification. A glVertex function must be placed between a glBegin function and a glEnd function. The argument of the glBegin function is used to identify the kind of output primitive that is to be displayed, and glEnd takes no arguments. For point plotting, the argument of the glBegin function is the symbolic constant GL_POINTS. Thus, the form for an OpenGL specification of a point position is
glBegin (GL_POINTS);
   glVertex* ( );
glEnd ( );

Although the term vertex strictly refers to a corner point of a polygon, the point of intersection of the sides of an angle, a point of intersection of an ellipse with its major axis, or other similar coordinate positions on geometric structures, the glVertex function is used in OpenGL to specify coordinates for any point position. In this way, a single function is used for point, line, and polygon specifications; and, most often, polygon patches are used to describe the objects in a scene.

Coordinate positions in OpenGL can be given in two, three, or four dimensions. We use a suffix value of 2, 3, or 4 on the glVertex function to indicate the dimensionality of a coordinate position. A four-dimensional specification indicates a homogeneous-coordinate representation, where the homogeneous parameter h (the fourth coordinate) is a scaling factor for the Cartesian-coordinate values. Homogeneous-coordinate representations are useful for expressing transformation operations in matrix form, and they are discussed in detail in Chapter 5. Since OpenGL treats two dimensions as a special case of three dimensions, any (x, y) coordinate specification is equivalent to (x, y, 0) with h = 1.

We need to state also which data type is to be used for the numerical-value specifications of the coordinates. This is accomplished with a second suffix code on the glVertex function. Suffix codes for specifying a numerical data type are i (integer), s (short), f (float), and d (double). Finally, the coordinate values can be listed explicitly in the glVertex function, or a single argument can be used that references a coordinate position as an array. If we use an array specification for a coordinate position, we need to append a third suffix code: v (for vector).

In the following example, three equally spaced points are plotted along a two-dimensional straight-line path with a slope of 2 (Fig. 3-3). Coordinates are given as integer pairs.
glBegin (GL_POINTS);
   glVertex2i (50, 100);
   glVertex2i (75, 150);
   glVertex2i (100, 200);
glEnd ( );


FIGURE 3-3  Display of three point positions generated with glBegin (GL_POINTS).

Alternatively, we could specify the coordinate values for the preceding points in arrays such as
int point1 [ ] = {50, 100};
int point2 [ ] = {75, 150};
int point3 [ ] = {100, 200};

and call the OpenGL functions for plotting the three points as
glBegin (GL_POINTS);
   glVertex2iv (point1);
   glVertex2iv (point2);
   glVertex2iv (point3);
glEnd ( );

And here is an example of specifying two point positions in a three-dimensional world reference frame. In this case, we give the coordinates as explicit floating-point values.
glBegin (GL_POINTS);
   glVertex3f (-78.05, 909.72, 14.60);
   glVertex3f (261.91, -5200.67, 188.33);
glEnd ( );

We could also define a C++ class or structure (struct) for specifying point positions in various dimensions. For example,
class wcPt2D {
   public:
      GLfloat x, y;
};

Using this class definition, we could specify a two-dimensional, world-coordinate point position with the statements
wcPt2D pointPos;

pointPos.x = 120.75;
pointPos.y = 45.30;

glBegin (GL_POINTS);
   glVertex2f (pointPos.x, pointPos.y);
glEnd ( );


And we can use the OpenGL point-plotting functions within a C++ procedure to implement the setPixel command.
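Since setPixel can be built on the OpenGL point-plotting functions, a minimal sketch (our illustration, assuming the desired drawing color and an integer-aligned projection have already been set, and that the OpenGL headers from the earlier examples are included) might look like this:

void setPixel (GLint xCoord, GLint yCoord)
{
   /* Plot a single point at the given integer screen position using the
      current OpenGL color and point size. */
   glBegin (GL_POINTS);
      glVertex2i (xCoord, yCoord);
   glEnd ( );
}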

3-4

OpenGL LINE FUNCTIONS

Graphics packages typically provide a function for specifying one or more straight-line segments, where each line segment is defined by two endpoint coordinate positions. In OpenGL, we select a single endpoint coordinate position using the glVertex function, just as we did for a point position. And we enclose a list of glVertex functions between the glBegin/glEnd pair. But now we use a symbolic constant as the argument for the glBegin function that interprets a list of positions as the endpoint coordinates for line segments. There are three symbolic constants in OpenGL that we can use to specify how a list of endpoint positions should be connected to form a set of straight-line segments. By default, each symbolic constant displays solid, white lines.

A set of straight-line segments between each successive pair of endpoints in a list is generated using the primitive line constant GL_LINES. In general, this will result in a set of unconnected lines unless some coordinate positions are repeated. Nothing is displayed if only one endpoint is specified, and the last endpoint is not processed if the number of endpoints listed is odd. For example, if we have five coordinate positions, labeled p1 through p5, and each is represented as a two-dimensional array, then the following code could generate the display shown in Fig. 3-4(a).
glBegin (GL_LINES);
   glVertex2iv (p1);
   glVertex2iv (p2);
   glVertex2iv (p3);
   glVertex2iv (p4);
   glVertex2iv (p5);
glEnd ( );

Thus, we obtain one line segment between the first and second coordinate positions, and another line segment between the third and fourth positions. In this case, the number of specified endpoints is odd, so the last coordinate position is ignored. With the OpenGL primitive constant GL_LINE_STRIP, we obtain a polyline.
FIGURE 3-4  Line segments that can be displayed in OpenGL using a list of five endpoint coordinates. (a) An unconnected set of lines generated with the primitive line constant GL_LINES. (b) A polyline generated with GL_LINE_STRIP. (c) A closed polyline generated with GL_LINE_LOOP.


In this case, the display is a sequence of connected line segments between the first endpoint in the list and the last endpoint. The first line segment in the polyline is displayed between the first endpoint and the second endpoint; the second line segment is between the second and third endpoints; and so forth, up to the last line endpoint. Nothing is displayed if we do not list at least two coordinate positions. Using the same five coordinate positions as in the previous example, we obtain the display in Fig. 3-4(b) with the code
glBegin (GL_LINE_STRIP);
   glVertex2iv (p1);
   glVertex2iv (p2);
   glVertex2iv (p3);
   glVertex2iv (p4);
   glVertex2iv (p5);
glEnd ( );

The third OpenGL line primitive is GL_LINE_LOOP, which produces a closed polyline. An additional line is added to the line sequence from the previous example, so that the last coordinate endpoint in the sequence is connected to the first coordinate endpoint of the polyline. Figure 3-4(c) shows the display of our endpoint list when we select this line option.
glBegin (GL_LINE_LOOP);
   glVertex2iv (p1);
   glVertex2iv (p2);
   glVertex2iv (p3);
   glVertex2iv (p4);
   glVertex2iv (p5);
glEnd ( );

As noted earlier, picture components are described in a world-coordinate reference frame that is eventually mapped to the coordinate reference for the output device. Then the geometric information about the picture is scan converted to pixel positions. In the next section, we take a look at the scan-conversion algorithms for implementing the OpenGL line functions.

3-5

LINE-DRAWING ALGORITHMS

A straight-line segment in a scene is defined by the coordinate positions for the endpoints of the segment. To display the line on a raster monitor, the graphics system must first project the endpoints to integer screen coordinates and determine the nearest pixel positions along the line path between the two endpoints. Then the line color is loaded into the frame buffer at the corresponding pixel coordinates. Reading from the frame buffer, the video controller plots the screen pixels. This process digitizes the line into a set of discrete integer positions that, in general, only approximates the actual line path. A computed line position of (10.48, 20.51), for example, is converted to pixel position (10, 21). This rounding of coordinate values to integers causes all but horizontal and vertical lines to be displayed with a stair-step appearance (the jaggies), as represented in Fig. 3-5. The characteristic stair-step shape of raster lines is particularly noticeable on systems with low resolution, and we can improve their appearance somewhat by displaying them on high-resolution systems. More effective techniques for smoothing a raster line are based on adjusting pixel intensities along the line path (Section 4-17).

FIGURE 3-5  Stair-step effect (jaggies) produced when a line is generated as a series of pixel positions.

Line Equations
We determine pixel positions along a straight-line path from the geometric properties of the line. The Cartesian slope-intercept equation for a straight line is

$y = m \cdot x + b$    (3-1)

with m as the slope of the line and b as the y intercept. Given that the two endpoints of a line segment are specified at positions $(x_0, y_0)$ and $(x_{\text{end}}, y_{\text{end}})$, as shown in Fig. 3-6, we can determine values for the slope m and y intercept b with the following calculations:

$m = \dfrac{y_{\text{end}} - y_0}{x_{\text{end}} - x_0}$    (3-2)

$b = y_0 - m \cdot x_0$    (3-3)

FIGURE 3-6  Line path between endpoint positions $(x_0, y_0)$ and $(x_{\text{end}}, y_{\text{end}})$.

Algorithms for displaying straight lines are based on the line equation 3-1 and the calculations given in Eqs. 3-2 and 3-3. For any given x interval $\delta x$ along a line, we can compute the corresponding y interval $\delta y$ from Eq. 3-2 as

$\delta y = m \, \delta x$    (3-4)

Similarly, we can obtain the x interval $\delta x$ corresponding to a specified $\delta y$ as

$\delta x = \dfrac{\delta y}{m}$    (3-5)

These equations form the basis for determining deflection voltages in analog displays, such as a vector-scan system, where arbitrarily small changes in deflection voltage are possible. For lines with slope magnitudes $|m| < 1$, $\delta x$ can be set proportional to a small horizontal deflection voltage, and the corresponding vertical deflection is then set proportional to $\delta y$ as calculated from Eq. 3-4. For lines whose slopes have magnitudes $|m| > 1$, $\delta y$ can be set proportional to a small vertical deflection voltage with the corresponding horizontal deflection voltage set proportional to $\delta x$, calculated from Eq. 3-5. For lines with $m = 1$, $\delta x = \delta y$ and the horizontal and vertical deflection voltages are equal. In each case, a smooth line with slope m is generated between the specified endpoints.

On raster systems, lines are plotted with pixels, and step sizes in the horizontal and vertical directions are constrained by pixel separations. That is, we must sample a line at discrete positions and determine the nearest pixel to the line at each sampled position. This scan-conversion process for straight lines is illustrated in Fig. 3-7 with discrete sample positions along the x axis.

FIGURE 3-7  Straight-line segment with five sampling positions along the x axis between $x_0$ and $x_{\text{end}}$.
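As a quick numerical check of Eqs. 3-2 through 3-4, the following small C++ sketch (our illustration, not part of any graphics package) computes the slope, the y intercept, and the y change per unit x step for one pair of endpoints:

#include <cstdio>

int main ( )
{
   float x0 = 20.0, y0 = 10.0, xEnd = 30.0, yEnd = 18.0;

   float m = (yEnd - y0) / (xEnd - x0);   // Eq. 3-2: slope
   float b = y0 - m * x0;                 // Eq. 3-3: y intercept
   float deltaY = m * 1.0;                // Eq. 3-4: y interval for a unit x interval

   printf ("m = %.2f, b = %.2f, deltaY = %.2f\n", m, b, deltaY);
   return 0;
}

For these endpoints the output is m = 0.80, b = -6.00, deltaY = 0.80; the same endpoints reappear in Example 3-1 below.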


DDA Algorithm
The digital differential analyzer (DDA) is a scan-conversion line algorithm based on calculating either $\delta y$ or $\delta x$, using Eq. 3-4 or Eq. 3-5. A line is sampled at unit intervals in one coordinate and the corresponding integer values nearest the line path are determined for the other coordinate. We consider first a line with positive slope, as shown in Fig. 3-6. If the slope is less than or equal to 1, we sample at unit x intervals ($\delta x = 1$) and compute successive y values as

$y_{k+1} = y_k + m$    (3-6)

Subscript k takes integer values starting from 0, for the first point, and increases by 1 until the final endpoint is reached. Since m can be any real number between 0.0 and 1.0, each calculated y value must be rounded to the nearest integer corresponding to a screen pixel position in the x column we are processing.

For lines with a positive slope greater than 1.0, we reverse the roles of x and y. That is, we sample at unit y intervals ($\delta y = 1$) and calculate consecutive x values as

$x_{k+1} = x_k + \dfrac{1}{m}$    (3-7)

In this case, each computed x value is rounded to the nearest pixel position along the current y scan line. Equations 3-6 and 3-7 are based on the assumption that lines are to be processed from the left endpoint to the right endpoint (Fig. 3-6). If this processing is reversed, so that the starting endpoint is at the right, then either we have $\delta x = -1$ and

$y_{k+1} = y_k - m$    (3-8)

or (when the slope is greater than 1) we have $\delta y = -1$ with

$x_{k+1} = x_k - \dfrac{1}{m}$    (3-9)

Similar calculations are carried out using Eqs. 3-6 through 3-9 to determine pixel positions along a line with negative slope. Thus, if the absolute value of the slope is less than 1 and the starting endpoint is at the left, we set $\delta x = 1$ and calculate y values with Eq. 3-6. When the starting endpoint is at the right (for the same slope), we set $\delta x = -1$ and obtain y positions using Eq. 3-8. For a negative slope with absolute value greater than 1, we use $\delta y = -1$ and Eq. 3-9, or we use $\delta y = 1$ and Eq. 3-7.

This algorithm is summarized in the following procedure, which accepts as input two integer screen positions for the endpoints of a line segment. Horizontal and vertical differences between the endpoint positions are assigned to parameters dx and dy. The difference with the greater magnitude determines the value of parameter steps. Starting with pixel position (x0, y0), we determine the offset needed at each step to generate the next pixel position along the line path. We loop through this process steps times. If the magnitude of dx is greater than the magnitude of dy and x0 is less than xEnd, the values for the increments in the x and y directions are 1 and m, respectively. If the greater change is in the x direction, but x0 is greater than xEnd, then the decrements -1 and -m are used to generate each new point on the line. Otherwise, we use a unit increment (or decrement) in the y direction and an x increment (or decrement) of 1/m.


#include <stdlib.h>
#include <math.h>

inline int round (const float a) { return int (a + 0.5); }

void lineDDA (int x0, int y0, int xEnd, int yEnd)
{
   int dx = xEnd - x0, dy = yEnd - y0, steps, k;
   float xIncrement, yIncrement, x = x0, y = y0;

   if (fabs (dx) > fabs (dy))
      steps = fabs (dx);
   else
      steps = fabs (dy);
   xIncrement = float (dx) / float (steps);
   yIncrement = float (dy) / float (steps);

   setPixel (round (x), round (y));
   for (k = 0; k < steps; k++) {
      x += xIncrement;
      y += yIncrement;
      setPixel (round (x), round (y));
   }
}

The DDA algorithm is a faster method for calculating pixel positions than one that directly implements Eq. 3-1. It eliminates the multiplication in Eq. 3-1 by making use of raster characteristics, so that appropriate increments are applied in the x or y directions to step from one pixel position to another along the line path. The accumulation of round-off error in successive additions of the floating-point increment, however, can cause the calculated pixel positions to drift away from the true line path for long line segments. Furthermore, the rounding operations and floating-point arithmetic in this procedure are still time consuming. We can improve the performance of the DDA algorithm by separating the increments $m$ and $1/m$ into integer and fractional parts so that all calculations are reduced to integer operations. A method for calculating $1/m$ increments in integer steps is discussed in Section 4-10. And in the next section, we consider a more general scan-line approach that can be applied to both lines and curves.

FIGURE 3-8  A section of a display screen where a straight-line segment is to be plotted, starting from the pixel at column 10 on scan line 11.

Bresenham's Line Algorithm

In this section, we introduce an accurate and efficient raster line-generating algorithm, developed by Bresenham, that uses only incremental integer calculations. In addition, Bresenham's line algorithm can be adapted to display circles and other curves. Figures 3-8 and 3-9 illustrate sections of a display screen where straight-line segments are to be drawn. The vertical axes show scan-line positions, and the horizontal axes identify pixel columns. Sampling at unit x intervals in these examples, we need to decide which of two possible pixel positions is closer to the line path at each sample step. Starting from the left endpoint shown in Fig. 3-8, we need to determine at the next sample position whether to plot the pixel at position (11, 11) or the one at (11, 12). Similarly, Fig. 3-9 shows a negative-slope line path starting from the left endpoint at pixel position (50, 50). In this one, do we select the next pixel position as (51, 50) or as (51, 49)?

FIGURE 3-9  A section of a display screen where a negative-slope line segment is to be plotted, starting from the pixel at column 50 on scan line 50.

These questions are answered with Bresenham's line algorithm by testing the sign of an integer parameter whose value is proportional to the difference between the vertical separations of the two pixel positions from the actual line path. To illustrate Bresenham's approach, we first consider the scan-conversion process for lines with positive slope less than 1.0. Pixel positions along a line path are then determined by sampling at unit x intervals. Starting from the left endpoint $(x_0, y_0)$ of a given line, we step to each successive column (x position) and plot the pixel whose scan-line y value is closest to the line path. Figure 3-10 demonstrates the kth step in this process. Assuming we have determined that the pixel at $(x_k, y_k)$ is to be displayed, we next need to decide which pixel to plot in column $x_{k+1} = x_k + 1$. Our choices are the pixels at positions $(x_k + 1, y_k)$ and $(x_k + 1, y_k + 1)$.

FIGURE 3-10  A section of the screen showing a pixel in column $x_k$ on scan line $y_k$ that is to be plotted along the path of a line segment with slope $0 < m < 1$.

At sampling position $x_k + 1$, we label the vertical pixel separations from the mathematical line path as $d_{\text{lower}}$ and $d_{\text{upper}}$ (Fig. 3-11). The y coordinate on the mathematical line at pixel column position $x_k + 1$ is calculated as

$y = m(x_k + 1) + b$    (3-10)

Then

$d_{\text{lower}} = y - y_k = m(x_k + 1) + b - y_k$    (3-11)

and

$d_{\text{upper}} = (y_k + 1) - y = y_k + 1 - m(x_k + 1) - b$    (3-12)

FIGURE 3-11  Vertical distances between pixel positions and the line y coordinate at sampling position $x_k + 1$.

To determine which of the two pixels is closest to the line path, we can set up an efficient test that is based on the difference between the two pixel separations:

$d_{\text{lower}} - d_{\text{upper}} = 2m(x_k + 1) - 2y_k + 2b - 1$    (3-13)

A decision parameter $p_k$ for the kth step in the line algorithm can be obtained by rearranging Eq. 3-13 so that it involves only integer calculations. We accomplish this by substituting $m = \Delta y / \Delta x$, where $\Delta y$ and $\Delta x$ are the vertical and horizontal separations of the endpoint positions, and defining the decision parameter as

$p_k = \Delta x \,(d_{\text{lower}} - d_{\text{upper}}) = 2\,\Delta y \cdot x_k - 2\,\Delta x \cdot y_k + c$    (3-14)

The sign of $p_k$ is the same as the sign of $d_{\text{lower}} - d_{\text{upper}}$, since $\Delta x > 0$ for our example. Parameter c is constant and has the value $2\,\Delta y + \Delta x\,(2b - 1)$, which is independent of the pixel position and will be eliminated in the recursive calculations for $p_k$. If the pixel at $y_k$ is closer to the line path than the pixel at $y_k + 1$ (that is, $d_{\text{lower}} < d_{\text{upper}}$), then decision parameter $p_k$ is negative. In that case, we plot the lower pixel; otherwise, we plot the upper pixel. Coordinate changes along the line occur in unit steps in either the x or y direction. Therefore, we can obtain the values of successive decision parameters using incremental integer calculations. At step $k + 1$, the decision parameter is evaluated from Eq. 3-14 as

$p_{k+1} = 2\,\Delta y \cdot x_{k+1} - 2\,\Delta x \cdot y_{k+1} + c$


Subtracting Eq. 3-14 from the preceding equation, we have

$p_{k+1} - p_k = 2\,\Delta y\,(x_{k+1} - x_k) - 2\,\Delta x\,(y_{k+1} - y_k)$

But $x_{k+1} = x_k + 1$, so that

$p_{k+1} = p_k + 2\,\Delta y - 2\,\Delta x\,(y_{k+1} - y_k)$    (3-15)

where the term $y_{k+1} - y_k$ is either 0 or 1, depending on the sign of parameter $p_k$. This recursive calculation of decision parameters is performed at each integer x position, starting at the left coordinate endpoint of the line. The first parameter, $p_0$, is evaluated from Eq. 3-14 at the starting pixel position $(x_0, y_0)$ and with m evaluated as $\Delta y / \Delta x$:

$p_0 = 2\,\Delta y - \Delta x$    (3-16)

We summarize Bresenham line drawing for a line with a positive slope less than 1 in the following outline of the algorithm. The constants $2\,\Delta y$ and $2\,\Delta y - 2\,\Delta x$ are calculated once for each line to be scan converted, so the arithmetic involves only integer addition and subtraction of these two constants.

Bresenham's Line-Drawing Algorithm for |m| < 1.0

1. Input the two line endpoints and store the left endpoint in $(x_0, y_0)$.
2. Set the color for frame-buffer position $(x_0, y_0)$; i.e., plot the first point.
3. Calculate the constants $\Delta x$, $\Delta y$, $2\,\Delta y$, and $2\,\Delta y - 2\,\Delta x$, and obtain the starting value for the decision parameter as $p_0 = 2\,\Delta y - \Delta x$.
4. At each $x_k$ along the line, starting at $k = 0$, perform the following test. If $p_k < 0$, the next point to plot is $(x_k + 1, y_k)$ and $p_{k+1} = p_k + 2\,\Delta y$. Otherwise, the next point to plot is $(x_k + 1, y_k + 1)$ and $p_{k+1} = p_k + 2\,\Delta y - 2\,\Delta x$.
5. Perform step 4 a total of $\Delta x - 1$ times.

EXAMPLE 3-1  Bresenham Line Drawing

To illustrate the algorithm, we digitize the line with endpoints (20, 10) and (30, 18). This line has a slope of 0.8, with $\Delta x = 10$ and $\Delta y = 8$. The initial decision parameter has the value $p_0 = 2\,\Delta y - \Delta x = 6$, and the increments for calculating successive decision parameters are $2\,\Delta y = 16$ and $2\,\Delta y - 2\,\Delta x = -4$.


We plot the initial point $(x_0, y_0) = (20, 10)$, and determine successive pixel positions along the line path from the decision parameter as:

k    p_k    (x_{k+1}, y_{k+1})
0     6     (21, 11)
1     2     (22, 12)
2    -2     (23, 12)
3    14     (24, 13)
4    10     (25, 14)
5     6     (26, 15)
6     2     (27, 16)
7    -2     (28, 16)
8    14     (29, 17)
9    10     (30, 18)

FIGURE 3-12  Pixel positions along the line path between endpoints (20, 10) and (30, 18), plotted with Bresenham's line algorithm.
A plot of the pixels generated along this line path is shown in Fig. 3-12. An implementation of Bresenham line drawing for slopes in the range 0 < m < 1.0 is given in the following procedure. Endpoint pixel positions for the line are passed to this procedure, and pixels are plotted from the left endpoint to the right endpoint.
#include <stdlib.h>
#include <math.h>

/* Bresenham line-drawing procedure for |m| < 1.0. */
void lineBres (int x0, int y0, int xEnd, int yEnd)
{
   int dx = fabs (xEnd - x0), dy = fabs (yEnd - y0);
   int p = 2 * dy - dx;
   int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
   int x, y;

   /* Determine which endpoint to use as start position. */
   if (x0 > xEnd) {
      x = xEnd;
      y = yEnd;
      xEnd = x0;
   }
   else {
      x = x0;
      y = y0;
   }
   setPixel (x, y);

   while (x < xEnd) {
      x++;
      if (p < 0)
         p += twoDy;
      else {
         y++;
         p += twoDyMinusDx;
      }
      setPixel (x, y);
   }
}

Bresenham's algorithm is generalized to lines with arbitrary slope by considering the symmetry between the various octants and quadrants of the xy plane. For a line with positive slope greater than 1.0, we interchange the roles of the x and y directions. That is, we step along the y direction in unit steps and calculate successive x values nearest the line path. Also, we could revise the program to plot pixels starting from either endpoint. If the initial position for a line with positive slope is the right endpoint, both x and y decrease as we step from right to left. To ensure that the same pixels are plotted regardless of the starting endpoint, we always choose the upper (or the lower) of the two candidate pixels whenever the two vertical separations from the line path are equal ($d_{\text{lower}} = d_{\text{upper}}$). For negative slopes, the procedures are similar, except that now one coordinate decreases as the other increases. Finally, special cases can be handled separately: horizontal lines ($\Delta y = 0$), vertical lines ($\Delta x = 0$), and diagonal lines ($|\Delta x| = |\Delta y|$) can each be loaded directly into the frame buffer without processing them through the line-plotting algorithm.

Displaying Polylines
Implementation of a polyline procedure is accomplished by invoking a line-drawing routine $n - 1$ times to display the lines connecting the n endpoints. Each successive call passes the coordinate pair needed to plot the next line section, where the first endpoint of each coordinate pair is the last endpoint of the previous section. Once the color values for pixel positions along the first line segment have been set in the frame buffer, we process subsequent line segments starting with the next pixel position following the first endpoint for that segment. In this way, we can avoid setting the color of some endpoints twice. We discuss methods for avoiding overlap of displayed objects in more detail in Section 3-13.
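A minimal polyline sketch built on the lineBres procedure above might look like the following (our illustration; the endpoint-sharing refinement described in the text is omitted for brevity):

void polyline (int nEndpoints, const int x [ ], const int y [ ])
{
   /* Draw the n - 1 line sections connecting n endpoints, where each
      section starts at the last endpoint of the previous one. */
   for (int k = 0; k < nEndpoints - 1; k++)
      lineBres (x [k], y [k], x [k + 1], y [k + 1]);
}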

3-6

PARALLEL LINE ALGORITHMS

The line-generating algorithms we have discussed so far determine pixel positions sequentially. Using parallel processing, we can calculate multiple pixel positions along a line path simultaneously by partitioning the computations among the various processors available. One approach to the partitioning problem is to adapt an existing sequential algorithm to take advantage of multiple processors. Alternatively, we can look for other ways to set up the processing so that pixel positions can be calculated efficiently in parallel. An important consideration in devising a parallel algorithm is to balance the processing load among the available processors.

Given $n_p$ processors, we can set up a parallel Bresenham line algorithm by subdividing the line path into $n_p$ partitions and simultaneously generating line segments in each of the subintervals. For a line with slope $0 < m < 1.0$ and left endpoint coordinate position $(x_0, y_0)$, we partition the line along the positive x direction. The distance between beginning x positions of adjacent partitions can be calculated as

$\Delta x_p = \dfrac{\Delta x + n_p - 1}{n_p}$    (3-17)

where $\Delta x$ is the width of the line, and the value for partition width $\Delta x_p$ is computed using integer division. Numbering the partitions, and the processors, as 0, 1, 2, up to $n_p - 1$, we calculate the starting x coordinate for the kth partition as

$x_k = x_0 + k\,\Delta x_p$    (3-18)

As an example, if we have $n_p = 4$ processors, with $\Delta x = 15$, the width of the partitions is 4 and the starting x values for the partitions are $x_0$, $x_0 + 4$, $x_0 + 8$, and $x_0 + 12$. With this partitioning scheme, the width of the last (rightmost) subinterval will be smaller than the others in some cases. In addition, if the line endpoints are not integers, truncation errors can result in variable-width partitions along the length of the line.

To apply Bresenham's algorithm over the partitions, we need the initial value for the y coordinate and the initial value for the decision parameter in each partition. The change $\Delta y_p$ in the y direction over each partition is calculated from the line slope m and partition width $\Delta x_p$:

$\Delta y_p = m\,\Delta x_p$    (3-19)

At the kth partition, the starting y coordinate is then

$y_k = y_0 + \text{round}(k\,\Delta y_p)$    (3-20)

The initial decision parameter for Bresenham's algorithm at the start of the kth subinterval is obtained from Eq. 3-14:

$p_k = (k\,\Delta x_p)(2\,\Delta y) - \text{round}(k\,\Delta y_p)(2\,\Delta x) + 2\,\Delta y - \Delta x$    (3-21)

Each processor then calculates pixel positions over its assigned subinterval using the preceding starting decision parameter value and the starting coordinates $(x_k, y_k)$. Floating-point calculations can be reduced to integer arithmetic in the computations for starting values $y_k$ and $p_k$ by substituting $m = \Delta y / \Delta x$ and rearranging terms. We can extend the parallel Bresenham algorithm to a line with slope greater than 1.0 by partitioning the line in the y direction and calculating beginning x values for the partitions. For negative slopes, we increment coordinate values in one direction and decrement in the other.

Another way to set up parallel algorithms on raster systems is to assign each processor to a particular group of screen pixels. With a sufficient number of processors, we can assign each processor to one pixel within some screen region.


This approach can be adapted to line display by assigning one processor to each of the pixels within the limits of the coordinate extents of the line and calculating pixel distances from the line path. The number of pixels within the bounding box of a line is $\Delta x \cdot \Delta y$ (Fig. 3-13). Perpendicular distance d from the line in Fig. 3-13 to a pixel with coordinates (x, y) is obtained with the calculation

$d = A\,x + B\,y + C$    (3-22)

where

$A = \dfrac{\Delta y}{\text{linelength}}, \qquad B = \dfrac{-\Delta x}{\text{linelength}}, \qquad C = \dfrac{x_0\,\Delta y - y_0\,\Delta x}{\text{linelength}}$

with

$\text{linelength} = \sqrt{\Delta x^2 + \Delta y^2}$

FIGURE 3-13  Bounding box for a line with endpoint separations $\Delta x$ and $\Delta y$.

Once the constants A, B, and C have been evaluated for the line, each processor must perform two multiplications and two additions to compute the pixel distance d. A pixel is plotted if d is less than a specified line-thickness parameter. Instead of partitioning the screen into single pixels, we can assign to each processor either a scan line or a column of pixels, depending on the line slope. Each processor then calculates the intersection of the line with the horizontal row or vertical column of pixels assigned to that processor. For a line with slope $|m| < 1.0$, each processor simply solves the line equation for y, given an x column value. For a line with slope magnitude greater than 1.0, the line equation is solved for x by each processor, given a scan-line y value. Such direct methods, although slow on sequential machines, can be performed efficiently using multiple processors.
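As a sketch of the per-pixel test (our illustration), the constants can be evaluated once and each pixel then needs only the final multiplications and additions. The constant term below is chosen so that d is zero for points exactly on the line, and the absolute value of d is compared against the thickness, so the particular sign convention used for the constants does not matter:

#include <cmath>

bool pixelNearLine (int x, int y, int x0, int y0, int xEnd, int yEnd,
                    float thickness)
{
   float deltaX = xEnd - x0, deltaY = yEnd - y0;
   float lineLength = std::sqrt (deltaX * deltaX + deltaY * deltaY);

   /* Constants of Eq. 3-22, computed once per line. */
   float A =  deltaY / lineLength;
   float B = -deltaX / lineLength;
   float C = (y0 * deltaX - x0 * deltaY) / lineLength;

   float d = A * x + B * y + C;        // signed perpendicular distance
   return std::fabs (d) < thickness;   // plot the pixel if it is close enough
}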

3-7

SETTING FRAME-BUFFER VALUES

A final stage in the implementation procedures for line segments and other objects is to set the frame-buffer color values. Since scan-conversion algorithms generate pixel positions at successive unit intervals, incremental operations can also be used to access the frame buffer efficiently at each step of the scan-conversion process. As a specific example, suppose the frame-buffer array is addressed in row-major order and that pixel positions are labeled from (0, 0) at the lower-left screen corner to $(x_{\max}, y_{\max})$ at the top-right corner (Fig. 3-14). For a bilevel system (one bit per pixel), the frame-buffer bit address for pixel position (x, y) is calculated as

$\text{addr}(x, y) = \text{addr}(0, 0) + y(x_{\max} + 1) + x$    (3-23)

Moving across a scan line, we can calculate the frame-buffer address for the pixel at (x + 1, y) as the following offset from the address for position (x, y):

$\text{addr}(x + 1, y) = \text{addr}(x, y) + 1$    (3-24)

Stepping diagonally up to the next scan line from (x, y), we get to the frame-buffer address of (x + 1, y + 1) with the calculation

$\text{addr}(x + 1, y + 1) = \text{addr}(x, y) + x_{\max} + 2$    (3-25)

where the constant $x_{\max} + 2$ is precomputed once for all line segments. Similar incremental calculations can be obtained from Eq. 3-23 for unit steps in the negative x and y screen directions. Each of the address calculations involves only a single integer addition. Methods for implementing these procedures depend on the capabilities of a particular system and the design requirements of the software package. With systems that can display a range of intensity values for each pixel, frame-buffer address calculations include pixel width (number of bits), as well as the pixel screen location.

FIGURE 3-14  Pixel screen positions stored linearly in row-major order within the frame buffer.
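For a concrete picture of these address calculations, the following sketch (our illustration, with an ordinary byte array standing in for video memory and an assumed 640 x 480 resolution) applies Eqs. 3-23 through 3-25:

const int xMax = 639, yMax = 479;                 // assumed screen resolution
unsigned char frameBuffer [(xMax + 1) * (yMax + 1)];

inline int addr (int x, int y)
{
   return y * (xMax + 1) + x;                     // Eq. 3-23, with addr(0, 0) = 0
}

void setNeighbors (int x, int y)
{
   /* Assumes x < xMax and y < yMax so both neighbors are on the screen. */
   int a = addr (x, y);
   frameBuffer [a + 1]        = 1;                // Eq. 3-24: pixel at (x + 1, y)
   frameBuffer [a + xMax + 2] = 1;                // Eq. 3-25: pixel at (x + 1, y + 1)
}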

3-8

OpenGL CURVE FUNCTIONS

Routines for generating basic curves, such as circles and ellipses, are not included as primitive functions in the OpenGL core library. But this library does contain functions for displaying Bézier splines, which are polynomials that are defined with a discrete point set. And the OpenGL Utility (GLU) library has routines for three-dimensional quadrics, such as spheres and cylinders, as well as routines for producing rational B-splines, which are a general class of splines that include the simpler Bézier curves. Using rational B-splines, we can display circles, ellipses, and other two-dimensional quadrics. In addition, there are routines in the OpenGL Utility Toolkit (GLUT) that we can use to display some three-dimensional quadrics, such as spheres and cones, and some other shapes. However, all these routines are more involved than the basic primitives we introduce in this chapter, so we defer further discussion of this group of functions until Chapter 8.

Another method we can use to generate a display of a simple curve is to approximate it using a polyline. We just need to locate a set of points along the curve path and connect the points with straight-line segments. The more line sections we include in the polyline, the smoother the appearance of the curve. As an example, Fig. 3-15 illustrates various polyline displays that could be used for a circle segment.

A third alternative is to write our own curve-generation functions based on the algorithms presented in the following sections. We first discuss efficient methods for circle and ellipse generation, then we take a look at procedures for displaying other conic sections, polynomials, and splines.
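A polyline approximation of a circular arc, as mentioned above, can be generated with a handful of OpenGL calls; the sketch below (our illustration) samples the circle at evenly spaced angles and connects the samples with GL_LINE_STRIP, so increasing nSegments smooths the displayed curve:

#include <GL/glut.h>
#include <cmath>

void approxCircle (GLfloat xc, GLfloat yc, GLfloat radius, GLint nSegments)
{
   const GLfloat twoPi = 6.2831853;

   glBegin (GL_LINE_STRIP);
      for (GLint k = 0; k <= nSegments; k++) {
         GLfloat theta = twoPi * k / nSegments;            // sample angle
         glVertex2f (xc + radius * cos (theta),
                     yc + radius * sin (theta));
      }
   glEnd ( );
}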


FIGURE 3-15  A circular arc approximated with (a) three straight-line segments, (b) six line segments, and (c) twelve line segments.

3-9

CIRCLE-GENERATING ALGORITHMS

Since the circle is a frequently used component in pictures and graphs, a procedure for generating either full circles or circular arcs is included in many graphics packages. And sometimes a general function is available in a graphics library for displaying various kinds of curves, including circles and ellipses.
Properties of Circles

A circle (Fig. 3-16) is defined as the set of points that are all at a given distance r from a center position $(x_c, y_c)$. For any circle point (x, y), this distance relationship is expressed by the Pythagorean theorem in Cartesian coordinates as

$(x - x_c)^2 + (y - y_c)^2 = r^2$    (3-26)

FIGURE 3-16  Circle with center coordinates $(x_c, y_c)$ and radius r.

We could use this equation to calculate the position of points on a circle circumference by stepping along the x axis in unit steps from $x_c - r$ to $x_c + r$ and calculating the corresponding y values at each position as

$y = y_c \pm \sqrt{r^2 - (x_c - x)^2}$    (3-27)

But this is not the best method for generating a circle. One problem with this approach is that it involves considerable computation at each step. Moreover, the spacing between plotted pixel positions is not uniform, as demonstrated in Fig. 3-17. We could adjust the spacing by interchanging x and y (stepping through y values and calculating x values) whenever the absolute value of the slope of the circle is greater than 1. But this simply increases the computation and processing required by the algorithm.

FIGURE 3-17  Upper half of a circle plotted with Eq. 3-27 and with $(x_c, y_c) = (0, 0)$.

Another way to eliminate the unequal spacing shown in Fig. 3-17 is to calculate points along the circular boundary using polar coordinates r and $\theta$ (Fig. 3-16). Expressing the circle equation in parametric polar form yields the pair of equations

$x = x_c + r \cos\theta$
$y = y_c + r \sin\theta$    (3-28)

FIGURE 3-18  Symmetry of a circle. Calculation of a circle point (x, y) in one octant yields the circle points shown for the other seven octants.
When a display is generated with these equations using a fixed angular step size, a circle is plotted with equally spaced points along the circumference. To reduce calculations, we can use a large angular separation between points along the circumference and connect the points with straight-line segments to approximate the circular path. For a more continuous boundary on a raster display, we can set the angular step size at $1/r$. This plots pixel positions that are approximately one unit apart. Although polar coordinates provide equal point spacing, the trigonometric calculations are still time consuming.

For any of the previous circle-generating methods, we can reduce computations by considering the symmetry of circles. The shape of the circle is similar in each quadrant. Therefore, if we determine the curve positions in the first quadrant, we can generate the circle section in the second quadrant of the xy plane by noting that the two circle sections are symmetric with respect to the y axis. And circle sections in the third and fourth quadrants can be obtained from sections in the first and second quadrants by considering symmetry about the x axis. We can take this one step further and note that there is also symmetry between octants. Circle sections in adjacent octants within one quadrant are symmetric with respect to the 45° line dividing the two octants. These symmetry conditions are illustrated in Fig. 3-18, where a point at position (x, y) on a one-eighth circle sector is mapped into the seven circle points in the other octants of the xy plane. Taking advantage of the circle symmetry in this way, we can generate all pixel positions around a circle by calculating only the points within the sector from x = 0 to x = y. The slope of the curve in this octant has a magnitude less than or equal to 1.0. At x = 0, the circle slope is 0, and at x = y, the slope is -1.0.

Determining pixel positions along a circle circumference using symmetry and either Eq. 3-26 or Eq. 3-28 still requires a good deal of computation. The Cartesian equation 3-26 involves multiplications and square-root calculations, while the parametric equations contain multiplications and trigonometric calculations. More efficient circle algorithms are based on incremental calculation of decision parameters, as in the Bresenham line algorithm, which involves only simple integer operations. Bresenham's line algorithm for raster displays is adapted to circle generation by setting up decision parameters for finding the closest pixel to the circumference at each sampling step. The circle equation 3-26, however, is nonlinear, so that square-root evaluations would be required to compute pixel distances from a circular path. Bresenham's circle algorithm avoids these square-root calculations by comparing the squares of the pixel separation distances. However, it is possible to perform a direct distance comparison without a squaring operation. The basic idea in this approach is to test the halfway position between two pixels to determine if this midpoint is inside or outside the circle boundary. This method is more easily applied to other conics; and for an integer circle radius, the midpoint approach generates the same pixel positions as the Bresenham circle algorithm. For a straight-line segment, the midpoint method is equivalent to the Bresenham line algorithm. Also, the error involved in locating pixel positions along any conic section using the midpoint test is limited to one-half the pixel separation.
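The eightfold symmetry can be expressed directly in code; the following sketch (our illustration, using the setPixel routine assumed earlier) takes one point (x, y) computed in a single octant of a circle centered at (xc, yc) and plots all eight symmetric positions shown in Fig. 3-18:

void plotSymmetricPoints (int xc, int yc, int x, int y)
{
   setPixel (xc + x, yc + y);   // the calculated octant point
   setPixel (xc - x, yc + y);   // reflection about the y axis
   setPixel (xc + x, yc - y);   // reflection about the x axis
   setPixel (xc - x, yc - y);   // reflection about both axes
   setPixel (xc + y, yc + x);   // reflections about the 45-degree lines
   setPixel (xc - y, yc + x);
   setPixel (xc + y, yc - x);
   setPixel (xc - y, yc - x);
}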


Midpoint Circle Algorithm

As in the raster line algorithm, we sample at unit intervals and determine the closest pixel position to the specified circle path at each step. For a given radius r and screen center position $(x_c, y_c)$, we can first set up our algorithm to calculate pixel positions around a circle path centered at the coordinate origin (0, 0). Then each calculated position (x, y) is moved to its proper screen position by adding $x_c$ to x and $y_c$ to y. Along the circle section from x = 0 to x = y in the first quadrant, the slope of the curve varies from 0 to -1.0. Therefore, we can take unit steps in the positive x direction over this octant and use a decision parameter to determine which of the two possible pixel positions in any column is vertically closer to the circle path. Positions in the other seven octants are then obtained by symmetry.

To apply the midpoint method, we define a circle function as

$f_{\text{circ}}(x, y) = x^2 + y^2 - r^2$    (3-29)

Any point (x, y) on the boundary of the circle with radius r satisfies the equation $f_{\text{circ}}(x, y) = 0$. If the point is in the interior of the circle, the circle function is negative. And if the point is outside the circle, the circle function is positive. To summarize, the relative position of any point (x, y) can be determined by checking the sign of the circle function:

$f_{\text{circ}}(x, y) \begin{cases} < 0, & \text{if } (x, y) \text{ is inside the circle boundary} \\ = 0, & \text{if } (x, y) \text{ is on the circle boundary} \\ > 0, & \text{if } (x, y) \text{ is outside the circle boundary} \end{cases}$    (3-30)

The tests in 3-30 are performed for the midpositions between pixels near the circle path at each sampling step. Thus, the circle function is the decision parameter in the midpoint algorithm, and we can set up incremental calculations for this function as we did in the line algorithm.

Figure 3-19 shows the midpoint between the two candidate pixels at sampling position $x_k + 1$. Assuming that we have just plotted the pixel at $(x_k, y_k)$, we next need to determine whether the pixel at position $(x_k + 1, y_k)$ or the one at position $(x_k + 1, y_k - 1)$ is closer to the circle. Our decision parameter is the circle function 3-29 evaluated at the midpoint between these two pixels:

$p_k = f_{\text{circ}}\!\left(x_k + 1,\; y_k - \tfrac{1}{2}\right) = (x_k + 1)^2 + \left(y_k - \tfrac{1}{2}\right)^2 - r^2$    (3-31)

FIGURE 3-19  Midpoint between candidate pixels at sampling position $x_k + 1$ along a circular path.

If $p_k < 0$, this midpoint is inside the circle and the pixel on scan line $y_k$ is closer to the circle boundary. Otherwise, the midposition is outside or on the circle boundary, and we select the pixel on scan line $y_k - 1$.

Successive decision parameters are obtained using incremental calculations. We obtain a recursive expression for the next decision parameter by evaluating the circle function at sampling position $x_{k+1} + 1 = x_k + 2$:

$p_{k+1} = f_{\text{circ}}\!\left(x_{k+1} + 1,\; y_{k+1} - \tfrac{1}{2}\right) = [(x_k + 1) + 1]^2 + \left(y_{k+1} - \tfrac{1}{2}\right)^2 - r^2$

or

$p_{k+1} = p_k + 2(x_k + 1) + \left(y_{k+1}^2 - y_k^2\right) - (y_{k+1} - y_k) + 1$    (3-32)

where y_{k+1} is either y_k or y_k - 1, depending on the sign of p_k. Increments for obtaining p_{k+1} are either 2x_{k+1} + 1 (if p_k is negative) or 2x_{k+1} + 1 - 2y_{k+1}. Evaluation of the terms 2x_{k+1} and 2y_{k+1} can also be done incrementally as

    2x_{k+1} = 2x_k + 2
    2y_{k+1} = 2y_k - 2

At the start position (0, r), these two terms have the values 0 and 2r, respectively. Each successive value for the 2x_{k+1} term is obtained by adding 2 to the previous value, and each successive value for the 2y_{k+1} term is obtained by subtracting 2 from the previous value.

The initial decision parameter is obtained by evaluating the circle function at the start position (x_0, y_0) = (0, r):

    p_0 = f_circ(1, r - 1/2) = 1 + (r - 1/2)^2 - r^2

or

    p_0 = 5/4 - r        (3-33)
If the radius r is specified as an integer, we can simply round p_0 to

    p_0 = 1 - r        (for r an integer)

since all increments are integers. As in Bresenham's line algorithm, the midpoint method calculates pixel positions along the circumference of a circle using integer additions and subtractions, assuming that the circle parameters are specified in integer screen coordinates. We can summarize the steps in the midpoint circle algorithm as follows.

Midpoint Circle Algorithm


1. Input radius r and circle center (x_c, y_c), then set the coordinates for the first point on the circumference of a circle centered on the origin as (x_0, y_0) = (0, r).
2. Calculate the initial value of the decision parameter as p_0 = 5/4 - r.
3. At each x_k position, starting at k = 0, perform the following test. If p_k < 0, the next point along the circle centered on (0, 0) is (x_{k+1}, y_k) and
       p_{k+1} = p_k + 2x_{k+1} + 1
   Otherwise, the next point along the circle is (x_k + 1, y_k - 1) and
       p_{k+1} = p_k + 2x_{k+1} + 1 - 2y_{k+1}
   where 2x_{k+1} = 2x_k + 2 and 2y_{k+1} = 2y_k - 2.

4. Determine symmetry points in the other seven octants.
5. Move each calculated pixel position (x, y) onto the circular path centered at (x_c, y_c) and plot the coordinate values: x = x + x_c, y = y + y_c.
6. Repeat steps 3 through 5 until x >= y.

EXAMPLE 3-2  Midpoint Circle Drawing

Given a circle radius r = 10, we demonstrate the midpoint circle algorithm by determining positions along the circle octant in the first quadrant from x = 0 to x = y. The initial value of the decision parameter is

    p_0 = 1 - r = -9

For the circle centered on the coordinate origin, the initial point is (x_0, y_0) = (0, 10), and initial increment terms for calculating the decision parameters are 2x_0 = 0 and 2y_0 = 20. Successive midpoint decision-parameter values and the corresponding coordinate positions along the circle path are listed in the following table.

    k    p_k    (x_{k+1}, y_{k+1})    2x_{k+1}    2y_{k+1}
    0    -9     (1, 10)               2           20
    1    -6     (2, 10)               4           20
    2    -1     (3, 10)               6           20
    3     6     (4, 9)                8           18
    4    -3     (5, 9)                10          18
    5     8     (6, 8)                12          16
    6     5     (7, 7)                14          14

FIGURE 3-20  Pixel positions (solid circles) along a circle path centered on the origin and with radius r = 10, as calculated by the midpoint circle algorithm. Open (hollow) circles show the symmetry positions in the first quadrant.

A plot of the generated pixel positions in the first quadrant is shown in Fig. 3-20.

The following code segment illustrates procedures that could be used to implement the midpoint circle algorithm. Values for a circle radius and for the center coordinates of the circle are passed to procedure circleMidpoint. A pixel position along the circular path in the rst octant is then computed and passed to procedure circlePlotPoints. This procedure sets the circle color in the frame buffer for all circle symmetry positions with repeated calls to the setPixel routine, which is implemented with the OpenGL point-plotting functions.
#include <GL/glut.h>

class screenPt
{
   private:
      GLint x, y;

   public:
      /* Default Constructor: initializes coordinate position to (0, 0). */
      screenPt ( ) {
         x = y = 0;
      }
      void setCoords (GLint xCoordValue, GLint yCoordValue) {
         x = xCoordValue;
         y = yCoordValue;
      }
      GLint getx ( ) const {
         return x;
      }
      GLint gety ( ) const {
         return y;
      }
      void incrementx ( ) {
         x++;
      }
      void decrementy ( ) {
         y--;
      }
};

void setPixel (GLint xCoord, GLint yCoord)
{
   glBegin (GL_POINTS);
      glVertex2i (xCoord, yCoord);
   glEnd ( );
}

void circlePlotPoints (GLint, GLint, screenPt);

void circleMidpoint (GLint xc, GLint yc, GLint radius)
{
   screenPt circPt;

   GLint p = 1 - radius;          //  Initial value for midpoint parameter.

   circPt.setCoords (0, radius);  //  Set coords for top point of circle.

   /*  Plot the initial point in each circle quadrant.  */
   circlePlotPoints (xc, yc, circPt);

   /*  Calculate next point and plot in each octant.  */
   while (circPt.getx ( ) < circPt.gety ( )) {
      circPt.incrementx ( );
      if (p < 0)
         p += 2 * circPt.getx ( ) + 1;
      else {
         circPt.decrementy ( );
         p += 2 * (circPt.getx ( ) - circPt.gety ( )) + 1;
      }
      circlePlotPoints (xc, yc, circPt);
   }
}

void circlePlotPoints (GLint xc, GLint yc, screenPt circPt)
{
   setPixel (xc + circPt.getx ( ), yc + circPt.gety ( ));
   setPixel (xc - circPt.getx ( ), yc + circPt.gety ( ));
   setPixel (xc + circPt.getx ( ), yc - circPt.gety ( ));
   setPixel (xc - circPt.getx ( ), yc - circPt.gety ( ));
   setPixel (xc + circPt.gety ( ), yc + circPt.getx ( ));
   setPixel (xc - circPt.gety ( ), yc + circPt.getx ( ));
   setPixel (xc + circPt.gety ( ), yc - circPt.getx ( ));
   setPixel (xc - circPt.gety ( ), yc - circPt.getx ( ));
}
3-10 ELLIPSE-GENERATING ALGORITHMS


Loosely stated, an ellipse is an elongated circle. We can also describe an ellipse as a modified circle whose radius varies from a maximum value in one direction to a minimum value in the perpendicular direction. The straight-line segments through the interior of the ellipse in these two perpendicular directions are referred to as the major and minor axes of the ellipse.
Properties of Ellipses
A precise definition of an ellipse can be given in terms of the distances from any point on the ellipse to two fixed positions, called the foci of the ellipse. The sum of these two distances is the same value for all points on the ellipse (Fig. 3-21). If the distances to the two focus positions from any point P = (x, y) on the ellipse are labeled d_1 and d_2, then the general equation of an ellipse can be stated as

    d_1 + d_2 = constant        (3-34)

FIGURE 3-21  Ellipse generated about foci F1 and F2.

Expressing distances d_1 and d_2 in terms of the focal coordinates F_1 = (x_1, y_1) and F_2 = (x_2, y_2), we have

    \sqrt{(x - x_1)^2 + (y - y_1)^2} + \sqrt{(x - x_2)^2 + (y - y_2)^2} = constant        (3-35)

By squaring this equation, isolating the remaining radical, and squaring again, we can rewrite the general ellipse equation in the form

    A x^2 + B y^2 + C x y + D x + E y + F = 0        (3-36)
FIGURE 3-22  Ellipse centered at (x_c, y_c) with semimajor axis r_x and semiminor axis r_y.

where the coefficients A, B, C, D, E, and F are evaluated in terms of the focal coordinates and the dimensions of the major and minor axes of the ellipse. The major axis is the straight-line segment extending from one side of the ellipse to the other through the foci. The minor axis spans the shorter dimension of the ellipse, perpendicularly bisecting the major axis at the halfway position (ellipse center) between the two foci.

An interactive method for specifying an ellipse in an arbitrary orientation is to input the two foci and a point on the ellipse boundary. With these three coordinate positions, we can evaluate the constant in Eq. 3-35. Then, the values for the coefficients in Eq. 3-36 can be computed and used to generate pixels along the elliptical path.

Ellipse equations are greatly simplified if the major and minor axes are oriented to align with the coordinate axes. In Fig. 3-22, we show an ellipse in standard position with major and minor axes oriented parallel to the x and y axes. Parameter r_x for this example labels the semimajor axis, and parameter r_y labels the semiminor axis. The equation for the ellipse shown in Fig. 3-22 can be written in terms of the ellipse center coordinates and parameters r_x and r_y as

    ((x - x_c) / r_x)^2 + ((y - y_c) / r_y)^2 = 1        (3-37)

Using polar coordinates r and θ, we can also describe the ellipse in standard position with the parametric equations

    x = x_c + r_x cos θ
    y = y_c + r_y sin θ        (3-38)

FIGURE 3-23  The bounding circle and eccentric angle θ for an ellipse with r_x > r_y.

Angle θ, called the eccentric angle of the ellipse, is measured around the perimeter of a bounding circle. If r_x > r_y, the radius of the bounding circle is r = r_x (Fig. 3-23). Otherwise, the bounding circle has radius r = r_y.

As with the circle algorithm, symmetry considerations can be used to reduce computations. An ellipse in standard position is symmetric between quadrants, but, unlike a circle, it is not symmetric between the two octants of a quadrant. Thus, we must calculate pixel positions along the elliptical arc throughout one quadrant, then use symmetry to obtain curve positions in the remaining three quadrants (Fig. 3-24).
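Before turning to the midpoint approach, Eqs. 3-38 can also be used directly to display an ellipse as a polyline. The following is a minimal sketch of that idea (it is not part of the algorithm development in this section): it assumes an OpenGL drawing context like the one used in the other examples of this chapter, and the segment count nSegments is an arbitrary choice made for this illustration.

#include <GL/glut.h>
#include <cmath>

/*  Approximate an ellipse centered at (xc, yc), with semimajor axis rx and
 *  semiminor axis ry, by connecting points generated from the parametric
 *  equations x = xc + rx cos(theta), y = yc + ry sin(theta) of Eqs. 3-38.  */
void ellipseParametric (GLfloat xc, GLfloat yc, GLfloat rx, GLfloat ry, GLint nSegments)
{
   const GLfloat twoPi = 6.2831853f;

   glBegin (GL_LINE_LOOP);
      for (GLint k = 0; k < nSegments; k++) {
         GLfloat theta = twoPi * k / nSegments;   //  Eccentric angle for this vertex.
         glVertex2f (xc + rx * std::cos (theta), yc + ry * std::sin (theta));
      }
   glEnd ( );
}

A larger value of nSegments gives a smoother approximation at the cost of more line segments; the midpoint method developed next avoids the trigonometric evaluations entirely.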

Midpoint Ellipse Algorithm


FIGURE 3-24  Symmetry of an ellipse. Calculation of a point (x, y) in one quadrant yields the ellipse points shown for the other three quadrants.

Our approach here is similar to that used in displaying a raster circle. Given parameters r_x, r_y, and (x_c, y_c), we determine curve positions (x, y) for an ellipse in standard position centered on the origin, then we shift all the points using a fixed offset so that the ellipse is centered at (x_c, y_c). If we wish also to display the ellipse in nonstandard position, we could rotate the ellipse about its center coordinates to reorient the major and minor axes in the desired directions. For the present, we consider only the display of ellipses in standard position. We discuss general methods for transforming object orientations and positions in Chapter 5.

The midpoint ellipse method is applied throughout the first quadrant in two parts. Figure 3-25 shows the division of the first quadrant according to the slope of an ellipse with r_x < r_y. We process this quadrant by taking unit steps in the x direction where the slope of the curve has a magnitude less than 1.0, and then we take unit steps in the y direction where the slope has a magnitude greater than 1.0.

Regions 1 and 2 (Fig. 3-25) can be processed in various ways. We can start at position (0, r_y) and step clockwise along the elliptical path in the first quadrant, shifting from unit steps in x to unit steps in y when the slope becomes less than -1.0. Alternatively, we could start at (r_x, 0) and select points in a counterclockwise order, shifting from unit steps in y to unit steps in x when the slope becomes greater than -1.0. With parallel processors, we could calculate pixel positions in the two regions simultaneously. As an example of a sequential implementation of the midpoint algorithm, we take the start position at (0, r_y) and step along the ellipse path in clockwise order throughout the first quadrant.

We define an ellipse function from Eq. 3-37 with (x_c, y_c) = (0, 0) as

    f_ellipse(x, y) = r_y^2 x^2 + r_x^2 y^2 - r_x^2 r_y^2        (3-39)

which has the following properties:

    f_ellipse(x, y)  < 0,  if (x, y) is inside the ellipse boundary
                     = 0,  if (x, y) is on the ellipse boundary        (3-40)
                     > 0,  if (x, y) is outside the ellipse boundary

FIGURE 3-25  Ellipse processing regions. Over region 1, the magnitude of the ellipse slope is less than 1.0; over region 2, the magnitude of the slope is greater than 1.0.

Thus, the ellipse function f_ellipse(x, y) serves as the decision parameter in the midpoint algorithm. At each sampling position, we select the next pixel along the ellipse path according to the sign of the ellipse function evaluated at the midpoint between the two candidate pixels.

Starting at (0, r_y), we take unit steps in the x direction until we reach the boundary between region 1 and region 2 (Fig. 3-25). Then we switch to unit steps in the y direction over the remainder of the curve in the first quadrant. At each step we need to test the value of the slope of the curve. The ellipse slope is calculated from Eq. 3-39 as

    dy/dx = - (2 r_y^2 x) / (2 r_x^2 y)        (3-41)

At the boundary between region 1 and region 2, dy/dx = -1.0 and

    2 r_y^2 x = 2 r_x^2 y

Therefore, we move out of region 1 whenever

    2 r_y^2 x >= 2 r_x^2 y        (3-42)

FIGURE 3-26  Midpoint between candidate pixels at sampling position x_k + 1 along an elliptical path.

Figure 3-26 shows the midpoint between the two candidate pixels at sampling position x_k + 1 in the first region. Assuming position (x_k, y_k) has been selected in the previous step, we determine the next position along the ellipse path by evaluating the decision parameter (that is, the ellipse function 3-39) at this midpoint:

    p1_k = f_ellipse(x_k + 1, y_k - 1/2)
         = r_y^2 (x_k + 1)^2 + r_x^2 (y_k - 1/2)^2 - r_x^2 r_y^2        (3-43)

If p1_k < 0, the midpoint is inside the ellipse and the pixel on scan line y_k is closer to the ellipse boundary. Otherwise, the midposition is outside or on the ellipse boundary, and we select the pixel on scan line y_k - 1.

At the next sampling position (x_{k+1} + 1 = x_k + 2), the decision parameter for region 1 is evaluated as

    p1_{k+1} = f_ellipse(x_{k+1} + 1, y_{k+1} - 1/2)
             = r_y^2 [(x_k + 1) + 1]^2 + r_x^2 (y_{k+1} - 1/2)^2 - r_x^2 r_y^2

or

    p1_{k+1} = p1_k + 2 r_y^2 (x_k + 1) + r_y^2 + r_x^2 [(y_{k+1} - 1/2)^2 - (y_k - 1/2)^2]        (3-44)

where y_{k+1} is either y_k or y_k - 1, depending on the sign of p1_k. Decision parameters are incremented by the following amounts:

    increment = 2 r_y^2 x_{k+1} + r_y^2,                      if p1_k < 0
    increment = 2 r_y^2 x_{k+1} + r_y^2 - 2 r_x^2 y_{k+1},    if p1_k >= 0

Increments for the decision parameters can be calculated using only addition and subtraction, as in the circle algorithm, since values for the terms 2 r_y^2 x and 2 r_x^2 y can be obtained incrementally. At the initial position (0, r_y), these two terms evaluate to

    2 r_y^2 x = 0        (3-45)
    2 r_x^2 y = 2 r_x^2 r_y        (3-46)

As x and y are incremented, updated values are obtained by adding 2 r_y^2 to the current value of the increment term in Eq. 3-45 and subtracting 2 r_x^2 from the current value of the increment term in Eq. 3-46. The updated increment values are compared at each step, and we move from region 1 to region 2 when condition 3-42 is satisfied.

In region 1, the initial value of the decision parameter is obtained by evaluating the ellipse function at the start position (x_0, y_0) = (0, r_y):

    p1_0 = f_ellipse(1, r_y - 1/2)
         = r_y^2 + r_x^2 (r_y - 1/2)^2 - r_x^2 r_y^2

or

    p1_0 = r_y^2 - r_x^2 r_y + (1/4) r_x^2        (3-47)

FIGURE 3-27  Midpoint between candidate pixels at sampling position y_k - 1 along an elliptical path.

Over region 2, we sample at unit intervals in the negative y direction, and the midpoint is now taken between horizontal pixels at each step (Fig. 3-27). For this region, the decision parameter is evaluated as

    p2_k = f_ellipse(x_k + 1/2, y_k - 1)
         = r_y^2 (x_k + 1/2)^2 + r_x^2 (y_k - 1)^2 - r_x^2 r_y^2        (3-48)

If p2_k > 0, the midposition is outside the ellipse boundary, and we select the pixel at x_k. If p2_k <= 0, the midpoint is inside or on the ellipse boundary, and we select pixel position x_{k+1}.

To determine the relationship between successive decision parameters in region 2, we evaluate the ellipse function at the next sampling step y_{k+1} - 1 = y_k - 2:

    p2_{k+1} = f_ellipse(x_{k+1} + 1/2, y_{k+1} - 1)
             = r_y^2 (x_{k+1} + 1/2)^2 + r_x^2 [(y_k - 1) - 1]^2 - r_x^2 r_y^2        (3-49)

or

    p2_{k+1} = p2_k - 2 r_x^2 (y_k - 1) + r_x^2 + r_y^2 [(x_{k+1} + 1/2)^2 - (x_k + 1/2)^2]        (3-50)

with x_{k+1} set either to x_k or to x_k + 1, depending on the sign of p2_k.

When we enter region 2, the initial position (x_0, y_0) is taken as the last position selected in region 1, and the initial decision parameter in region 2 is then

    p2_0 = f_ellipse(x_0 + 1/2, y_0 - 1)
         = r_y^2 (x_0 + 1/2)^2 + r_x^2 (y_0 - 1)^2 - r_x^2 r_y^2        (3-51)

To simplify the calculation of p2_0, we could select pixel positions in counterclockwise order starting at (r_x, 0). Unit steps would then be taken in the positive y direction up to the last position selected in region 1.

This midpoint algorithm can be adapted to generate an ellipse in nonstandard position using the ellipse function Eq. 3-36 and calculating pixel positions over the entire elliptical path. Alternatively, we could reorient the ellipse axes to standard position, using transformation methods discussed in Chapter 5, apply the midpoint ellipse algorithm to determine curve positions, and then convert calculated pixel positions to path positions along the original ellipse orientation.

Assuming r_x, r_y, and the ellipse center are given in integer screen coordinates, we need only incremental integer calculations to determine values for the decision parameters in the midpoint ellipse algorithm. The increments r_x^2, r_y^2, 2 r_x^2, and 2 r_y^2 are evaluated once at the beginning of the procedure. In the following summary, we list the steps for displaying an ellipse using the midpoint algorithm.

Midpoint Ellipse Algorithm


1. Input r_x, r_y, and ellipse center (x_c, y_c), and obtain the first point on an ellipse centered on the origin as (x_0, y_0) = (0, r_y).
2. Calculate the initial value of the decision parameter in region 1 as
       p1_0 = r_y^2 - r_x^2 r_y + (1/4) r_x^2

3. At each x_k position in region 1, starting at k = 0, perform the following test. If p1_k < 0, the next point along the ellipse centered on (0, 0) is (x_{k+1}, y_k) and
       p1_{k+1} = p1_k + 2 r_y^2 x_{k+1} + r_y^2
   Otherwise, the next point along the ellipse is (x_k + 1, y_k - 1) and
       p1_{k+1} = p1_k + 2 r_y^2 x_{k+1} - 2 r_x^2 y_{k+1} + r_y^2
   with
       2 r_y^2 x_{k+1} = 2 r_y^2 x_k + 2 r_y^2,    2 r_x^2 y_{k+1} = 2 r_x^2 y_k - 2 r_x^2
   and continue until 2 r_y^2 x >= 2 r_x^2 y.
4. Calculate the initial value of the decision parameter in region 2 as
       p2_0 = r_y^2 (x_0 + 1/2)^2 + r_x^2 (y_0 - 1)^2 - r_x^2 r_y^2
   where (x_0, y_0) is the last position calculated in region 1.
5. At each y_k position in region 2, starting at k = 0, perform the following test. If p2_k > 0, the next point along the ellipse centered on (0, 0) is (x_k, y_k - 1) and
       p2_{k+1} = p2_k - 2 r_x^2 y_{k+1} + r_x^2
   Otherwise, the next point along the ellipse is (x_k + 1, y_k - 1) and
       p2_{k+1} = p2_k + 2 r_y^2 x_{k+1} - 2 r_x^2 y_{k+1} + r_x^2
   using the same incremental calculations for x and y as in region 1. Continue until y = 0.
6. For both regions, determine symmetry points in the other three quadrants.
7. Move each calculated pixel position (x, y) onto the elliptical path centered on (x_c, y_c) and plot the coordinate values: x = x + x_c, y = y + y_c.

EXAMPLE 3-3  Midpoint Ellipse Drawing

Given input ellipse parameters r_x = 8 and r_y = 6, we illustrate the steps in the midpoint ellipse algorithm by determining raster positions along the ellipse path in the first quadrant. Initial values and increments for the decision parameter calculations are

    2 r_y^2 x = 0              (with increment 2 r_y^2 = 72)
    2 r_x^2 y = 2 r_x^2 r_y    (with increment -2 r_x^2 = -128)

For region 1, the initial point for the ellipse centered on the origin is

(x_0, y_0) = (0, 6), and the initial decision parameter value is

    p1_0 = r_y^2 - r_x^2 r_y + (1/4) r_x^2 = -332

Successive midpoint decision-parameter values and the pixel positions along the ellipse are listed in the following table.

    k    p1_k    (x_{k+1}, y_{k+1})    2 r_y^2 x_{k+1}    2 r_x^2 y_{k+1}
    0    -332    (1, 6)                72                 768
    1    -224    (2, 6)                144                768
    2    -44     (3, 6)                216                768
    3     208    (4, 5)                288                640
    4    -108    (5, 5)                360                640
    5     288    (6, 4)                432                512
    6     244    (7, 3)                504                384

We now move out of region 1, since 2 r_y^2 x > 2 r_x^2 y. For region 2, the initial point is (x_0, y_0) = (7, 3) and the initial decision parameter is

    p2_0 = f_ellipse(7 + 1/2, 2) = -151

The remaining positions along the ellipse path in the first quadrant are then calculated as

    k    p2_k    (x_{k+1}, y_{k+1})    2 r_y^2 x_{k+1}    2 r_x^2 y_{k+1}
    0    -151    (8, 2)                576                256
    1     233    (8, 1)                576                128
    2     745    (8, 0)                --                 --

FIGURE 3-28  Pixel positions along an elliptical path centered on the origin with r_x = 8 and r_y = 6, using the midpoint algorithm to calculate locations within the first quadrant.

A plot of the calculated positions for the ellipse within the first quadrant is shown in Fig. 3-28.

In the following code segment, example procedures are given for implementing the midpoint ellipse algorithm. Values for the ellipse parameters Rx, Ry, xCenter, and yCenter are input to procedure ellipseMidpoint. Positions along the curve in the first quadrant are then calculated and passed to procedure ellipsePlotPoints. Symmetry is used to obtain ellipse positions in the other three quadrants, and the setPixel routine sets the ellipse color in the frame-buffer locations corresponding to these positions.

inline int round (const float a) { return int (a + 0.5); }

/* The following procedure accepts values for an ellipse center
 * position and its semimajor and semiminor axes, then calculates
 * ellipse positions using the midpoint algorithm.
*/
void ellipseMidpoint (int xCenter, int yCenter, int Rx, int Ry)
{
   int Rx2 = Rx * Rx;
   int Ry2 = Ry * Ry;
   int twoRx2 = 2 * Rx2;
   int twoRy2 = 2 * Ry2;
   int p;
   int x = 0;
   int y = Ry;
   int px = 0;
   int py = twoRx2 * y;

   void ellipsePlotPoints (int, int, int, int);

   /*  Plot the initial point in each quadrant.  */
   ellipsePlotPoints (xCenter, yCenter, x, y);

   /*  Region 1  */
   p = round (Ry2 - (Rx2 * Ry) + (0.25 * Rx2));
   while (px < py) {
      x++;
      px += twoRy2;
      if (p < 0)
         p += Ry2 + px;
      else {
         y--;
         py -= twoRx2;
         p += Ry2 + px - py;
      }
      ellipsePlotPoints (xCenter, yCenter, x, y);
   }

   /*  Region 2  */
   p = round (Ry2 * (x + 0.5) * (x + 0.5) + Rx2 * (y - 1) * (y - 1) - Rx2 * Ry2);
   while (y > 0) {
      y--;
      py -= twoRx2;
      if (p > 0)
         p += Rx2 - py;
      else {
         x++;
         px += twoRy2;
         p += Rx2 - py + px;
      }
      ellipsePlotPoints (xCenter, yCenter, x, y);
   }
}

void ellipsePlotPoints (int xCenter, int yCenter, int x, int y)
{
   setPixel (xCenter + x, yCenter + y);
   setPixel (xCenter - x, yCenter + y);
   setPixel (xCenter + x, yCenter - y);
   setPixel (xCenter - x, yCenter - y);
}

3-11 OTHER CURVES


Various curve functions are useful in object modeling, animation path specifications, data and function graphing, and other graphics applications. Commonly encountered curves include conics, trigonometric and exponential functions, probability distributions, general polynomials, and spline functions. Displays of these curves can be generated with methods similar to those discussed for the circle and ellipse functions. We can obtain positions along curve paths directly from explicit representations y = f(x) or from parametric forms. Alternatively, we could apply the incremental midpoint method to plot curves described with implicit functions f(x, y) = 0.

A simple method for displaying a curved line is to approximate it with straight-line segments. Parametric representations are often useful in this case for obtaining equally spaced positions along the curve path for the line endpoints. We can also generate equally spaced positions from an explicit representation by choosing the independent variable according to the slope of the curve. Where the slope of y = f(x) has a magnitude less than 1, we choose x as the independent variable and calculate y values at equal x increments. To obtain equal spacing where the slope has a magnitude greater than 1, we use the inverse function, x = f^{-1}(y), and calculate values of x at equal y steps.

Straight-line or curve approximations are used to generate a line graph for a set of discrete data values. We could join the discrete points with straight-line segments, or we could use linear regression (least squares) to approximate the data set with a single straight line. A nonlinear least-squares approach is used to display the data set with some approximating function, usually a polynomial.

As with circles and ellipses, many functions possess symmetries that can be exploited to reduce the computation of coordinate positions along curve paths. For example, the normal probability distribution function is symmetric about a center position (the mean), and all points within one cycle of a sine curve can be generated from the points in a 90° interval.

Conic Sections
In general, we can describe a conic section (or conic) with the second-degree equation

    A x^2 + B y^2 + C x y + D x + E y + F = 0        (3-52)

where values for parameters A, B, C, D, E, and F determine the kind of curve we are to display. Given this set of coefficients, we can determine the particular conic that will be generated by evaluating the discriminant B^2 - 4AC:

    B^2 - 4AC   < 0,  generates an ellipse (or circle)
                = 0,  generates a parabola                (3-53)
                > 0,  generates a hyperbola
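Expressed in code, the classification in 3-53 is simply a sign test on the discriminant. The short sketch below only illustrates that test; the function name classifyConic and the zero tolerance eps are assumptions made for this example, not part of any graphics library.

#include <string>

/*  Classify the conic A x^2 + B y^2 + C x y + D x + E y + F = 0
 *  from the sign of the discriminant B^2 - 4AC (Eq. 3-53).        */
std::string classifyConic (double A, double B, double C)
{
   double discriminant = B * B - 4.0 * A * C;
   const double eps = 1.0e-9;     //  Tolerance for treating the discriminant as zero.

   if (discriminant < -eps)
      return "ellipse (or circle)";
   else if (discriminant > eps)
      return "hyperbola";
   else
      return "parabola";
}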

For example, we get the circle equation 3-26 when A = B = 1, C = 0, D = -2x_c, E = -2y_c, and F = x_c^2 + y_c^2 - r^2. Equation 3-52 also describes the degenerate conics: points and straight lines.

In some applications, circular and elliptical arcs are conveniently specified with the beginning and ending angular values for the arc, as illustrated in Fig. 3-29. Such arcs are sometimes defined by their endpoint coordinate positions. For either case, we could generate the arc with a modified midpoint method, or we could display a set of approximating straight-line segments.

FIGURE 3-29  A circular arc, centered on the origin, defined with beginning angle θ1, ending angle θ2, and radius r.

Ellipses, hyperbolas, and parabolas are particularly useful in certain animation applications. These curves describe orbital and other motions for objects subjected to gravitational, electromagnetic, or nuclear forces. Planetary orbits in the solar system, for example, are approximated with ellipses, and an object projected into a uniform gravitational field travels along a parabolic trajectory. Figure 3-30 shows a parabolic path in standard position for a gravitational field acting in the negative y direction. The explicit equation for the parabolic trajectory of the object shown can be written as

    y = y_0 + a (x - x_0)^2 + b (x - x_0)        (3-54)

with constants a and b determined by the initial velocity v_0 of the object and the acceleration g due to the uniform gravitational force. We can also describe such parabolic motions with parametric equations using a time parameter t, measured in seconds from the initial projection point:

    x = x_0 + v_{x0} t
    y = y_0 + v_{y0} t - (1/2) g t^2        (3-55)

FIGURE 3-30  Parabolic path of an object tossed into a downward gravitational field at the initial position (x_0, y_0).

Here, v_{x0} and v_{y0} are the initial velocity components, and the value of g near the surface of the earth is approximately 980 cm/sec^2. Object positions along the parabolic path are then calculated at selected time steps.
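The parametric form 3-55 can be evaluated directly at selected time steps to plot such a trajectory. The following is a minimal sketch under the assumption that positions are plotted with the setPixel routine defined earlier in this chapter; the number of time steps and the rounding to integer pixel coordinates are choices made only for this illustration.

#include <GL/glut.h>

/*  Plot positions along the parabolic trajectory of Eqs. 3-55 at nSteps
 *  equally spaced time values in [0, tMax].  Assumes the setPixel routine
 *  from the midpoint circle example is defined earlier in the program.    */
void plotParabolicPath (GLfloat x0, GLfloat y0, GLfloat vx0, GLfloat vy0,
                        GLfloat g, GLfloat tMax, GLint nSteps)
{
   for (GLint k = 0; k <= nSteps; k++) {
      GLfloat t = tMax * k / nSteps;                    //  Time in seconds.
      GLfloat x = x0 + vx0 * t;                         //  Horizontal position.
      GLfloat y = y0 + vy0 * t - 0.5f * g * t * t;      //  Vertical position.
      setPixel (GLint (x + 0.5f), GLint (y + 0.5f));    //  Round to the nearest pixel.
   }
}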

Hyperbolic curves (Fig. 3-31) are useful in various scientific-visualization applications. Motions of objects along hyperbolic paths occur in connection with the collision of charged particles and in certain gravitational problems. For example, comets or meteorites moving around the sun may travel along hyperbolic paths and escape to outer space, never to return. The particular branch (left or right, in Fig. 3-31) describing the motion of an object depends on the forces involved in the problem. We can write the standard equation for the hyperbola centered on the origin in Fig. 3-31 as

    (x / r_x)^2 - (y / r_y)^2 = 1        (3-56)

with x <= -r_x for the left branch and x >= r_x for the right branch. Since this equation differs from the standard ellipse equation 3-39 only in the sign between the x^2 and y^2 terms, we can generate points along a hyperbolic path with a slightly modified ellipse algorithm.

FIGURE 3-31  Left and right branches of a hyperbola in standard position with the symmetry axis along the x axis.

Parabolas and hyperbolas possess a symmetry axis. For example, the parabola described by Eq. 3-55 is symmetric about the axis

    x = x_0 + v_{x0} v_{y0} / g

The methods used in the midpoint ellipse algorithm can be directly applied to obtain points along one side of the symmetry axis of hyperbolic and parabolic paths in the two regions: (1) where the magnitude of the curve slope is less than 1, and (2) where the magnitude of the slope is greater than 1. To do this, we first select the appropriate form of Eq. 3-52 and then use the selected function to set up expressions for the decision parameters in the two regions.

Polynomials and Spline Curves


A polynomial function of nth degree in x is defined as

    y = \sum_{k=0}^{n} a_k x^k
      = a_0 + a_1 x + ... + a_{n-1} x^{n-1} + a_n x^n        (3-57)

where n is a nonnegative integer and the a_k are constants, with a_n != 0. We obtain a quadratic curve when n = 2, a cubic polynomial when n = 3, a quartic curve when n = 4, and so forth. And we have a straight line when n = 1. Polynomials are useful in a number of graphics applications, including the design of object shapes, the specification of animation paths, and the graphing of data trends in a discrete set of data points.

Designing object shapes or motion paths is typically accomplished by first specifying a few points to define the general curve contour, then the selected points are fitted with a polynomial. One way to accomplish the curve fitting is to construct a cubic polynomial curve section between each pair of specified points. Each curve section is then described in parametric form as

    x = a_{x0} + a_{x1} u + a_{x2} u^2 + a_{x3} u^3
    y = a_{y0} + a_{y1} u + a_{y2} u^2 + a_{y3} u^3        (3-58)

where parameter u varies over the interval from 0 to 1.0. Values for the coefficients of u in the preceding equations are determined from boundary conditions on the curve sections. One boundary condition is that two adjacent curve sections have the same coordinate position at the boundary, and a second condition is to match the two curve slopes at the boundary so that we obtain one continuous, smooth curve (Fig. 3-32). Continuous curves that are formed with polynomial pieces are called spline curves, or simply splines. There are other ways to set up spline curves, and various spline-generating methods are explored in Chapter 8.

FIGURE 3-32  A spline curve formed with individual cubic polynomial sections between specified coordinate positions.
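To make Eqs. 3-58 concrete, here is a minimal sketch that samples one cubic curve section at equally spaced values of u and draws it as a polyline. The coefficient arrays ax and ay, the sample count, and the use of GL_LINE_STRIP are assumptions made for this illustration; determining the coefficients from boundary conditions is taken up with the spline methods of Chapter 8.

#include <GL/glut.h>

/*  Draw one parametric cubic section x(u), y(u) of Eqs. 3-58 as a polyline,
 *  sampling the parameter u over [0, 1] at nSamples + 1 equally spaced values.
 *  ax[0..3] and ay[0..3] hold the coefficients a_x0..a_x3 and a_y0..a_y3.     */
void drawCubicSection (const GLfloat ax[4], const GLfloat ay[4], GLint nSamples)
{
   glBegin (GL_LINE_STRIP);
      for (GLint k = 0; k <= nSamples; k++) {
         GLfloat u = GLfloat (k) / nSamples;
         GLfloat x = ax[0] + u * (ax[1] + u * (ax[2] + u * ax[3]));   //  Horner evaluation of the cubic.
         GLfloat y = ay[0] + u * (ay[1] + u * (ay[2] + u * ay[3]));
         glVertex2f (x, y);
      }
   glEnd ( );
}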

3-12 PARALLEL CURVE ALGORITHMS


Methods for exploiting parallelism in curve generation are similar to those used in displaying straight-line segments. We can either adapt a sequential algorithm by allocating processors according to curve partitions, or we could devise other methods and assign processors to screen partitions. A parallel midpoint method for displaying circles is to divide the circular arc from 45° to 90° into equal subarcs and assign a separate processor to each subarc.

As in the parallel Bresenham line algorithm, we then need to set up computations to determine the beginning y value and decision parameter p_k value for each processor. Pixel positions are calculated throughout each subarc, and positions in the other circle octants can be obtained by symmetry. Similarly, a parallel ellipse midpoint method divides the elliptical arc over the first quadrant into equal subarcs and parcels these out to separate processors. Again, pixel positions in the other quadrants are determined by symmetry.

A screen-partitioning scheme for circles and ellipses is to assign each scan line that crosses the curve to a separate processor. In this case, each processor uses the circle or ellipse equation to calculate curve intersection coordinates. For the display of elliptical arcs or other curves, we can simply use the scan-line partitioning method. Each processor uses the curve equation to locate the intersection positions along its assigned scan line. With processors assigned to individual pixels, each processor would calculate the distance (or distance squared) from the curve to its assigned pixel. If the calculated distance is less than a predefined value, the pixel is plotted.
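As a rough sketch of the arc-partitioning idea, the routine below divides the 45°-to-90° octant into equal subarcs and computes the pixel positions of each subarc in a separate C++ thread, standing in for the separate processors described above. To keep the sketch simple it uses the parametric circle equations rather than the incremental midpoint calculations, and each thread writes into its own result list so that no OpenGL calls are made from worker threads; the function and variable names are assumptions made for this illustration.

#include <cmath>
#include <thread>
#include <utility>
#include <vector>

/*  Compute pixel positions along the circular octant from 45 to 90 degrees,
 *  assigning one equal subarc to each of nThreads worker threads.  The
 *  per-thread results are merged and returned for plotting afterwards.     */
std::vector<std::pair<int, int>> circleOctantParallel (double radius, int nThreads, int pointsPerArc)
{
   const double pi = 3.14159265358979;
   std::vector<std::vector<std::pair<int, int>>> results (nThreads);
   std::vector<std::thread> workers;

   for (int t = 0; t < nThreads; t++)
      workers.emplace_back ([&, t] ( ) {
         double arcSpan = (pi / 4.0) / nThreads;          //  Each subarc covers an equal angle.
         double start = pi / 4.0 + t * arcSpan;
         for (int k = 0; k <= pointsPerArc; k++) {
            double theta = start + arcSpan * k / pointsPerArc;
            int x = int (radius * std::cos (theta) + 0.5);   //  Nearest pixel on this subarc.
            int y = int (radius * std::sin (theta) + 0.5);
            results[t].push_back ({x, y});
         }
      });
   for (auto& w : workers)
      w.join ( );

   std::vector<std::pair<int, int>> allPoints;            //  Flatten the per-thread lists.
   for (auto& r : results)
      allPoints.insert (allPoints.end ( ), r.begin ( ), r.end ( ));
   return allPoints;
}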

3-13 PIXEL ADDRESSING AND OBJECT GEOMETRY


FIGURE 3-33  Lower-left section of a screen area with coordinate positions referenced by grid intersection lines.

In discussing the raster algorithms for displaying graphics primitives, we assumed that frame-buffer coordinates referenced the center of a screen pixel position. We now consider the effects of different addressing schemes and an alternate pixel-addressing method used by some graphics packages, including OpenGL.

An object description that is input to a graphics program is given in terms of precise world-coordinate positions, which are infinitesimally small mathematical points. But when the object is scan converted into the frame buffer, the input description is transformed to pixel coordinates which reference finite screen areas, and the displayed raster image may not correspond exactly with the relative dimensions of the input object. If it is important to preserve the specified geometry of world objects, we can compensate for the mapping of mathematical input points to finite pixel areas. One way to do this is simply to adjust the pixel dimensions of displayed objects so as to correspond to the dimensions given in the original mathematical description of the scene. For example, if a rectangle is specified as having a width of 40 cm, then we could adjust the screen display so that the rectangle has a width of 40 pixels, with the width of each pixel representing one centimeter. Another approach is to map world coordinates onto screen positions between pixels, so that we align object boundaries with pixel boundaries instead of pixel centers.


Screen Grid Coordinates


Figure 3-33 shows a screen section with grid lines marking pixel boundaries, one unit apart. In this scheme, a screen position is given as the pair of integer values identifying a grid-intersection position between two pixels. The address for any pixel is now at its lower-left corner, as illustrated in Fig. 3-34. And a straight-line path is now envisioned as between grid intersections. For example, the mathematical line path for a polyline with endpoint coordinates (0, 0), (5, 2), and (1, 4) would then be as shown in Fig. 3-35. Using screen grid coordinates, we now identify the area occupied by a pixel with screen coordinates (x, y) as the unit square with diagonally opposite corners at (x, y) and (x + 1, y + 1).

FIGURE 3-34  Illuminated pixel at raster position (4, 5).

This pixel-addressing method has several advantages: it avoids half-integer pixel boundaries, it facilitates precise object representations, and it simplifies the processing involved in many scan-conversion algorithms and other raster procedures. The algorithms for line drawing and curve generation discussed in the preceding sections are still valid when applied to input positions expressed as screen grid coordinates. Decision parameters in these algorithms would now be a measure of screen grid separation differences, rather than separation differences from pixel centers.

FIGURE 3-35  Line path for two connected line segments between screen grid-coordinate positions.

FIGURE 3-36  Line path and corresponding pixel display for grid endpoint coordinates (20, 10) and (30, 18).


Maintaining Geometric Properties of Displayed Objects


When we convert geometric descriptions of objects into pixel representations, we transform mathematical points and lines into finite screen areas. If we are to maintain the original geometric measurements specified by the input coordinates for an object, we need to account for the finite size of pixels when we transform the object definition to a screen display.

Figure 3-36 shows the line plotted in the Bresenham line-algorithm example of Section 3-5. Interpreting the line endpoints (20, 10) and (30, 18) as precise grid-crossing positions, we see that the line should not extend past screen-grid position (30, 18). If we were to plot the pixel with screen coordinates (30, 18), as in the example given in Section 3-5, we would display a line that spans 11 horizontal units and 9 vertical units. For the mathematical line, however, Δx = 10 and Δy = 8. If we are addressing pixels by their center positions, we can adjust the length of the displayed line by omitting one of the endpoint pixels. But if we think of screen coordinates as addressing pixel boundaries, as shown in Fig. 3-36, we plot a line using only those pixels that are interior to the line path; that is, only those pixels that are between the line endpoints. For our example, we would plot the leftmost pixel at (20, 10) and the rightmost pixel at (29, 17). This displays a line that has the same geometric magnitudes as the mathematical line from (20, 10) to (30, 18).

For an enclosed area, input geometric properties are maintained by displaying the area using only those pixels that are interior to the object boundaries. The rectangle defined with the screen coordinate vertices shown in Fig. 3-37(a), for example, is larger when we display it filled with pixels up to and including the border pixel lines joining the specified vertices (Fig. 3-37(b)).

FIGURE 3-37  Conversion of rectangle (a) with vertices at screen coordinates (0, 0), (4, 0), (4, 3), and (0, 3) into display (b) that includes the right and top boundaries and into display (c) that maintains geometric magnitudes.

As defined, the area of the rectangle is 12 units, but as displayed in Fig. 3-37(b), it has an area of 20 units. In Fig. 3-37(c), the original rectangle measurements are maintained by displaying only the internal pixels. The right boundary of the input rectangle is at x = 4. To maintain the rectangle width in the display, we set the rightmost pixel grid coordinate for the rectangle at x = 3, since the pixels in this vertical column span the interval from x = 3 to x = 4. Similarly, the mathematical top boundary of the rectangle is at y = 3, so we set the top pixel row for the displayed rectangle at y = 2.

These compensations for finite pixel size can be applied to other objects, including those with curved boundaries, so that the raster display maintains the input object specifications. A circle with radius 5 and center position (10, 10), for instance, would be displayed as in Fig. 3-38 by the midpoint circle algorithm using pixel centers as screen-coordinate positions. But the plotted circle has a diameter of 11. To plot the circle with the defined diameter of 10, we can modify the circle algorithm to shorten each pixel scan line and each pixel column, as in Fig. 3-39.

FIGURE 3-38  A midpoint-algorithm plot of the circle equation (x - 10)^2 + (y - 10)^2 = 5^2 using pixel-center coordinates.

FIGURE 3-39  Modification of the circle plot in Fig. 3-38 to maintain the specified circle diameter of 10.

FIGURE 3-40  Solid-color fill areas specified with various boundaries. (a) A circular fill region. (b) A fill area bounded by a closed polyline. (c) A filled area specified with an irregular curved boundary.

One way to do this is to generate points clockwise along the circular arc in the third quadrant, starting at screen coordinates (10, 5). For each generated point, the other seven circle symmetry points are generated by decreasing the x coordinate values by 1 along scan lines and decreasing the y coordinate values by 1 along pixel columns. Similar methods are applied in ellipse algorithms to maintain the specified proportions in the display of an ellipse.
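To show the interior-pixel convention of Fig. 3-37(c) in code form, the short sketch below fills a rectangle specified by grid-coordinate boundaries using only the pixels interior to those boundaries, so that the displayed width and height match the input dimensions. It assumes the setPixel routine from the midpoint circle example; the function name is made up for this illustration.

#include <GL/glut.h>

/*  Fill the rectangle with grid-coordinate boundaries (xmin, ymin) and
 *  (xmax, ymax), plotting only interior pixels so that the displayed width
 *  and height equal xmax - xmin and ymax - ymin, as in Fig. 3-37(c).       */
void fillRectInterior (GLint xmin, GLint ymin, GLint xmax, GLint ymax)
{
   for (GLint x = xmin; x < xmax; x++)       //  Stop one pixel short of the right boundary.
      for (GLint y = ymin; y < ymax; y++)    //  ... and one short of the top boundary.
         setPixel (x, y);
}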

3-14 FILL-AREA PRIMITIVES


Another useful construct, besides points, straight-line segments, and curves, for describing components of a picture is an area that is filled with some solid color or pattern. A picture component of this type is typically referred to as a fill area or a filled area. Most often, fill areas are used to describe surfaces of solid objects, but they are also useful in a variety of other applications. Also, fill regions are usually planar surfaces, mainly polygons. But, in general, there are many possible shapes for a region in a picture that we might wish to fill with some color option. Figure 3-40 illustrates a few possible fill-area shapes. For the present, we assume that all fill areas are to be displayed with a specified solid color. Other fill options are discussed in Chapter 4.

Although any fill-area shape is possible, graphics libraries generally do not support specifications for arbitrary fill shapes. Most library routines require that a fill area be specified as a polygon. Graphics routines can more efficiently process polygons than other kinds of fill shapes because polygon boundaries are described with linear equations. Moreover, most curved surfaces can be approximated reasonably well with a set of polygon patches, just as a curved line can be approximated with a set of straight-line segments. And when lighting effects and surface-shading procedures are applied, an approximated curved surface can be displayed quite realistically. Approximating a curved surface with polygon facets is sometimes referred to as surface tessellation, or fitting the surface with a polygon mesh. Figure 3-41 shows the side and top surfaces of a metal cylinder approximated in an outline form as a polygon mesh. Displays of such figures can be generated quickly as wire-frame views, showing only the polygon edges to give a general indication of the surface structure. Then the wire-frame model could be shaded to generate a display of a natural-looking material surface. Objects described with a set of polygon surface patches are usually referred to as standard graphics objects, or just graphics objects.

In general, we can create fill areas with any boundary specification, such as a circle or connected set of spline-curve sections. And some of the polygon methods discussed in the next section can be adapted to display fill areas with a nonlinear border.

FIGURE 3-41 Wire-frame representation for a cylinder, showing only the front (visible) faces of the polygon mesh used to approximate the surfaces.

Other fill-area methods for objects with curved boundaries are given in Chapter 4.

3-15 POLYGON FILL AREAS


Mathematically defined, a polygon is a plane figure specified by a set of three or more coordinate positions, called vertices, that are connected in sequence by straight-line segments, called the edges or sides of the polygon. Further, in basic geometry, it is required that the polygon edges have no common point other than their endpoints. Thus, by definition, a polygon must have all its vertices within a single plane and there can be no edge crossings. Examples of polygons include triangles, rectangles, octagons, and decagons. Sometimes, any plane figure with a closed-polyline boundary is alluded to as a polygon, and one with no crossing edges is referred to as a standard polygon or a simple polygon. In an effort to avoid ambiguous object references, we will use the term polygon to refer only to those planar shapes that have a closed-polyline boundary and no edge crossings.

For a computer-graphics application, it is possible that a designated set of polygon vertices do not all lie exactly in one plane. This can be due to roundoff error in the calculation of numerical values, to errors in selecting coordinate positions for the vertices, or, more typically, to approximating a curved surface with a set of polygonal patches. One way to rectify this problem is simply to divide the specified surface mesh into triangles. But in some cases there may be reasons to retain the original shape of the mesh patches, so methods have been devised for approximating a nonplanar polygonal shape with a plane figure. We discuss how these plane approximations are calculated in the subsection on plane equations.

Polygon Classifications
An interior angle of a polygon is an angle inside the polygon boundary that is formed by two adjacent edges. If all interior angles of a polygon are less than or equal to 180°, the polygon is convex. An equivalent definition of a convex polygon is that its interior lies completely on one side of the infinite extension line of any one of its edges. Also, if we select any two points in the interior of a convex polygon, the line segment joining the two points is also in the interior. A polygon that is not convex is called a concave polygon. Figure 3-42 gives examples of convex and concave polygons.

The term degenerate polygon is often used to describe a set of vertices that are collinear or that have repeated coordinate positions. Collinear vertices generate a line segment. Repeated vertex positions can generate a polygon shape with extraneous lines, overlapping edges, or edges that have a length equal to 0. Sometimes the term degenerate polygon is also applied to a vertex list that contains fewer than three coordinate positions.

FIGURE 3-42  A convex polygon (a), and a concave polygon (b).

To be robust, a graphics package could reject degenerate or nonplanar vertex sets. But this requires extra processing to identify these problems, so graphics systems usually leave such considerations to the programmer. Concave polygons also present problems. Implementations of fill algorithms and other graphics routines are more complicated for concave polygons, so it is generally more efficient to split a concave polygon into a set of convex polygons before processing. As with other polygon preprocessing algorithms, concave polygon splitting is often not included in a graphics library. Some graphics packages, including OpenGL, require all fill polygons to be convex. And some systems accept only triangular fill areas, which greatly simplifies many of the display algorithms.

Identifying Concave Polygons


A concave polygon has at least one interior angle greater than 180°. Also, the extension of some edges of a concave polygon will intersect other edges, and some pair of interior points will produce a line segment that intersects the polygon boundary. Therefore, we can use any one of these characteristics of a concave polygon as a basis for constructing an identification algorithm.

If we set up a vector for each polygon edge, then we can use the cross product of adjacent edges to test for concavity. All such vector products will be of the same sign (positive or negative) for a convex polygon. Therefore, if some cross products yield a positive value and some a negative value, we have a concave polygon. Figure 3-43 illustrates the edge-vector, cross-product method for identifying concave polygons. Another way to identify a concave polygon is to look at the polygon vertex positions relative to the extension line of any edge. If some vertices are on one side of the extension line and some vertices are on the other side, the polygon is concave.
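A minimal sketch of this edge-vector, cross-product test follows. The Point2 structure and the function name isConcave are assumptions introduced only for this illustration, and the test ignores the degenerate cases (collinear or repeated vertices) discussed above.

#include <vector>

struct Point2 { double x, y; };     //  Simple 2D vertex type assumed for this sketch.

/*  Return true if the polygon, given by its vertices listed in order around
 *  the perimeter, is concave: the z components of the cross products of
 *  successive edge vectors then have mixed signs.                          */
bool isConcave (const std::vector<Point2>& v)
{
   int n = int (v.size ( ));
   bool hasPositive = false, hasNegative = false;

   for (int k = 0; k < n; k++) {
      Point2 a = v[k], b = v[(k + 1) % n], c = v[(k + 2) % n];
      double e1x = b.x - a.x, e1y = b.y - a.y;      //  Edge vector E_k.
      double e2x = c.x - b.x, e2y = c.y - b.y;      //  Edge vector E_(k+1).
      double crossZ = e1x * e2y - e1y * e2x;        //  z component of E_k x E_(k+1).
      if (crossZ > 0.0) hasPositive = true;
      if (crossZ < 0.0) hasNegative = true;
   }
   return hasPositive && hasNegative;
}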

Splitting Concave Polygons


Once we have identified a concave polygon, we can split it into a set of convex polygons. This can be accomplished using edge vectors and edge cross products. Or, we can use vertex positions relative to an edge extension line to determine which vertices are on one side of this line and which are on the other.
FIGURE 3-43  Identifying a concave polygon by calculating cross products of successive pairs of edge vectors.

For the following algorithms, we assume that all polygons are in the xy plane. Of course, the original position of a polygon described in world coordinates may not be in the xy plane, but we can always move it into that plane using the transformation methods discussed in Chapter 5.

With the vector method for splitting a concave polygon, we first need to form the edge vectors. Given two consecutive vertex positions, V_k and V_{k+1}, we define the edge vector between them as

    E_k = V_{k+1} - V_k

Next we calculate the cross products of successive edge vectors in order around the polygon perimeter. If the z component of some cross products is positive while other cross products have a negative z component, the polygon is concave. Otherwise, the polygon is convex. This assumes that no series of three successive vertices are collinear, in which case the cross product of the two edge vectors for these vertices would be zero. If all vertices are collinear, we have a degenerate polygon (a straight line).

We can apply the vector method by processing edge vectors in a counterclockwise order. If any cross product has a negative z component (as in Fig. 3-43), the polygon is concave and we can split it along the line of the first edge vector in the cross-product pair. The following example illustrates this method for splitting a concave polygon.

EXAMPLE 3-4  Vector Method for Splitting Concave Polygons
Figure 3-44 shows a concave polygon with six edges. Edge vectors for this polygon can be expressed as

    E1 = (1, 0, 0)      E2 = (1, 1, 0)
    E3 = (1, -1, 0)     E4 = (0, 2, 0)
    E5 = (-3, 0, 0)     E6 = (0, -2, 0)

where the z component is 0, since all edges are in the xy plane. The cross product E_j × E_k for two successive edge vectors is a vector perpendicular to the xy plane with z component equal to E_{jx} E_{ky} - E_{kx} E_{jy}:

    E1 × E2 = (0, 0, 1)      E2 × E3 = (0, 0, -2)
    E3 × E4 = (0, 0, 2)      E4 × E5 = (0, 0, 6)
    E5 × E6 = (0, 0, 6)      E6 × E1 = (0, 0, 2)

FIGURE 3-44  Splitting a concave polygon using the vector method.

Since the cross product E2 × E3 has a negative z component, we split the polygon along the line of vector E2. The line equation for this edge has a slope of 1 and a y intercept of -1. We then determine the intersection of this line with the other polygon edges to split the polygon into two pieces. No other edge cross products are negative, so the two new polygons are both convex.

We can also split a concave polygon using a rotational method. Proceeding counterclockwise around the polygon edges, we shift the position of the polygon so that each vertex V_k in turn is at the coordinate origin. Then we rotate the polygon about the origin in a clockwise direction so that the next vertex V_{k+1} is on the x axis. If the following vertex, V_{k+2}, is below the x axis, the polygon is concave. We then split the polygon along the x axis to form two new polygons, and we repeat the concave test for each of the two new polygons.

The steps above are repeated until we have tested all vertices in the polygon list. Methods for rotating and shifting the position of an object are discussed in detail in Chapter 5. Figure 3-45 illustrates the rotational method for splitting a concave polygon.

FIGURE 3-45  Splitting a concave polygon using the rotational method. After moving V2 to the coordinate origin and rotating V3 onto the x axis, we find that V4 is below the x axis. So we split the polygon along the line of V2V3, which is the x axis.

Splitting a Convex Polygon into a Set of Triangles


Once we have a vertex list for a convex polygon, we could transform it into a set of triangles. This can be accomplished by first defining any sequence of three consecutive vertices to be a new polygon (a triangle). The middle triangle vertex is then deleted from the original vertex list. Then the same procedure is applied to this modified vertex list to strip off another triangle. We continue forming triangles in this manner until the original polygon is reduced to just three vertices, which define the last triangle in the set. A concave polygon can also be divided into a set of triangles using this approach, as long as the three selected vertices at each step form an interior angle that is less than 180° (a convex angle).
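The triangle-stripping procedure just described can be sketched as follows. Representing the polygon as a list of vertex indices and the output as index triples is an assumption made for this illustration only.

#include <array>
#include <vector>

/*  Split a convex polygon, given as an ordered list of vertex indices, into
 *  triangles by repeatedly taking three consecutive vertices as a triangle
 *  and deleting the middle vertex from the list.                           */
std::vector<std::array<int, 3>> triangulateConvex (std::vector<int> indices)
{
   std::vector<std::array<int, 3>> triangles;
   while (indices.size ( ) > 3) {
      std::array<int, 3> tri = { indices[0], indices[1], indices[2] };
      triangles.push_back (tri);
      indices.erase (indices.begin ( ) + 1);     //  Remove the middle vertex of the triple.
   }
   if (indices.size ( ) == 3) {
      std::array<int, 3> lastTri = { indices[0], indices[1], indices[2] };
      triangles.push_back (lastTri);             //  The remaining three vertices form the last triangle.
   }
   return triangles;
}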

Inside-Outside Tests
Various graphics processes often need to identify interior regions of objects. Identifying the interior of a simple object, such as a convex polygon, a circle, or a sphere, is generally a straightforward process. But sometimes we must deal with more complex objects. For example, we may want to specify a complex fill region with intersecting edges, as in Fig. 3-46. For such shapes, it is not always clear which regions of the xy plane we should call interior and which regions we should designate as exterior to the object boundaries.

Two commonly used algorithms for identifying interior areas of a plane figure are the odd-even rule and the nonzero winding-number rule. We apply the odd-even rule, also called the odd-parity rule or the even-odd rule, by first conceptually drawing a line from any position P to a distant point outside the coordinate extents of the closed polyline. Then we count the number of line-segment crossings along this line. If the number of segments crossed by this line is odd, then P is considered to be an interior point. Otherwise, P is an exterior point. To obtain an accurate count of the segment crossings, we must be sure that the line path we choose does not intersect any line-segment endpoints. Figure 3-46(a) shows the interior and exterior regions obtained using the odd-even rule for a self-intersecting closed polyline. We can use this procedure, for example,
FIGURE 3-46  Identifying interior and exterior regions of a closed polyline that contains self-intersecting segments: (a) odd-even rule; (b) nonzero winding-number rule.

to fill the interior region between two concentric circles or two concentric polygons with a specified color.

Another method for defining interior regions is the nonzero winding-number rule, which counts the number of times the boundary of an object winds around a particular point in the counterclockwise direction. This count is called the winding number, and the interior points of a two-dimensional object can be defined to be those that have a nonzero value for the winding number. We apply the nonzero winding-number rule by initializing the winding number to 0 and again imagining a line drawn from any position P to a distant point beyond the coordinate extents of the object. The line we choose must not pass through any endpoint coordinates. As we move along the line from position P to the distant point, we count the number of object line segments that cross the reference line in each direction. We add 1 to the winding number every time we intersect a segment that crosses the line in the direction from right to left, and we subtract 1 every time we intersect a segment that crosses from left to right. The final value of the winding number, after all boundary crossings have been counted, determines the relative position of P. If the winding number is nonzero, P is considered to be an interior point. Otherwise, P is taken to be an exterior point. Figure 3-46(b) shows the interior and exterior regions defined by the nonzero winding-number rule for a self-intersecting, closed polyline. For simple objects, such as polygons and circles, the nonzero winding-number rule and the odd-even rule give the same results. But for more complex shapes, the two methods may yield different interior and exterior regions, as in the example of Fig. 3-46.

One way to determine directional boundary crossings is to set up vectors along the object edges (or boundary lines) and along the reference line. Then we compute the vector cross product of the vector u, along the line from P to a distant point, with an object edge vector E for each edge that crosses the line. Assuming that we have a two-dimensional object in the xy plane, the direction of each vector cross product will be either in the +z direction or in the -z direction. If the z component of a cross product u × E for a particular crossing is positive, that segment crosses from right to left and we add 1 to the winding number. Otherwise, the segment crosses from left to right and we subtract 1 from the winding number.

A somewhat simpler way to compute directional boundary crossings is to use vector dot products instead of cross products. To do this, we set up a vector that is perpendicular to vector u and that has a right-to-left direction as we look along the line from P in the direction of u. If the components of u are denoted as (u_x, u_y), then the vector that is perpendicular to u has components (-u_y, u_x) (Appendix A). Now, if the dot product of this perpendicular vector and a boundary-line vector is positive, that crossing is from right to left and we add 1 to the winding number. Otherwise, the boundary crosses our reference line from left to right, and we subtract 1 from the winding number.

The nonzero winding-number rule tends to classify as interior some areas that the odd-even rule deems to be exterior, and it can be more versatile in some applications. In general, plane figures can be defined with multiple, disjoint components, and the direction specified for each set of disjoint boundaries can be used to designate the interior and exterior regions.
Examples include characters (such as letters of the alphabet and punctuation symbols), nested polygons, and concentric circles or ellipses. For curved lines, the odd-even rule is applied by calculating intersections with the curve paths. Similarly, with the nonzero winding-number rule, we need to calculate tangent vectors to the curves at the crossover intersection points with the reference line from position P.
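The perpendicular dot-product test just described is easy to code. The routine below is a minimal sketch that uses a horizontal reference line from P in the +x direction, so that the perpendicular vector is simply (0, 1); the names Point2 and windingNumber are our own illustrative choices and are not part of any graphics library.

   struct Point2 { double x, y; };

   // Returns the winding number of a closed polyline (v[0..nVerts-1]) about
   // point p, using a reference line from p in the +x direction.
   int windingNumber (const Point2 *v, int nVerts, Point2 p)
   {
      int winding = 0;
      for (int k = 0; k < nVerts; k++) {
         Point2 a = v[k];
         Point2 b = v[(k + 1) % nVerts];        // Edge from a to b.
         // Does this edge cross the line y = p.y to the right of p?
         if ((a.y <= p.y && b.y > p.y) || (a.y > p.y && b.y <= p.y)) {
            double t = (p.y - a.y) / (b.y - a.y);
            double xCross = a.x + t * (b.x - a.x);
            if (xCross > p.x) {
               // Dot product of the edge vector with the perpendicular (0, 1)
               // is just the edge's y component; its sign gives the direction.
               if (b.y - a.y > 0.0)
                  winding++;      // Edge crosses from right to left.
               else
                  winding--;      // Edge crosses from left to right.
            }
         }
      }
      return winding;             // Nonzero means p is an interior point.
   }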


FIGURE 3-47  A fill area defined as a region that has a positive value for the winding number. This fill area is the union of two regions, each with a counterclockwise border direction.

FIGURE 3-48  A fill area defined as a region with a winding number greater than 1. This fill area is the intersection of two regions, each with a counterclockwise border direction.


FIGURE 3-49  A fill area defined as a region with a positive value for the winding number. This fill area is the difference, A - B, of two regions, where region A has a positive border direction (counterclockwise) and region B has a negative border direction (clockwise).

Variations of the nonzero winding-number rule can be used to define interior regions in other ways. For example, we could define a point to be interior if its winding number is positive or if it is negative. Or we could use any other rule to generate a variety of fill shapes. Sometimes, Boolean operations are used to specify a fill area as a combination of two regions. One way to implement Boolean operations is by using a variation of the basic winding-number rule. With this scheme, we first define a simple, nonintersecting boundary for each of two regions. Then if we consider the direction for each boundary to be counterclockwise, the union of two regions would consist of those points whose winding number is positive (Fig. 3-47). Similarly, the intersection of two regions with counterclockwise boundaries would contain those points whose winding number is greater than 1, as illustrated in Fig. 3-48. To set up a fill area that is the difference of two regions, say A - B, we can enclose region A with a counterclockwise border and B with a clockwise border. Then the difference region (Fig. 3-49) is the set of all points whose winding number is positive.

Polygon Tables
Typically, the objects in a scene are described as sets of polygon surface facets. In fact, graphics packages often provide functions for defining a surface shape as a mesh of polygon patches. The description for each object includes coordinate information specifying the geometry for the polygon facets and other surface parameters such as color, transparency, and light-reflection properties. As

   VERTEX TABLE         EDGE TABLE          SURFACE-FACET TABLE
   V1: x1, y1, z1       E1: V1, V2          S1: E1, E2, E3
   V2: x2, y2, z2       E2: V2, V3          S2: E3, E4, E5, E6
   V3: x3, y3, z3       E3: V3, V1
   V4: x4, y4, z4       E4: V3, V4
   V5: x5, y5, z5       E5: V4, V5
                        E6: V5, V1

FIGURE 3-50  Geometric data-table representation for two adjacent polygon surface facets, formed with six edges and five vertices.

information for each polygon is input, the data are placed into tables that are to be used in the subsequent processing, display, and manipulation of the objects in the scene. These polygon data tables can be organized into two groups: geometric tables and attribute tables. Geometric data tables contain vertex coordinates and parameters to identify the spatial orientation of the polygon surfaces. Attribute information for an object includes parameters specifying the degree of transparency of the object and its surface reflectivity and texture characteristics.

Geometric data for the objects in a scene are arranged conveniently in three lists: a vertex table, an edge table, and a surface-facet table. Coordinate values for each vertex in the object are stored in the vertex table. The edge table contains pointers back into the vertex table to identify the vertices for each polygon edge. And the surface-facet table contains pointers back into the edge table to identify the edges for each polygon. This scheme is illustrated in Fig. 3-50 for two adjacent polygon facets on an object surface. In addition, individual objects and their component polygon faces can be assigned object and facet identifiers for easy reference.

Listing the geometric data in three tables, as in Fig. 3-50, provides a convenient reference to the individual components (vertices, edges, and surface facets) for each object. Also, the object can be displayed efficiently by using data from the edge table to identify polygon boundaries. An alternative arrangement is to use just two tables: a vertex table and a surface-facet table. But this scheme is less convenient, and some edges could get drawn twice in a wire-frame display. Another possibility is to use only a surface-facet table, but this duplicates coordinate information, since explicit coordinate values are listed for each vertex in each polygon facet. Also, the relationship between edges and facets would have to be reconstructed from the vertex listings in the surface-facet table.

We can add extra information to the data tables of Fig. 3-50 for faster information extraction. For instance, we could expand the edge table to include forward pointers into the surface-facet table so that a common edge between polygons


could be identified more rapidly (Fig. 3-51). This is particularly useful for rendering procedures that must vary surface shading smoothly across the edges from one polygon to the next. Similarly, the vertex table could be expanded to reference corresponding edges, for faster information retrieval.

Additional geometric information that is usually stored in the data tables includes the slope for each edge and the coordinate extents for polygon edges, polygon facets, and each object in a scene. As vertices are input, we can calculate edge slopes, and we can scan the coordinate values to identify the minimum and maximum x, y, and z values for individual lines and polygons. Edge slopes and bounding-box information are needed in subsequent processing, such as surface rendering and visible-surface identification algorithms.

Since the geometric data tables may contain extensive listings of vertices and edges for complex objects and scenes, it is important that the data be checked for consistency and completeness. When vertex, edge, and polygon definitions are specified, it is possible, particularly in interactive applications, that certain input errors could be made that would distort the display of the objects. The more information included in the data tables, the easier it is to check for errors. Therefore, error checking is easier when three data tables (vertex, edge, and surface facet) are used, since this scheme provides the most information. Some of the tests that could be performed by a graphics package are (1) that every vertex is listed as an endpoint for at least two edges, (2) that every edge is part of at least one polygon, (3) that every polygon is closed, (4) that each polygon has at least one shared edge, and (5) that if the edge table contains pointers to polygons, every edge referenced by a polygon pointer has a reciprocal pointer back to the polygon.
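The three-table organization and the first two of these consistency checks can be sketched with a few simple data structures. The structure and function names below are our own illustrative choices; no graphics package prescribes this particular layout.

   #include <vector>

   struct Vertex { double x, y, z; };
   struct Edge   { int v1, v2; };               // Indices into the vertex table.
   struct Facet  { std::vector<int> edges; };   // Indices into the edge table.

   struct PolygonTables {
      std::vector<Vertex> vertices;
      std::vector<Edge>   edges;
      std::vector<Facet>  facets;
   };

   // Test (1): every vertex is an endpoint of at least two edges.
   bool everyVertexShared (const PolygonTables &t)
   {
      std::vector<int> useCount (t.vertices.size ( ), 0);
      for (const Edge &e : t.edges) {
         useCount[e.v1]++;
         useCount[e.v2]++;
      }
      for (int count : useCount)
         if (count < 2) return false;
      return true;
   }

   // Test (2): every edge is part of at least one polygon facet.
   bool everyEdgeUsed (const PolygonTables &t)
   {
      std::vector<bool> used (t.edges.size ( ), false);
      for (const Facet &f : t.facets)
         for (int e : f.edges)
            used[e] = true;
      for (bool u : used)
         if (!u) return false;
      return true;
   }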

   E1: V1, V2, S1
   E2: V2, V3, S1
   E3: V3, V1, S1, S2
   E4: V3, V4, S2
   E5: V4, V5, S2
   E6: V5, V1, S2

FIGURE 3-51 Edge table for the surfaces of Fig. 3-50 expanded to include pointers into the surface-facet table.

Plane Equations
To produce a display of a three-dimensional scene, a graphics system processes the input data through several procedures. These procedures include transformation of the modeling and world-coordinate descriptions through the viewing pipeline, identification of visible surfaces, and the application of rendering routines to the individual surface facets. For some of these processes, information about the spatial orientation of the surface components of objects is needed. This information is obtained from the vertex coordinate values and the equations that describe the polygon surfaces. Each polygon in a scene is contained within a plane of infinite extent. The general equation of a plane is

   Ax + By + Cz + D = 0                                               (3-59)

where (x, y, z) is any point on the plane, and the coefficients A, B, C, and D (called plane parameters) are constants describing the spatial properties of the plane. We can obtain the values of A, B, C, and D by solving a set of three plane equations using the coordinate values for three noncollinear points in the plane. For this purpose, we can select three successive convex-polygon vertices, (x1, y1, z1), (x2, y2, z2), and (x3, y3, z3), in a counterclockwise order and solve the following set of simultaneous linear plane equations for the ratios A/D, B/D, and C/D:

   (A/D) xk + (B/D) yk + (C/D) zk = -1,     k = 1, 2, 3               (3-60)

The solution to this set of equations can be obtained in determinant form, using

Cramer's rule, as

       | 1   y1   z1 |              | x1   1   z1 |
   A = | 1   y2   z2 |          B = | x2   1   z2 |
       | 1   y3   z3 |              | x3   1   z3 |
                                                                      (3-61)
       | x1   y1   1 |               | x1   y1   z1 |
   C = | x2   y2   1 |          D = -| x2   y2   z2 |
       | x3   y3   1 |               | x3   y3   z3 |

Expanding the determinants, we can write the calculations for the plane coefficients in the form

   A = y1 (z2 - z3) + y2 (z3 - z1) + y3 (z1 - z2)
   B = z1 (x2 - x3) + z2 (x3 - x1) + z3 (x1 - x2)
   C = x1 (y2 - y3) + x2 (y3 - y1) + x3 (y1 - y2)                     (3-62)
   D = -x1 (y2 z3 - y3 z2) - x2 (y3 z1 - y1 z3) - x3 (y1 z2 - y2 z1)

These calculations are valid for any three coordinate positions, including those for which D = 0. When vertex coordinates and other information are entered into the polygon data structure, values for A, B, C, and D can be computed for each polygon facet and stored with the other polygon data.

It is possible that the coordinates defining a polygon facet may not be contained within a single plane. We can solve this problem by dividing the facet into a set of triangles. Or we could find an approximating plane for the vertex list. One method for obtaining an approximating plane is to divide the vertex list into subsets, where each subset contains three vertices, and calculate plane parameters A, B, C, D for each subset. The approximating plane parameters are then obtained as the average value for each of the calculated plane parameters. Another approach is to project the vertex list onto the coordinate planes. Then we take parameter A proportional to the area of the polygon projection on the yz plane, parameter B proportional to the projection area on the xz plane, and parameter C proportional to the projection area on the xy plane. The projection method is often used in ray-tracing applications.
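As a small illustration of Eqs. 3-62, the plane parameters can be computed directly from three noncollinear vertices taken in counterclockwise order. The struct and function names in this sketch are ours, chosen only for the example.

   struct Vec3  { double x, y, z; };
   struct Plane { double A, B, C, D; };

   // Plane parameters from three counterclockwise, noncollinear vertices (Eqs. 3-62).
   Plane planeFromVertices (Vec3 p1, Vec3 p2, Vec3 p3)
   {
      Plane pl;
      pl.A = p1.y * (p2.z - p3.z) + p2.y * (p3.z - p1.z) + p3.y * (p1.z - p2.z);
      pl.B = p1.z * (p2.x - p3.x) + p2.z * (p3.x - p1.x) + p3.z * (p1.x - p2.x);
      pl.C = p1.x * (p2.y - p3.y) + p2.x * (p3.y - p1.y) + p3.x * (p1.y - p2.y);
      pl.D = -p1.x * (p2.y * p3.z - p3.y * p2.z)
             -p2.x * (p3.y * p1.z - p1.y * p3.z)
             -p3.x * (p1.y * p2.z - p2.y * p1.z);
      return pl;
   }

With the parameters stored this way, the expression Ax + By + Cz + D can then be evaluated for any point, which is exactly the front/back test discussed in the next subsection.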

Front and Back Polygon Faces


Since we are usually dealing with polygon surfaces that enclose an object interior, we need to distinguish between the two sides of each surface. The side of a polygon that faces into the object interior is called the back face, and the visible, or outward, side is the front face. Identifying the position of points in space relative to the front and back faces of a polygon is a basic task in many graphics algorithms, as, for example, in determining object visibility. Every polygon is contained within an infinite plane that partitions space into two regions. Any point that is not on the plane and that is visible to the front face of a polygon surface section is said to be in front of (or outside) the plane, and, thus, outside the object. And any point that is visible to the back face of the polygon is behind (or inside) the plane. A point that is behind (inside) all polygon surface planes is inside the object. We need to keep in mind that this inside/outside classification is relative to the plane containing the polygon, whereas our previous inside/outside tests using the winding-number or odd-even rule were in reference to the interior of some two-dimensional boundary.

Plane equations can be used to identify the position of spatial points relative to the polygon facets of an object. For any point (x, y, z) not on a plane with


parameters A, B, C, D, we have

   Ax + By + Cz + D ≠ 0

Thus we can identify the point as either behind or in front of a polygon surface contained within that plane according to the sign (negative or positive) of Ax + By + Cz + D:

   if Ax + By + Cz + D < 0,   the point (x, y, z) is behind the plane
   if Ax + By + Cz + D > 0,   the point (x, y, z) is in front of the plane

These inequality tests are valid in a right-handed Cartesian system, provided the plane parameters A, B, C, and D were calculated using coordinate positions selected in a strictly counterclockwise order when viewing the surface along a front-to-back direction. For example, in Fig. 3-52, any point outside (in front of) the plane of the shaded polygon satisfies the inequality x - 1 > 0, while any point inside (in back of) the plane has an x-coordinate value less than 1.

Orientation of a polygon surface in space can be described with the normal vector for the plane containing that polygon, as shown in Fig. 3-53. This surface normal vector is perpendicular to the plane and has Cartesian components (A, B, C), where parameters A, B, and C are the plane coefficients calculated in Eqs. 3-62. The normal vector points in a direction from inside the plane to the outside; that is, from the back face of the polygon to the front face.

As an example of calculating the components of the normal vector for a polygon, which also gives us the plane parameters, we choose three of the vertices of the shaded face of the unit cube in Fig. 3-52. These points are selected in a counterclockwise ordering as we view the cube from outside looking toward the origin. Coordinates for these vertices, in the order selected, are then used in Eqs. 3-62 to obtain the plane coefficients: A = 1, B = 0, C = 0, D = -1. Thus, the normal vector for this plane is N = (1, 0, 0), which is in the direction of the positive x axis. That is, the normal vector is pointing from inside the cube to the outside and is perpendicular to the plane x = 1.

The elements of a normal vector can also be obtained using a vector cross-product calculation. Assuming we have a convex-polygon surface facet and a right-handed Cartesian system, we again select any three vertex positions, V1, V2, and V3, taken in counterclockwise order when viewing from outside the object toward the inside. Forming two vectors, one from V1 to V2 and the second from V1 to V3, we calculate N as the vector cross product:

   N = (V2 - V1) × (V3 - V1)                                          (3-63)

This generates values for the plane parameters A, B, and C. We can then obtain the value for parameter D by substituting these values and the coordinates for one of the polygon vertices into the plane equation 3-59 and solving for D. The plane equation can be expressed in vector form using the normal N and the position P of any point in the plane as

   N · P = -D                                                         (3-64)

For a convex polygon, we could also obtain the plane parameters using the cross product of two successive edge vectors. And with a concave polygon, we can select the three vertices so that the two vectors for the cross product form an angle less than 180°. Otherwise, we can take the negative of their cross product to get the correct normal vector direction for the polygon surface.
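A minimal sketch of Eq. 3-63 and the resulting front/back point test follows; the type and function names are again our own and are not library routines.

   struct Vec3 { double x, y, z; };

   Vec3 subtract (Vec3 a, Vec3 b)
   {
      Vec3 c = { a.x - b.x, a.y - b.y, a.z - b.z };
      return c;
   }

   Vec3 cross (Vec3 a, Vec3 b)
   {
      Vec3 c = { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
      return c;
   }

   // N = (V2 - V1) x (V3 - V1), for vertices listed counterclockwise
   // as seen from outside the object (Eq. 3-63).
   Vec3 surfaceNormal (Vec3 v1, Vec3 v2, Vec3 v3)
   {
      return cross (subtract (v2, v1), subtract (v3, v1));
   }

   // Since D = -(N . V1), the value Ax + By + Cz + D equals N . (P - V1):
   // a positive result puts P in front of (outside) the plane, a negative
   // result puts it behind (inside).
   double sideOfPlane (Vec3 n, Vec3 v1, Vec3 p)
   {
      Vec3 d = subtract (p, v1);
      return n.x * d.x + n.y * d.y + n.z * d.z;
   }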

FIGURE 3-52  The shaded polygon surface of the unit cube has plane equation x - 1 = 0.

FIGURE 3-53  The normal vector N for a plane described with the equation Ax + By + Cz + D = 0 is perpendicular to the plane and has Cartesian components (A, B, C).


3-16 OpenGL POLYGON FILL-AREA FUNCTIONS


With one exception, the OpenGL procedures for specifying fill polygons are similar to those for describing a point or a polyline. A glVertex function is used to input the coordinates for a single polygon vertex, and a complete polygon is described with a list of vertices placed between a glBegin/glEnd pair. However, there is one additional function that we can use for displaying a rectangle that has an entirely different format.

By default, a polygon interior is displayed in a solid color, determined by the current color settings. As options (which are described in the next chapter), we can fill a polygon with a pattern and we can display polygon edges as line borders around the interior fill. There are six different symbolic constants that we can use as the argument in the glBegin function to describe polygon fill areas. These six primitive constants allow us to display a single fill polygon, a set of unconnected fill polygons, or a set of connected fill polygons.

In OpenGL, a fill area must be specified as a convex polygon. Thus, a vertex list for a fill polygon must contain at least three vertices, there can be no crossing edges, and all interior angles for the polygon must be less than 180°. And a single polygon fill area can be defined with only one vertex list, which precludes any specifications that contain holes in the polygon interior, such as that shown in Fig. 3-54. We could describe such a figure using two overlapping convex polygons.

Each polygon that we specify has two faces: a back face and a front face. In OpenGL, fill color and other attributes can be set for each face separately, and back/front identification is needed in both two-dimensional and three-dimensional viewing routines. Therefore, polygon vertices should be specified in a counterclockwise order as we view the polygon from outside. This identifies the front face for that polygon.

Because graphics displays often include rectangular fill areas, OpenGL provides a special rectangle function that directly accepts vertex specifications in the xy plane. In some implementations of OpenGL, the following routine can be more efficient than generating a fill rectangle using glVertex specifications.
glRect* (x1, y1, x2, y2);

One corner of this rectangle is at coordinate position (x1, y1), and the opposite corner of the rectangle is at position (x2, y2). Suffix codes for glRect specify the coordinate data type and whether coordinates are to be expressed as array elements. These codes are i (for integer), s (for short), f (for float), d (for double), and v (for vector). The rectangle is displayed with edges parallel to the xy

FIGURE 3-54  A polygon with a complex interior, which cannot be specified with a single vertex list.


FIGURE 3-55  Display of a square fill area using the glRect function.

coordinate axes. As an example, the following statement defines the square shown in Fig. 3-55.
glRecti (200, 100, 50, 250);

If we put the coordinate values for this rectangle into arrays, we can generate the same square with the following code.
int vertex1 [ ] = {200, 100};
int vertex2 [ ] = {50, 250};

glRectiv (vertex1, vertex2);

When a rectangle is generated with function glRect, the polygon edges are formed between the vertices in the order (x1, y1), (x2, y1), (x2, y2), (x1, y2), and then back to the first vertex. Thus, in our example, we produced a vertex list with a clockwise ordering. In many two-dimensional applications, the determination of front and back faces is unimportant. But if we do want to assign different properties to the front and back faces of the rectangle, then we should reverse the order of the two vertices in this example so that we obtain a counterclockwise ordering of the vertices. In Chapter 4, we discuss another way that we can reverse the specification of front and back polygon faces.

Each of the other six OpenGL polygon fill primitives is specified with a symbolic constant in the glBegin function, along with a list of glVertex commands. With the OpenGL primitive constant GL_POLYGON, we can display a single polygon fill area such as that shown in Fig. 3-56(a). For this example, we assume that we have a list of six points, labeled p1 through p6, specifying two-dimensional polygon vertex positions in a counterclockwise ordering. Each of the points is represented as an array of (x, y) coordinate values.
glBegin (GL_POLYGON);
   glVertex2iv (p1);
   glVertex2iv (p2);
   glVertex2iv (p3);
   glVertex2iv (p4);
   glVertex2iv (p5);
   glVertex2iv (p6);
glEnd ( );

A polygon vertex list must contain at least three vertices. Otherwise, nothing is displayed.


FIGURE 3-56  Displaying polygon fill areas using a list of six vertex positions. (a) A single convex polygon fill area generated with the primitive constant GL_POLYGON. (b) Two unconnected triangles generated with GL_TRIANGLES. (c) Four connected triangles generated with GL_TRIANGLE_STRIP. (d) Four connected triangles generated with GL_TRIANGLE_FAN.

If we reorder the vertex list and change the primitive constant in the previous code example to GL_TRIANGLES, we obtain the two separated triangle fill areas in Fig. 3-56(b).
glBegin (GL_TRIANGLES);
   glVertex2iv (p1);
   glVertex2iv (p2);
   glVertex2iv (p6);
   glVertex2iv (p3);
   glVertex2iv (p4);
   glVertex2iv (p5);
glEnd ( );

In this case, the first three coordinate points define the vertices for one triangle, the next three points define the next triangle, and so forth. For each triangle fill area, we specify the vertex positions in a counterclockwise order. A set of unconnected triangles is displayed with this primitive constant unless some vertex coordinates are repeated. Nothing is displayed if we do not list at least three vertices. And if the number of vertices specified is not a multiple of three, the final one or two vertex positions are not used. By reordering the vertex list once more and changing the primitive constant to GL_TRIANGLE_STRIP, we can display the set of connected triangles shown in Fig. 3-56(c).

glBegin (GL_TRIANGLE_STRIP);
   glVertex2iv (p1);
   glVertex2iv (p2);
   glVertex2iv (p6);
   glVertex2iv (p3);
   glVertex2iv (p5);
   glVertex2iv (p4);
glEnd ( );



Assuming that no coordinate positions are repeated in a list of N vertices, we obtain N - 2 triangles in the strip. Clearly, we must have N ≥ 3 or nothing is displayed. In this example, N = 6 and we obtain four triangles. Each successive triangle shares an edge with the previously defined triangle, so the ordering of the vertex list must be set up to ensure a consistent display. One triangle is defined for each vertex position listed after the first two vertices. Thus, the first three vertices should be listed in counterclockwise order, when viewing the front (outside) surface of the triangle. After that, the set of three vertices for each subsequent triangle is arranged in a counterclockwise order within the polygon tables. This is accomplished by processing each position n in the vertex list in the order n = 1, n = 2, . . . , n = N - 2 and arranging the order of the corresponding set of three vertices according to whether n is an odd number or an even number. If n is odd, the polygon table listing for the triangle vertices is in the order n, n + 1, n + 2. If n is even, the triangle vertices are listed in the order n + 1, n, n + 2. In the preceding example, our first triangle (n = 1) would be listed as having vertices (p1, p2, p6). The second triangle (n = 2) would have the vertex ordering (p6, p2, p3). Vertex ordering for the third triangle (n = 3) would be (p6, p3, p5). And the fourth triangle (n = 4) would be listed in the polygon tables with vertex ordering (p5, p3, p4). (A short code sketch of this bookkeeping is given after the triangle-fan example below.)

Another way to generate a set of connected triangles is to use the fan approach illustrated in Fig. 3-56(d), where all triangles share a common vertex. We obtain this arrangement of triangles using the primitive constant GL_TRIANGLE_FAN and the original ordering of our six vertices:
glBegin (GL_TRIANGLE_FAN);
   glVertex2iv (p1);
   glVertex2iv (p2);
   glVertex2iv (p3);
   glVertex2iv (p4);
   glVertex2iv (p5);
   glVertex2iv (p6);
glEnd ( );

For N vertices, we again obtain N - 2 triangles, providing no vertex positions are repeated, and we must list at least three vertices. In addition, the vertices must be specified in the proper order to correctly define front and back faces for each triangle. The first coordinate position listed (in this case, p1) is a vertex for each triangle in the fan. If we again enumerate the triangles and the coordinate positions listed as n = 1, n = 2, . . . , n = N - 2, then vertices for triangle n are listed in the polygon tables in the order 1, n + 1, n + 2. Therefore, triangle 1 is defined with the vertex list (p1, p2, p3); triangle 2 has the vertex ordering (p1, p3, p4); triangle 3 has its vertices specified in the order (p1, p4, p5); and triangle 4 is listed with vertices (p1, p5, p6).
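To make the two ordering rules concrete, the sketch promised above lists the polygon-table vertex ordering for every triangle in a strip or a fan of N vertices, in terms of positions in the vertex list. The function names are ours; OpenGL performs this bookkeeping internally.

   #include <cstdio>

   void listStripTriangles (int N)
   {
      for (int n = 1; n <= N - 2; n++) {
         if (n % 2 == 1)      // Odd n: vertices in the order n, n+1, n+2.
            printf ("triangle %d: (%d, %d, %d)\n", n, n, n + 1, n + 2);
         else                 // Even n: vertices in the order n+1, n, n+2.
            printf ("triangle %d: (%d, %d, %d)\n", n, n + 1, n, n + 2);
      }
   }

   void listFanTriangles (int N)
   {
      for (int n = 1; n <= N - 2; n++)   // Position 1 is shared by every triangle.
         printf ("triangle %d: (1, %d, %d)\n", n, n + 1, n + 2);
   }

With N = 6, these loops reproduce the list-position orderings worked out above for the strip and fan examples.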

FIGURE 3-57  Displaying quadrilateral fill areas using a list of eight vertex positions. (a) Two unconnected quadrilaterals generated with GL_QUADS. (b) Three connected quadrilaterals generated with GL_QUAD_STRIP.

Besides the primitive functions for triangles and a general polygon, OpenGL provides for the specification of two types of quadrilaterals (four-sided polygons). With the GL_QUADS primitive constant and the following list of eight vertices, specified as two-dimensional coordinate arrays, we can generate the display shown in Fig. 3-57(a).
glBegin (GL_QUADS);
   glVertex2iv (p1);
   glVertex2iv (p2);
   glVertex2iv (p3);
   glVertex2iv (p4);
   glVertex2iv (p5);
   glVertex2iv (p6);
   glVertex2iv (p7);
   glVertex2iv (p8);
glEnd ( );

The first four coordinate points define the vertices for one quadrilateral, the next four points define the next quadrilateral, and so on. For each quadrilateral fill area, we specify the vertex positions in a counterclockwise order. If no vertex coordinates are repeated, we display a set of unconnected four-sided fill areas. We must list at least four vertices with this primitive. Otherwise, nothing is displayed. And if the number of vertices specified is not a multiple of four, the extra vertex positions are ignored. Rearranging the vertex list in the previous quadrilateral code example and changing the primitive constant to GL_QUAD_STRIP, we can obtain the set of connected quadrilaterals shown in Fig. 3-57(b).

glBegin (GL_QUAD_STRIP);
   glVertex2iv (p1);
   glVertex2iv (p2);
   glVertex2iv (p4);
   glVertex2iv (p3);
   glVertex2iv (p5);
   glVertex2iv (p6);
   glVertex2iv (p8);
   glVertex2iv (p7);
glEnd ( );



A quadrilateral is set up for each pair of vertices specified after the first two vertices in the list, and we need to list the vertices so that we generate a correct counterclockwise vertex ordering for each polygon. For a list of N vertices, we obtain N/2 - 1 quadrilaterals, providing that N ≥ 4. If N is not a multiple of 4, any extra coordinate positions in the list are not used. We can enumerate these fill polygons and the vertices listed as n = 1, n = 2, . . . , n = N/2 - 1. Then the polygon tables will list the vertices for quadrilateral n in the vertex order 2n - 1, 2n, 2n + 2, 2n + 1. For this example, N = 8 and we have 3 quadrilaterals in the strip. Thus, our first quadrilateral (n = 1) is listed as having a vertex ordering of (p1, p2, p3, p4). The second quadrilateral (n = 2) has the vertex ordering (p4, p3, p6, p5). And the vertex ordering for the third quadrilateral (n = 3) is (p5, p6, p7, p8).

Most graphics packages display curved surfaces as a set of approximating plane facets. This is because plane equations are linear, and processing the linear equations is much quicker than processing quadric or other types of curve equations. So OpenGL and other packages provide polygon primitives to facilitate the approximation of a curved surface. Objects are modeled with polygon meshes, and a database of geometric and attribute information is set up to facilitate processing of the polygon facets. In OpenGL, primitives we can use for this purpose are the triangle strip, the triangle fan, and the quad strip. Fast hardware-implemented polygon renderers are incorporated into high-quality graphics systems with the capability for displaying one million or more shaded polygons per second (usually triangles), including the application of surface texture and special lighting effects.

Although the OpenGL core library allows only convex polygons, the OpenGL Utility (GLU) provides functions for dealing with concave polygons and other nonconvex objects with linear boundaries. A set of GLU polygon tessellation routines is available for converting such shapes into a set of triangles, triangle meshes, triangle fans, and straight-line segments. Once such objects have been decomposed, they can be processed with basic OpenGL functions.
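As a brief illustration of approximating a curved surface with one of these primitives, the following sketch builds the side wall of a cylinder as a single quad strip. The function name and its radius, height, and slice-count parameters are our own choices for this example, not part of OpenGL.

   #include <GL/gl.h>
   #include <math.h>

   void drawCylinderWall (GLfloat radius, GLfloat height, GLint nSlices)
   {
      GLfloat twoPi = 6.2831853f;

      glBegin (GL_QUAD_STRIP);
      for (GLint k = 0; k <= nSlices; k++) {
         GLfloat theta = twoPi * k / nSlices;
         GLfloat x = radius * cosf (theta);
         GLfloat y = radius * sinf (theta);
         glVertex3f (x, y, 0.0f);       // Bottom vertex of this slice.
         glVertex3f (x, y, height);     // Top vertex of this slice.
      }
      glEnd ( );
      // Whether these quads are front-facing depends on the viewing setup;
      // swapping the two glVertex3f calls reverses the winding if needed.
   }

Each pair of vertices after the first two adds one quad, so nSlices slices produce nSlices quadrilaterals approximating the curved wall.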

3-17 OpenGL VERTEX ARRAYS


Although our examples so far have contained relatively few coordinate positions, describing a scene containing several objects can get much more complicated. To illustrate, we first consider describing a single, very basic object: the unit cube shown in Fig. 3-58, with coordinates given in integers to simplify the following discussion. A straightforward method for defining the vertex coordinates is to

FIGURE 3-58  A cube with an edge length of 1.

FIGURE 3-59  Subscript values for array pt corresponding to the vertex coordinates for the cube shown in Fig. 3-58.

use a double-subscripted array, such as


GLint points [8][3] = { {0, 0, 0}, {0, 1, 0}, {1, 0, 0}, {1, 1, 0}, {0, 0, 1}, {0, 1, 1}, {1, 0, 1}, {1, 1, 1} };

Or we could first define a data type for a three-dimensional vertex position and then give the coordinates for each vertex position as an element of a single-subscripted array as, for example,
typedef GLint vertex3 [3];

vertex3 pt [8] = { {0, 0, 0}, {0, 1, 0}, {1, 0, 0}, {1, 1, 0},
                   {0, 0, 1}, {0, 1, 1}, {1, 0, 1}, {1, 1, 1} };

Next, we need to define each of the six faces of this object. For this, we could make six calls either to glBegin (GL_POLYGON) or to glBegin (GL_QUADS). In either case, we must be sure to list the vertices for each face in a counterclockwise order when viewing that surface from the outside of the cube. In the following code segment, we specify each cube face as a quadrilateral and use a function call to pass array subscript values to the OpenGL primitive routines. Figure 3-59 shows the subscript values for array pt corresponding to the cube vertex positions.
void quad (GLint n1, GLint n2, GLint n3, GLint n4)
{
   glBegin (GL_QUADS);
      glVertex3iv (pt [n1]);
      glVertex3iv (pt [n2]);
      glVertex3iv (pt [n3]);
      glVertex3iv (pt [n4]);
   glEnd ( );
}


void cube ( )
{
   quad (6, 2, 3, 7);
   quad (5, 1, 0, 4);
   quad (7, 3, 1, 5);
   quad (4, 0, 2, 6);
   quad (2, 0, 1, 3);
   quad (7, 5, 4, 6);
}



Thus, the specification for each face requires six OpenGL functions, and we have six faces to specify. When we add color specifications and other parameters, our display program for the cube could easily contain one hundred or more OpenGL function calls. And scenes with many complex objects can require much more.

As we can see from the preceding cube example, a complete scene description could require hundreds or thousands of coordinate specifications. In addition, there are various attribute and viewing parameters that must be set for individual objects. Thus, object and scene descriptions could require an enormous number of function calls, which puts a demand on system resources and can slow execution of the graphics programs. A further problem with complex displays is that object surfaces (such as the cube in Fig. 3-58) usually have shared vertex coordinates. Using the methods we have discussed up to now, these shared positions may need to be specified multiple times.

To alleviate these problems, OpenGL provides a mechanism for reducing the number of function calls needed in processing coordinate information. Using a vertex array, we can arrange the information for describing a scene so that we need only a very few function calls. The steps involved are

   (1) Invoke the function glEnableClientState (GL_VERTEX_ARRAY) to activate the vertex-array feature of OpenGL.
   (2) Use the function glVertexPointer to specify the location and data format for the vertex coordinates.
   (3) Display the scene using a routine such as glDrawElements, which can process multiple primitives with very few function calls.

Using the pt array previously defined for the cube, we implement these three steps in the following code example.
glEnableClientState (GL_VERTEX_ARRAY);
glVertexPointer (3, GL_INT, 0, pt);

GLubyte vertIndex [ ] = {6, 2, 3, 7, 5, 1, 0, 4, 7, 3, 1, 5,
                         4, 0, 2, 6, 2, 0, 1, 3, 7, 5, 4, 6};

glDrawElements (GL_QUADS, 24, GL_UNSIGNED_BYTE, vertIndex);

With the first command, glEnableClientState (GL_VERTEX_ARRAY), we activate a capability (in this case, a vertex array) on the client side of a client-server system. Because the client (the machine that is running the main program) retains the data for a picture, the vertex array must be there also. As we noted in Chapter 2, the server (our workstation, for example) generates commands and displays the picture. Of course, a single machine can be both client and server.


The vertex-array feature of OpenGL is deactivated with the command:


glDisableClientState (GL_VERTEX_ARRAY);

We next give the location and format of the coordinates for the object vertices in the function glVertexPointer. The first parameter in glVertexPointer, 3 in this example, specifies the number of coordinates used in each vertex description. The data type for the vertex coordinates is designated using an OpenGL symbolic constant as the second parameter in this function. For our example, the data type is GL_INT. Other data types are specified with the symbolic constants GL_BYTE, GL_SHORT, GL_FLOAT, and GL_DOUBLE. With the third parameter we give the byte offset between consecutive vertices. The purpose of this argument is to allow various kinds of data, such as coordinates and colors, to be packed together in one array. Since we are only giving the coordinate data, we assign a value of 0 to the offset parameter. The last parameter in the glVertexPointer function references the vertex array, which contains the coordinate values.

All the indices for the cube vertices are stored in array vertIndex. Each of these indices is the subscript for array pt corresponding to the coordinate values for that vertex. This index list is referenced as the last parameter value in function glDrawElements and is then used by the primitive GL_QUADS, which is the first parameter, to display the set of quadrilateral surfaces for the cube. The second parameter specifies the number of elements in array vertIndex. Since a quadrilateral requires just four vertices and we specified 24, the glDrawElements function continues to display another cube face after each successive set of four vertices until all 24 have been processed. Thus, we accomplish the final display of all faces of the cube with this single function call. The third parameter in function glDrawElements gives the type for the index values. Since our indices are small integers, we specified a type of GL_UNSIGNED_BYTE. The two other index types that can be used are GL_UNSIGNED_SHORT and GL_UNSIGNED_INT.

Additional information can be combined with the coordinate values in the vertex arrays to facilitate the processing of a scene description. We can specify color values and other attributes for objects in arrays that can be referenced by the glDrawElements function. And we can interlace the various arrays for greater efficiency. We take a look at the methods for implementing these attribute arrays in the next chapter.
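To illustrate the stride parameter, the hedged sketch below packs each vertex's coordinates and a color value into one structure and tells OpenGL to step over the color fields. The structure layout and names are our own, and the color array itself (set up with glColorPointer) is one of the attribute arrays discussed in the next chapter.

   #include <GL/gl.h>

   struct PackedVertex {
      GLint   coords [3];     // x, y, z
      GLfloat color  [3];     // r, g, b  (handled in the next chapter)
   };

   PackedVertex cubeData [8];          // Assume this is filled with the cube data.

   void setVertexPointer ( )
   {
      glEnableClientState (GL_VERTEX_ARRAY);

      // Three integer coordinates per vertex; consecutive vertices are
      // sizeof (PackedVertex) bytes apart because of the interleaved colors.
      glVertexPointer (3, GL_INT, sizeof (PackedVertex), cubeData[0].coords);
   }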

3-18 PIXEL-ARRAY PRIMITIVES


In addition to straight lines, polygons, circles, and other primitives, graphics packages often supply routines to display shapes that are defined with a rectangular array of color values. We can obtain the rectangular grid pattern by digitizing (scanning) a photograph or other picture or by generating a shape with a graphics program. Each color value in the array is then mapped to one or more screen pixel positions. As we noted in Chapter 2, a pixel array of color values is typically referred to as a pixmap. Parameters for a pixel array can include a pointer to the color matrix, the size of the matrix, and the position and size of the screen area to be affected by the color values. Figure 3-60 gives an example of mapping a pixel-color array onto a screen area.


FIGURE 3-60  Mapping an n by m color array onto a region of the screen coordinates.

Another method for implementing a pixel array is to assign either the bit value 0 or the bit value 1 to each element of the matrix. In this case, the array is simply a bitmap, which is sometimes called a mask, that indicates whether or not a pixel is to be assigned (or combined with) a preset color.

3-19 OpenGL PIXEL-ARRAY FUNCTIONS


There are two functions in OpenGL that we can use to define a shape or pattern specified with a rectangular array. One is a bitmap and the other is a pixmap. Also, OpenGL provides several routines for saving, copying, and manipulating arrays of pixel values.

OpenGL Bitmap Function


A binary array pattern is defined with the function
glBitmap (width, height, x0, y0, xOffset, yOffset, bitShape);

Parameters width and height in this function give the number of columns and number of rows, respectively, in the array bitShape. Each element of bitShape is assigned either a 1 or a 0. A value of 1 indicates that the corresponding pixel is to be displayed in a previously set color. Otherwise, the pixel is unaffected by the bitmap. (As an option, we could use a value of 1 to indicate that a specified color is to be combined with the color value stored in the refresh buffer at that position.) Parameters x0 and y0 define the position that is to be considered the origin of the rectangular array. This origin position is specified relative to the lower-left corner of bitShape, and values for x0 and y0 can be positive or negative. In addition, we need to designate a location in the frame buffer where the pattern is to be applied. This location is called the current raster position, and the bitmap is displayed by positioning its origin, (x0, y0), at the current raster position. Values assigned to parameters xOffset and yOffset are used


as coordinate offsets to update the frame-buffer current raster position after the bitmap is displayed. Coordinate values for x0, y0, xOffset, and yOffset, as well as the current raster position, are maintained as floating-point values. Of course, bitmaps will be applied at integer pixel positions. But floating-point coordinates allow a set of bitmaps to be spaced at arbitrary intervals, which is useful in some applications such as forming character strings with bitmap patterns. We use the following routine to set the coordinates for the current raster position.
glRasterPos* ( )

Parameters and suffix codes are the same as those for the glVertex function. Thus, a current raster position is given in world coordinates, and it is transformed to screen coordinates by the viewing transformations. For our two-dimensional examples, we can specify coordinates for the current raster position directly in integer screen coordinates. The default value for the current raster position is the world-coordinate origin (0, 0, 0). The color for a bitmap is the color that is in effect at the time that the glRasterPos command is invoked. Any subsequent color changes do not affect the bitmap.

Each row of a rectangular bit array is stored in multiples of 8 bits, where the binary data is arranged as a set of 8-bit unsigned characters. But we can describe a shape using any convenient grid size. As an example, Fig. 3-61 shows a bit pattern defined on a 10-row by 9-column grid, where the binary data is specified with 16 bits for each row. When this pattern is applied to the pixels in the frame buffer, all bit values beyond the ninth column are ignored. We apply the bit pattern of Fig. 3-61 to a frame-buffer location with the following code section.
GLubyte bitShape [20] = {
   0x1c, 0x00,  0x1c, 0x00,  0x1c, 0x00,  0x1c, 0x00,  0x1c, 0x00,
   0xff, 0x80,  0x7f, 0x00,  0x3e, 0x00,  0x1c, 0x00,  0x08, 0x00 };

glPixelStorei (GL_UNPACK_ALIGNMENT, 1);   // Set pixel storage mode.

glRasterPos2i (30, 40);
glBitmap (9, 10, 0.0, 0.0, 20.0, 15.0, bitShape);

FIGURE 3-61  A bit pattern, specified in an array with 10 rows and 9 columns, is stored in 8-bit blocks of 10 rows with 16 bit values per row.


Array values for bitShape are specified row by row, starting at the bottom of the rectangular-grid pattern. Next we set the storage mode for the bitmap with the OpenGL routine glPixelStorei. The parameter value of 1 in this function indicates that the data values are to be aligned on byte boundaries. With glRasterPos, we set the current raster position to (30, 40). Finally, function glBitmap specifies that the bit pattern is given in array bitShape, and that this array has 9 columns and 10 rows. The coordinates for the origin of this pattern are (0.0, 0.0), which is the lower-left corner of the grid. We illustrate a coordinate offset with the values (20.0, 15.0), although we make no use of the offset in this example.

OpenGL Pixmap Function


A pattern defined as an array of color values is applied to a block of frame-buffer pixel positions with the function
glDrawPixels (width, height, dataFormat, dataType, pixMap);

Again, parameters width and height give the column and row dimensions, respectively, of the array pixMap. Parameter dataFormat is assigned an OpenGL constant that indicates how the values are specified for the array. For example, we could specify a single blue color for all pixels with the constant GL_BLUE, or we could specify three color components in the order blue, green, red with the constant GL_BGR. A number of other color specifications are possible, and we examine color selections in greater detail in the next chapter. An OpenGL constant, such as GL_BYTE, GL_INT, or GL_FLOAT, is assigned to parameter dataType to designate the data type for the color values in the array. The lower-left corner of this color array is mapped to the current raster position, as set by the glRasterPos function. As an example, the following statement displays a pixmap defined in a 128-by-128 array of RGB color values.
glDrawPixels (128, 128, GL_RGB, GL_UNSIGNED_BYTE, colorShape);

Since OpenGL provides several buffers, we can paste an array of values into a particular buffer by selecting that buffer as the target of the glDrawPixels routine. Some buffers store color values and some store other kinds of pixel data. A depth buffer, for instance, is used to store object distances (depths) from the viewing position, and a stencil buffer is used to store boundary patterns for a scene. We select one of these two buffers by setting parameter dataFormat in the glDrawPixels routine to either GL_DEPTH_COMPONENT or GL_STENCIL_INDEX. For these buffers, we would need to set up the pixel array using either depth values or stencil information. We examine both of these buffers in more detail in later chapters.

There are four color buffers available in OpenGL that can be used for screen refreshing. Two of the color buffers constitute a left-right scene pair for displaying stereoscopic views. For each of the stereoscopic buffers, there is a front-back pair for double-buffered animation displays. In a particular implementation of OpenGL, either stereoscopic viewing or double buffering, or both, might not be supported. If neither stereoscopic effects nor double buffering is supported, then there is only a single refresh buffer, which is designated as the front-left color buffer. This is the default refresh buffer when double buffering is not available or not in effect. If double buffering is in effect, the default is either the back-left and back-right buffers or only the back-left buffer, depending on the current state of


stereoscopic viewing. Also, a number of user-defined, auxiliary color buffers are supported that can be used for any nonrefresh purpose, such as saving a picture that is to be copied later into a refresh buffer for display. We select a single color or auxiliary buffer or a combination of color buffers for storing a pixmap with the following command.
glDrawBuffer (buffer);

A variety of OpenGL symbolic constants can be assigned to parameter buffer to designate one or more draw buffers. For instance, we can pick a single buffer with either GL_FRONT_LEFT, GL_FRONT_RIGHT, GL_BACK_LEFT, or GL_BACK_RIGHT. We can select both front buffers with GL_FRONT, and we can select both back buffers with GL_BACK. This is assuming that stereoscopic viewing is in effect. Otherwise, the previous two symbolic constants designate a single buffer. Similarly, we can designate either the left or right buffer pairs with GL_LEFT or GL_RIGHT. And we can select all the available color buffers with GL_FRONT_AND_BACK. An auxiliary buffer is chosen with the constant GL_AUXk, where k is an integer value from 0 to 3, although more than four auxiliary buffers may be available in some implementations of OpenGL.

OpenGL Raster Operations


In addition to storing an array of pixel values in a buffer, we can retrieve a block of values from a buffer or copy the block into another buffer area. And we can perform a variety of other operations on a pixel array. In general, the term raster operation or raster op is used to describe any function that processes a pixel array in some way. A raster operation that moves an array of pixel values from one place to another is also referred to as a block transfer of pixel values. On a bilevel system, these operations are called bitblt transfers (bit-block transfers), particularly when the functions are hardware implemented. On a multilevel system, the term pixblt is sometimes used for block transfers. We use the following function to select a rectangular block of pixel values in a designated set of buffers.
glReadPixels (xmin, ymin, width, height, dataFormat, dataType, array);

The lower-left corner of the rectangular block to be retrieved is at screen-coordinate position (xmin, ymin). Parameters width, height, dataFormat, and dataType are the same as in the glDrawPixels routine. The type of data to be saved in parameter array depends on the selected buffer. We can choose either the depth buffer or the stencil buffer by assigning either the value GL_DEPTH_COMPONENT or the value GL_STENCIL_INDEX to parameter dataFormat. A particular combination of color buffers or an auxiliary buffer is selected for the application of the glReadPixels routine with the function
glReadBuffer (buffer);

Symbolic constants for specifying one or more buffers are the same as in the glDrawBuffer routine, except that we cannot select all four of the color buffers. The default buffer selection is the front left-right pair or just the front-left buffer, depending on the status of stereoscopic viewing.
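As a small, hedged example of reading back a block of color values (the array name, block size, and screen position are our own choices), we might capture a 64-by-64 region from the front-left buffer as follows.

   #include <GL/gl.h>

   // Retrieve a 64 x 64 block of RGB color values whose lower-left corner
   // is at screen position (100, 100).
   GLubyte blockColors [64 * 64 * 3];

   void saveBlock ( )
   {
      glReadBuffer (GL_FRONT_LEFT);                    // Source buffer.
      glReadPixels (100, 100, 64, 64, GL_RGB, GL_UNSIGNED_BYTE, blockColors);
   }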


We can also copy a block of pixel data from one location to another within the set of OpenGL buffers using the following routine.
glCopyPixels (xmin, ymin, width, height, pixelValues);

The lower-left corner of the block is at screen-coordinate location (xmin, ymin), and parameters width and height are assigned positive integer values to designate the number of columns and rows, respectively, that are to be copied. Parameter pixelValues is assigned either GL_COLOR, GL_DEPTH, or GL_STENCIL to indicate the kind of data we want to copy: color values, depth values, or stencil values. And the block of pixel values is copied from a source buffer to a destination buffer, with its lower-left corner mapped to the current raster position. We select the source buffer with the glReadBuffer command, and we select the destination buffer with the glDrawBuffer command. Both the region to be copied and the destination area should lie completely within the bounds of the screen coordinates.

To achieve different effects as a block of pixel values is placed into a buffer with glDrawPixels or glCopyPixels, we can combine the incoming values with the old buffer values in various ways. As an example, we could apply logical operations, such as and, or, and exclusive or, to combine the two blocks of pixel values. In OpenGL, we select a bitwise, logical operation for combining incoming and destination pixel color values with the functions
glEnable (GL_COLOR_LOGIC_OP);
glLogicOp (logicOp);

A variety of symbolic constants can be assigned to parameter logicOp, including GL_AND, GL_OR, and GL_XOR. In addition, either the incoming bit values or the destination bit values can be inverted (interchanging 0 and 1 values). We use the constant GL_COPY_INVERTED to invert the incoming color bit values and then replace the destination values with the inverted incoming values. And we could simply invert the destination bit values without replacing them with the incoming values using GL_INVERT. The various invert operations can also be combined with the logical and, or, and exclusive or operations. Other options include clearing all the destination bits to the value 0 (GL_CLEAR), or setting all the destination bits to the value 1 (GL_SET). The default value for the glLogicOp routine is GL_COPY, which simply replaces the destination values with the incoming values.

Additional OpenGL routines are available for manipulating pixel arrays processed by the glDrawPixels, glReadPixels, and glCopyPixels functions. For example, the glPixelTransfer and glPixelMap routines can be used to shift or adjust color values, depth values, or stencil values. We return to pixel operations in later chapters as we explore other facets of computer-graphics packages.
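As an illustrative sketch of combining incoming and destination pixels (the routine name and coordinates are our own), an exclusive-or copy can be set up as shown below. Copying the same block twice with GL_XOR restores the original destination values, which is one reason this operation is often used for temporary overlays.

   #include <GL/gl.h>

   void xorCopyBlock ( )
   {
      glEnable (GL_COLOR_LOGIC_OP);
      glLogicOp (GL_XOR);                 // Combine incoming and destination bits.

      glRasterPos2i (300, 200);           // Destination lower-left corner maps here.
      glCopyPixels (50, 50, 64, 64, GL_COLOR);

      glLogicOp (GL_COPY);                // Restore the default replace operation.
      glDisable (GL_COLOR_LOGIC_OP);
   }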

3-20 CHARACTER PRIMITIVES


Graphics displays often include textual information, such as labels on graphs and charts, signs on buildings or vehicles, and general identifying information for simulation and visualization applications. Routines for generating character


primitives are available in most graphics packages. Some systems provide an extensive set of character functions, while other systems offer only minimal support for character generation.

Letters, numbers, and other characters can be displayed in a variety of sizes and styles. The overall design style for a set (or family) of characters is called a typeface. Today, there are thousands of typefaces available for computer applications. Examples of a few common typefaces are Courier, Helvetica, New York, Palatino, and Zapf Chancery. Originally, the term font referred to a set of cast metal character forms in a particular size and format, such as 10-point Courier Italic or 12-point Palatino Bold. A 14-point font has a total character height of about 0.5 centimeter. In other words, 72 points is about the equivalent of 2.54 centimeters (1 inch). The terms font and typeface are now often used interchangeably, since most printing is no longer done with cast metal forms.

Typefaces (or fonts) can be divided into two broad groups: serif and sans serif. Serif type has small lines or accents at the ends of the main character strokes, while sans-serif type does not have accents. For example, the text in this book is set in a serif font (Palatino). But this sentence is printed in a sans-serif font (Univers). Serif type is generally more readable; that is, it is easier to read in longer blocks of text. On the other hand, the individual characters in sans-serif type are easier to recognize. For this reason, sans-serif type is said to be more legible. Since sans-serif characters can be quickly recognized, this typeface is good for labeling and short headings. Fonts are also classified according to whether they are monospace or proportional. Characters in a monospace font all have the same width. In a proportional font, character width varies.

Two different representations are used for storing computer fonts. A simple method for representing the character shapes in a particular typeface is to set up a pattern of binary values on a rectangular grid. The set of characters is then referred to as a bitmap font (or bitmapped font). A bitmapped character set is also sometimes referred to as a raster font. Another, more flexible, scheme is to describe character shapes using straight-line and curve sections, as in PostScript, for example. In this case, the set of characters is called an outline font or a stroke font. Figure 3-62 illustrates the two methods for character representation. When the pattern in Fig. 3-62(a) is applied to an area of the frame buffer, the 1 bits designate which pixel positions are to be displayed in a specified color. To display the character shape in Fig. 3-62(b), the interior of the character outline is treated as a fill area. Bitmap fonts are the simplest to define and display: we just need to map the character grids to a frame-buffer position. In general, however, bitmap fonts

FIGURE 3-62  The letter B represented with an 8-by-8 bitmap pattern (a) and with an outline shape defined with straight-line and curve segments (b).

FIGURE 3-63 A polymarker graph of a set of data values.

require more storage space, since each variation (size and format) must be saved in a font cache. It is possible to generate different sizes and other variations, such as bold and italic, from one bitmap font set, but this often does not produce good results. We can increase or decrease the size of a character bitmap only in integer multiples of the pixel size. To double the size of a character, we need to double the number of pixels in the bitmap. And this just increases the ragged appearance of its edges.

In contrast to bitmap fonts, outline fonts can be increased in size without distorting the character shapes. And outline fonts require less storage because each variation does not require a distinct font cache. We can produce boldface, italic, or different sizes by manipulating the curve definitions for the character outlines. But it does take more time to process the outline fonts, since they must be scan converted into the frame buffer.

There are a variety of possible functions for implementing character displays. Some graphics packages provide a function that accepts any character string and a frame-buffer starting position for the string. Another type of function is one that displays a single character at one or more selected positions. Since this character routine is useful for showing markers in a network layout or in displaying a point plot of a discrete data set, the character displayed by this routine is sometimes referred to as a marker symbol or polymarker, in analogy with a polyline primitive. In addition to standard characters, special shapes such as dots, circles, and crosses are often available as marker symbols. Figure 3-63 shows a plot of a discrete data set using an asterisk as a marker symbol.

Geometric descriptions for characters are given in world coordinates, just as they are for other primitives, and this information is mapped to screen coordinates by the viewing transformations. A bitmap character is described with a rectangular grid of binary values and a grid reference position. This reference position is then mapped to a specified location in the frame buffer. An outline character is defined by a set of coordinate positions that are to be connected with a series of curves and straight-line segments and a reference position that is to be mapped to a given frame-buffer location. The reference position can be specified either for a single outline character or for a string of characters. In general, character routines can allow the construction of both two-dimensional and three-dimensional character displays.

3-21 OpenGL CHARACTER FUNCTIONS


Only low-level support is provided by the basic OpenGL library for displaying individual characters and text strings. We can explicitly define any character as a bitmap, as in the example shape shown in Fig. 3-61, and we can store a set


of bitmap characters as a font list. A text string is then displayed by mapping a selected sequence of bitmaps from the font list into adjacent positions in the frame buffer. However, some predefined character sets are available in the OpenGL Utility Toolkit (GLUT). So we do not need to create our own fonts as bitmap shapes, unless we want to display a font that is not available in GLUT. The GLUT library contains routines for displaying both bitmapped and outline fonts. Bitmapped GLUT fonts are rendered using the OpenGL glBitmap function, and the outline fonts are generated with polyline (GL_LINE_STRIP) boundaries. We can display a bitmap GLUT character with
glutBitmapCharacter (font, character);

where parameter font is assigned a symbolic GLUT constant identifying a particular set of typefaces, and parameter character is assigned either the ASCII code or the specific character we wish to display. Thus, to display the upper-case letter A, we can either use the ASCII value 65 or the designation 'A'. Similarly, a code value of 66 is equivalent to 'B', code 97 corresponds to the lower-case letter 'a', code 98 corresponds to 'b', and so forth. Both fixed-width fonts and proportionally spaced fonts are available. We can select a fixed-width font by assigning either GLUT_BITMAP_8_BY_13 or GLUT_BITMAP_9_BY_15 to parameter font. And we can select a 10-point, proportionally spaced font with either GLUT_BITMAP_TIMES_ROMAN_10 or GLUT_BITMAP_HELVETICA_10. A 12-point Times-Roman font is also available, as well as 12-point and 18-point Helvetica fonts. Each character generated by glutBitmapCharacter is displayed so that the origin (lower-left corner) of the bitmap is at the current raster position. After the character bitmap is loaded into the refresh buffer, an offset equal to the width of the character is added to the x coordinate for the current raster position. As an example, we could display a text string containing 36 bitmap characters with the following code.
glRasterPos2i (x, y);
for (k = 0; k < 36; k++)
   glutBitmapCharacter (GLUT_BITMAP_9_BY_15, text [k]);

Characters are displayed in the color that was specified before the execution of the glutBitmapCharacter function. An outline character is displayed with the following function call.
glutStrokeCharacter (font, character);

For this function, we can assign parameter font either the value GLUT_STROKE_ROMAN, which displays a proportionally spaced font, or the value GLUT_STROKE_MONO_ROMAN, which displays a font with constant spacing. We control the size and position of these characters by specifying transformation operations (Chapter 5) before executing the glutStrokeCharacter routine. After each character is displayed, a coordinate offset is automatically applied so that the position for displaying the next character is to the right of the current character. Text strings generated with outline fonts are part of the geometric description for a two-dimensional or three-dimensional scene because they are constructed


with line segments. Thus, they can be viewed from various directions, and we can shrink or expand them without distortion, or transform them in other ways. But they are slower to render, compared to bitmapped fonts.
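As a brief illustration (not code from the text), the following sketch shows one way a stroke-font string might be positioned and scaled with the transformation routines mentioned above. The routine name, the translation values, and the scale factor are hypothetical; GLUT stroke characters are roughly 100 units tall, so a scale factor of about 0.1 produces characters roughly 10 world-coordinate units high.

void drawStrokeString (const char * text)
{
   glPushMatrix ( );
   glTranslatef (50.0, 50.0, 0.0);   // Hypothetical world-coordinate position for the string.
   glScalef (0.1, 0.1, 1.0);         // Shrink the (roughly 100-unit-tall) stroke characters.
   while (*text != '\0') {
      glutStrokeCharacter (GLUT_STROKE_ROMAN, *text);
      text++;
   }
   glPopMatrix ( );
}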

3-22 PICTURE PARTITIONING


Some graphics libraries include routines for describing a picture as a collection of named sections and for manipulating the individual sections of a picture. Using these functions we can create, edit, delete, or move a part of a picture independently of the other picture components. And we can also use this feature of a graphics package for hierarchical modeling (Chapter 14), in which an object description is given as a tree structure composed of a number of levels specifying the object subparts. Various names are used for the subsections of a picture. Some graphics packages refer to them as structures, while other packages call them segments or objects. Also, the allowable subsection operations vary greatly from one package to another. Modeling packages, for example, provide a wide range of operations that can be used to describe and manipulate picture elements. On the other hand, for any graphics library, we can always structure and manage the components of a picture using procedural elements available in a high-level language such as C++.

3-23 OpenGL DISPLAY LISTS


Often it can be convenient or more efficient to store an object description (or any other set of OpenGL commands) as a named sequence of statements. We can do this in OpenGL using a structure called a display list. Once a display list has been created, we can reference the list multiple times with different display operations. On a network, a display list describing a scene is stored on the server machine, which eliminates the need to transmit the commands in the list each time the scene is to be displayed. We can also set up a display list so that it is saved for later execution, or we can specify that the commands in the list be executed immediately. And display lists are particularly useful for hierarchical modeling, where a complex object can be described with a set of simpler subparts.

Creating and Naming an OpenGL Display List


A set of OpenGL commands is formed into a display list by enclosing the commands within the glNewList/glEndList pair of functions. For example,
glNewList (listID, listMode);
   . . .
glEndList ( );

This structure forms a display list with a positive integer value assigned to parameter listID as the name for the list. Parameter listMode is assigned an OpenGL symbolic constant that can be either GL_COMPILE or


GL_COMPILE_AND_EXECUTE. If we want to save the list for later execution, we use GL_COMPILE. Otherwise, the commands are executed as they are placed into the list, in addition to allowing us to execute the list again at a later time.

As a display list is created, expressions involving parameters such as coordinate positions and color components are evaluated so that only the parameter values are stored in the list. Any subsequent changes to these parameters have no effect on the list. Because display-list values cannot be changed, we cannot include certain OpenGL commands, such as vertex-list pointers, in a display list.

We can create any number of display lists, and we execute a particular list of commands with a call to its identifier. Further, one display list can be embedded within another display list. But if a list is assigned an identifier that has already been used, the new list replaces the previous list that had been assigned that identifier. Therefore, to avoid losing a list by accidentally reusing its identifier, we can let OpenGL generate an identifier for us:
listID = glGenLists (1);

This statement returns one (1) unused positive integer identifier to the variable listID. A range of unused integer list identifiers is obtained if we change the argument of glGenLists from the value 1 to some other positive integer. For instance, if we invoke glGenLists (6), then a sequence of six contiguous positive integer values is reserved and the first value in this list of identifiers is returned to the variable listID. A value of 0 is returned by the glGenLists function if an error occurs or if the system cannot generate the range of contiguous integers requested. Therefore, before using an identifier obtained from the glGenLists routine, we could check to be sure that it is not 0. Although unused list identifiers can be generated with the glGenLists function, we can independently query the system to determine whether a specific integer value has been used as a list name. The function to accomplish this is
glIsList (listID);

A value of GL_TRUE is returned if the value of listID is an integer that has already been used as a display-list name. If the integer value has not been used as a list name, the glIsList function returns the value GL_FALSE.
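As a small sketch (not from the text) of how these routines might be combined, the following fragment reserves an identifier, defines a list only if the reservation succeeded, and then checks the identifier before executing the list with glCallList (discussed in the next subsection). The object defined here, a rectangle outline, is purely hypothetical.

GLuint boxList = glGenLists (1);    // Request one unused display-list identifier.

if (boxList != 0) {                 // A value of 0 means no identifier could be reserved.
   glNewList (boxList, GL_COMPILE);
      glBegin (GL_LINE_LOOP);
         glVertex2i ( 50,  50);
         glVertex2i (150,  50);
         glVertex2i (150, 100);
         glVertex2i ( 50, 100);
      glEnd ( );
   glEndList ( );
}

if (glIsList (boxList))             // GL_TRUE once boxList is in use as a list name.
   glCallList (boxList);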

Executing OpenGL Display Lists


We execute a single display list with the statement
glCallList (listID);

The following code segment illustrates the creation and execution of a display list. We first set up a display list that contains the description for a regular hexagon, defined in the xy plane using a set of six equally spaced vertices around the circumference of a circle, whose center coordinates are (200, 200) and whose radius is 150. Then we issue a call to function glCallList, which displays the hexagon.


const double TWO_PI = 6.2831853;

GLuint regHex;
GLdouble theta;
GLint x, y, k;

/* Set up a display list for a regular hexagon.
 * Vertices for the hexagon are six equally spaced
 * points around the circumference of a circle.
 */
regHex = glGenLists (1);      // Get an identifier for the display list.
glNewList (regHex, GL_COMPILE);
   glBegin (GL_POLYGON);
      for (k = 0; k < 6; k++) {
         theta = TWO_PI * k / 6.0;
         x = 200 + 150 * cos (theta);
         y = 200 + 150 * sin (theta);
         glVertex2i (x, y);
      }
   glEnd ( );
glEndList ( );

glCallList (regHex);

Several display lists can be executed using the following two statements.
glListBase (offsetValue);
glCallLists (nLists, arrayDataType, listIDArray);

The integer number of lists that we want to execute is assigned to parameter nLists, and parameter listIDArray is an array of display-list identifiers. In general, listIDArray can contain any number of elements, and invalid display-list identifiers are ignored. Also, the elements in listIDArray can be specified in a variety of data formats, and parameter arrayDataType is used to indicate a data type, such as GL_BYTE, GL_INT, GL_FLOAT, GL_3_BYTES, or GL_4_BYTES. A display-list identifier is calculated by adding the value in an element of listIDArray to the integer value of offsetValue that is given in the glListBase function. The default value for offsetValue is 0. This mechanism for specifying a sequence of display lists that are to be executed allows us to set up groups of related display lists, whose identifiers are formed from symbolic names or codes. A typical example is a font set where each display-list identifier is the ASCII value of a character. When several font sets are defined, we use parameter offsetValue in the glListBase function to obtain a particular font described within the array listIDArray.
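For instance, assuming a display list has already been created for each character of a font, with each list identifier equal to a base value plus the character's ASCII code (the variable fontBase and the string below are hypothetical, not from the text), a label could be displayed as follows.

char text [ ] = "Total";          // Hypothetical label to be displayed.

glListBase (fontBase);            // fontBase is the assumed offset for this font set.
glCallLists (5, GL_BYTE, text);   // Execute one display list for each of the five characters.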

Deleting OpenGL Display Lists


We eliminate a contiguous set of display lists with the function call
glDeleteLists (startID, nLists);


Parameter startID gives the initial display-list identifier, and parameter nLists specifies the number of lists that are to be deleted. For example, the statement
glDeleteLists (5, 4);

eliminates the four display lists with identifiers 5, 6, 7, and 8. An identifier value that references a nonexistent display list is ignored.

3-24 OpenGL DISPLAY-WINDOW RESHAPE FUNCTION


In our introductory OpenGL program (Section 2-9), we discussed the functions for setting up an initial display window. But after the generation of our picture, we often want to use the mouse pointer to drag the display window to another screen location or to change its size. Changing the size of a display window could change its aspect ratio and cause objects to be distorted from their original shapes. To allow us to compensate for a change in display-window dimensions, the GLUT library provides the following routine
glutReshapeFunc (winReshapeFcn);

We can include this function in the main procedure in our program, along with the other GLUT routines, and it will be activated whenever the display-window size is altered. The argument for this GLUT function is the name of a procedure that is to receive the new display-window width and height. We can then use the new dimensions to reset the projection parameters and perform any other operations, such as changing the display-window color. In addition, we could save the new width and height values so that they could be used by other procedures in our program. As an example, the following program illustrates how we might structure the winReshapeFcn procedure. The glLoadIdentity command is included in the

FIGURE 3-64 Display window generated by the example program illustrating the use of the reshape function.


reshape function so that any previous values for the projection parameters will not affect the new projection settings. This program displays the regular hexagon discussed in Section 3-23. Although the hexagon center (at the position of the circle center) in this example is specified in terms of the display-window parameters, the position of the hexagon is unaffected by any changes in the size of the display window. This is because the hexagon is defined within a display list, and only the original center coordinates are stored in the list. If we want the position of the hexagon to change when the display window is resized, we need to define the hexagon in another way or alter the coordinate reference for the display window. The output from this program is shown in Fig. 3-64.

#include <GL/glut.h>
#include <math.h>
#include <stdlib.h>

const double TWO_PI = 6.2831853;

/* Initial display-window size. */
GLsizei winWidth = 400, winHeight = 400;

GLuint regHex;

class screenPt
{
   private:
      GLint x, y;
   public:
      /* Default constructor: initializes coordinate position to (0, 0). */
      screenPt ( ) {
         x = y = 0;
      }
      void setCoords (GLint xCoord, GLint yCoord) {
         x = xCoord;
         y = yCoord;
      }
      GLint getx ( ) const {
         return x;
      }
      GLint gety ( ) const {
         return y;
      }
};

static void init (void)
{
   screenPt hexVertex, circCtr;
   GLdouble theta;
   GLint k;

   /* Set circle center coordinates. */
   circCtr.setCoords (winWidth / 2, winHeight / 2);

   glClearColor (1.0, 1.0, 1.0, 0.0);   // Display-window color = white.

   /* Set up a display list for a red regular hexagon.
    * Vertices for the hexagon are six equally spaced
    * points around the circumference of a circle.
    */
   regHex = glGenLists (1);      // Get an identifier for the display list.
   glNewList (regHex, GL_COMPILE);
      glColor3f (1.0, 0.0, 0.0);    // Set fill color for hexagon to red.
      glBegin (GL_POLYGON);
         for (k = 0; k < 6; k++) {
            theta = TWO_PI * k / 6.0;
            hexVertex.setCoords (circCtr.getx ( ) + 150 * cos (theta),
                                 circCtr.gety ( ) + 150 * sin (theta));
            glVertex2i (hexVertex.getx ( ), hexVertex.gety ( ));
         }
      glEnd ( );
   glEndList ( );
}

void regHexagon (void)
{
   glClear (GL_COLOR_BUFFER_BIT);

   glCallList (regHex);

   glFlush ( );
}

void winReshapeFcn (int newWidth, int newHeight)
{
   glMatrixMode (GL_PROJECTION);
   glLoadIdentity ( );
   gluOrtho2D (0.0, (GLdouble) newWidth, 0.0, (GLdouble) newHeight);

   glClear (GL_COLOR_BUFFER_BIT);
}

void main (int argc, char** argv)
{
   glutInit (&argc, argv);
   glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
   glutInitWindowPosition (100, 100);
   glutInitWindowSize (winWidth, winHeight);
   glutCreateWindow ("Reshape-Function & Display-List Example");

   init ( );
   glutDisplayFunc (regHexagon);
   glutReshapeFunc (winReshapeFcn);

   glutMainLoop ( );
}
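As a variation (a sketch, not the authors' code), one simple way to keep the hexagon centered when the display window is resized is to leave the world-coordinate extents fixed at the original 400 by 400 range in the reshape procedure, so that the stored hexagon description is simply stretched or shrunk along with the window:

void winReshapeFcn (int newWidth, int newHeight)
{
   glMatrixMode (GL_PROJECTION);
   glLoadIdentity ( );

   /* Keep the original world-coordinate range instead of matching the new
    * window dimensions; the hexagon then remains centered (and is scaled)
    * as the display window changes size and shape.
    */
   gluOrtho2D (0.0, 400.0, 0.0, 400.0);

   glClear (GL_COLOR_BUFFER_BIT);
}

Exercise 3-44 asks for a more complete treatment of this problem.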


3-25 SUMMARY
The output primitives discussed in this chapter provide the basic tools for constructing pictures with individual points, straight lines, curves, filled color areas, array patterns, and text. We specify primitives by giving their geometric descriptions in a Cartesian, world-coordinate reference system. Examples of displays generated with output primitives are illustrated in Figs. 3-65 and 3-66.

Three methods that can be used to locate pixel positions along a straight-line path are the DDA algorithm, Bresenham's algorithm, and the midpoint method. Bresenham's line algorithm and the midpoint line method are equivalent, and they are the most efficient. Color values for the pixel positions along the line path are efficiently stored in the frame buffer by incrementally calculating the memory addresses. Any of the line-generating algorithms can be adapted to a parallel implementation by partitioning the line segments and distributing the partitions among the available processors.

Circles and ellipses can be efficiently and accurately scan converted using midpoint methods and taking curve symmetry into account. Other conic sections (parabolas and hyperbolas) can be plotted with similar methods. Spline curves, which are piecewise continuous polynomials, are widely used in animation and in computer-aided design. Parallel implementations for generating curve displays can be accomplished with methods similar to those for parallel line processing.

To account for the fact that displayed lines and curves have finite widths, we can adjust the pixel dimensions of objects so that they coincide with the specified geometric dimensions. This can be done with an addressing scheme that references pixel positions at their lower-left corner, or by adjusting line lengths.

A fill area is a planar region that is to be displayed in a solid color or color pattern. Fill-area primitives in most graphics packages are polygons. But, in general, we could specify a fill region with any boundary. Often, graphics systems allow only convex polygon fill areas. In that case, a concave-polygon fill area can be displayed by dividing it into a set of convex polygons. Triangles are the easiest polygons to fill, since each scan line crossing a triangle intersects exactly two polygon edges (assuming the scan line does not pass through any vertices).

The odd-even rule can be used to locate the interior points of a planar region. Other methods for defining object interiors are also useful, particularly with irregular, self-intersecting objects. A common example is the nonzero winding-number rule. This rule is more flexible than the odd-even rule for handling objects defined with multiple boundaries. We can also use variations of the winding-number rule to combine plane areas using Boolean operations.

FIGURE 3-65  A data plot generated with straight-line segments, curves, character marker symbols, and text. (Courtesy of Wolfram Research, Inc., The Maker of Mathematica.)

FIGURE 3-66  An electrical diagram drawn with straight-line sections, circles, filled rectangles, and text. (Courtesy of Wolfram Research, Inc., The Maker of Mathematica.)


TABLE 3-1  SUMMARY OF OpenGL OUTPUT PRIMITIVE FUNCTIONS AND RELATED ROUTINES (each entry lists the function, followed by its description)

gluOrtho2D: Specify a 2D world-coordinate reference.
glVertex*: Select a coordinate position. This function must be placed within a glBegin/glEnd pair.
glBegin (GL_POINTS): Plot one or more point positions, each specified in a glVertex function. The list of positions is then closed with a glEnd statement.
glBegin (GL_LINES): Display a set of straight-line segments, whose endpoint coordinates are specified in glVertex functions. The list of endpoints is then closed with a glEnd statement.
glBegin (GL_LINE_STRIP): Display a polyline, specified using the same structure as GL_LINES.
glBegin (GL_LINE_LOOP): Display a closed polyline, specified using the same structure as GL_LINES.
glRect*: Display a fill rectangle in the xy plane.
glBegin (GL_POLYGON): Display a fill polygon, whose vertices are given in glVertex functions and terminated with a glEnd statement.
glBegin (GL_TRIANGLES): Display a set of fill triangles using the same structure as GL_POLYGON.
glBegin (GL_TRIANGLE_STRIP): Display a fill-triangle mesh, specified using the same structure as GL_POLYGON.
glBegin (GL_TRIANGLE_FAN): Display a fill-triangle mesh in a fan shape with all triangles connected to the first vertex, specified with the same structure as GL_POLYGON.
glBegin (GL_QUADS): Display a set of fill quadrilaterals, specified with the same structure as GL_POLYGON.
glBegin (GL_QUAD_STRIP): Display a fill-quadrilateral mesh, specified with the same structure as GL_POLYGON.
glEnableClientState (GL_VERTEX_ARRAY): Activate the vertex-array features of OpenGL.
glVertexPointer (size, type, stride, array): Specify an array of coordinate values.
glDrawElements (prim, num, type, array): Display a specified primitive type from array data.
glNewList (listID, listMode): Define a set of commands as a display list, terminated with a glEndList statement.
glGenLists: Generate one or more display-list identifiers.
glIsList: Query function to determine whether a display-list identifier is in use.
glCallList: Execute a single display list.
glListBase: Specify an offset value for an array of display-list identifiers.
glCallLists: Execute multiple display lists.
glDeleteLists: Eliminate a specified sequence of display lists.
glRasterPos*: Specify a two-dimensional or three-dimensional current position for the frame buffer. This position is used as a reference for bitmap and pixmap patterns.
glBitmap (w, h, x0, y0, xShift, yShift, pattern): Specify a binary pattern that is to be mapped to pixel positions relative to the current position.
glDrawPixels (w, h, type, format, pattern): Specify a color pattern that is to be mapped to pixel positions relative to the current position.
glDrawBuffer: Select one or more buffers for storing a pixmap.
glReadPixels: Save a block of pixels in a selected array.
glCopyPixels: Copy a block of pixels from one buffer position to another.
glLogicOp: Select a logical operation for combining two pixel arrays, after enabling with the constant GL_COLOR_LOGIC_OP.
glutBitmapCharacter (font, char): Specify a font and a bitmap character for display.
glutStrokeCharacter (font, char): Specify a font and an outline character for display.
glutReshapeFunc: Specify actions to be taken when display-window dimensions are changed.


Each polygon has a front face and a back face, which determines the spatial orientation of the polygon plane. This spatial orientation can be determined from the normal vector, which is perpendicular to the polygon plane and points in the direction from the back face to the front face. We can determine the components of the normal vector from the polygon plane equation or by forming a vector cross product using three points in the plane, where the three points are taken in a counterclockwise order and the angle formed by the three points is less than 180°. All coordinate values, spatial orientations, and other geometric data for a scene are entered into three tables: vertex, edge, and surface-facet tables.

Additional primitives available in graphics packages include pattern arrays and character strings. Pattern arrays can be used to specify two-dimensional shapes, including a character set, using either a rectangular set of binary values or a set of color values. Character strings are used to provide picture and graph labeling.

Using the primitive functions available in the basic OpenGL library, we can generate points, straight-line segments, convex polygon fill areas, and either bitmap or pixmap pattern arrays. Routines for displaying character strings are available in GLUT. Other types of primitives, such as circles, ellipses, and concave-polygon fill areas, can be constructed or approximated with these functions, or they can be generated using routines in GLU and GLUT. All coordinate values are expressed in absolute coordinates within a right-handed Cartesian-coordinate reference system. Coordinate positions describing a scene can be given in either a two-dimensional or a three-dimensional reference frame. We can use integer or floating-point values to give a coordinate position, and we can also reference a position with a pointer to an array of coordinate values. A scene description is then transformed by viewing functions into a two-dimensional display on an output device, such as a video monitor.

Except for the glRect function, each coordinate position for a set of points, lines, or polygons is specified in a glVertex function. And the set of glVertex functions defining each primitive is included between a glBegin/glEnd pair of statements, where the primitive type is identified with a symbolic constant as the argument for the glBegin function. When describing a scene containing many polygon fill surfaces, we can efficiently generate the display using OpenGL vertex arrays to specify geometric and other data.

In Table 3-1, we list the basic functions for generating output primitives in OpenGL. Some related routines are also listed in this table.
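As a small illustration of the cross-product calculation mentioned at the start of this summary (a sketch with hypothetical vertex values, not code from the text), the components of a polygon normal can be computed from three noncollinear vertices taken in counterclockwise order as follows.

/* Three noncollinear polygon vertices, listed in counterclockwise order
 * as seen from the front face (hypothetical values).
 */
GLfloat v1 [3] = {0.0, 0.0, 0.0};
GLfloat v2 [3] = {1.0, 0.0, 0.0};
GLfloat v3 [3] = {0.0, 1.0, 0.0};

GLfloat e1 [3], e2 [3], normal [3];
GLint k;

for (k = 0; k < 3; k++) {       // Form the edge vectors from v1 to v2 and from v1 to v3.
   e1 [k] = v2 [k] - v1 [k];
   e2 [k] = v3 [k] - v1 [k];
}

normal [0] = e1 [1] * e2 [2] - e1 [2] * e2 [1];   // The cross product e1 x e2 gives a vector
normal [1] = e1 [2] * e2 [0] - e1 [0] * e2 [2];   // perpendicular to the polygon plane,
normal [2] = e1 [0] * e2 [1] - e1 [1] * e2 [0];   // pointing from the back face to the front face.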

EXAMPLE PROGRAMS
Here, we present a few example OpenGL programs illustrating the use of output primitives. Each program uses one or more of the functions listed in Table 3-1. A display window is set up for the output from each program using the GLUT routines discussed in Chapter 2. The first program illustrates the use of a polyline, a set of polymarkers, and bit-mapped character labels to generate a line graph for monthly data over a period of one year. A proportionally spaced font is demonstrated, although a fixed-width font is usually easier to align with graph positions. Since the bit maps are referenced at the lower-left corner by the raster-position function, we must shift the reference position to align the center of a text string with a plotted data position. Figure 3-67 shows the output of the line-graph program.


FIGURE 3-67 A polyline and polymarker plot of data points output by the lineGraph routine.

#include <GL/glut.h>

GLsizei winWidth = 600, winHeight = 500;   // Initial display window size.

GLint xRaster = 25, yRaster = 150;         // Initialize raster position.

GLubyte label [36] = {'J', 'a', 'n',  'F', 'e', 'b',  'M', 'a', 'r',
                      'A', 'p', 'r',  'M', 'a', 'y',  'J', 'u', 'n',
                      'J', 'u', 'l',  'A', 'u', 'g',  'S', 'e', 'p',
                      'O', 'c', 't',  'N', 'o', 'v',  'D', 'e', 'c'};

GLint dataValue [12] = {420, 342, 324, 310, 262, 185,
                        190, 196, 217, 240, 312, 438};

void init (void)
{
   glClearColor (1.0, 1.0, 1.0, 1.0);   // White display window.
   glMatrixMode (GL_PROJECTION);
   gluOrtho2D (0.0, 600.0, 0.0, 500.0);
}

void lineGraph (void)
{
   GLint month, k;
   GLint x = 30;                  // Initialize x position for chart.

   glClear (GL_COLOR_BUFFER_BIT); // Clear display window.

   glColor3f (0.0, 0.0, 1.0);     // Set line color to blue.
   glBegin (GL_LINE_STRIP);       // Plot data as a polyline.
      for (k = 0; k < 12; k++)
         glVertex2i (x + k*50, dataValue [k]);
   glEnd ( );

   glColor3f (1.0, 0.0, 0.0);     // Set marker color to red.
   for (k = 0; k < 12; k++) {     // Plot data as asterisk polymarkers.
      glRasterPos2i (xRaster + k*50, dataValue [k] - 4);
      glutBitmapCharacter (GLUT_BITMAP_9_BY_15, '*');
   }

   glColor3f (0.0, 0.0, 0.0);     // Set text color to black.
   xRaster = 20;                  // Display chart labels.
   for (month = 0; month < 12; month++) {
      glRasterPos2i (xRaster, yRaster);
      for (k = 3*month; k < 3*month + 3; k++)
         glutBitmapCharacter (GLUT_BITMAP_HELVETICA_12, label [k]);
      xRaster += 50;
   }
   glFlush ( );
}

void winReshapeFcn (GLint newWidth, GLint newHeight)
{
   glMatrixMode (GL_PROJECTION);
   glLoadIdentity ( );
   gluOrtho2D (0.0, GLdouble (newWidth), 0.0, GLdouble (newHeight));

   glClear (GL_COLOR_BUFFER_BIT);
}

void main (int argc, char** argv)
{
   glutInit (&argc, argv);
   glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
   glutInitWindowPosition (100, 100);
   glutInitWindowSize (winWidth, winHeight);
   glutCreateWindow ("Line Chart Data Plot");

   init ( );
   glutDisplayFunc (lineGraph);
   glutReshapeFunc (winReshapeFcn);

   glutMainLoop ( );
}

We use the same data set in the second program to produce the bar chart in Fig. 3-68. This program illustrates an application of rectangular fill areas, as well as bit-mapped character labels.


FIGURE 3-68 A bar chart generated by the barChart procedure.

void barChart (void)
{
   GLint month, k;

   glClear (GL_COLOR_BUFFER_BIT); // Clear display window.

   glColor3f (1.0, 0.0, 0.0);     // Set bar color to red.
   for (k = 0; k < 12; k++)
      glRecti (20 + k*50, 165, 40 + k*50, dataValue [k]);

   glColor3f (0.0, 0.0, 0.0);     // Set text color to black.
   xRaster = 20;                  // Display chart labels.
   for (month = 0; month < 12; month++) {
      glRasterPos2i (xRaster, yRaster);
      for (k = 3*month; k < 3*month + 3; k++)
         glutBitmapCharacter (GLUT_BITMAP_HELVETICA_12, label [k]);
      xRaster += 50;
   }
   glFlush ( );
}

Pie charts are used to show the percentage contribution of individual parts to the whole. The next program constructs a pie chart, using the midpoint routine for generating a circle. Example values are used for the number and relative sizes of the slices, and the output from this program appears in Fig. 3-69.

FIGURE 3-69  Output produced with the pieChart procedure.


#include <GL/glut.h>
#include <stdlib.h>
#include <math.h>

const GLdouble twoPi = 6.283185;

class scrPt {
   public:
      GLint x, y;
};

GLsizei winWidth = 400, winHeight = 300;   // Initial display window size.

void init (void)
{
   glClearColor (1.0, 1.0, 1.0, 1.0);
   glMatrixMode (GL_PROJECTION);
   gluOrtho2D (0.0, 200.0, 0.0, 150.0);
}

. . .   // Midpoint routines for displaying a circle.

void pieChart (void)
{
   scrPt circCtr, piePt;
   GLint radius = winWidth / 4;                 // Circle radius.

   GLdouble sliceAngle, previousSliceAngle = 0.0;

   GLint k, nSlices = 12;                       // Number of slices.
   GLfloat dataValues[12] = {10.0, 7.0, 13.0, 5.0, 13.0, 14.0,
                             3.0, 16.0, 5.0, 3.0, 17.0, 8.0};
   GLfloat dataSum = 0.0;

   circCtr.x = winWidth / 2;                    // Circle center position.
   circCtr.y = winHeight / 2;
   circleMidpoint (circCtr, radius);            // Call a midpoint circle-plot routine.

   for (k = 0; k < nSlices; k++)
      dataSum += dataValues[k];

   for (k = 0; k < nSlices; k++) {
      sliceAngle = twoPi * dataValues[k] / dataSum + previousSliceAngle;
      piePt.x = circCtr.x + radius * cos (sliceAngle);
      piePt.y = circCtr.y + radius * sin (sliceAngle);
      glBegin (GL_LINES);
         glVertex2i (circCtr.x, circCtr.y);
         glVertex2i (piePt.x, piePt.y);
      glEnd ( );
      previousSliceAngle = sliceAngle;
   }
}

void displayFcn (void)
{
   glClear (GL_COLOR_BUFFER_BIT);   // Clear display window.

   glColor3f (0.0, 0.0, 1.0);       // Set circle color to blue.

   pieChart ( );
   glFlush ( );
}

void winReshapeFcn (GLint newWidth, GLint newHeight)
{
   glMatrixMode (GL_PROJECTION);
   glLoadIdentity ( );
   gluOrtho2D (0.0, GLdouble (newWidth), 0.0, GLdouble (newHeight));
   glClear (GL_COLOR_BUFFER_BIT);

   /* Reset display-window size parameters. */
   winWidth = newWidth;
   winHeight = newHeight;
}

void main (int argc, char** argv)
{
   glutInit (&argc, argv);
   glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
   glutInitWindowPosition (100, 100);
   glutInitWindowSize (winWidth, winHeight);
   glutCreateWindow ("Pie Chart");

   init ( );
   glutDisplayFunc (displayFcn);
   glutReshapeFunc (winReshapeFcn);

   glutMainLoop ( );
}
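The call to circleMidpoint in the pieChart procedure refers to the midpoint circle routines that are elided from the listing above. The following is a minimal sketch of one way such a routine could be written; it is an illustrative version under the assumptions stated here, not necessarily the routine intended by the text. It uses the scrPt class from the program, steps through one octant with the midpoint decision parameter, and plots the eight symmetric points of each computed position with GL_POINTS.

void plotCirclePoints (scrPt circCtr, scrPt pt)
{
   glBegin (GL_POINTS);
      glVertex2i (circCtr.x + pt.x, circCtr.y + pt.y);
      glVertex2i (circCtr.x - pt.x, circCtr.y + pt.y);
      glVertex2i (circCtr.x + pt.x, circCtr.y - pt.y);
      glVertex2i (circCtr.x - pt.x, circCtr.y - pt.y);
      glVertex2i (circCtr.x + pt.y, circCtr.y + pt.x);
      glVertex2i (circCtr.x - pt.y, circCtr.y + pt.x);
      glVertex2i (circCtr.x + pt.y, circCtr.y - pt.x);
      glVertex2i (circCtr.x - pt.y, circCtr.y - pt.x);
   glEnd ( );
}

void circleMidpoint (scrPt circCtr, GLint radius)
{
   scrPt pt;
   GLint p = 1 - radius;          // Initial value of the midpoint decision parameter.

   pt.x = 0;                      // Start at the top of the circle.
   pt.y = radius;
   plotCirclePoints (circCtr, pt);

   while (pt.x < pt.y) {          // Step through one octant; symmetry supplies the rest.
      pt.x++;
      if (p < 0)
         p += 2 * pt.x + 1;
      else {
         pt.y--;
         p += 2 * (pt.x - pt.y) + 1;
      }
      plotCirclePoints (circCtr, pt);
   }
}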

Some variations on the circle equations are displayed by our last example program, which uses the parametric polar equations (3-28) to compute points along the curve paths. These points are then used as the endpoint positions for straight-line sections, displaying the curves as approximating polylines. The curves shown in Fig. 3-70 are generated by varying the radius r of a circle. Depending on how we vary r, we can produce a limaçon, cardioid, spiral, or other similar figure.

FIGURE 3-70  Curved figures displayed by the drawCurve procedure: (a) limaçon, (b) cardioid, (c) three-leaf curve, (d) four-leaf curve, and (e) spiral.


#include <GL/glut.h>
#include <stdlib.h>
#include <math.h>
#include <iostream.h>

struct screenPt
{
   GLint x;
   GLint y;
};

typedef enum { limacon = 1, cardioid, threeLeaf, fourLeaf, spiral } curveName;

GLsizei winWidth = 600, winHeight = 500;   // Initial display window size.

void init (void)
{
   glClearColor (1.0, 1.0, 1.0, 1.0);
   glMatrixMode (GL_PROJECTION);
   gluOrtho2D (0.0, 200.0, 0.0, 150.0);
}

void lineSegment (screenPt pt1, screenPt pt2)
{
   glBegin (GL_LINES);
      glVertex2i (pt1.x, pt1.y);
      glVertex2i (pt2.x, pt2.y);
   glEnd ( );
}

void drawCurve (GLint curveNum)
{
   /* The limacon of Pascal is a modification of the circle equation
    * with the radius varying as r = a * cos (theta) + b, where a
    * and b are constants. A cardioid is a limacon with a = b.
    * Three-leaf and four-leaf curves are generated when
    * r = a * cos (n * theta), with n = 3 and n = 2, respectively.
    * A spiral is displayed when r is a multiple of theta.
    */
   const GLdouble twoPi = 6.283185;
   const GLint a = 175, b = 60;

   GLfloat r, theta, dtheta = 1.0 / float (a);
   GLint x0 = 200, y0 = 250;      // Set an initial screen position.
   screenPt curvePt[2];

   glColor3f (0.0, 0.0, 0.0);     // Set curve color to black.

   curvePt[0].x = x0;             // Initialize curve position.
   curvePt[0].y = y0;

   switch (curveNum) {
      case limacon:    curvePt[0].x += a + b;   break;
      case cardioid:   curvePt[0].x += a + a;   break;
      case threeLeaf:  curvePt[0].x += a;       break;
      case fourLeaf:   curvePt[0].x += a;       break;
      case spiral:                              break;
      default:                                  break;
   }

   theta = dtheta;
   while (theta < twoPi) {
      switch (curveNum) {
         case limacon:    r = a * cos (theta) + b;    break;
         case cardioid:   r = a * (1 + cos (theta));  break;
         case threeLeaf:  r = a * cos (3 * theta);    break;
         case fourLeaf:   r = a * cos (2 * theta);    break;
         case spiral:     r = (a / 4.0) * theta;      break;
         default:                                     break;
      }

      curvePt[1].x = x0 + r * cos (theta);
      curvePt[1].y = y0 + r * sin (theta);
      lineSegment (curvePt[0], curvePt[1]);

      curvePt[0].x = curvePt[1].x;
      curvePt[0].y = curvePt[1].y;
      theta += dtheta;
   }
}

void displayFcn (void)
{
   GLint curveNum;

   glClear (GL_COLOR_BUFFER_BIT);   // Clear display window.

   cout << "\nEnter the integer value corresponding to\n";
   cout << "one of the following curve names.\n";
   cout << "Press any other key to exit.\n";
   cout << "\n1-limacon, 2-cardioid, 3-threeLeaf, 4-fourLeaf, 5-spiral: ";
   cin >> curveNum;

   if (curveNum == 1 || curveNum == 2 || curveNum == 3 || curveNum == 4 || curveNum == 5)
      drawCurve (curveNum);
   else
      exit (0);

   glFlush ( );
}

void winReshapeFcn (GLint newWidth, GLint newHeight)
{
   glMatrixMode (GL_PROJECTION);
   glLoadIdentity ( );
   gluOrtho2D (0.0, (GLdouble) newWidth, 0.0, (GLdouble) newHeight);

   glClear (GL_COLOR_BUFFER_BIT);
}

void main (int argc, char** argv)
{
   glutInit (&argc, argv);
   glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
   glutInitWindowPosition (100, 100);
   glutInitWindowSize (winWidth, winHeight);
   glutCreateWindow ("Draw Curves");

   init ( );
   glutDisplayFunc (displayFcn);
   glutReshapeFunc (winReshapeFcn);

   glutMainLoop ( );
}

REFERENCES
Basic information on Bresenham's algorithms can be found in Bresenham (1965 and 1977). For midpoint methods, see Kappel (1985). Parallel methods for generating lines and circles are discussed in Pang (1990) and in Wright (1990). Many other methods for generating and processing graphics primitives are discussed in Glassner (1990), Arvo (1991), Kirk (1992), Heckbert (1994), and Paeth (1995). Additional programming examples using OpenGL primitive functions are given in Woo, Neider, Davis, and Shreiner (1999). A listing of all OpenGL primitive functions is available in Shreiner (2000). For a complete reference to GLUT, see Kilgard (1996).

EXERCISES
3-1  Implement a polyline function using the DDA algorithm, given any number (n) of input points. A single point is to be plotted when n = 1.

3-2  Extend Bresenham's line algorithm to generate lines with any slope, taking symmetry between quadrants into account.

3-3  Implement a polyline function, using the algorithm from the previous exercise, to display the set of straight lines connecting a list of n input points. For n = 1, the routine displays a single point.

3-4  Use the midpoint method to derive decision parameters for generating points along a straight-line path with slope in the range 0 < m < 1. Show that the midpoint decision parameters are the same as those in the Bresenham line algorithm.

3-5  Use the midpoint method to derive decision parameters that can be used to generate straight-line segments with any slope.


3-6  Set up a parallel version of Bresenham's line algorithm for slopes in the range 0 < m < 1.

3-7  Set up a parallel version of Bresenham's algorithm for straight lines with any slope.

3-8  Suppose you have a system with an 8 inch by 10 inch video monitor that can display 100 pixels per inch. If memory is organized in one-byte words, the starting frame-buffer address is 0, and each pixel is assigned one byte of storage, what is the frame-buffer address of the pixel with screen coordinates (x, y)?

3-9  Suppose you have a system with an 8 inch by 10 inch video monitor that can display 100 pixels per inch. If memory is organized in one-byte words, the starting frame-buffer address is 0, and each pixel is assigned 6 bits of storage, what is the frame-buffer address (or addresses) of the pixel with screen coordinates (x, y)?

3-10  Incorporate the iterative techniques for calculating frame-buffer addresses (Section 3-7) into the Bresenham line algorithm.

3-11  Revise the midpoint circle algorithm to display circles with input geometric magnitudes preserved (Section 3-13).

3-12  Set up a procedure for a parallel implementation of the midpoint circle algorithm.

3-13  Derive decision parameters for the midpoint ellipse algorithm assuming the start position is (rx, 0) and points are to be generated along the curve path in counterclockwise order.

3-14  Set up a procedure for a parallel implementation of the midpoint ellipse algorithm.

3-15  Devise an efficient algorithm that takes advantage of symmetry properties to display a sine function over one cycle.

3-16  Modify the algorithm in the preceding exercise to display a sine curve over any specified angular interval.

3-17  Devise an efficient algorithm, taking function symmetry into account, to display a plot of damped harmonic motion: y = Ae^(−kx) sin(ωx + θ), where ω is the angular frequency and θ is the phase of the sine function. Plot y as a function of x for several cycles of the sine function or until the maximum amplitude A is reduced to A/10.

3-18  Using the midpoint method, and taking symmetry into account, develop an efficient algorithm for scan conversion of the following curve over the interval −10 ≤ x ≤ 10: y = x³/12.

3-19  Use the midpoint method and symmetry considerations to scan convert the parabola y = 100 − x² over the interval −10 ≤ x ≤ 10.

3-20  Use the midpoint method and symmetry considerations to scan convert the parabola x = y² for the interval −10 ≤ y ≤ 10.

3-21  Set up a midpoint algorithm, taking symmetry considerations into account, to scan convert any parabola of the form y = ax² + b with input values for parameters a, b, and the range for x.


3-22  Set up geometric data tables as in Fig. 3-50 for a unit cube.

3-23  Set up geometric data tables for a unit cube using just a vertex table and a surface-facet table, then store the same information using just the surface-facet table. Compare the two methods for representing the unit cube with a representation using the three tables in Exercise 3-22. Estimate the storage requirements for each.

3-24  Define an efficient polygon-mesh representation for a cylinder and justify your choice of representation.

3-25  Set up a procedure for establishing the geometric data tables for any input set of points defining the polygon facets for the surface of a three-dimensional object.

3-26  Devise routines for checking the three geometric data tables in Fig. 3-50 to ensure consistency and completeness.

3-27  Write a program for calculating parameters A, B, C, and D for an input mesh of polygon-surface facets.

3-28  Write a procedure to determine whether an input coordinate position is in front of a polygon surface or behind it, given the plane parameters A, B, C, and D for the polygon.

3-29  If the coordinate reference for a scene is changed from a right-handed system to a left-handed system, what changes could we make in the values of surface plane parameters A, B, C, and D to ensure that the orientation of the plane is correctly described?

3-30  Develop a procedure for identifying a nonplanar vertex list for a quadrilateral.

3-31  Extend the algorithm of the previous exercise to identify a nonplanar vertex list that contains more than four coordinate positions.

3-32  Write a procedure to split a set of four polygon vertex positions into a set of triangles.

3-33  Devise an algorithm for splitting a set of n polygon vertex positions, with n > 4, into a set of triangles.

3-34  Set up an algorithm for identifying a degenerate polygon vertex list that may contain repeated vertices or collinear vertices.

3-35  Devise an algorithm for identifying a polygon vertex list that contains intersecting edges.

3-36  Write a routine to identify concave polygons by calculating cross products of pairs of edge vectors.

3-37  Write a routine to split a concave polygon, using the vector method.

3-38  Write a routine to split a concave polygon, using the rotational method.

3-39  Devise an algorithm for determining interior regions for any input set of vertices using the nonzero winding-number rule and cross-product calculations to identify the direction for edge crossings.

3-40  Devise an algorithm for determining interior regions for any input set of vertices using the nonzero winding-number rule and dot-product calculations to identify the direction for edge crossings.

3-41  What regions of the self-intersecting polyline shown in Fig. 3-46 have a positive winding number? What are the regions that have a negative winding number? What regions have a winding number greater than 1?

3-42  Write a routine to implement a text-string function that has two parameters: one parameter specifies a world-coordinate position and the other parameter specifies a text string.

3-43  Write a routine to implement a polymarker function that has two parameters: one parameter is the character that is to be displayed and the other parameter is a list of world-coordinate positions.


3-44  Modify the example program in Section 3-24 so that the displayed hexagon is always at the center of the display window, regardless of how the display window may be resized.

3-45  Write a complete program for displaying a bar chart. Input to the program is to include the data points and the labeling required for the x and y axes. The data points are to be scaled by the program so that the graph is displayed across the full area of a display window.

3-46  Write a program to display a bar chart in any selected area of a display window.

3-47  Write a procedure to display a line graph for any input set of data points in any selected area of the screen, with the input data set scaled to fit the selected screen area. Data points are to be displayed as asterisks joined with straight-line segments, and the x and y axes are to be labeled according to input specifications. (Instead of asterisks, small circles or some other symbols could be used to plot the data points.)

3-48  Using a circle function, write a routine to display a pie chart with appropriate labeling. Input to the routine is to include a data set giving the distribution of the data over some set of intervals, the name of the pie chart, and the names of the intervals. Each section label is to be displayed outside the boundary of the pie chart near the corresponding pie section.
