Module V-PPT - 2


MODULE V

• Projections – Parallel and perspective projections – vanishing points.


• Visible surface detection methods – Back-face removal, Z-buffer algorithm, A-buffer algorithm, Depth-sorting method, Scan-line algorithm
Projections
• Once world-coordinate descriptions of the objects in a scene are converted to viewing coordinates, we can project the 3-dimensional objects onto the 2D view plane
Projections

• In parallel projection, coordinate positions are transferred to the view plane along parallel lines
• orthogonal/orthographic
• oblique
• For perspective projection, coordinates are transferred to the view plane along lines that converge at a point
Types of Parallel Projection
Orthogonal Projections

• Projection along lines parallel to the view-plane normal N
• Front, side, and rear orthogonal projections are often called elevations
• The top view is called a plan view

Figure 10-17: Orthogonal projections of an object, displaying plan and elevation views.
Axonometric and Isometric Orthogonal Projections
• Orthogonal projections that show more than one face of an object are called axonometric orthogonal projections
• Isometric: the most common axonometric projection, generated by aligning the projection plane so that it intersects each principal axis at the same distance from the origin
Oblique Projection

• The direction of projection is not normal to the projection plane.
• An oblique projection gives a better view of the object than an orthographic projection.
• There are two types of oblique projections − Cavalier and Cabinet.
Cavalier projection
• The projection lines make a 45° angle with the projection plane.
• The projection of a line perpendicular
to the view plane has the same length
as the line itself in Cavalier projection.
• In a cavalier projection, the
foreshortening factors for all three
principal directions are equal.
Cabinet projection
• The projection lines make a 63.4° angle with the projection plane.
• In Cabinet projection, lines
perpendicular to the viewing
surface are projected at ½ their
actual length.
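As a quick numerical check, using the relation L1 = 1/tan α that is derived in the oblique-projection slides below: a cavalier projection has α = 45°, so tan α = 1 and L1 = 1, which is why lines perpendicular to the view plane keep their full length; a cabinet projection has α ≈ 63.4°, so tan α ≈ 2 and L1 ≈ 1/2, which is why such lines project at half their actual length.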
Perspective Projections

• Project objects to the view plane along converging paths to the projection reference point (or center of projection)

Figure 10-33: A perspective projection of two equal-length line segments at different distances from the view plane.
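As an illustrative sketch only (the slides give no perspective equations), the snippet below projects a point onto a view plane at distance d from a projection reference point placed at the origin; the function name and parameterization are assumptions made here.

def project_perspective(x, y, z, d):
    """Project point (x, y, z) onto the view plane z = d, with the
    projection reference point (center of projection) at the origin.
    Projection lines converge at the origin, so equal-length segments
    farther from the view plane project to shorter segments."""
    xp = d * x / z
    yp = d * y / z
    return xp, yp

# Example: the top endpoints of two unit-height segments at z = 2 and z = 4.
print(project_perspective(0.0, 1.0, 2.0, 1.0))  # (0.0, 0.5)
print(project_perspective(0.0, 1.0, 4.0, 1.0))  # (0.0, 0.25)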
Types of Perspective Projection
Vanishing Points

• The point at which a set of projected parallel lines appears to converge is called a vanishing point
• Vanishing points for lines parallel to the principal axes are called principal vanishing points
• How many principal vanishing points can be seen depends on the orientation of the projection plane
• 1-point, 2-point, or 3-point projections
Oblique parallel projection
• An oblique projection is obtained by projecting points
along parallel lines that are not perpendicular to the
projection plane
• In some application packages, an oblique projection vector is specified with two angles, α and φ.
• Point (x, y, z) is projected to position (xp, yp) on the view
plane.
• Orthographic projection coordinates on the plane are
(x, y).
Oblique parallel projection
• The oblique projection line from (x, y, z) to (xp, yp) makes an angle α with the line on the projection plane that joins (xp, yp) and (x, y).
• This line, of length L, is at an angle φ with the horizontal direction in the projection plane.
• We can express the projection coordinates in terms of x, y, L, and φ as
  xp = x + L cos φ
  yp = y + L sin φ
• Length L depends on angle α and the z coordinate of the point to be projected:
  tan α = z / L
  Thus, L = z / tan α
  Let L1 = 1 / tan α (L1 is the value of L when z = 1)
  Thus, L = z L1
• We can then write the oblique projection equations as
  xp = x + z (L1 cos φ)
  yp = y + z (L1 sin φ)
• The transformation matrix for producing any parallel projection onto the xv-yv plane can be written as

  Mparallel = | 1   0   L1 cos φ   0 |
              | 0   1   L1 sin φ   0 |
              | 0   0      0       0 |
              | 0   0      0       1 |

• An orthographic projection is obtained when L1 = 0 (that is, when α = 90°)
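A minimal Python sketch of these equations; the function name and the use of degrees for the two angles are choices made here, not part of the slides.

import math

def project_oblique(x, y, z, alpha_deg, phi_deg):
    """Oblique parallel projection using xp = x + z*(L1*cos(phi)),
    yp = y + z*(L1*sin(phi)), with L1 = 1/tan(alpha).
    alpha = 90 deg gives an orthographic projection (L1 ~ 0);
    alpha = 45 deg gives a cavalier projection (L1 = 1);
    alpha ~ 63.4 deg gives a cabinet projection (L1 ~ 1/2)."""
    L1 = 1.0 / math.tan(math.radians(alpha_deg))
    phi = math.radians(phi_deg)
    return (x + z * L1 * math.cos(phi),
            y + z * L1 * math.sin(phi))

# Example: cavalier projection (alpha = 45) of the point (1, 1, 1) with phi = 30 deg.
print(project_oblique(1.0, 1.0, 1.0, 45.0, 30.0))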
VISIBLE SURFACE DETECTION
METHODS
• Hidden-surface elimination methods
• Identifying visible parts of a scene from a viewpoint
• Numerous algorithms exist, differing in:
• Memory requirements (storage)
• Processing time (execution time)
• Whether they apply only to special types of objects (constraints)
• Deciding a method for a particular application
• Complexity of the scene
• Type of objects
• Available equipment
• Static or animated scene

Classification of Visible-Surface Detection Algorithms

• Object-space methods vs. image-space methods
• Compare object definitions directly vs. compare their projected images
• Most visible-surface algorithms use image-space methods
• Object-space methods can be used effectively in some cases
• Ex) Line-display algorithms

• Object-space methods
• Compares objects and parts of objects to each other

• Image-space methods
• Point by point at each pixel position on the projection plane

Sorting and Coherence Methods
• To improve performance
• Sorting
• Facilitate depth comparisons
• Ordering the surfaces according to their distance from the view plane

• Coherence
• Take advantage of regularity

Back-Face Detection
• Fast & simple object-space method
• For identifying back faces of a polyhedron
• Based on inside-outside tests
Inside-outside test
• A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if
  Ax + By + Cz + D < 0
• The polygon is a back face if
  V · N > 0, where N = (A, B, C)
• V is a vector in the viewing direction from the eye (camera)
• N is the normal vector to a polygon surface
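A minimal sketch of this test in Python; the function name and tuple-based vectors are illustrative choices.

def is_back_face(normal, view_dir):
    """Return True when V . N > 0, i.e. the polygon with surface
    normal N = (A, B, C) faces away from the viewer, where V is a
    vector in the viewing direction from the eye."""
    a, b, c = normal
    vx, vy, vz = view_dir
    return a * vx + b * vy + c * vz > 0

# Example: viewing along the negative zv axis, V = (0, 0, -1);
# a face whose normal has C < 0 points away from the viewer and is culled.
print(is_back_face((0.0, 0.0, -1.0), (0.0, 0.0, -1.0)))  # True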
Advanced Configuration
• In the case of concave polyhedron
• Need more tests
• Determine faces totally or partly obscured by other faces
• In general, back-face removal can be expected to eliminate about half of the surfaces from
further visibility tests

Figure: View of a concave polyhedron with one face partially hidden by other surfaces.
Depth-Buffer Method
Characteristics
• Commonly used image-space approach
• Compares depths of each pixel on the projection plane
• Referred to as the z-buffer method

• Usually applied to scenes of polygonal surfaces


• Depth values can be computed very quickly
• Easy to implement
Figure: Three surfaces S1, S2, and S3 at varying depths relative to position (x, y) on the view plane (axes xv, yv, zv).
Depth Buffer & Refresh Buffer
• Two buffer areas are required
• Depth buffer
• Store depth values for each (x, y) position
• All positions are initialized to minimum depth
• Refresh buffer
• Stores the intensity values for each position
• All positions are initialized to the background intensity

Algorithm
• Initialize the depth buffer and refresh buffer
depth(x, y) = 0, refresh(x, y) = Ibackgnd
• For each position on each polygon surface
• Calculate the depth for each (x, y) position on the polygon
• If z > depth(x, y), then set
depth(x, y) = z, refresh(x, y) = Isurf(x, y)
• Advanced
• With resolution of 1024 by 1024
• Over a million positions in the depth buffer
• Process one section of the scene at a time
• Need a smaller depth buffer
• The buffer is reused for the next section
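A minimal depth-buffer sketch in Python; the way each surface hands over its covered pixels as (x, y, z, intensity) tuples is an assumption made for illustration, not a fixed interface.

def depth_buffer_render(width, height, surfaces, background):
    """Initialize depth(x, y) = 0 (the farthest normalized depth) and
    refresh(x, y) = background, then keep the intensity of the nearest
    surface (largest z) at every pixel."""
    depth = [[0.0] * width for _ in range(height)]
    refresh = [[background] * width for _ in range(height)]
    for surface in surfaces:
        for x, y, z, intensity in surface:
            if z > depth[y][x]:          # nearer than what is stored so far
                depth[y][x] = z
                refresh[y][x] = intensity
    return refresh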

A-Buffer Method
Characteristics
• An extension of the ideas in the depth-buffer method
• The origin of this name
• At the other end of the alphabet from “z-buffer”
• Antialiased, area-averaged, accumulation-buffer
• A drawback of the depth-buffer method
• Deals only with opaque surfaces
• Can't accumulate intensity values for more than one surface

Figure: A foreground transparent surface in front of a background opaque surface.
A-Buffer Algorithm
• Each position in the buffer can reference a linked list of
surfaces
• Several intensities can be considered at each pixel position
• Object edges can be antialiased
• Each position in the A-buffer has two fields
• Depth field
• Stores a positive or negative real number
• Intensity field
• Stores surface-intensity information or a pointer value
Figure: Organization of an A-buffer pixel position: (a) single-surface overlap (depth ≥ 0, the intensity field holds the surface intensity); (b) multiple-surface overlap (depth < 0, the intensity field points to a list of surfaces Surf 1, Surf 2, ...).
Algorithm
• If the depth field is positive
• The number at that position is the depth of the single surface overlapping the pixel
• The intensity field stores the RGB components of the surface color
• If the depth field is negative
• Multiple surfaces contribute to the pixel
• The intensity field stores a pointer to a linked list of surfaces
• Data are stored for each surface in the linked list (e.g., intensity, opacity, depth, and percent of area coverage)
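A minimal sketch of this two-field organization in Python; the field names and the use of a plain list in place of a linked list are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class SurfaceData:
    # Illustrative per-surface record kept in the surface list.
    intensity: Tuple[float, float, float]   # RGB contribution
    depth: float

@dataclass
class APixel:
    # depth >= 0: 'intensity' holds the RGB value of the single surface.
    # depth <  0: 'surfaces' plays the role of the linked list of contributions.
    depth: float = 0.0
    intensity: Optional[Tuple[float, float, float]] = None
    surfaces: List[SurfaceData] = field(default_factory=list)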

Scan-Line Method
Characteristics
• Extension of the scan-line algorithm for filling polygon interiors
• For all polygons intersecting each scan line
• Processed from left to right
• Depth calculations for each overlapping surface
• The intensity of the nearest position is entered into the refresh buffer

Tables for the Various Surfaces
• Edge table
• Coordinate endpoints for each line
• Slope of each line
• Pointers into the polygon table
• Identify the surfaces bounded by each line

• Polygon table
• Coefficients of the plane equation for each surface
• Intensity information for the surfaces
• Pointers into the edge table
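The sketch below shows one way these two tables could be laid out in Python; the field names and sample values are illustrative assumptions, not a prescribed layout.

# One edge-table entry and one polygon-table entry, cross-referencing each other by index.
edge_table = [
    {"endpoints": ((2.0, 1.0), (4.0, 6.0)),  # coordinate endpoints of the line
     "inverse_slope": 0.4,                   # x increment per scan line
     "surfaces": [0]},                       # pointers into polygon_table
]
polygon_table = [
    {"plane": (0.0, 0.0, 1.0, -5.0),         # plane-equation coefficients A, B, C, D
     "intensity": 0.8,                       # surface intensity information
     "edges": [0]},                          # pointers into edge_table
]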

Active List & Flag
• Active list
• Contain only edges across the current scan line
• Sorted in order of increasing x
• Flag for each surface
• Indicate whether inside or outside of the surface
• At the leftmost boundary of a surface
• The surface flag is turned on
• At the rightmost boundary of a surface
• The surface flag is turned off

Example

• Active list for scan line 1 (from the edge table): AB, BC, EH, and FG
• Between AB and BC, only the flag for surface S1 is on
– No depth calculations are necessary
– The intensity for surface S1 is entered into the refresh buffer
• Similarly, between EH and FG, only the flag for S2 is on

Figure: Two overlapping surfaces S1 (vertices A, B, C, D) and S2 (vertices E, F, G, H) crossed by scan lines 1, 2, and 3 in the xv-yv plane.
Example (cont.)
• For scan lines 2 and 3
• Active list: AD, EH, BC, and FG
• Between AD and EH, only the flag for S1 is on
• Between EH and BC, the flags for both surfaces are on
• Depth calculation is needed
• Intensities for S1 are loaded into the refresh buffer until BC
• Take advantage of coherence
• Pass from one scan line to next
• Scan line 3 has the same active list as scan line 2
• Unnecessary to make depth calculations between EH and BC

Depth-Sorting Method
Operations
• Uses both image-space and object-space operations
• Sorting operations in both image space and object space
• Scan conversion of polygon surfaces in image space
• Basic functions
• Surfaces are sorted in order of decreasing depth
• Surfaces are scan-converted in order, starting with the surface of greatest
depth

Algorithm
• Referred to as the painter’s algorithm
• In creating an oil painting, an artist first paints the background colors
• The most distant objects are added next
• Then the nearer objects, and so forth
• Finally, the foregrounds are painted over all objects
• Each layer of paint covers up the previous layer
• Process
• Sort surfaces according to their distance from the viewplane
• The intensities for the farthest surface are then entered into the refresh buffer
• Taking each succeeding surface in decreasing depth order
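A minimal painter's-algorithm sketch in Python; depth_of and draw are illustrative callbacks standing in for the depth sort key and the scan-conversion step.

def painter_render(surfaces, depth_of, draw):
    """Sort surfaces in order of decreasing depth from the view plane,
    then scan-convert them starting with the surface of greatest depth,
    so nearer surfaces overwrite farther ones in the refresh buffer."""
    for surface in sorted(surfaces, key=depth_of, reverse=True):
        draw(surface)  # enter this surface's intensities into the refresh buffer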

Overlapping Tests
• Tests for each surface that overlaps with S, ordered from easiest to most difficult:
(1) The bounding rectangles in the xy plane for the two surfaces do not overlap
(2) Surface S is completely behind the overlapping surface relative to the viewing position
(3) The overlapping surface is completely in front of S relative to the viewing position
(4) The projections of the two surfaces onto the view plane do not overlap

• If all the overlapping surfaces pass at least one of the tests, none of them is behind S
• No reordering is then necessary and S is scan converted
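Test (1) is the cheapest; a minimal sketch in Python, assuming each bounding rectangle is given as (xmin, ymin, xmax, ymax) in the xy plane.

def bounding_rectangles_overlap(r1, r2):
    """Return False when the two bounding rectangles are disjoint,
    in which case the surfaces cannot obscure each other and no
    reordering is needed."""
    x1min, y1min, x1max, y1max = r1
    x2min, y2min, x2max, y2max = r2
    return not (x1max < x2min or x2max < x1min or
                y1max < y2min or y2max < y1min)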

Overlapping Test Examples

S’

(1) xv (2) xv
zv zv

S S

S’ S’

(3) xv (4) xv
zv zv
56
Surface Reordering
• If all four tests fail with S’
• Interchange surfaces S and S’ in the sorted list
• Repeat the tests for each surface that is reordered in the list

Figure: Surface reordering examples: (left) S and S' are interchanged; (right) S and S'' are interchanged, then S'' and S'.
Drawback
• If two or more surfaces alternately obscure each other
• Infinite loop
• Flag any surface that has been reordered to a farther depth
• It can’t be moved again
• If an attempt is made to switch the surface a second time
• Divide it into two parts to eliminate the cyclic loop
• The original surface is then replaced by the two new surfaces

