Unit 5 Visible Surface Detection Methods



(Hidden Surface Elimination)

Visible surface detection, or hidden surface removal, is a major concern in realistic graphics: it identifies those parts of a scene that are visible from a chosen viewing position. Numerous algorithms have been devised to identify visible objects efficiently for different types of applications. Some methods require more memory, some involve more processing time, and some apply only to special types of objects. Choosing a method for a particular application depends on factors such as the complexity of the scene, the types of objects to be displayed, the available equipment, and whether static or animated displays are to be generated.

Visible surface detection methods are broadly classified according to whether they deal with objects or
with their projected images.

These two approaches are:


 Object-Space methods: An object-space method compares objects and parts of objects to each
other within the scene definition to determine which surfaces, as a whole, we should label as visible.
 Image-Space methods: Visibility is decided point by point at each pixel position on the projection
plane.

Most visible surface detection algorithms use the image-space method, but in some cases object-space methods can also be used effectively.

Depth-Buffer Method
The depth-buffer method is the most commonly used image-space method for detecting visible surfaces. It is also known as the z-buffer method, since object depth is usually measured from the view plane along the z-axis of the viewing system. It compares surface depths at each pixel position on the projection plane.
Each surface of the scene is processed separately, one point at a time across the surface. The method is usually applied to scenes containing only polygon surfaces, because depth values can then be computed very quickly and the method is easy to implement. It can, however, also be applied to non-planar surfaces.

With the object descriptions converted to projection coordinates, each (x, y, z) position on a polygon surface corresponds to the orthographic projection point (x, y) on the view plane. Therefore, for each pixel position (x, y) on the view plane, object depths are compared using their z-values.

Given three surfaces at varying distances from the view plane xvyv: along the projection line through (x, y), surface S1 is closest to the view plane, so the intensity value of S1 at (x, y) is saved.

The z-buffer method requires two buffer areas:


 Depth buffer (z-buffer): stores the depth value for
each (x, y) position as surfaces are processed.
 Refresh buffer (Image buffer): stores the intensity
value for each position.

Initially, all positions in the depth buffer are set to 0, and the refresh buffer is initialized to the background color. Each surface listed in the polygon table is processed one scan line at a time, calculating the depth (z-value) for each (x, y) position. The calculated depth is compared with the value previously stored in the depth buffer at that position. If the calculated depth is greater than the stored depth value, the new depth value is stored, and the surface intensity at that position is determined and placed in the refresh buffer.

Algorithm: Z-buffer
1. Initialize the depth buffer and refresh buffer so that for all buffer positions (x, y):
depth(x, y) = 0, refresh(x, y) = Ibackground
2. For each position on each polygon surface, compare depth values to the values previously stored in the depth buffer to determine visibility:
 Calculate the depth z for each (x, y) position on the polygon.
 If z > depth(x, y), then
depth(x, y) = z, refresh(x, y) = Isurface(x, y)

where Ibackground is the intensity value for the background and Isurface(x, y) is the intensity value for the surface at pixel position (x, y) on the projection plane.

After all surfaces have been processed, the depth buffer contains the depth values of the visible surfaces and the refresh buffer contains the corresponding intensity values for those surfaces.
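The algorithm above can be sketched as follows, assuming depths normalized to [0, 1] with 0 the farthest (background) value, so that a larger z is nearer, matching the z > depth(x, y) test. The representation of surfaces as (depth function, intensity) pairs is a simplification for illustration, not a real polygon table.

```python
# Minimal z-buffer sketch. Depths are assumed normalized to [0, 1],
# with 0 the farthest (background) depth, so a larger z is nearer.
# Surfaces here are hypothetical (depth_fn, intensity) pairs, not a
# real polygon table.

WIDTH, HEIGHT = 4, 3
I_BACKGROUND = 0

def z_buffer(surfaces):
    """surfaces: list of (depth_fn, intensity); depth_fn(x, y) -> z or None."""
    # Step 1: depth(x, y) = 0, refresh(x, y) = Ibackground.
    depth = [[0.0] * WIDTH for _ in range(HEIGHT)]
    refresh = [[I_BACKGROUND] * WIDTH for _ in range(HEIGHT)]
    # Step 2: compare depth values to determine visibility.
    for depth_fn, intensity in surfaces:
        for y in range(HEIGHT):
            for x in range(WIDTH):
                z = depth_fn(x, y)
                if z is not None and z > depth[y][x]:
                    depth[y][x] = z
                    refresh[y][x] = intensity
    return refresh

# Two full-screen surfaces: the nearer one (z = 0.8) wins everywhere.
far_surface = (lambda x, y: 0.3, 1)
near_surface = (lambda x, y: 0.8, 2)
image = z_buffer([far_surface, near_surface])
```

Note that the processing order of the surfaces does not matter: the depth comparison alone decides which intensity survives at each pixel.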

The depth value at a surface position (x, y) is calculated from the plane equation of the surface:

Z = (−Ax − By − D) / C

The depth Z′ at position (x + 1, y) along the scan line is

Z′ = (−A(x + 1) − By − D) / C

⇒ Z′ = Z − A/C        (1)

Since −A/C is constant for each surface, succeeding depth values across a scan line can be obtained from preceding values by a single addition.
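The incremental relation Z′ = Z − A/C can be checked numerically; the plane coefficients below are chosen arbitrarily for illustration.

```python
# Numerically checking the incremental depth relation Z' = Z - A/C for
# a hypothetical plane Ax + By + Cz + D = 0; the coefficients are
# arbitrary illustrative values.
A, B, C, D = 2.0, 1.0, -4.0, 8.0

def depth(x, y):
    # Z = (-Ax - By - D) / C, from the plane equation
    return (-A * x - B * y - D) / C

z = depth(3, 5)        # depth at (x, y)
z_next = z - A / C     # Z' = Z - A/C: a single addition per pixel
```

Evaluating the plane equation directly at (4, 5) gives the same value as the single addition, which is the point of the incremental form.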

Drawback: The method can find only one visible surface at each pixel position; that is, it deals only with opaque surfaces and cannot accumulate intensities for more than one surface, as is necessary if transparent surfaces are to be displayed. The A-buffer method, described below, solves this problem.

A-Buffer Method
An extension of the ideas in the depth-buffer method is the A-buffer method. A-buffer method
represents an antialiased, area-averaged, accumulation-buffer method developed by LucasFilm.
A buffer expands z-buffer so that each position in the buffer can reference a linked list of
surfaces. So multiple surface intensities can be taken into consideration at each pixel and object edges
can be antialiased.

Each position in the A-buffer has two fields:


 Depth field: stores a positive or negative real number
 Intensity field: stores a surface intensity value or a pointer value

Viewing an opaque surface through a transparent surface requires multiple surface intensity contributions for pixel positions.

Organization of an A-buffer pixel position:
a) Single-surface overlap of the corresponding pixel area (d > 0)
b) Multiple-surface overlap (d < 0)

If the depth field is negative, this indicates multiple surface contributions to the pixel intensity. The intensity field then stores a pointer to a linked list of surface data. The data for each surface include:
 RGB intensity components
 Opacity parameter (% of transparency)
 Depth
 Percent of area coverage
 Surface identifier
 Other surface-rendering parameters
 Pointer to next surface
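The two-field pixel organization and the per-surface linked-list data above can be sketched as follows. The field names are illustrative, and the actual LucasFilm implementation differs in detail.

```python
# Sketch of the A-buffer pixel organization: a depth field whose sign
# distinguishes single- from multiple-surface pixels, and an intensity
# field that holds either an intensity or the head of a surface list.
# All names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SurfaceData:
    rgb: tuple                  # RGB intensity components
    opacity: float              # opacity parameter (percent of transparency)
    depth: float
    coverage: float             # percent of pixel area covered
    surface_id: int
    next: Optional["SurfaceData"] = None  # pointer to next surface

@dataclass
class APixel:
    depth: float       # d > 0: single surface; d < 0: multiple surfaces
    intensity: object  # intensity value, or head of the SurfaceData list

# Single-surface pixel (d > 0): intensity field holds a plain value.
single = APixel(depth=0.7, intensity=(255, 0, 0))

# Multiple-surface pixel (d < 0): intensity field points to a linked list.
back = SurfaceData((0, 0, 255), 0.0, 0.3, 1.0, surface_id=2)
front = SurfaceData((255, 0, 0), 0.5, 0.8, 1.0, surface_id=1, next=back)
multi = APixel(depth=-1.0, intensity=front)
```

A renderer would walk the list at each d < 0 pixel, blending the contributions by opacity and area coverage.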
Scan Line Method
 This is image-space method for removing hidden surface which is extension of the scan line polygon
filling for polygon interiors. Instead of filling one surface we deal with multiple surfaces here.
 In this method, as each scan line is processed, all polygon surfaces intersecting that line are
examined to determine which are visible. Across each scan line, depth calculations are made for
each overlapping surface to determine which is nearest to the view plane. When the visible surface
has been determined, the intensity value for that position is entered into the image buffer.

A scan line crossing the projections of two surfaces, S1 and S2, in the view plane. Dashed lines indicate the boundaries of hidden surfaces.

The figure above illustrates the scan-line method for locating visible portions of surfaces at pixel positions along the line. The active list for scan line 1 contains information from the edge table for edges AB, BC, EH, and FG. For positions along this scan line between edges AB and BC, only the flag for surface S1 is on. Therefore, no depth calculations are necessary, and intensity information for surface S1 is entered from the polygon table into the refresh buffer. Similarly, between edges EH and FG, only the flag for surface S2 is on. No other positions along scan line 1 intersect surfaces, so the intensity values in the other areas are set to the background intensity. The background intensity can be loaded throughout the buffer in an initialization routine.
For scan lines 2 and 3, the active edge list contains edges AD, EH, BC, and FG. Along scan line 2 from edge AD to edge EH, only the flag for surface S1 is on. But between edges EH and BC, the flags for both surfaces are on. In this interval, depth calculations must be made using the plane coefficients of the two surfaces. For this example, the depth of surface S1 is assumed to be less than that of S2, so intensities for surface S1 are loaded into the refresh buffer until boundary BC is encountered. Then the flag for surface S1 goes off, and intensities for surface S2 are stored until edge FG is passed.
We can take advantage of coherence along the scan lines as we pass from one scan line to the next. In the figure, scan line 3 has the same active list of edges as scan line 2. Since no changes have occurred in line intersections, it is unnecessary to make depth calculations between edges EH and BC again. The two surfaces must be in the same orientation as determined on scan line 2, so the intensities for surface S1 can be entered without further calculations.

Any number of overlapping polygon surfaces can be processed with this scan-line method. Flags for the
surfaces are set to indicate whether a position is inside or outside, and depth calculations are performed
when surfaces overlap. When these coherence methods are used, we need to be careful to keep track of
which surface section is visible on each scan line. This works only if surfaces do not cut through or
otherwise cyclically overlap each other.
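The per-scan-line logic above can be sketched for the simple case where each surface covers one x-interval of the line at a constant depth. The interval representation and all names are illustrative assumptions; depth here is measured as distance from the view plane, so the smallest value is nearest.

```python
# Sketch of one scan line of the scan-line method. Each surface
# contributes an x-interval (its edge crossings on this line) plus a
# depth function; the surface flag is "on" inside that interval. The
# flat-depth surfaces and names are illustrative assumptions. Depth is
# distance from the view plane: smaller = nearer.

def scan_line(surfaces, width, background=0):
    """surfaces: list of (x_left, x_right, depth_fn, intensity)."""
    line = [background] * width
    for x in range(width):
        active = [(depth_fn(x), intensity)
                  for (xl, xr, depth_fn, intensity) in surfaces
                  if xl <= x <= xr]        # flags on for covering surfaces
        if len(active) == 1:               # one flag on: no depth test needed
            line[x] = active[0][1]
        elif active:                       # several flags on: nearest wins
            line[x] = min(active)[1]       # smallest distance = nearest
    return line

s1 = (1, 5, lambda x: 0.2, 1)  # nearer surface (smaller distance)
s2 = (4, 8, lambda x: 0.7, 2)  # farther surface
line = scan_line([s1, s2], 10)
```

In the overlap interval x = 4..5 both flags are on and the depth test selects s1, mirroring the S1/S2 discussion above.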

If any kind of cyclic overlap is present in a scene, we can divide the surfaces to eliminate the overlaps.
The dashed lines in this figure indicate where planes could be subdivided to form two distinct surfaces,
so that the cyclic overlaps are eliminated.

Depth Sorting Method


This method uses both object-space and image-space operations. The surface representations of the 3D objects are sorted in order of decreasing depth from the viewer. The sorted surfaces are then scan converted in order, starting with the surface of greatest depth. The conceptual steps performed in the depth-sort algorithm are:
1. Surfaces are sorted in order of decreasing depth (z-coordinate).
2. Any ambiguities that arise when the z-extents of polygons overlap are resolved, splitting polygons if necessary.
3. Surfaces are scan converted in order, starting with the surface of greatest depth.

Sorting operations are carried out in both image and object space, and the scan conversion of the
polygon surfaces is performed in image space.

This algorithm is also called the "Painter's Algorithm" because it simulates the way a painter typically produces a painting. In creating an oil painting, an artist first paints the background colors. Next, the most distant objects are added, then the nearer objects, and so forth. Finally, foreground objects are painted on the canvas over the background and the other objects already on the canvas. Each layer of paint covers up the previous layer. Using a similar technique:
 We first sort surfaces according to their distance from the view plane.
 Intensity values for the farthest surface are then entered into the refresh buffer.
 Taking each succeeding surface in turn (in decreasing depth order), we "paint" the surface intensities onto the frame buffer over the intensities of previously processed surfaces.
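The steps above can be sketched for a one-dimensional frame buffer. The (depth, x_left, x_right, intensity) surface tuples are an illustrative assumption, and the surfaces are assumed not to require splitting.

```python
# Minimal painter's-algorithm sketch: surfaces are sorted by decreasing
# depth (distance from the view plane) and painted back to front into a
# 1-D "frame buffer". Surface tuples (depth, x_left, x_right, intensity)
# are an illustrative assumption; no splitting is needed here.

def painters(surfaces, width, background=0):
    frame = [background] * width
    # Farthest surface first: sort by decreasing depth.
    for depth, xl, xr, intensity in sorted(surfaces, reverse=True):
        for x in range(xl, xr + 1):
            frame[x] = intensity  # nearer paint covers farther paint
    return frame

far = (0.9, 0, 6, 1)   # farthest surface, painted first
near = (0.2, 4, 8, 2)  # nearest surface, painted last
frame = painters([far, near], 10)
```

Where the two intervals overlap (x = 4..6), the nearer surface's intensity ends up in the buffer because it is painted last.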

Problem: One of the major problems with this algorithm is intersecting polygon surfaces, as shown in the figure below.

 Different polygons may have the same depth.
 The nearest polygon could also be the farthest.
 We cannot use simple depth sorting to remove the hidden surfaces in such images.

Solution: For intersecting polygons, we can split one polygon into two or more polygons, which can then be painted from back to front. Computing the intersections between polygons takes additional time, so the algorithm becomes more complex when such surfaces exist.

Example
Assume we are viewing along the z axis. The surface S with the greatest depth is compared to the other surfaces in the list to determine whether there are any overlaps in depth. If no depth overlap occurs, S can be scan converted. This process is then repeated for the next surface in the list. However, if a depth overlap is detected, we need to make some additional comparisons to determine whether any of the surfaces should be reordered.
We make the following tests for each surface that overlaps with S. If any one of these tests is true, no reordering is necessary for that surface. The tests are listed in order of increasing difficulty:
1. The bounding rectangles in the xy plane of the two surfaces do not overlap.
2. Surface S is completely behind the overlapping surface relative to the viewing position.
3. The overlapping surface is completely in front of S relative to the viewing position.
4. The projections of the two surfaces onto the view plane do not overlap.
We perform these tests in the order listed and proceed to the next overlapping surface as soon as one of the tests is true. If all the overlapping surfaces pass at least one of these tests, none of them is behind S. No reordering is then necessary, and S is scan converted.
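Test 1 above (non-overlapping bounding rectangles in the xy plane) reduces to a simple interval check; the (xmin, ymin, xmax, ymax) rectangle format is an assumption for illustration.

```python
# Sketch of test 1: axis-aligned bounding rectangles of two surface
# projections in the xy plane. If they do not overlap, no reordering is
# needed. The (xmin, ymin, xmax, ymax) format is an illustrative
# assumption.

def xy_rects_overlap(r1, r2):
    x1min, y1min, x1max, y1max = r1
    x2min, y2min, x2max, y2max = r2
    # Rectangles are disjoint iff one lies entirely to the left of,
    # right of, above, or below the other.
    return not (x1max < x2min or x2max < x1min or
                y1max < y2min or y2max < y1min)

# Disjoint in x: test 1 is true, so this surface needs no reordering.
disjoint = xy_rects_overlap((0, 0, 2, 2), (3, 0, 5, 2))
# Overlapping: test 1 fails, so we fall through to the harder tests.
overlapping = xy_rects_overlap((0, 0, 4, 4), (3, 3, 6, 6))
```

The remaining tests (2-4) require the plane equations and projected outlines of the surfaces, which is why they are listed in order of increasing difficulty.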
