Chapter 1
INTRODUCTION
Geometry
Animation
The subfield of animation studies descriptions of surfaces (and other phenomena) that
move or deform over time. Historically, most work in this field has focused on parametric
and data-driven models, but physical simulation has recently become more popular as
computers have grown more powerful.
Rendering
Rendering generates images from a model. Rendering may simulate light transport to
create realistic images, or it may create images with a particular artistic style, as in
non-photorealistic rendering. The two basic operations in realistic rendering are transport
(how much light passes from one place to another) and scattering (how surfaces interact
with light).
Imaging
Digital imaging or digital image acquisition is the creation of digital images, such as of a
physical scene or of the interior structure of an object.
OpenGL's basic operation is to accept primitives such as points, lines and polygons,
and convert them into pixels. This is done by a graphics pipeline known as the OpenGL
state machine. Most OpenGL commands either issue primitives to the graphics pipeline,
or configure how the pipeline processes these primitives. Prior to the introduction of
OpenGL 2.0, each stage of the pipeline performed a fixed function and was configurable
only within tight limits. OpenGL 2.0 offers several stages that are fully programmable
using GLSL.
OpenGL is a low-level, procedural API, requiring the programmer to dictate the exact
steps required to render a scene. This contrasts with descriptive APIs, where a
programmer only needs to describe a scene and can let the library manage the details of
rendering it. OpenGL's low-level design requires programmers to have a good knowledge
of the graphics pipeline, but also gives a certain amount of freedom to implement novel
rendering algorithms.
1. Evaluation, if necessary, of the polynomial functions that define certain inputs, such as
NURBS surfaces, approximating curves and the surface geometry.
2. Vertex operations: transforming and lighting vertices according to their material, and
clipping away parts of the scene that lie outside the viewing volume.
3. Rasterization: conversion of the previous information into pixels. The polygons are
given the appropriate color by means of interpolation algorithms.
Several libraries are built on top of or beside OpenGL to provide features not
available in OpenGL itself. Libraries such as GLU can always be found with OpenGL
implementations, and others such as GLUT and SDL have grown over time and provide
rudimentary cross-platform windowing and mouse functionality; if unavailable, they can
easily be downloaded and added to a development environment. Simple graphical user
interface functionality can be found in libraries like GLUI or FLTK. Still other libraries
like GL Aux (OpenGL Auxiliary Library) are deprecated and have been superseded by
functionality commonly available in more popular libraries, but code using them still
exists, particularly in simple tutorials. Other libraries have been created to provide
OpenGL application developers a simple means of managing OpenGL extensions
and versioning. Examples of these libraries include GLEW (the OpenGL Extension
Wrangler Library) and GLEE (the OpenGL Easy Extension Library).
Many OpenGL extensions, as well as extensions to related APIs like GLU, GLX
and WGL, have been defined by vendors and groups of vendors. The OpenGL Extension
Registry is maintained by SGI and contains specifications for all known extensions,
written as modifications to appropriate specification documents. The registry also defines
naming conventions, guidelines for creating new extensions and writing suitable
extension specifications and other related documentation.
The color plate gives an idea of the kinds of things that can be done with the OpenGL
graphics system. The following list briefly describes the major graphics operations which
OpenGL performs to render an image on the screen.
During these stages, OpenGL might perform other operations, such as eliminating
parts of objects that are hidden by other objects. In addition, after the scene is rasterized
but before it's drawn on the screen, the user can perform some operations on the pixel
data if needed. In some implementations (such as with the X Window System), OpenGL
is designed to work even if the computer that displays the graphics isn't the
computer that runs the graphics program. This might be the case if a user works in a
networked computer environment where many computers are connected to one another
by a digital network.
In this situation, the computer on which the program runs and issues OpenGL drawing
commands is called the client, and the computer that receives those commands and
performs the drawing is called the server.
The format for transmitting OpenGL commands (called the protocol) from the client
to the server is always the same, so OpenGL programs can work across a network even if
the client and server are different kinds of computers. If an OpenGL program isn't
running across a network, then there's only one computer, and it is both the client and the
server.
Commands are always processed in the order in which they are received, although
there may be an indeterminate delay before a command takes effect. This means that each
primitive is drawn completely before any subsequent command takes effect. It also means
that state-querying commands return data that's consistent with complete execution of all
previously issued OpenGL commands.
OpenGL commands use the prefix gl and initial capital letters for each word
making up the command name (glClearColor(), for example). Similarly, OpenGL
defined constants begin with GL_, use all capital letters, and use underscores to separate
words (like GL_COLOR_BUFFER_BIT). Some seemingly extraneous letters are appended
to some command names (for example, the 3f in glColor3f() and glVertex3f()). It's true
that the Color part of the command name glColor3f() is enough to
define the command as one that sets the current color. However, more than one such
command has been defined so that the user can use different types of arguments. In
particular, the 3 part of the suffix indicates that three arguments are given; another
version of the Color command takes four arguments. The f part of the suffix indicates
that the arguments are floating-point numbers. Having different formats allows OpenGL
to accept the user's data in his or her own data format.
Some OpenGL commands accept as many as 8 different data types for their
arguments. The letters used as suffixes to specify these data types for ISO C
implementations of OpenGL are shown in Table 1.1, along with the corresponding
OpenGL type definitions. The particular implementation of OpenGL being used might
not follow this scheme exactly; an implementation in C++ or Ada, for example, wouldn't
need to.
Display Lists
All data, whether it describes geometry or pixels, can be saved in a display list for current
or later use. (The alternative to retaining data in a display list is processing the data
immediately - also known as immediate mode.) When a display list is executed, the
retained data is sent from the display list just as if it were sent by the application in
immediate mode.
Evaluators
All geometric primitives are eventually described by vertices. Parametric curves and
surfaces may be initially described by control points and polynomial functions called
basis functions. Evaluators provide a method to derive the vertices used to represent the
surface from the control points. The method is a polynomial mapping, which can produce
surface normals, texture coordinates, colors, and spatial coordinate values from the
control points.
Per-Vertex Operations
For vertex data, next is the "per-vertex operations" stage, which converts the vertices into
primitives. Some vertex data are transformed by 4 x 4 floating-point matrices. Spatial
coordinates are projected from a position in the 3D world to a position on the screen. If
advanced features are enabled, this stage is even busier. If texturing is used, texture
coordinates may be generated and transformed here. If lighting is enabled, the lighting
calculations are performed using the transformed vertex, surface normal, light source
position, material properties, and other lighting information to produce a color value.
Primitive Assembly
Pixel Operations
While geometric data takes one path through the OpenGL rendering pipeline, pixel data
takes a different route. Pixels from an array in system memory are first unpacked from
one of a variety of formats into the proper number of components. Next the data is scaled,
biased, and processed by a pixel map. The results are clamped and then either written into
texture memory or sent to the rasterization step. If pixel data is read from the frame
buffer, pixel-transfer operations are performed. Then these results are packed into an
appropriate format and returned to an array in system memory. There are special pixel
copy operations to copy data in the framebuffer to other parts of the frame buffer or to the
texture memory. A single pass is made through the pixel transfer operations before the
data is written to the texture memory or back to the frame buffer.
Texture Assembly
An OpenGL application may wish to apply texture images onto geometric objects to
make them look more realistic. If several texture images are used, it's wise to put them
into texture objects so that it is easy to switch among them. Some OpenGL
implementations may have special resources to accelerate texture performance. There
may be specialized, high-performance texture memory. If this memory is available, the
texture objects may be prioritized to control the use of this limited and valuable resource.
Rasterization
Rasterization is the conversion of both geometric and pixel data into fragments. Each
fragment square corresponds to a pixel in the frame buffer. Line and polygon stipples,
line width, point size, shading model, and coverage calculations to support antialiasing
are taken into consideration as vertices are connected into lines or the interior pixels are
calculated for a filled polygon. Color and depth values are assigned for each fragment
square.
Fragment Operations
Before values are actually stored into the frame buffer, a series of operations are
performed that may alter or even throw out fragments. All these operations can be
enabled or disabled. The first operation that may be encountered is texturing, where a
texel is generated from texture memory for each fragment and applied to the fragment.
Then fog calculations may be applied, followed by the scissor test, the alpha test, the
stencil test, and the depth-buffer test. Failing an enabled test may end the processing
of a fragment's square. Then blending, dithering, logical operations, and
masking by a bitmask may be performed. Finally, the thoroughly processed fragment is
drawn into the appropriate buffer.
There are numerous windowing system and interface libraries available for OpenGL, as
well as scene graphs and high-level libraries built on top of OpenGL.
About GLUT
The OpenGL Utility Toolkit (GLUT) is a window-system-independent toolkit for
writing OpenGL programs, written by Mark Kilgard to hide the complexities of differing
window system APIs. GLUT routines use the prefix glut. It implements a simple
windowing application programming interface (API) for OpenGL and makes it
considerably easier to learn about and explore OpenGL programming.
Several libraries are modeled on the functionality of GLUT, providing support for things
like windowing and events, user input, menus, full-screen rendering, and performance
timing.
GLX is used on Unix OpenGL implementations to manage interaction with the X Window
System and to encode OpenGL onto the X protocol stream for remote rendering. GLU is
the OpenGL Utility Library. This is a set of functions to create texture mipmaps from a
base image, map coordinates between screen and object space, and draw quadric surfaces
and NURBS. DRI is the Direct Rendering Infrastructure for coordinating the Linux
kernel, X window system, 3D graphics hardware and an OpenGL-based rendering engine.
GLX Library
GLX 1.3 is used on Unix OpenGL implementations to manage interaction with the X
Window System and to encode OpenGL onto the X protocol stream for remote rendering.
It supports: pixel buffers for hardware accelerated offscreen rendering; read-only
drawables for preprocessing of data in an offscreen window and direct video input; and
FBConfigs, a more powerful and flexible interface for selecting frame buffer
configurations underlying an OpenGL rendering window.
GLU Library
The OpenGL Utility Library (GLU) contains several routines that use lower-level
OpenGL commands to perform such tasks as setting up matrices for specific viewing
orientations and projections, performing polygon tessellation, and rendering surfaces.
This library is provided as part of every OpenGL implementation. GLU routines use the
prefix glu.
GLU 1.2 is the version of GLU that goes with OpenGL 1.1. GLU 1.3 is available and
includes new capabilities corresponding to new OpenGL 1.2 features.
Leading software developers use OpenGL, with its robust rendering libraries, as the
2D/3D graphics foundation for higher-level APIs. Developers leverage the capabilities of
OpenGL to deliver highly differentiated, yet widely supported vertical market solutions.
Quesa is a high level 3D graphics library, released as Open Source under the
LGPL, which implements Apple's QuickDraw 3D API on top of OpenGL. It
supports both retained and immediate mode rendering, an extensible file format,
plug-in renderers, a wide range of high level geometries, hierarchical models, and
a consistent and object-oriented API. Quesa currently supports Mac OS, Linux,
and Windows - ports to Be and Mac OS X are in progress.