CS602 Shorts
Question: When is an angle positive and when is it negative?
Answer: An angle is positive if the terminal ray rotates counterclockwise around the vertex from the initial ray, and negative if the terminal ray rotates clockwise around the vertex from the initial ray.
Question: Explain the concept of lines at right angles.
Answer: Two lines are said to be at right angles if the rotating half line (or ray), moving from its initial position to its final position, describes one quarter of a circle.
Question: What is Quadrant?
Answer: Let X'OX and Y'OY be perpendicular coplanar lines intersecting each other at O. We refer to X'OX as the x-axis and Y'OY as the y-axis. These two lines divide the plane into four equal parts, and each part is called a quadrant. The four quadrants are: XOY - first quadrant, YOX' - second quadrant, X'OY' - third quadrant, Y'OX - fourth quadrant.
Question: What software packages are used to design "Room Layout Design and Architectural Simulations" as in the e-handouts?
Answer: Some well-known applications used for this purpose are: CAD systems -> designing, building and then simulating layouts, used mostly by builders, engineers etc.; 3D Studio Max -> designing of 3D objects; Maya -> designing of 3D objects.
Question: I tried to run the code given in Lecture 13 using Dev-C++ for table drawing, but both codes do not compile?
Answer: This code was written for Borland Turbo C and will not compile with other compilers or languages. Its real purpose is for you to learn from it by looking at how it works and mapping it into your own code in whatever compiler you are using, or you can try Turbo C to compile it. Use this code as a learning tool, like pseudocode.
Question: Explain Virtual Reality Systems?
Answer: Virtual Reality Systems present a computer-generated visual and auditory experience that allows a user to be immersed within a computer-generated "world" for various purposes. Used in conjunction with traditional computer input systems, this can serve, for example, as a powerful design tool allowing a user to see objects that he or she is designing. The application to entertainment or training simulation systems is equally useful, as it allows for the creation of an infinite number of immersive environments to suit any need. The addition of haptic systems to virtual reality will greatly increase its effectiveness at simulating real-world situations. One example is a medical training system using a simulator and virtual reality, where a haptic system provides doctors with the "feel" of virtual patients. In such a medical simulation system, the visual display and the haptic gloves are combined to simulate, in this example, an abdominal aortic aneurysm surgery.
Question: What are the Working principles of Virtual Reality Systems?
Answer: The working principle of dielectric polymers can be summarized as follows: an
elastomeric polymer film that acts as a capacitor is sandwiched between two
compliant electrodes. Two effects occur simultaneously when an electric field is
applied between the two electrodes. The polymer is stretched in surface and
compressed in thickness during actuation. The change in thickness can be used for
mechanical output. However, there is a need to stack many layers of dielectric
films and electrodes to obtain large displacements. On the other hand, the area
expansion can be used as another method for actuation. The reported strains
obtained by pre-stretching the film are much larger than in thickness but a restoring
force is required to ensure the desired boundary conditions on the film. This can be
achieved either by using an antagonistic pair of actuators or return springs. The
following section describes the design of some compliant frames used as a spring
back force to generate a two-way actuator.
Question: What is meant by stencil buffer?
Answer: A stencil buffer is an extra buffer, in addition to the color buffer (pixel buffer) and depth buffer (z-buffer), found on modern computer graphics hardware. The buffer is per pixel and works on integer values, usually with a depth of one byte per pixel. The depth buffer and stencil buffer often share the same area in the RAM of the graphics hardware.
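As a rough illustration, a typical two-pass masking pattern with the stencil buffer in OpenGL might look like this (a sketch only; it assumes a context created with stencil bits, and drawMaskShape()/drawScene() are hypothetical helpers):

    // Sketch: use the stencil buffer as a mask (assumes stencil bits in the context).
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);
    // Pass 1: draw the mask shape, writing 1s into the stencil buffer only.
    glStencilFunc(GL_ALWAYS, 1, 0xFF);                   // always pass, reference value 1
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);            // write 1 where the shape is drawn
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // do not touch the color buffer
    drawMaskShape();                                      // hypothetical helper
    // Pass 2: draw the scene only where the stencil value equals 1.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawScene();                                          // hypothetical helper
    glDisable(GL_STENCIL_TEST);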
Question: What is the logic behind using fractal geometry in computer science? If fractals are modern geometry, then why do we use Euclidean geometry techniques?
Answer: A fractal is generally "a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole", a property called self-similarity. The term was coined in 1975 and was derived from the Latin fractus, meaning "broken" or "fractured". Fractals are modern geometry, and a fractal often has the following features:
• It has a fine structure at arbitrarily small scales.
• It is too irregular to be easily described in traditional language.
• It is self-similar (at least approximately or stochastically).
• It has a Hausdorff dimension which is greater than its topological dimension (although this requirement is not met by space-filling curves such as the Hilbert curve).
• It has a simple and recursive definition.
Because they appear similar at all levels of magnification, fractals are often considered to be infinitely complex (in informal terms). Natural objects that approximate fractals to a degree include clouds, mountain ranges, lightning bolts, coastlines, and snowflakes. However, not all self-similar objects are fractals; for example, the real line is formally self-similar but fails to have other fractal characteristics.
Question: What is difference between quadratic and cubic parametric curves?
Answer: Quadratic parametric curves require three control points, whereas cubic curves require four control points. Obviously, three control points can create only less complex curves: a quadratic curve can have only one bend, because two of its control points represent the starting and ending points, while the third control point in between controls the curvature of the curve.
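A minimal sketch of this difference using de Casteljau evaluation (the Pt struct, the lerp helper, and the control points are illustrative, not from the handouts):

    #include <cstdio>

    struct Pt { double x, y; };

    // Linear interpolation between two control points.
    static Pt lerp(Pt a, Pt b, double t) {
        return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
    }

    // Quadratic Bezier: three control points, a single bend.
    Pt bezier2(Pt p0, Pt p1, Pt p2, double t) {
        return lerp(lerp(p0, p1, t), lerp(p1, p2, t), t);
    }

    // Cubic Bezier: four control points, allowing an extra inflection.
    Pt bezier3(Pt p0, Pt p1, Pt p2, Pt p3, double t) {
        return lerp(bezier2(p0, p1, p2, t), bezier2(p1, p2, p3, t), t);
    }

    int main() {
        Pt q = bezier2({0,0}, {1,2}, {2,0}, 0.5);          // midpoint of a quadratic arc
        Pt c = bezier3({0,0}, {1,2}, {2,-2}, {3,0}, 0.5);  // midpoint of an S-shaped cubic
        std::printf("quadratic: (%.2f, %.2f)  cubic: (%.2f, %.2f)\n", q.x, q.y, c.x, c.y);
        return 0;
    }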
Question: What is the main difference between Forward Scattering and Back Scattering?
Answer: Forward scattering is the normal scattering of light after interacting with a surface, but back scattering results only when the surface is very rough and irregular, so the light is reflected back in the direction it came from.
Question: What is meant by Blending?
Answer: Blending determines how much the background and foreground colors are mixed; for example, if we set the alpha value to 100%, only the front color is visible.
Question: What is the perspective matrix?
Answer: The perspective matrix is used to map an actual point onto the monitor screen according to our screen settings, i.e., how we want to project that point on the screen (perspective projection).
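For the blending question above, here is a rough OpenGL sketch of how alpha blending is typically enabled (assumes an existing rendering context; the color and alpha values are arbitrary):

    // Sketch: standard "source over" alpha blending in OpenGL.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    // With alpha = 1.0 (100%) only the foreground color is visible;
    // with alpha = 0.5 the foreground and background are mixed half and half.
    glColor4f(1.0f, 0.0f, 0.0f, 0.5f);  // half-transparent red
    // ... draw the foreground geometry here ...
    glDisable(GL_BLEND);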
Question: What is Fudge Factor?
Answer: By fudge factor we mean a value that solves our problem and helps us avoid calculations; the ambient light term we use is such a fudge factor.
Question: Please explain the Radiosity method?
Answer: The radiosity method calculates the ambient light that will be emitted from different materials, using scientific calculations.
Question: Explain the attenuation factor?
Answer: The attenuation factor means how much light is attenuated after covering a certain distance d, and it is given by the relation that light intensity is proportional to 1/d², meaning light decreases as distance increases.
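A small sketch of how such an attenuation factor might be applied; the 1/d² relation comes from the answer above, while the function name and the clamp at small distances are illustrative assumptions:

    #include <cstdio>

    // Illustrative attenuation: intensity falls off with the square of distance d.
    double attenuate(double intensity, double d) {
        if (d < 1.0) d = 1.0;          // avoid blowing up at very small distances
        return intensity / (d * d);    // intensity proportional to 1/d^2
    }

    int main() {
        std::printf("%.3f\n", attenuate(1.0, 1.0));   // 1.000
        std::printf("%.3f\n", attenuate(1.0, 2.0));   // 0.250 - doubling d quarters the light
        std::printf("%.3f\n", attenuate(1.0, 10.0));  // 0.010
        return 0;
    }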
Question: Is it true that features are not prominent under ambient light because of its solid color?
Answer: Ambient light is due to the atmosphere, and in most cases we assume that a certain amount of ambient light is present in the scene, because calculating it for every point consumes a lot of time and computational cost.
Question: Why is Bresenham's line drawing algorithm more efficient than the DDA line drawing algorithm?
Answer: Bresenham's algorithm uses integer arithmetic whereas the DDA uses floating point arithmetic, and as you know floating point arithmetic is much slower than integer arithmetic; that is why Bresenham's algorithm is more efficient than the DDA algorithm.
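A minimal integer-only sketch of Bresenham's algorithm for the first octant (0 <= slope <= 1); the setPixel() helper is hypothetical, and the handouts' version may differ in detail:

    #include <cstdio>

    // Hypothetical pixel writer; a real program would write into a frame buffer here.
    static void setPixel(int x, int y) { std::printf("(%d,%d)\n", x, y); }

    // Bresenham line for the first octant: only integer additions and comparisons.
    void bresenhamLine(int x0, int y0, int x1, int y1) {
        int dx = x1 - x0, dy = y1 - y0;
        int d = 2 * dy - dx;              // initial decision variable
        int y = y0;
        for (int x = x0; x <= x1; ++x) {
            setPixel(x, y);
            if (d > 0) { y += 1; d -= 2 * dx; }   // step up when the error says so
            d += 2 * dy;
        }
    }

    int main() { bresenhamLine(0, 0, 8, 3); return 0; }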
Question: What is the difference between bitmap graphics & vector graphics?
Answer: In vector graphics we use lines (in fact, vectors) to represent and store images, so they have a smaller size; on the other hand, in bitmap graphics we use pixels to store and represent the image data.
Question: What is meant by Resolution?
Answer: Resolution (in bits) is the number of horizontal pixels * the number of vertical pixels * the number of bits used to represent each pixel.
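For example, a quick worked computation (the 640x480 at 24 bits-per-pixel figures are illustrative, not from the handouts):

    #include <cstdio>

    int main() {
        // 640 x 480 pixels at 24 bits per pixel:
        long long bits  = 640LL * 480 * 24;   // 7,372,800 bits
        long long bytes = bits / 8;           // 921,600 bytes (about 900 KB)
        std::printf("%lld bits = %lld bytes\n", bits, bytes);
        return 0;
    }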
Question: What is the basic difference between flat shading and Phong shading?
Answer: Flat shading produces shading in a flat manner (it produces constant shading everywhere on the object). In Phong shading we take the viewer's position into account, so the shading output produced is more natural and can represent shiny spots that are visible from certain angles on a reflective object's surface.
Question: How many colors are used in the image if N bits are required to store a pixel?
Answer: 2^N colors can be represented per pixel; for example, 8 bits per pixel gives 2^8 = 256 colors.
Question: What is the difference between bitmap graphics & colored graphics?
Answer: Bitmap graphics store per-pixel information. How much per-pixel information there is depends upon the image type; for example, a black-and-white image needs only one bit per pixel while a colored image needs 8, 16, or 24 bits per pixel.
Question: What are Evaluators?
Answer: Evaluators are used to compute points on a curve or surface, and using them we can draw curves or surfaces. One-dimensional evaluators are used to draw curves, for example a Bezier curve. Two-dimensional evaluators are used to draw surfaces, for example a Bezier surface.
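A sketch of a one-dimensional evaluator drawing a cubic Bezier curve with fixed-function OpenGL (assumes an existing GL context; the control points are arbitrary):

    // Sketch: one-dimensional evaluator for a cubic Bezier curve.
    GLfloat ctrl[4][3] = {
        {-4.0f, -4.0f, 0.0f}, {-2.0f, 4.0f, 0.0f},
        { 2.0f, -4.0f, 0.0f}, { 4.0f, 4.0f, 0.0f}
    };
    glMap1f(GL_MAP1_VERTEX_3, 0.0f, 1.0f, 3, 4, &ctrl[0][0]);
    glEnable(GL_MAP1_VERTEX_3);

    glBegin(GL_LINE_STRIP);
    for (int i = 0; i <= 30; ++i)
        glEvalCoord1f((GLfloat)i / 30.0f);   // evaluate a point on the curve at parameter t
    glEnd();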
Question: Explain what "FRACTALS" are and their use in computer graphics?
Answer: Fractals are geometric patterns that are repeated at ever smaller scales to produce irregular shapes and surfaces that cannot be represented by classical geometry. Fractals are used in computer modeling of irregular patterns and structures in nature. Use of fractals: fractals are used to represent the kind of complex natural objects that cannot be represented by ordinary classical geometry.
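As a tiny illustration of the "simple and recursive definition" idea mentioned earlier, here is a sketch that computes the length of successive Koch-curve approximations (purely illustrative, not from the handouts):

    #include <cstdio>

    // Each recursion level replaces a segment with 4 segments of 1/3 the length,
    // so a Koch curve's length grows by a factor of 4/3 at every level.
    double kochLength(double segment, int depth) {
        if (depth == 0) return segment;
        return 4.0 * kochLength(segment / 3.0, depth - 1);
    }

    int main() {
        for (int d = 0; d <= 5; ++d)
            std::printf("depth %d: length %.4f\n", d, kochLength(1.0, d));
        return 0;
    }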
Question: What is antialiasing?
Answer: Antialiasing is a technique for removing the discontinuous, jagged effects produced at the boundaries (edges) of shapes due to low resolution (large pixel size). In this technique we change the color of some pixels surrounding the shape boundary so that the image looks more natural and real.
Question: What is the opacity of a color?
Answer: Opaque objects do not allow light to pass through them, for example walls, metals and wood; this property of theirs is called opacity. Opacity and transparency are converse to each other: transparency is the measure of how much light is passed through an object.
Question: What is hue?
Answer: Hue is the measure of the pureness of a color, and saturation is the measure (amount) of hue.
Question: What are the local and global coordinate systems?
Answer: Everything in the real world is represented by x, y, z coordinates; these are the global values of these three variables. Within this world there are different objects that have their own dimensions along x, y, z; these values belong to a particular object, represent only that object, and are called the local coordinates of that object.
Question: How can the performance of the Bresenham line algorithm be improved?
Answer: Several techniques can be used to improve the performance of line-drawing procedures. These are important because line drawing is one of the fundamental primitives used by most other rendering applications, so an improvement in the speed of line drawing results in an overall improvement of most graphical applications. Removing procedure calls using macros or inline code can produce improvements. Unrolling loops may produce longer pieces of code, but these may run faster. The use of separate x and y coordinates can be discarded in favour of direct frame buffer addressing.
Question: What is the concept behind clipping?
Answer: It is desirable to restrict the effect of graphics primitives to a sub-region of the canvas, to protect other portions of the canvas. All primitives are clipped to the boundaries of this clipping rectangle; that is, primitives lying outside the clip rectangle are not drawn.
Question: What do you know about clipping individual points?
Answer: If the x coordinate boundaries of the clipping rectangle are Xmin and Xmax, and the y coordinate boundaries are Ymin and Ymax, then the following inequalities must be satisfied for a point at (X, Y) to be inside the clipping rectangle: Xmin < X < Xmax and Ymin < Y < Ymax. If any of the four inequalities does not hold, the point is outside the clipping rectangle.
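A direct translation of these inequalities into code (the function and parameter names are illustrative, not from the handouts):

    // Returns true if point (x, y) lies strictly inside the clipping rectangle.
    bool pointInsideClip(double x, double y,
                         double xmin, double xmax, double ymin, double ymax) {
        return (xmin < x && x < xmax) && (ymin < y && y < ymax);
    }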
Question: What is clipping?
Answer: In rendering, clipping refers to an optimization where the computer only draws things that might be visible to the viewer.
Question: What is the importance of Clipping in Games?
Answer: Good clipping strategy is important in the development of video games in order to maximize the game's frame rate and visual quality. Despite GPU chips that are faster every year, it remains computationally expensive to transform, texture, and shade polygons, especially with the multiple texture and shading passes common today. Hence, game developers must live within a certain "budget" of polygons that can be drawn each video frame.
Question: How can Lights be understood in Computer Graphics?
Answer: In order to understand how an object's color is determined, you'll need to understand the parts that come into play to create the final color. First, you need a source of illumination, typically in the form of a light source in your scene. A light has the properties of color (an RGB value) and intensity. Typically, these are multiplied to give scaled RGB values. Lights can also have attenuation, which means that their intensity is a function of the distance from the light to the surface. Lights can additionally be given other properties such as a shape (e.g., spotlights) and position (local or directional), but that's more in the implementation rather than the math of lighting effects.
Question: Why shaders have importance in 3D Computer Graphics?
Answer: One of the nice things about shaders is that you can create your own for whatever special effects you are looking for. In fact, one of the reasons that shaders have finally made it into mainstream 3D computer graphics is the flexibility that they provide, which can finally be realized in real time on consumer-grade hardware. Unfortunately, with power comes responsibility: the responsibility to understand how lighting and shading in computer graphics is traditionally done and how you can do it yourself (or do it differently) in a shader. But first, you'll need an understanding of the mathematics behind lighting and shading.
Question: What can be done using Shaders?
Answer: With shaders you can:
* Perform basic geometry transformations
* Warp the geometry
* Blend or skin geometry
* Generate color information (specular and diffuse)
* Tween vertices between transformation matrices
* Generate texture coordinates
* Transform texture coordinates
* Size point sprites
* Use a custom illumination model
* Perform nonphotorealistic rendering
* Perform bump, environment, and specular mapping
* Perform your own texture-blending operations
* Clip pixels
In fact, your options are limited only by your ingenuity and the size of the shader buffer (which limits how complicated your shader can be).
Question: What does the term "reflection" represent?
Answer: The term reflection actually encompasses three entirely different types of optical phenomena. These three kinds of reflection are specular reflection (like a mirror), diffuse reflection (often called Lambertian), and reflexive reflection (or retro-reflection).
Question: How can rendering be elaborated in 3D Computer Graphics?
Answer: Rendering is all about simulating the real world as much as possible. If you are rendering something sitting on your table, think about everything that is going on in the room. What is around the object? Where is the light coming from? Is there light coming from different places around the room? How bright is it? What color is the light? What do the shadows look like? What is reflecting in the object? What kinds of highlights are on the object? All of these things are the result of lighting and the environment. Novice users often don't take the time to consider these things and then try to replicate them. It's no wonder most beginners' renderings don't look very good.
Question: What is Diffuse Reflection?
Answer: The ideal diffuse reflector (or Lambertian reflector): incident light is reflected with equal intensity in all directions, regardless of viewing position. How much light is reflected depends on the "reflectiveness" of the surface; a highly reflective surface reflects most of the light.
Question: What is Specular Reflection?
Answer: Specular reflection is for shiny objects. In contrast to diffuse reflection, specular reflection is not constant at all angles from the surface.
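A rough sketch that puts the diffuse and specular terms together, assuming normalized vectors N (surface normal), L (direction to the light) and V (direction to the viewer); the coefficient names ka, kd, ks, and shininess follow the usual ambient/diffuse/specular convention and are illustrative, not taken from the handouts:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Simple Phong-style intensity for one light (all vectors assumed normalized).
    double phongIntensity(Vec3 N, Vec3 L, Vec3 V,
                          double ka, double kd, double ks, double shininess) {
        double ndotl = dot(N, L);
        if (ndotl <= 0.0) return ka;  // light is behind the surface: ambient only
        // Reflect L about N: R = 2(N.L)N - L
        Vec3 R = { 2*ndotl*N.x - L.x, 2*ndotl*N.y - L.y, 2*ndotl*N.z - L.z };
        double spec = std::pow(std::max(0.0, dot(R, V)), shininess); // view-dependent highlight
        return ka + kd * ndotl + ks * spec;   // ambient + diffuse (Lambert) + specular
    }

    int main() {
        Vec3 N{0,0,1}, L{0,0,1}, V{0,0,1};    // light and viewer straight above the surface
        std::printf("%.3f\n", phongIntensity(N, L, V, 0.1, 0.7, 0.2, 32.0));  // 1.000
        return 0;
    }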
Question: What is Attenuation?
Answer: Attenuate means “to weaken”. Light loses energy as it travels further.
Question: What is a point?
Answer: Points are most often considered within the framework of Euclidean geometry, where they are one of the fundamental objects. Euclid originally defined the point vaguely, as "that which has no part". In two-dimensional Euclidean space, a point is represented by an ordered pair of coordinates (x, y).
Question: What is a circle?
Answer: A circle is a simple shape of Euclidean geometry consisting of those points in a plane which are at the same distance from a given point called the centre. The common distance of the points of a circle from its centre is called its radius.
Question: What does the .gl or .GL file format have to do with OpenGL?
Answer: .gl files have nothing to do with OpenGL, but are sometimes confused with it. .gl is a file format for images which has no relationship to IRIS GL or OpenGL.
Question: What is the GLUT toolkit?
Answer: GLUT is a portable toolkit which performs window and event operations to support OpenGL rendering. GLUT version 2.0 has:
o window functions, including multiple windows for OpenGL rendering
o callback-driven event processing
o sophisticated input devices, including dial and button box, tablet, Spaceball(TM)
o idle routines and timers
o a simple cascading pop-up menu facility
o routines to generate wire and solid objects
o bitmap and stroke fonts
o requests and queries for multisample and stereo windows
o OpenGL extension query support
Question: What is GLUT?
Answer: GLUT is the OpenGL Utility Toolkit, a window-system-independent toolkit for writing OpenGL programs. It implements a simple windowing application programming interface (API) for OpenGL, and it makes it considerably easier to learn about and explore OpenGL programming. Libraries modeled on the functionality of GLUT provide support for things like windowing and events, user input, menuing, full-screen rendering, and performance timing.
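A minimal, self-contained GLUT program in this spirit (a sketch; the GL/glut.h header path and the window size are typical but may differ by platform):

    #include <GL/glut.h>

    // Draw a single colored triangle each time the window needs repainting.
    static void display(void) {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);
            glColor3f(1, 0, 0); glVertex2f(-0.5f, -0.5f);
            glColor3f(0, 1, 0); glVertex2f( 0.5f, -0.5f);
            glColor3f(0, 0, 1); glVertex2f( 0.0f,  0.5f);
        glEnd();
        glutSwapBuffers();
    }

    int main(int argc, char **argv) {
        glutInit(&argc, argv);                        // window and event handling come from GLUT
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutInitWindowSize(400, 400);
        glutCreateWindow("GLUT demo");
        glutDisplayFunc(display);                     // register the display callback
        glutMainLoop();                               // enter the event loop
        return 0;
    }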
Question: What is GLU?
Answer: GLU is the OpenGL Utility Library. This is a set of functions to create texture mipmaps from a base image, map coordinates between screen and object space, and draw quadric surfaces and NURBS.
Question: What do I need to compile and run OpenGL programs?
Answer: The following applies specifically to C/C++ usage. To compile and link OpenGL programs, you'll need OpenGL header files and libraries. To run OpenGL programs you may need shared or dynamically loaded OpenGL libraries, or a vendor-specific OpenGL Installable Client Driver (ICD) specific to your device. Also, you may need include files and libraries for the GLU and GLUT libraries. Where you get these files and libraries will depend on which OpenGL system platform you're using. OpenGL.org maintains a list of links to OpenGL utility libraries; you can download most of what you need from there. Under Microsoft Windows 9x, NT, and 2000: if you're using Visual C++, your compiler comes with include files for OpenGL and GLU, as well as .lib files to link with. For GLUT, download these files; install glut.h in your compiler's include directory, glut32.lib in your compiler's lib directory, and glut32.dll in your Windows system directory (c:\windows\system for Windows 9x, or c:\winnt\system32 for Windows NT/2000). In summary, a fully installed Windows OpenGL development environment will look like this:
Location: [compiler]\include\gl -- Files: gl.h, glu.h, glut.h
Location: [compiler]\lib -- Files: opengl32.lib, glu32.lib, glut32.lib
Location: [system] -- Files: opengl32.dll, glu32.dll, glut32.dll
where [compiler] is your compiler directory (such as c:\Program Files\Microsoft Visual Studio\VC98) and [system] is your Windows 9x/NT/2000 system directory (such as c:\winnt\system32 or c:\windows\system).
Question: What is GLUT? How is it different from OpenGL?
Answer: Because OpenGL doesn't provide routines for interfacing with a windowing system or input devices, an application must use a variety of other platform-specific routines for this purpose. The result is nonportable code. Furthermore, these platform-specific routines tend to be full-featured, which complicates construction of small programs and simple demos. GLUT is a library that addresses these issues by providing a platform-independent interface to window management, menus, and input devices in a simple and elegant manner. Using GLUT comes at the price of some flexibility.
Question: What is GLU? How is it different from OpenGL?
Answer: If you think of OpenGL as a low-level 3D graphics library, think of GLU as adding some higher-level functionality not provided by OpenGL. Some of GLU's features include:
o Scaling of 2D images and creation of mipmap pyramids
o Transformation of object coordinates into device coordinates and vice versa
o Support for NURBS surfaces
o Support for tessellation of concave or bow-tie polygonal primitives
o Specialty transformation matrices for creating perspective and orthographic projections, positioning a camera, and selection/picking
o Rendering of disk, cylinder, and sphere primitives
o Interpreting OpenGL error values as ASCII text
The best source of information on GLU is the OpenGL red and blue books and the GLU specification, which you can obtain from the OpenGL.org Web page.
Question: How does the camera work in OpenGL?
Answer: As far as OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye-space coordinate (0., 0., 0.). To give the appearance of moving the camera, your OpenGL application must move the scene with the inverse of the camera transformation.
Question: How can I move my eye, or camera, in my scene?
Answer: OpenGL doesn't provide an interface to do this using a camera model. However, the GLU library provides the gluLookAt() function, which takes an eye position, a position to look at, and an up vector, all in object-space coordinates. This function computes the inverse camera transform according to its parameters and multiplies it onto the current matrix stack.
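For example, at the start of a display routine (a sketch; the eye and look-at values are arbitrary):

    // Sketch: place the "camera" at (0, 2, 5), looking at the origin, with +Y up.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 2.0, 5.0,   // eye position
              0.0, 0.0, 0.0,   // point to look at
              0.0, 1.0, 0.0);  // up vector
    // ... modeling transformations and drawing follow ...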
Question: Where should my camera go, the ModelView or Projection matrix?
Answer: The GL_PROJECTION matrix should contain only the projection transformation calls it needs to transform eye-space coordinates into clip coordinates. The GL_MODELVIEW matrix, as its name implies, should contain modeling and viewing transformations, which transform object-space coordinates into eye-space coordinates. Remember to place the camera transformations on the GL_MODELVIEW matrix and never on the GL_PROJECTION matrix. Think of the projection matrix as describing the attributes of your camera, such as field of view, focal length, fish-eye lens, etc. Think of the ModelView matrix as where you stand with the camera and the direction you point it.
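A typical split, e.g. in a reshape callback and at the start of drawing (a sketch; width, height, the field of view, and the clipping distances are illustrative assumptions):

    // Projection matrix: the camera "lens" only (usually set on window reshape).
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, (double)width / (double)height, 0.1, 100.0);

    // ModelView matrix: where the camera stands and where the objects go.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 2.0, 5.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);
    glTranslatef(1.0f, 0.0f, 0.0f);   // modeling transformation for one object
    // drawObject();                  // hypothetical drawing call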
Question: Given the current ModelView matrix, how can I determine the object-space location of the camera?
Answer: The "camera" or viewpoint is at (0., 0., 0.) in eye space. When you turn this into a vector [0 0 0 1] and multiply it by the inverse of the ModelView matrix, the resulting vector is the object-space location of the camera. OpenGL doesn't let you inquire (through a glGet* routine) the inverse of the ModelView matrix, so you'll need to compute the inverse with your own code.
Question: How do I get a specified point (XYZ) to appear at the center of the scene?
Answer: gluLookAt() is the easiest way to do this. Simply set the X, Y, and Z values of your
point as the fourth, fifth, and sixth parameters to gluLookAt().
Question: How do I draw 3D objects on a 2D screen?
Answer: There are many ways to do this. Some approaches map the viewing rectangle onto the scene, by shooting rays through each pixel center and assigning color according to the object hit by the ray. Other approaches map the scene onto the viewing rectangle, by drawing each object into the region, keeping track of which object is in front of which. The mapping mentioned above is also referred to as "projection", and the two most popular projections are perspective projection and parallel projection. For example, to do a parallel projection of a scene onto a viewing rectangle, you can just discard the Z coordinate (a perspective projection instead divides X and Y by the depth), and "clip" the objects to the viewing rectangle (discard portions that lie outside the region).
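A minimal sketch of both projections applied to a single point, assuming the viewer is at the origin looking along +Z and the perspective projection plane sits at distance d (the names and values are illustrative):

    #include <cstdio>

    struct P3 { double x, y, z; };
    struct P2 { double x, y; };

    // Parallel (orthographic) projection: simply drop the Z coordinate.
    P2 projectParallel(P3 p) { return { p.x, p.y }; }

    // Perspective projection onto a plane at distance d: divide by depth.
    P2 projectPerspective(P3 p, double d) { return { d * p.x / p.z, d * p.y / p.z }; }

    int main() {
        P3 p = { 2.0, 1.0, 4.0 };
        P2 a = projectParallel(p);
        P2 b = projectPerspective(p, 1.0);
        std::printf("parallel: (%.2f, %.2f)  perspective: (%.2f, %.2f)\n", a.x, a.y, b.x, b.y);
        return 0;
    }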
Question: How do I draw a circle as a Bezier (or B-Spline) curve?
Answer: The short answer is, "You can't." Unless you use a rational spline you can only approximate a circle. The approximation may look acceptable, but it is sensitive to scale: magnify the scale and the error of approximation magnifies. Deviations from circularity that were not visible in the small can become glaring in the large.