5cs4 4 CGMT Guess Paper Solution 8492
1. Education and Training
2. Use in Biology
3. Computer-Generated Maps
4. Architect
5. Entertainment
6. Computer Art
7. Presentation Graphics
8. Animation
9. Printing Technology
10. Visualization
Ans: Computer graphics are pictures and films created using computers. Usually, the term refers to computer-generated image
data created with the help of specialized graphical hardware and software. It is a vast and recently developed area of computer
science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often
abbreviated as CG, though sometimes erroneously referred to as computer-generated imagery (CGI).
Ans: Computer graphics traditionally refers to the process of creating images from abstract models. A computer game, for
example, might internally keep track of Mario as a large list of points, where each point has three numbers representing its (x, y, z)
coordinates. Then, given the coordinates of the camera and the direction it is facing, the computer calculates the color at each row
and column of the final image of Mario that you see on your screen.
Image processing refers to the process of starting with an existing image and refining it in some way to obtain another image. For
example, if you take a picture with your camera, you might use an image processing algorithm to make the colors more
vibrant, remove blur, or increase the resolution. The output of an image processing algorithm is another image.
Ans: Raster scan and random scan are the mechanisms used in displays for rendering the picture of an object on the screen of the
monitor. The main difference between raster scan and random scan lies in how the picture is drawn: a raster scan sweeps the
electron beam across the entire screen, one line at a time, moving downward, whereas in a
random scan the electron beam is directed only to those regions of the screen where the picture actually lies.
Ans: In digital imaging, a pixel or picture element is a physical point in a raster image, or the smallest addressable element in
an all points addressable display device; so it is the smallest controllable element of a picture represented on the screen.
Each of the pixels that represent an image stored inside a computer has a pixel value which describes how bright that pixel is,
and/or what color it should be. In the simplest case of binary images, the pixel value is a 1-bit number indicating either foreground
or background. For a grayscale image, the pixel value is a single number that represents the brightness of the pixel. The most
common pixel format is the byte image, where this number is stored as an 8-bit integer giving a range of possible values from 0 to
255. Typically zero is taken to be black, and 255 is taken to be white. Values in between make up the different shades of gray.
Cathode Ray Tube (CRT) is a computer display screen, used to display the output in a standard composite video signal. The
working of CRT depends on movement of an electron beam which moves back and forth across the back of the screen. The source
of the electron beam is the electron gun; the gun is located in the narrow, cylindrical neck at the extreme rear of a CRT which
produces a stream of electrons through thermionic emission. Usually, a CRT has a fluorescent screen to display the output signal. A
simple CRT is shown below.
The operation of a CRT monitor is basically very simple. A cathode ray tube consists of one or more electron guns, possibly
internal electrostatic deflection plates, and a phosphor target. A color CRT has three electron beams, one for each primary color
(Red, Green, and Blue), as clearly shown in the figure. The electron beam produces a tiny, bright visible spot when it strikes the
phosphor-coated screen. In every monitor device the entire front area of the tube is scanned repetitively and systematically in a
fixed pattern called a raster. An image (raster) is displayed by scanning the electron beam across the screen. Because the glow of
the phosphor begins to fade after a short time, the image needs to be refreshed continuously. The CRT thus produces a color image
from the three primary colors. A refresh rate of about 50 Hz or higher is used to eliminate flicker by continuously redrawing the screen.
Ans:
1. Education and Training: Computer-generated model of the physical, financial and economic system is often used as educational
aids. Model of physical systems, physiological system, population trends or equipment can help trainees to understand the operation
of the system.
For some training applications, special systems are designed. For example: the Flight Simulator.
Flight Simulator: It helps in giving training to the pilots of airplanes. These pilots spend much of their training not in a real aircraft
but on the ground at the controls of a Flight Simulator.
Advantages:
1. Fuel Saving
2. Safety
3. Ability to familiarize trainees with a large number of the world's airports.
2. Use in Biology: Molecular biologists can display a picture of molecules and gain insight into their structure with the help of
computer graphics.
3. Computer-Generated Maps: Town planners and transportation engineers can use computer-generated maps which display data
useful to them in their planning work.
4. Architect: Architects can explore alternative solutions to design problems at an interactive graphics terminal. In this way, they
can test many more solutions than would be possible without the computer.
5. Presentation Graphics: Example of presentation Graphics are bar charts, line graphs, pie charts and other displays showing
relationships between multiple parameters. Presentation Graphics is commonly used to summarize
o Financial Reports
o Statistical Reports
o Mathematical Reports
o Scientific Reports
o Economic Data for research reports
o Managerial Reports
o Consumer Information Bulletins
o And other types of reports
6. Computer Art: Computer graphics is also used in the field of commercial art. It is used to generate television and advertising
commercials.
7. Entertainment: Computer Graphics are now commonly used in making motion pictures, music videos and television shows.
8. Visualization: It is used by scientists, engineers, medical personnel, and business analysts for the study of large
amounts of information.
9. Educational Software: Computer Graphics is used in the development of educational software for making computer-aided
instruction.
10. Printing Technology: Computer Graphics is used for printing technology and textile design.
Q. 4 Explain the functions of display processor in raster scan display. Compare the merits and demerits of raster and vector
devices.
Ans: Raster Scan : In a raster scan system, the electron beam is swept across the screen, one row at a time from top to bottom. As
the electron beam moves across each row, the beam intensity is turned on and off to create a pattern of illuminated spots.
Picture definition is stored in memory area called the Refresh Buffer or Frame Buffer. This memory area holds the set of
intensity values for all the screen points. Stored intensity values are then retrieved from the refresh buffer and “painted” on the
screen one row (scan line) at a time as shown in the following illustration.
Each screen point is referred to as a pixel (picture element) or pel. At the end of each scan line, the electron beam returns to the
left side of the screen to begin displaying the next scan line.
Random Scan (Vector Scan): In this technique, the electron beam is directed only to the part of the screen where the picture is to be
drawn rather than scanning from left to right and top to bottom as in raster scan. It is also called vector display, stroke-writing
display, or calligraphic display.
Picture definition is stored as a set of line-drawing commands in an area of memory referred to as the refresh display file. To
display a specified picture, the system cycles through the set of commands in the display file, drawing each component line in turn.
After all the line-drawing commands are processed, the system cycles back to the first line command in the list.
Random-scan displays are designed to draw all the component lines of a picture 30 to 60 times each second.
Ans: (i) Resolution-- In computers, resolution is the number of pixels (individual points of color) contained on a display monitor,
expressed in terms of the number of pixels on the horizontal axis and the number on the vertical axis. The sharpness of the image on
a display depends on the resolution and the size of the monitor. The same pixel resolution will be sharper on a smaller monitor and
gradually lose sharpness on larger monitors because the same number of pixels are being spread out over a larger number of inches.
A given computer display system will have a maximum resolution that depends on its physical ability to focus light (in which case
the physical dot size - the dot pitch - matches the pixel size) and usually several lesser resolutions. For example, a display system
that supports a maximum resolution of 1280 by 1023 pixels may also support 1024 by 768, 800 by 600, and 640 by 480 resolutions.
Note that on a given size monitor, the maximum resolution may offer a sharper image but be spread across a space too small to read
well.
(ii) Flickering --- Flickering is the display of one image over the top of another in rapid succession; the result of this is screen flicker,
where one image can be seen briefly before being replaced by another.
(iii) Interlacing-- Interlacing (also known as interleaving) is a method of encoding a bitmap image such that a person who has
partially received it sees a degraded copy of the entire image. When communicating over a slow communications link, this is often
preferable to seeing a perfectly clear copy of one part of the image, as it helps the viewer decide more quickly whether to abort or
continue the transmission. Interlacing is a form of incremental decoding, because the image can be loaded incrementally. Another
form of incremental decoding is progressive scan. In progressive scan the loaded image is decoded line by line, so instead of
becoming incrementally clearer it becomes incrementally larger. The main difference between the interlace concept in bitmaps and
in video is that even progressive bitmaps can be loaded over multiple frames. For example: Interlaced GIF is a GIF image that
seems to arrive on your display like an image coming through a slowly opening Venetian blind. A fuzzy outline of an image is
gradually replaced by seven successive waves of bit streams that fill in the missing lines until the image arrives at its full resolution.
Interlaced graphics were once widely used in web design and before that in the distribution of graphics files over bulletin board
systems and other low-speed communications methods. The practice is much less common today, as common broadband internet
connections allow most images to be downloaded to the user's screen nearly instantaneously, and interlacing is usually an inefficient
method of encoding images.
(iv) Refreshing ----The refresh rate (most commonly the "vertical refresh rate", "vertical scan rate" for cathode ray tubes) is the
number of times in a second that display hardware updates its buffer. This is distinct from the measure of frame rate in that the
refresh rate includes the repeated drawing of identical frames, while frame rate measures how often a video source can feed an entire
frame of new data to a display. For example, most movie projectors advance from one frame to the next one 24 times each second.
But each frame is illuminated two or three times before the next frame is projected using a shutter in front of its lamp. As a result,
the movie projector runs at 24 frames per second, but has a 48 or 72 Hz refresh rate. On cathode ray tube (CRT) displays, increasing
the refresh rate decreases flickering, thereby reducing eye strain. However, if a refresh rate is specified that is beyond what is
recommended for the display, damage to the display can occur. For computer programs or telemetry, the term is also applied to how
frequently a datum is updated with a new external value from another source.
Ans: (i) Joystick: A joystick is also a pointing device, used to move the cursor position on a monitor screen. It is a stick
having a spherical ball at both its lower and upper ends. The lower spherical ball moves in a socket. The joystick can be moved in
all four directions.
The function of a joystick is similar to that of a mouse. It is mainly used in Computer-Aided Design (CAD) and for playing computer
games.
(ii) Scanner: A scanner is an input device which works much like a photocopy machine. It is used when some information is
available on paper and is to be transferred to the hard disk of the computer for further manipulation.
Scanner captures images from the source which are then converted into the digital form that can be stored on the disc. These images
can be edited before they are printed.
(iii) Light pen: A light pen is a pointing device which is similar to a pen. It is used to select a displayed menu item or draw pictures
on the monitor screen. It consists of a photocell and an optical system placed in a small tube.
When the light pen's tip is moved over the monitor screen and the pen button is pressed, its photocell sensing element detects the
screen location and sends the corresponding signal to the CPU.
(iv) Trackball: A trackball is an input device that is mostly used in notebook or laptop computers, instead of a mouse. This is a ball
which is half inserted in a socket; by moving fingers on the ball, the pointer can be moved.
Since the whole device is not moved, a track ball requires less space than a mouse. A track ball comes in various shapes like a ball, a
button and a square.
Unit 2:
Short Answers: (2 Marks Each)
Q. 1 What is Line? Write equations for line.
Ans: An important topic of high school algebra is "the equation of a line." This means an equation in x and y whose solution set is a
line in the (x,y) plane.
y = mx + b.
This in effect uses x as a parameter and writes y as a function of x: y = f(x) = mx+b. When x = 0, y = b and the point (0,b) is the
intersection of the line with the y-axis.
Thinking of a line as a geometrical object and not the graph of a function, it makes sense to treat x and y more evenhandedly. The
general equation for a line (normal form) is
ax + by = c,
with the stipulation that at least one of a or b is nonzero. This can easily be converted to slope-intercept form by solving for y:
y = (-a/b)x + c/b,
except for the special case b = 0, when the line is parallel to the y-axis.
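As a quick illustration (a minimal Python sketch; the function name normal_to_slope_intercept is our own, not from the text):

def normal_to_slope_intercept(a, b, c):
    """Convert a line ax + by = c to slope-intercept form y = mx + k.
    Returns (m, k), or None when b == 0 (line parallel to the y-axis)."""
    if b == 0:
        return None  # vertical line x = c/a has no slope-intercept form
    return (-a / b, c / b)

# Example: 2x + 4y = 8  ->  y = -0.5x + 2
print(normal_to_slope_intercept(2, 4, 8))  # (-0.5, 2.0)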
Ans:
BASIS FOR COMPARISON    DDA ALGORITHM                                   BRESENHAM ALGORITHM
Arithmetic              Uses floating-point values                      Uses only integer values
Operations              Uses multiplication and division                Uses only addition and subtraction
Speed                   Slower, due to floating-point calculations      Faster, due to integer arithmetic
Accuracy                Less accurate; round-off errors accumulate      More accurate and efficient
Ans: Antialiasing is a technique used in computer graphics to remove the aliasing effect. The aliasing effect is the appearance of
jagged edges or “jaggies” in a rasterized image (an image rendered using pixels). The problem of jagged edges technically occurs
due to distortion of the image when scan conversion is done with sampling at a low frequency, which is also known as
undersampling. Aliasing occurs when real-world objects, which have smooth, continuous curves, are rasterized using pixels.
Ans: A polygon is any 2-dimensional shape formed with straight lines. Triangles, quadrilaterals, pentagons, and hexagons are all
examples of polygons. The name tells you how many sides the shape has. For example, a triangle has three sides, and a
quadrilateral has four sides. So, any shape that can be drawn by connecting three straight lines is called a triangle, and any shape
that can be drawn by connecting four straight lines is called a quadrilateral.
Shape # of Sides
Triangle 3
Square 4
Rectangle 4
Quadrilateral 4
Pentagon 5
Hexagon 6
Heptagon 7
Octagon 8
Nonagon 9
Decagon 10
n-gon n sides
Ans: It is the process of representing graphics objects as a collection of pixels. The graphics objects are continuous, while the pixels
used are discrete. Each pixel can be in either the on or the off state.
Ans:
4-Connected Polygon
In this technique 4-connected pixels are used, as shown in the figure. We are putting the pixels above, below, to the right,
and to the left of the current pixel, and this process will continue until we find a boundary with a different color.
8-Connected Polygon
In this technique 8-connected pixels are used, as shown in the figure. We are putting pixels above, below, and to the right and left
of the current pixel, as we were doing in the 4-connected technique.
In addition to this, we are also putting pixels on the diagonals, so that the entire area around the current pixel is covered. This process
will continue until we find a boundary with a different color.
Ans: Drawing a circle on the screen is a little more complex than drawing a line. There are two popular algorithms for generating a
circle − Bresenham’s Algorithm and the Midpoint Circle Algorithm. These algorithms are based on the idea of determining the
subsequent points required to draw the circle. Let us discuss the algorithm in detail.
We cannot display a continuous arc on the raster display. Instead, we have to choose the nearest pixel position to complete the arc.
From the following illustration, you can see that we have put the pixel at (X, Y) location and now need to decide where to put the
next pixel − at N (X+1, Y) or at S (X+1, Y-1).
Step 1 − Input the radius R and circle center (XC, YC), and obtain the first point on the circumference of a circle centered on the
origin as (X, Y) = (0, R).
Step 2 − Calculate the initial value of the decision parameter as
P0 = 5/4 − R (see the following description for simplification of this equation).
The circle function is
f(x, y) = x² + y² − r² = 0.
Evaluating this function at the midpoint between the two candidate pixels gives the decision parameter:
P_K = f(X_K + 1, Y_K − 1/2) = (X_K + 1)² + (Y_K − 1/2)² − r².
If P_K < 0, the midpoint lies inside the circle, so the pixel on scan line Y_K is closer to the circle boundary; otherwise the midpoint
lies outside or on the circle, and the pixel on scan line Y_K − 1 is selected.
Step 3 − At each X_K position, starting at K = 0, perform the following test:
If P_K < 0, the next point along the circle is (X_K + 1, Y_K) and
P_(K+1) = P_K + 2X_(K+1) + 1.
Otherwise, the next point is (X_K + 1, Y_K − 1) and
P_(K+1) = P_K + 2X_(K+1) + 1 − 2Y_(K+1).
Step 4 − Determine the symmetry points in the other seven octants.
Step 5 − Move each calculated pixel position (X, Y) onto the circular path centered on (XC, YC) and plot the coordinate
values:
X=X+XC, Y=Y+YC
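The steps above can be expressed as a short runnable Python sketch (a minimal illustration, not taken from the original answer; the
decision parameter is rounded to the integer form 1 − R, and duplicate points at octant boundaries are left in for simplicity):

def midpoint_circle(xc, yc, r):
    """Return the pixels of a circle of radius r centered on (xc, yc)."""
    points = []
    x, y = 0, r
    p = 1 - r  # integer approximation of P0 = 5/4 - r
    while x <= y:
        # plot the point in all eight octants, shifted to the center
        for dx, dy in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            points.append((xc + dx, yc + dy))
        x += 1
        if p < 0:
            p += 2 * x + 1              # P(K+1) = P(K) + 2X(K+1) + 1
        else:
            y -= 1
            p += 2 * x + 1 - 2 * y      # P(K+1) = P(K) + 2X(K+1) + 1 - 2Y(K+1)
    return points

print(midpoint_circle(0, 0, 5)[:4])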
This algorithm is used for scan converting a line. It was developed by Bresenham. It is an efficient method because it involves only
integer addition, subtraction, and multiplication operations. These operations can be performed very rapidly, so lines can be
generated quickly.
In this method, the next pixel selected is the one which has the least distance from the true line.
Assume a pixel P1(x1, y1); then select subsequent pixels as we work our way to the right, one pixel position at a time in the
horizontal direction toward P2(x2, y2).
The line is best approximated by those pixels that fall the least distance from the path between P1 and P2.
To choose between the bottom pixel S and the top pixel T:
If S is chosen,
we have xi+1 = xi + 1 and yi+1 = yi.
If T is chosen,
we have xi+1 = xi + 1 and yi+1 = yi + 1.
The difference between the distances of the true line from S and from T is
s − t = (y − yi) − [(yi + 1) − y]
= 2y − 2yi − 1.
We can write the decision variable di+1 for the next step as
di+1 = 2∆y·xi+1 − 2∆x·yi+1 + c,
so that
di+1 − di = 2∆y·(xi+1 − xi) − 2∆x·(yi+1 − yi).
Special Cases
Finally, we calculate d1:
d1 = ∆x[2m(x1 + 1) + 2b − 2y1 − 1]
d1 = ∆x[2(mx1 + b − y1) + 2m − 1]
3. It can be implemented using hardware because it does not use multiplication and division.
4. It is faster as compared to DDA (Digital Differential Analyzer) because it does not involve floating point calculations like DDA
Algorithm.
Disadvantage:
1. This algorithm is meant for basic line drawing only; anti-aliasing is not a part of Bresenham's line algorithm. So to draw smooth
lines, you should look into a different algorithm.
Step5: Consider (x, y) as the starting point and xend as the maximum possible value of x.
If dx < 0
Then x = x2
y = y2
xend =x1
If dx > 0
Then x = x1
y = y1
xend =x2
Step9: Increment x = x + 1
Step10: Draw a point at the latest (x, y) coordinates.
Step11: Go to step 7
Example: Starting and Ending position of the line are (1, 1) and (8, 5). Find intermediate points.
Solution: x1 = 1
y1 = 1
x2 = 8
y2 = 5
dx = x2 − x1 = 8 − 1 = 7
dy = y2 − y1 = 5 − 1 = 4
I1 = 2∆y = 2·4 = 8
I2 = 2(∆y − ∆x) = 2(4 − 7) = −6
d = I1 − ∆x = 8 − 7 = 1
x    y    d = d + I1 or d = d + I2
1    1    d + I2 = 1 + (−6) = −5
2    2    d + I1 = −5 + 8 = 3
3    2    d + I2 = 3 + (−6) = −3
4    3    d + I1 = −3 + 8 = 5
5    3    d + I2 = 5 + (−6) = −1
6    4    d + I1 = −1 + 8 = 7
7    4    d + I2 = 7 + (−6) = 1
8    5
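The same procedure can be written as a minimal Python sketch for the first octant (0 ≤ m ≤ 1); the function name is our own.
Running it on the example reproduces the table above:

def bresenham_line(x1, y1, x2, y2):
    """Bresenham's algorithm for a line with slope between 0 and 1."""
    dx, dy = x2 - x1, y2 - y1
    i1, i2 = 2 * dy, 2 * (dy - dx)  # the increments I1 and I2
    d = i1 - dx                     # initial decision variable
    x, y = x1, y1
    points = [(x, y)]
    while x < x2:
        x += 1
        if d < 0:
            d += i1          # keep y, d = d + I1
        else:
            y += 1
            d += i2          # step up, d = d + I2
        points.append((x, y))
    return points

print(bresenham_line(1, 1, 8, 5))
# [(1, 1), (2, 2), (3, 2), (4, 3), (5, 3), (6, 4), (7, 4), (8, 5)]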
Q. 3 What is aliasing? Explain different types of anti-aliasing techniques.
Ans: In the line drawing algorithms, we have seen that all rasterized locations do not match with the true line and we have to select
the optimum raster locations to represent a straight line. This problem is severe in low-resolution screens. On such screens a line
appears like a stair-step, as shown in the figure below. This effect is known as aliasing. It is dominant for lines having very gentle or
very steep slopes.
The aliasing effect can be reduced by adjusting intensities of the pixels along the line. The process of adjusting intensities of the
pixels along the line to minimize the effect of aliasing is called antialiasing.
The aliasing effect can be minimized by increasing the resolution of the raster display. By increasing resolution and making it twice the
original one, the line passes through twice as many columns of pixels and therefore has twice as many jags, but each jag is half as
large in x and in y direction.
As shown in the figure above, the line looks better at twice the resolution, but this improvement comes at the price of quadrupling the
cost of memory, memory bandwidth, and scan-conversion time. Thus increasing resolution is an expensive method for reducing the
aliasing effect.
With raster systems that are capable of displaying more than two intensity levels (colour and gray scale), we can apply antialiasing
methods to modify pixel intensities. By appropriately varying the intensities of pixels along the line or object boundaries, we can
smooth the edges to lessen the stair-step or the jagged appearance.
Supersampling or Postfiltering:-
Supersampling or Postfiltering is the process by which aliasing effects in graphics are reduced by increasing the frequency of the
sampling grid and then averaging the results down. This process means calculating a virtual image at a higher spatial resolution than
the frame store resolution and then averaging down to the final resolution. It is called Postfiltering as the filtering is carried out after
sampling.
1. A continuous image I(x , y) is sampled at n times the frame resolution. This is a virtual image.
2. The virtual image is then lowpass filtered.
3. The filtered image is then resampled at the final frame resolution.
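A minimal sketch of the averaging-down step (step 3), assuming the virtual image is a plain 2D list of gray levels sampled at n times
the frame resolution (the function name is our own):

def downsample(virtual, n):
    """Average each n-by-n block of the supersampled image down to one pixel."""
    h, w = len(virtual) // n, len(virtual[0]) // n
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            block = [virtual[r * n + i][c * n + j]
                     for i in range(n) for j in range(n)]
            out[r][c] = sum(block) / (n * n)  # box-filter average
    return out

# A 4x4 virtual image averaged down by n=2 gives a 2x2 final image.
print(downsample([[0, 0, 255, 255],
                  [0, 0, 255, 255],
                  [255, 255, 0, 0],
                  [255, 255, 0, 0]], 2))
# [[0.0, 255.0], [255.0, 0.0]]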
In this antialiasing method pixel intensity is determined by calculating the areas of overlap of each pixel with the objects to be
displayed. Antialiasing by computing area is referred to as Area sampling or Prefiltering. A modification to Bresenham's
algorithm was developed by Pitteway and Watkinson. In this algorithm, each pixel is given intensity depending on the area of
overlap of the pixel and the line. So, due to the blurring effect along the line edges, the effect of anti-aliasing is not very prominent,
although it still exists. For sampling shapes other than polygons, this can be very computationally intensive.
Q. 4 Write DDA in algorithmic form. Also explain the algorithm with the help of suitable example.
Ans: DDA stands for Digital Differential Analyzer. It is an incremental method of scan conversion of a line. In this method,
calculation is performed at each step using the results of previous steps.
The slope of the line is
m = ∆y / ∆x = (y2 − y1) / (x2 − x1)
yi+1 − yi = ∆y .......................equation 3
xi+1 − xi = ∆x ......................equation 4
Case 1: |m| ≤ 1. Sample at unit x intervals (∆x = 1); since ∆y = m∆x:
xi+1 = xi + 1
yi+1 = yi + m
Case 2: |m| > 1. Sample at unit y intervals (∆y = 1); since ∆x = ∆y/m:
yi+1 = yi + 1
xi+1 = xi + 1/m
Repeat until the endpoint (x2, y2) is reached.
Advantage:
1. It is a faster method than the method of directly using the line equation.
2. It is a simple algorithm.
3. It allows us to detect the change in the values of x and y, so plotting the same point twice is not possible.
Disadvantage:
1. It involves floating-point additions, and rounding off is done. Accumulation of round-off errors causes the plotted line to drift
away from the true line.
2. Rounding-off operations and floating-point operations consume a lot of time.
3. It is more suitable for generating a line using software, but it is less suited for hardware implementation.
DDA Algorithm:
Step 1: Read the line endpoints (x1, y1) and (x2, y2).
Step 2: Calculate dx = x2 − x1 and dy = y2 − y1.
Step 3: If |dx| ≥ |dy|, then steps = |dx|; otherwise steps = |dy|.
Step 4: Calculate the increments xinc = dx / steps and yinc = dy / steps.
Step 5: Set x = x1, y = y1 and plot the point (round(x), round(y)).
Step 6: Repeat steps times: x = x + xinc, y = y + yinc, and plot (round(x), round(y)).
Step 7: End.
Example: If a line is drawn from (2, 3) to (6, 15) with the use of DDA, how many points will be needed to generate such a line?
Solution:
m = (y2 − y1) / (x2 − x1) = (15 − 3) / (6 − 2) = 12 / 4 = 3
Since m > 1, we sample at unit y intervals, so steps = |dy| = 12; the algorithm generates 12 new points after the starting
point (2, 3), i.e. 13 points in all.
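A minimal Python sketch of DDA (the function name is ours; endpoints are assumed distinct), which reproduces the example
from (2, 3) to (6, 15):

def dda_line(x1, y1, x2, y2):
    """Digital Differential Analyzer: step along the larger of |dx|, |dy|."""
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))
    xinc, yinc = dx / steps, dy / steps
    x, y = float(x1), float(y1)
    points = [(round(x), round(y))]
    for _ in range(steps):
        x += xinc
        y += yinc
        points.append((round(x), round(y)))
    return points

pts = dda_line(2, 3, 6, 15)
print(len(pts), pts[:4])  # 13 points in total; m = 3, so steps = 12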
Ans: An ellipse is defined as the locus of a point which moves in a plane in such a manner that the ratio of its distance from a
fixed point called the focus to its distance from a fixed straight line called the directrix, both in the same plane, is always constant
and less than unity.
If the distance to the two foci from any point P=(x,y) on the ellipse are labeled d1 and d2 then the general equation of the ellipse can
be stated as: d1 + d2 = constant.
Expressing the distances d1 and d2 in terms of the focal coordinates F1 and F2, we obtain Ax² + By² + Cxy + Dx + Ey + F = 0, where A, B,
C, D, E, and F are evaluated in terms of the focal coordinates and the dimensions of the major and minor axes of the ellipse.
The midpoint ellipse method is applied throughout the first quadrant in two parts. Now let us take the start position at (0,ry) and
step along the ellipse path in clockwise order throughout the first quadrant.
According to this, the following decision parameters are generated:
In region 1 the initial value of a decision parameter is obtained by giving starting position = (0,ry).
When we enter into a region 2 the initial position is taken as the last position selected in region 1 and the initial decision parameter
in region 2 is then:
ALGORITHM
1. Take the input and the ellipse centre, and obtain the first point on an ellipse centered on the origin as (x0, y0) = (0, ry).
2. Now calculate the initial decision parameter in region 1 as:
p1_0 = ry² − rx²·ry + (1/4)·rx²
3. At each xk position in region 1, perform the following test. If p1_k < 0, then the next point along the ellipse centered
on (0,0) is (xk+1, yk),
i.e. p1_(k+1) = p1_k + 2ry²·xk+1 + ry².
Otherwise, the next point along the ellipse is (xk+1, yk − 1),
i.e. p1_(k+1) = p1_k + 2ry²·xk+1 − 2rx²·yk+1 + ry².
4. Now, again calculate the initial value of the decision parameter in region 2 using the last point (x0, y0) calculated in region 1 as:
p2_0 = ry²·(x0 + 1/2)² + rx²·(y0 − 1)² − rx²·ry²
5. At each yk position in region 2, starting at k = 0, perform the following test. If p2_k > 0, the next point along the ellipse
centered on (0,0) is (xk, yk − 1),
i.e. p2_(k+1) = p2_k − 2rx²·yk+1 + rx².
Otherwise, the next point along the ellipse will be (xk+1, yk − 1),
i.e. p2_(k+1) = p2_k + 2ry²·xk+1 − 2rx²·yk+1 + rx².
6. Now determine the symmetric points in another three quadrants.
7. Plot the coordinate values: x = x + xc, y = y + yc.
8. Repeat the steps for region 1 until 2ry²·x ≥ 2rx²·y.
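A minimal runnable Python sketch of the algorithm above for an ellipse centered at the origin (collecting the first-quadrant points
into a list is our own illustrative choice):

def midpoint_ellipse(rx, ry):
    """Midpoint ellipse algorithm; returns first-quadrant points for an
    ellipse centered at (0, 0); other quadrants follow by symmetry."""
    points = []
    rx2, ry2 = rx * rx, ry * ry
    x, y = 0, ry
    # Region 1: p1_0 = ry^2 - rx^2*ry + rx^2/4
    p1 = ry2 - rx2 * ry + 0.25 * rx2
    while 2 * ry2 * x < 2 * rx2 * y:
        points.append((x, y))
        x += 1
        if p1 < 0:
            p1 += 2 * ry2 * x + ry2
        else:
            y -= 1
            p1 += 2 * ry2 * x - 2 * rx2 * y + ry2
    # Region 2: p2_0 = ry^2 (x + 1/2)^2 + rx^2 (y - 1)^2 - rx^2 ry^2
    p2 = ry2 * (x + 0.5) ** 2 + rx2 * (y - 1) ** 2 - rx2 * ry2
    while y >= 0:
        points.append((x, y))
        y -= 1
        if p2 > 0:
            p2 += -2 * rx2 * y + rx2
        else:
            x += 1
            p2 += 2 * ry2 * x - 2 * rx2 * y + rx2
    return points

print(midpoint_ellipse(8, 6)[:5])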
In this method, a point or seed which is inside the region is selected. This point is called a seed point. Then a four-connected
or eight-connected approach is used to fill the region with a specified color.
The flood fill algorithm has many characteristics similar to boundary fill, but this method is more suitable for filling regions whose
boundary has multiple colors. When the boundary is of many colors and the interior is to be filled with one color, we use this algorithm.
In the flood-fill algorithm, we start from a specified interior point (x, y) and reassign all pixel values that are currently set to a given
interior color to the desired color. Using either a 4-connected or 8-connected approach, we then step through pixel positions until all
interior points have been repainted.
Disadvantage:
1. Very slow algorithm.
2. May fail for large polygons.
3. The initial pixel requires more knowledge about surrounding pixels.
Algorithm:
1. Procedure floodfill (x, y, fill_color, old_color: integer)
2. {
3. If (getpixel (x, y) = old_color)
4. {
5. setpixel (x, y, fill_color);
6. floodfill (x+1, y, fill_color, old_color);
7. floodfill (x-1, y, fill_color, old_color);
8. floodfill (x, y+1, fill_color, old_color);
9. floodfill (x, y-1, fill_color, old_color);
10. }
11. }
Unit 3:
Short Answers: (2 Marks Each)
Q. 1 Define 2-Dimensional Transformation.
Ans: Transformation means changing some graphics into something else by applying rules. We can have various types of
transformations such as translation, scaling up or down, rotation, shearing, etc. When a transformation takes place on a 2D plane, it
is called 2D transformation.
Transformations play an important role in computer graphics to reposition the graphics on the screen and change their size or
orientation.
Ans: Clipping a point from a given window is very easy. Consider the following figure, where the rectangle indicates the window.
Point clipping tells us whether the given point (X, Y) is within the given window or not; the point is kept only if it lies between the
minimum and maximum coordinates of the window, i.e. Wxmin ≤ X ≤ Wxmax and Wymin ≤ Y ≤ Wymax.
Line Clipping
The concept of line clipping is the same as point clipping. In line clipping, we cut the portion of the line which is outside the window
and keep only the portion that is inside the window.
Polygon Clipping
A polygon can also be clipped by specifying the clipping window. The Sutherland-Hodgman polygon clipping algorithm is used for
polygon clipping. In this algorithm, all the vertices of the polygon are clipped against each edge of the clipping window.
First the polygon is clipped against the left edge of the clipping window to get new vertices of the polygon. These new vertices are
used to clip the polygon against the right edge, top edge, and bottom edge of the clipping window.
Ans: The viewing transformation is the operation that maps a perspective view of an object in world coordinates into a physical
device’s display space. In general, this is a complex operation which is best grasped intellectually by the typical computer graphics
technique of dividing the operation into a concatenation of simpler operations.
Ans: Clipping means identifying portions of a scene that are inside (or outside) a specified region. Examples: multiple viewports on
a device; deciding how much of a game's world the player can see.
In other words, clipping is the process of removing the graphics parts either inside or outside the given region.
Interior clipping removes the parts outside the given window, and exterior clipping removes the parts inside the given window.
Ans: Window
The window defines a rectangular area in world coordinates. The window can be defined with the GWINDOW
statement. The window can be defined to be larger than, the same size as, or smaller than the actual range of the data values,
depending on whether we want to show all of the data or only part of the data.
Viewport
The viewport defines, in normalized coordinates, a rectangular area on the display device where the image of the data
appears. The viewport is defined with the GPORT command. We can have the graph take up the entire display device or show it
in only a portion, say the upper-right part.
Q.6 Why are homogeneous coordinates used for transformation computation in computer graphics?
Ans: In homogeneous coordinates, a 2D point (x, y) is represented as the triple (x, y, 1). With this representation, translation (which
is otherwise an addition rather than a multiplication) can also be expressed as a matrix multiplication, just like rotation and scaling.
As a result, any sequence of transformations can be concatenated into a single matrix and applied to every point with one
multiplication.
Q. 1 Discuss the composite transformation matrices for two successive translations and scaling.
Ans:
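A sketch of the standard result, written with homogeneous 3×3 matrices (the symbols T and S for translation and scaling follow
common textbook usage):

T(t_{x1}, t_{y1}) \cdot T(t_{x2}, t_{y2})
= \begin{pmatrix} 1 & 0 & t_{x1} \\ 0 & 1 & t_{y1} \\ 0 & 0 & 1 \end{pmatrix}
  \begin{pmatrix} 1 & 0 & t_{x2} \\ 0 & 1 & t_{y2} \\ 0 & 0 & 1 \end{pmatrix}
= \begin{pmatrix} 1 & 0 & t_{x1}+t_{x2} \\ 0 & 1 & t_{y1}+t_{y2} \\ 0 & 0 & 1 \end{pmatrix}
= T(t_{x1}+t_{x2},\ t_{y1}+t_{y2})

S(s_{x1}, s_{y1}) \cdot S(s_{x2}, s_{y2})
= \begin{pmatrix} s_{x1} & 0 & 0 \\ 0 & s_{y1} & 0 \\ 0 & 0 & 1 \end{pmatrix}
  \begin{pmatrix} s_{x2} & 0 & 0 \\ 0 & s_{y2} & 0 \\ 0 & 0 & 1 \end{pmatrix}
= S(s_{x1} s_{x2},\ s_{y1} s_{y2})

So two successive translations are additive, while two successive scalings are multiplicative.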
Q. 2 Show that the composition of two rotations is additive by concatenating the matrix representations for R(θ1) and
R(θ2)to obtain: R(θ1). R(θ2)= R(θ1+θ2).
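Ans (a sketch of the standard derivation, using the 2D rotation matrix and the angle-addition identities):

R(\theta_1)\,R(\theta_2)
= \begin{pmatrix} \cos\theta_1 & -\sin\theta_1 \\ \sin\theta_1 & \cos\theta_1 \end{pmatrix}
  \begin{pmatrix} \cos\theta_2 & -\sin\theta_2 \\ \sin\theta_2 & \cos\theta_2 \end{pmatrix}
= \begin{pmatrix}
    \cos\theta_1\cos\theta_2 - \sin\theta_1\sin\theta_2 &
    -(\sin\theta_1\cos\theta_2 + \cos\theta_1\sin\theta_2) \\
    \sin\theta_1\cos\theta_2 + \cos\theta_1\sin\theta_2 &
    \cos\theta_1\cos\theta_2 - \sin\theta_1\sin\theta_2
  \end{pmatrix}
= \begin{pmatrix} \cos(\theta_1+\theta_2) & -\sin(\theta_1+\theta_2) \\
                  \sin(\theta_1+\theta_2) & \cos(\theta_1+\theta_2) \end{pmatrix}
= R(\theta_1 + \theta_2).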
Q. 3 Explain with suitable example Cohen Sutherland Line Clipping Technique. Write this technique in Algorithmic form
also.
The Cohen-Sutherland line clipping algorithm quickly detects and dispenses with two common and trivial cases. To clip a line, we
need to consider only its endpoints. If both endpoints of a line lie inside the window, the entire line lies inside the window. It
is trivially accepted and needs no clipping. On the other hand, if both endpoints of a line lie entirely to one side of the window, the
line must lie entirely outside of the window. It is trivially rejected and needs to be neither clipped nor displayed.
To determine whether endpoints are inside or outside a window, the algorithm sets up a half-space code for each endpoint. Each
edge of the window defines an infinite line that divides the whole space into two half-spaces, the inside half-space and the outside
half-space, as shown below.
As you proceed around the window, extending each edge and defining an inside half-space and an outside half-space, nine regions
are created - the eight "outside" regions and the one "inside" region. Each of the nine regions associated with the window is assigned
a 4-bit code to identify the region. Each bit in the code is set to either a 1(true) or a 0(false). If the region is to the left of the
window, the first bit of the code is set to 1. If the region is to the top of the window, the second bit of the code is set to 1. If to
the right, the third bit is set, and if to the bottom, the fourth bit is set. The 4 bits in the code then identify each of the nine regions
as shown below.
For any endpoint (x, y) of a line, the code that identifies in which region the endpoint lies can be determined. The code's bits are set
according to the following conditions:
The sequence for reading the codes' bits is LRBT (Left, Right, Bottom, Top).
Once the codes for each endpoint of a line are determined, the logical AND operation of the codes determines if the line is
completely outside of the window. If the logical AND of the endpoint codes is not zero, the line can be trivially rejected. For
example, if an endpoint had a code of 1001 while the other endpoint had a code of 1010, the logical AND would be 1000 which
indicates the line segment lies outside of the window. On the other hand, if the endpoints had codes of 1001 and 0110, the logical
AND would be 0000, and the line could not be trivially rejected.
The logical OR of the endpoint codes determines if the line is completely inside the window. If the logical OR is zero, the line can
be trivially accepted. For example, if the endpoint codes are 0000 and 0000, the logical OR is 0000 - the line can be trivially accepted.
If the endpoint codes are 0000 and 0110, the logical OR is 0110 and the line cannot be trivially accepted.
Algorithm
The Cohen-Sutherland algorithm uses a divide-and-conquer strategy. The line segment's endpoints are tested to see if the line can be
trivially accepted or rejected. If the line cannot be trivially accepted or rejected, an intersection of the line with a window edge is
determined and the trivial reject/accept test is repeated. This process is continued until the line is accepted or rejected.
To perform the trivial acceptance and rejection tests, we extend the edges of the window to divide the plane of the window into the
nine regions. Each endpoint of the line segment is then assigned the code of the region in which it lies.
1. If both codes are 0000 (bitwise OR of the codes yields 0000), the line lies completely inside the window: pass the endpoints to
the draw routine.
2. If both codes have a 1 in the same bit position (bitwise AND of the codes is not 0000), the line lies outside the window. It
can be trivially rejected.
3. If a line cannot be trivially accepted or rejected, at least one of the two endpoints must lie outside the window and the line
segment crosses a window edge. This line must be clipped at the window edge before being passed to the drawing routine.
4. Examine one of the endpoints that lies outside the window; call it P (its code contains at least one 1). Read P's 4-bit code in
order: Left-to-Right, Bottom-to-Top.
5. When a set bit (1) is found, compute the intersection I of the corresponding window edge with the line from P to the other
endpoint. Replace P with I and repeat the algorithm.
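A minimal Python sketch of the outcode computation and the trivial tests (the bit layout used here - bit 0 = left, bit 1 = right,
bit 2 = bottom, bit 3 = top - is one common convention and may differ from the figure):

LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    """Compute the 4-bit region code of a point for a clip window."""
    code = 0
    if x < xmin: code |= LEFT
    elif x > xmax: code |= RIGHT
    if y < ymin: code |= BOTTOM
    elif y > ymax: code |= TOP
    return code

def trivial_test(c1, c2):
    """Return 'accept', 'reject', or 'clip' from the two endpoint codes."""
    if c1 | c2 == 0:
        return 'accept'   # both endpoints inside the window
    if c1 & c2 != 0:
        return 'reject'   # both endpoints on the same outside side
    return 'clip'         # must be clipped against a window edge

# Example with a unit window: one endpoint inside, one above-left.
c1 = outcode(0.5, 0.5, 0, 0, 1, 1)   # 0 (inside)
c2 = outcode(-1.0, 2.0, 0, 0, 1, 1)  # LEFT | TOP = 9
print(trivial_test(c1, c2))          # 'clip'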
Before Clipping
1. Consider the line segment AD. Point A has an outcode of 0000 and point D has an outcode of 1001. The logical AND of these outcodes is zero; therefore,
the line cannot be trivially rejected. Also, the logical OR of the outcodes is not zero; therefore, the line cannot be trivially
accepted. The algorithm then chooses D as the outside point (its outcode contains 1's). By our testing order, we first use the
top edge to clip AD at B. The algorithm then recomputes B's outcode as 0000. With the next iteration of the
algorithm, AB is tested and is trivially accepted and displayed.
2. Consider the line segment EI
Point E has an outcode of 0100, while point I's outcode is 1010. The results of the trivial tests show that the line can neither
be trivially rejected nor accepted. Point E is determined to be an outside point, so the algorithm clips the line against the
bottom edge of the window. Now line EI has been clipped to line FI. Line FI is tested and cannot be trivially accepted
or rejected. Point F has an outcode of 0000, so the algorithm chooses point I as an outside point since its outcode is 1010.
The line FI is clipped against the window's top edge, yielding a new line FH. Line FH cannot be trivially accepted or
rejected. Since H's outcode is 0010, the next iteration of the algorithm clips against the window's right edge, yielding
line FG. The next iteration of the algorithm tests FG, and it is trivially accepted and displayed.
After Clipping
After clipping the segments AD and EI, the result is that only the line segments AB and FG can be seen in the window.
This is an area filling algorithm. This is used where we have to do an interactive painting in computer graphics, where interior
points are easily selected. If we have a specified boundary in a single color, then the fill algorithm proceeds pixel by pixel until the
boundary color is encountered. This method is called the boundary-fill algorithm.
1. 4-connected:
In this, firstly there is a selection of the interior pixel which is inside the boundary; then, in reference to that pixel, the adjacent
pixels will be filled up, that is top-bottom and left-right.
2. 8-connected:
This is the best way of filling the color correctly in the interior of the area defined. It is used to fill in more complex figures. In
this, four diagonal pixels are also included along with the reference interior pixel (in addition to the top-bottom and left-right pixels).
Disadvantages:
1. It may not fill regions correctly if some interior pixels are already displayed in the fill color.
2. In 4-connected there is a problem: sometimes it does not fill a corner pixel, as it checks only the adjacent positions of the
given pixel.
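A minimal runnable sketch of a recursive 4-connected boundary fill in Python (the grid-of-characters frame buffer and the function
name are our own stand-ins for setpixel/getpixel):

def boundary_fill4(grid, x, y, fill, boundary):
    """Fill outward from seed (x, y) until the boundary color is met."""
    if not (0 <= y < len(grid) and 0 <= x < len(grid[0])):
        return
    if grid[y][x] == boundary or grid[y][x] == fill:
        return
    grid[y][x] = fill
    boundary_fill4(grid, x + 1, y, fill, boundary)
    boundary_fill4(grid, x - 1, y, fill, boundary)
    boundary_fill4(grid, x, y + 1, fill, boundary)
    boundary_fill4(grid, x, y - 1, fill, boundary)

# A 5x5 buffer: 'B' marks the boundary, '.' the unfilled interior.
g = [list(row) for row in ["BBBBB", "B...B", "B...B", "B...B", "BBBBB"]]
boundary_fill4(g, 2, 2, 'F', 'B')
print("\n".join("".join(r) for r in g))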
Flood-fill Algorithm
By this algorithm, we can recolor an area that is not defined within a single color boundary. In this, we can paint such areas by
replacing a color instead of searching for a boundary color value. This whole approach is termed the flood-fill algorithm. The
procedure can also be modified to reduce the storage requirement of the stack by filling pixel spans.
procedure floodfill4 (x, y, fillcolor, oldcolor: integer)
begin
if getpixel (x, y) = oldcolor then
begin
setpixel (x, y, fillcolor)
floodfill4 (x+1, y, fillcolor, oldcolor)
floodfill4 (x-1, y, fillcolor, oldcolor)
floodfill4 (x, y+1, fillcolor, oldcolor)
floodfill4 (x, y-1, fillcolor, oldcolor)
end
end.
A set of connected lines is considered a polygon; polygons are clipped based on the window, and the portion which is inside the
window is kept as it is while the outside portions are clipped. Polygon clipping must deal with several different cases. Usually the
polygon is clipped against the four edges of the boundary of the clip rectangle. As each polygon edge is processed against a clip
boundary, four cases arise, determined by which side of the boundary its endpoints lie on:
• Edge wholly inside the clip window – save the second endpoint
• Edge exits the clip window – save the intersection
• Edge wholly outside the clip window – nothing to save
• Edge enters the clip window – save the intersection and the second endpoint
A convex polygon and a convex clipping area are given. The task is to clip the polygon edges using the Sutherland-Hodgman
algorithm. Input is in the form of vertices of the polygon in clockwise order.
Example:
Input : Polygon : (100,150), (200,250), (300,200)
Clipping Area : (100,300), (300,300), (200,100)
Output : (242, 185) (166, 166) (150, 200) (200, 250) (260, 220)
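A minimal Python sketch of the Sutherland-Hodgman procedure (assuming both polygons are given in clockwise order; the inside
test and helper names are our own). It should produce the same vertex set as the output above, though possibly starting at a
different vertex:

def clip_polygon(subject, clip):
    """Sutherland-Hodgman: clip 'subject' against convex polygon 'clip'
    (both lists of (x, y) vertices given in clockwise order)."""
    def inside(p, a, b):
        # For a clockwise clip polygon, inside is to the right of edge a->b.
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) <= 0

    def intersect(p1, p2, a, b):
        # Intersection of segment p1-p2 with the infinite edge through a, b.
        x1, y1 = p1; x2, y2 = p2; x3, y3 = a; x4, y4 = b
        denom = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)
        t = ((x1-x3)*(y3-y4) - (y1-y3)*(x3-x4)) / denom
        return (x1 + t*(x2-x1), y1 + t*(y2-y1))

    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i+1) % len(clip)]
        inputs, output = output, []
        for j in range(len(inputs)):
            cur, prev = inputs[j], inputs[j-1]
            if inside(cur, a, b):
                if not inside(prev, a, b):
                    output.append(intersect(prev, cur, a, b))
                output.append(cur)
            elif inside(prev, a, b):
                output.append(intersect(prev, cur, a, b))
    return output

# Clipped polygon vertices (order may differ from the listing above).
print(clip_polygon([(100,150), (200,250), (300,200)],
                   [(100,300), (300,300), (200,100)]))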
The Liang-Barsky algorithm is a line clipping algorithm. This algorithm is more efficient than the Cohen-Sutherland line clipping
algorithm and can be extended to 3-dimensional clipping. This algorithm is considered to be the faster parametric line-clipping
algorithm. The following concepts are used in this clipping:
1. The parametric equation of the line.
2. The inequalities describing the range of the clipping window which is used to determine the intersections between the line
and the clip window.
The parametric equation of a line can be given by,
X = x1 + t(x2 − x1)
Y = y1 + t(y2 − y1)
where t is between 0 and 1.
A point on the line is inside the clip window if it satisfies the four inequalities
t·pk ≤ qk,
where k = 1, 2, 3, 4 (corresponding to the left, right, bottom, and top boundaries, respectively), and
p1 = −(x2 − x1), q1 = x1 − xwmin
p2 = (x2 − x1), q2 = xwmax − x1
p3 = −(y2 − y1), q3 = y1 − ywmin
p4 = (y2 − y1), q4 = ywmax − y1
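A minimal Python sketch of the Liang-Barsky test based on these inequalities (the function name is ours); it returns the clipped
endpoints, or None when the line is rejected:

def liang_barsky(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Clip a segment against a window using the t*p_k <= q_k tests."""
    dx, dy = x2 - x1, y2 - y1
    p = [-dx, dx, -dy, dy]                       # left, right, bottom, top
    q = [x1 - xmin, xmax - x1, y1 - ymin, ymax - y1]
    t0, t1 = 0.0, 1.0
    for pk, qk in zip(p, q):
        if pk == 0:
            if qk < 0:
                return None                      # parallel and outside
        else:
            t = qk / pk
            if pk < 0:
                t0 = max(t0, t)                  # potentially entering
            else:
                t1 = min(t1, t)                  # potentially leaving
    if t0 > t1:
        return None                              # no visible portion
    return (x1 + t0 * dx, y1 + t0 * dy,
            x1 + t1 * dx, y1 + t1 * dy)

print(liang_barsky(-5, 3, 15, 9, 0, 0, 10, 10))  # (0.0, 4.5, 10.0, 7.5)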
Ans: 3D graphics components are now a part of almost every personal computer and, although traditionally intended for graphics-
intensive software such as games, they are increasingly being used by other applications.
Geometric transformations play a vital role in generating images of three-dimensional objects; with the help of these
transformations, the location of objects relative to others can be easily expressed. Sometimes the viewpoint changes rapidly, or
sometimes objects move in relation to each other. For this, a number of transformations can be carried out repeatedly.
Ans:
They always pass through the first and last control points.
They are contained in the convex hull of their defining control points.
The degree of the polynomial defining the curve segment is one less than the number of defining polygon points. Therefore,
for 4 control points, the degree of the polynomial is 3, i.e. a cubic polynomial.
The direction of the tangent vector at the end points is same as that of the vector determined by first and last segments.
The convex hull property for a Bezier curve ensures that the polynomial smoothly follows the control points.
No straight line intersects a Bezier curve more times than it intersects its control polygon.
Bezier curves exhibit global control: moving a control point alters the shape of the whole curve.
A given Bezier curve can be subdivided at a point t=t0 into two Bezier segments which join together at the point
corresponding to the parameter value t=t0.
Ans: Curve interpolation is a method of constructing new data points within the range of a discrete set of known data points.
Ans: Interpolation - all points of the basic figure are located on the created figure, which is called an interpolation curve segment.
Approximation - all points of the basic figure need not be located on the created figure, which is called an approximation curve segment.
Q. 1 Differentiate B-Spline with Bezier curves. Also differentiate between image space methods and object space method of
visible surface detection.
The Bezier curve was discovered by the French engineer Pierre Bézier. These curves can be generated under the control of other
points. Approximate tangents formed by the control points are used to generate the curve. The Bezier curve can be represented
mathematically as
P(t) = Σ (i = 0 to n) Pi·Bi,n(t),
where the Pi are the control points and Bi,n(t) represents the Bernstein polynomials, which are given by
Bi,n(t) = C(n, i)·(1 − t)^(n−i)·t^i,   where C(n, i) = n! / (i!·(n − i)!).
They generally follow the shape of the control polygon, which consists of the segments joining the control points.
They always pass through the first and last control points.
They are contained in the convex hull of their defining control points.
The degree of the polynomial defining the curve segment is one less than the number of defining polygon points. Therefore,
for 4 control points, the degree of the polynomial is 3, i.e. a cubic polynomial.
The direction of the tangent vector at the end points is same as that of the vector determined by first and last segments.
The convex hull property for a Bezier curve ensures that the polynomial smoothly follows the control points.
No straight line intersects a Bezier curve more times than it intersects its control polygon.
Bezier curves exhibit global control: moving a control point alters the shape of the whole curve.
A given Bezier curve can be subdivided at a point t=t0 into two Bezier segments which join together at the point
corresponding to the parameter value t=t0.
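A minimal Python sketch evaluating a Bezier curve directly from the Bernstein form above (math.comb supplies the binomial
coefficient C(n, i); the function name is our own):

from math import comb

def bezier_point(control, t):
    """Evaluate P(t) = sum_i P_i * C(n,i) * (1-t)^(n-i) * t^i."""
    n = len(control) - 1
    x = sum(px * comb(n, i) * (1 - t) ** (n - i) * t ** i
            for i, (px, _) in enumerate(control))
    y = sum(py * comb(n, i) * (1 - t) ** (n - i) * t ** i
            for i, (_, py) in enumerate(control))
    return (x, y)

pts = [(0, 0), (1, 2), (3, 2), (4, 0)]   # 4 control points -> cubic curve
print(bezier_point(pts, 0.0))            # (0.0, 0.0): passes through first point
print(bezier_point(pts, 1.0))            # (4.0, 0.0): passes through last point
print(bezier_point(pts, 0.5))            # (2.0, 1.5)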
B-Spline Curves
The Bezier-curve produced by the Bernstein basis function has limited flexibility.
First, the number of specified polygon vertices fixes the order of the resulting polynomial which defines the curve.
The second limiting characteristic is that the value of the blending function is nonzero for all parameter values over the
entire curve.
The B-spline basis contains the Bernstein basis as the special case. The B-spline basis is non-global.
A B-spline curve is defined as a linear combination of control points Pi and B-spline basis functions Ni,k(t):
C(t) = Σ (i = 0 to n) Pi·Ni,k(t),   t ∈ [t(k−1), t(n+1)),
where k is the order of the polynomial segments of the B-spline curve. Order k means that the curve is made up of piecewise
polynomial segments of degree k − 1, and
the Ni,k(t) are the “normalized B-spline blending functions”. They are described by the order k and by a non-decreasing sequence
of real numbers, the knot values:
{ti : i = 0, ..., n + k}.
The basis functions are defined recursively as
Ni,1(t) = 1 if ti ≤ t < ti+1, and 0 otherwise;
and if k > 1,
Ni,k(t) = ((t − ti) / (ti+k−1 − ti))·Ni,k−1(t) + ((ti+k − t) / (ti+k − ti+1))·Ni+1,k−1(t).
The sum of the B-spline basis functions for any parameter value is 1.
The maximum order of the curve is equal to the number of vertices of the defining polygon.
The degree of the B-spline polynomial is independent of the number of vertices of the defining polygon.
B-spline allows local control over the curve shape, because each vertex affects the shape of the curve only over a range
of parameter values where its associated basis function is nonzero.
Any affine transformation can be applied to the curve by applying it to the vertices of defining polygon.
The curve lies within the convex hull of its defining polygon.
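A minimal Python sketch of the Cox-de Boor recursion above (the uniform knot vector in the example is our own illustrative
choice):

def bspline_basis(i, k, t, knots):
    """N_{i,k}(t) via the Cox-de Boor recursion (k is the order)."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k - 1] != knots[i]:
        left = (t - knots[i]) / (knots[i + k - 1] - knots[i]) \
               * bspline_basis(i, k - 1, t, knots)
    if knots[i + k] != knots[i + 1]:
        right = (knots[i + k] - t) / (knots[i + k] - knots[i + 1]) \
                * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

# Order k = 3 (quadratic) with n + 1 = 4 control points needs n + k + 1 = 7 knots.
knots = [0, 1, 2, 3, 4, 5, 6]
vals = [bspline_basis(i, 3, 3.5, knots) for i in range(4)]
print(vals, sum(vals))  # the basis functions sum to 1 on [t_(k-1), t_(n+1))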
Given a 3D object and a viewing specification, we wish to determine which lines or surfaces of the object are visible, so that we can
display only the visible lines or surfaces. This process is known as hidden surface or hidden line elimination, or visible surface
determination. The hidden line or hidden surface algorithm determines the lines, edges, surfaces or volumes that are visible or
invisible to an observer located at a specific point in space. These algorithms are broadly classified according to whether they
deal with object definitions directly or with their projected images. These two approaches are called object-space methods (or
object-precision methods) and image-space methods, respectively. When we view a picture containing non-transparent objects and
surfaces, we cannot see those objects from our viewpoint which are behind objects closer to the eye. We must remove these hidden
surfaces to get a realistic screen image. The identification and removal of these surfaces is called the hidden-surface problem.
There are two approaches for removing hidden surface problems − Object-Space method and Image-space method.
Object-space method:- Object-space method is implemented in the physical coordinate system in which objects are described. It
compares objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should
label as visible. Object-space methods are generally used in line-display algorithms.
Image-space method:- The image-space method is implemented in the screen coordinate system in which the objects are viewed. In
an image-space algorithm, visibility is decided point by point at each pixel position on the view plane. Most hidden line/surface
algorithms use image-space methods.
Ans: Translation
It is the movement of an object from one position to another. Translation is done using translation vectors. There are three
vectors in 3D instead of two. These vectors are in the x, y, and z directions. Translation in the x-direction is represented using Tx,
the translation in the y-direction using Ty, and the translation in the z-direction using Tz.
If a point P having coordinates (x, y, z) is translated, then after translation its coordinates will be (x1, y1, z1), where Tx, Ty, and Tz
are the translation vectors in the x, y, and z directions respectively:
x1 = x + Tx
y1 = y + Ty
z1 = z + Tz
Three-dimensional transformations are performed by transforming each vertex of the object. If an object has five corners, then the
translation will be accomplished by translating all five points to new locations. Figure 1 shows the translation of a point, and
figure 2 shows the translation of a cube.
The point shown in the figure is (x, y, z). It becomes (x1, y1, z1) after translation, with Tx, Ty, Tz the translation vectors.
Scaling
Scaling is used to change the size of an object. The size can be increased or decreased. Three scaling factors are required:
Sx, Sy, and Sz.
The following steps are performed when scaling an object about a fixed point (a, b, c). It can be represented as below:
Note: If all scaling factors are equal, Sx = Sy = Sz, the scaling is called uniform. If scaling is done with different scaling factors,
it is called differential scaling.
In figure (a) the point (a, b, c) is shown, and the object whose scaling is to be done is also shown in steps in fig (b), fig (c) and fig (d).
Rotation
It is the movement of an object through an angle. The movement can be anticlockwise or clockwise. 3D rotation is more complex
than 2D rotation. For 2D we describe only the angle of rotation, but for 3D both the angle of rotation and the axis of rotation are
required. The axis can be either x, y, or z.
Reflection: It is also called a mirror image of an object. For this, an axis and a plane of reflection are selected. Three-dimensional
reflections are similar to those in two dimensions. Reflection is 180° about the given axis. For reflection, a plane is selected
(xy, xz or yz). The following matrices show reflection with respect to each of these three planes.
Shearing: It is a change in the shape of the object. It is also called deformation. The change can be in the x-direction, the
y-direction, or both directions in the case of 2D. If shear occurs in both directions, the object will be distorted. But in 3D, shear
can occur in three directions.
Ans: To obtain a display of a three-dimensional scene that has been modeled in world coordinates, we must first set up a coordinate
reference for the "camera". This coordinate reference defines the position and orientation for the plane of the camera film, which is
the plane we want to use to display a view of the objects in the scene. Object descriptions are then transferred to the camera
reference coordinates and projected onto the selected display plane. We can then display the objects in wireframe (outline) form, or
we can apply lighting and surface-rendering techniques to shade the visible surfaces.
PARALLEL PROJECTION
In a parallel projection, parallel lines in the world-coordinate scene are projected into parallel lines on the two-dimensional display
plane.
Perspective Projection
Another method for generating a view of a three-dimensional scene is to project points to the display plane along
converging paths. This causes objects farther from the viewing position to be displayed smaller than objects of the same size that are
nearer to the viewing position. In a perspective projection, parallel lines in a scene that are not parallel to the display plane are
projected into converging lines.
DEPTH CUEING
A simple method for indicating depth with wireframe displays is to vary the intensity of objects according to their distance
from the viewing position: lines closest to the viewing position are displayed with the highest intensities, and lines farther away
are displayed with decreasing intensities.
We can also clarify depth relationships in a wireframe display by identifying visible lines in some way. The simplest method is to
highlight the visible lines or to display them in a different color. Another technique, commonly used for engineering drawings, is to
display the nonvisible lines as dashed lines. Another approach is to simply remove the nonvisible lines.
Surface Rendering
Added realism is attained in displays by setting the surface intensity of objects according to the lighting conditions in the scene and
according to assigned surface characteristics. Lighting specifications include the intensity and positions of light sources and the
general background illumination required for a scene. Surface properties of objects include degree of transparency and how rough or
smooth the surfaces are to be. Procedures can then be applied to generate the correct illumination and shadow regions for the scene.
Exploded and cutaway views of such objects can then be used to show the internal structure and relationship of the object
parts.
Three-dimensional views can be obtained by reflecting a raster image from a vibrating flexible mirror. The vibrations of
the mirror are synchronized with the display of the scene on the CRT. As the mirror vibrates, the focal length varies so that
each point in the scene is projected to a position corresponding to its depth.
Stereoscopic devices present two views of a scene: one for the left eye and the other for the right eye.
Ans: 3D Transforms
Translation:
| 1 0 0 dx |
T(dx,dy,dz) = | 0 1 0 dy |
| 0 0 1 dz |
| 0 0 0 1 |
Scaling:
| sx 0 0 0 |
S(sx,sy,sz) = | 0 sy 0 0 |
| 0 0 sz 0 |
| 0 0 0 1|
Rotation:
|1 0 0 0|
Rx(A) = | 0 cos A -sin A 0 |
| 0 sin A cos A 0 |
|0 0 0 1|
| cos A 0 sin A 0 |
Ry(A) = | 0 1 0 0|
| -sin A 0 cos A 0 |
| 0 0 0 1|
| cos A -sin A 0 0 |
Rz(A) = | sin A cos A 0 0 |
| 0 0 1 0|
| 0 0 0 1|
Note on Ry: the sign of sin A appears reversed relative to Rx and Rz because, when you put the terms in matrix form, the x and z
axes appear in reversed (cyclic) order, hence the sign change.
Composing 3D transforms works the same as in 2D: write each transformation matrix in the order in which the transformation
sequence is done.
o Translations and rotations about the same axis are additive, while scalings are multiplicative.
o However, note that rotations about different axes are NOT commutative (see the sketch below).
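A quick sketch (using numpy, our own choice, and only the 3×3 rotation blocks of the matrices above) confirming both bullet
points:

import numpy as np

def rx(a):
    """Rotation about the x-axis (3x3 rotation block of Rx(A) above)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s, c]])

def ry(a):
    """Rotation about the y-axis (3x3 rotation block of Ry(A) above)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s],
                     [0, 1, 0],
                     [-s, 0, c]])

a = np.pi / 2
# Rotations about the same axis are additive ...
print(np.allclose(rx(a) @ rx(a), rx(2 * a)))      # True
# ... but rotations about different axes do not commute.
print(np.allclose(rx(a) @ ry(a), ry(a) @ rx(a)))  # False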
Q.5 What do you mean by projection? Differentiate between parallel and perspective projections.
Ans: Projection
It is the process of converting a 3D object into a 2D representation. It is also defined as mapping or transformation of the object
onto a projection plane or view plane. The view plane is the display surface.
3. Perspective projection represents objects in a three-dimensional way, whereas parallel projection is much like seeing objects
through a telescope, letting parallel light rays into the eyes, which produces visual representations without depth.
4. In perspective projection, objects that are far away appear smaller and objects that are near appear bigger; parallel projection
does not create this effect.
5. While parallel projection may be best for architectural drawings and cases wherein measurements are necessary, perspective
projection is better where a realistic view is required.
7. Types of perspective projection: one-point perspective, two-point perspective, and three-point perspective. Types of parallel
projection: orthographic and oblique.
Q.6 Write Short note on : (i) View Plane (ii) Viewing Pipeline (iii) Spline (iv) Axonometric Projection
Ans: (i) View Plane : view plane The plane onto which an object is projected in a parallel or perspective projection.
(ii) Viewing Pipeline: The viewing pipeline in 3 dimensions is almost the same as the 2D viewing pipeline. Only after the
definition of the viewing direction and orientation (i.e., of the camera) is an additional projection step done, which is the reduction
of 3D data onto a projection plane:
(iii)Spline : A spline curve is a mathematical representation for which it is easy to build an interface that will allow a user to design
and control the shape of complex curves and surfaces. The general approach is that the user enters a sequence of points, and a curve
is constructed whose shape closely follows this sequence. The points are called control points. A curve that actually passes through
each control point is called an interpolating curve; a curve that passes near to the control points but not necessarily through them is
called an approximating curve.
(iv) Axonometric Projection: The three types of axonometric projection are isometric projection, dimetric projection, and trimetric
projection, depending on the exact angle at which the view deviates from the orthogonal. Typically in axonometric drawing, as in
other types of pictorials, one axis of space is shown as the vertical.
In isometric projection, the most commonly used form of axonometric projection in engineering drawing, the direction of viewing
is such that the three axes of space appear equally foreshortened, and there is a common angle of 120° between them. As the
distortion caused by foreshortening is uniform, the proportionality between lengths is preserved, and the axes share a common scale;
this eases the ability to take measurements directly from the drawing. Another advantage is that 120° angles are easily constructed
using only a compass and straightedge.
In dimetric projection, the direction of viewing is such that two of the three axes of space appear equally foreshortened, of which
the attendant scale and angles of presentation are determined according to the angle of viewing; the scale of the third direction is
determined separately. Dimensional approximations are common in dimetric drawings
In trimetric projection, the direction of viewing is such that all of the three axes of space appear unequally foreshortened. The
scale along each of the three axes and the angles among them are determined separately as dictated by the angle of viewing.
Dimensional approximations in trimetric drawings are common, and trimetric perspective is seldom used in technical drawings.
Unit 5:
Short Answers: (2 Marks Each)
Q. 1 Define Light and Light Source.
Ans: Light is electromagnetic radiation within a certain portion of the electromagnetic spectrum. The word usually refers
to visible light, which is the portion of the spectrum that can be perceived by the human eye.
A light source is an object that emits light. The main attributes that characterize a light source are:
• Position
• Shape
• Electromagnetic spectrum of the emitted light
• Reflectance properties of its surface
Ans: Illumination models are used to generate the color of an object’s surface at a given point on that surface. The factors that
govern the illumination model determine the visual representation of that surface. Due to the relationship defined in the model
between the surface of the objects and the lights affecting it, illumination models are also called shading models or lighting
models.
Ans: Halftone is the reprographic technique that simulates continuous-tone imagery through the use of dots, varying either in size
or in spacing, thus generating a gradient-like effect. "Halftone" can also be used to refer specifically to the image that is produced
by this process.
Ans:
Ans: Color Model: Primary Colors -> Sets of colors that can be combined to make a useful range of colors
Color Gamut -> Set of all colors that we can produce from the primary colors.
Complementary Colors-> Pairs of colors which, when combined in the right proportions, produce white. For example, in the RGB
model: red & cyan, green & magenta, blue & yellow.
Ans: The RGB color model is one of the most widely used color representation methods in computer graphics. It uses a color
coordinate system with three primary colors: red (R), green (G) and blue (B).
Each primary color can take an intensity value ranging from 0 (lowest) to 1 (highest). Mixing these three primary colors at different
intensity levels produces a variety of colors. The collection of all the colors obtained by such a linear combination of red, green and
blue forms the cube shaped RGB color space.
HSV Color Model: Hue, Saturation, and Value (HSV) is a color model that is often used in place of the RGB color model in
graphics and paint programs. In using this color model, a color is specified and then white or black is added to easily make color
adjustments. HSV may also be called HSB (short for hue, saturation and brightness).
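A minimal sketch using Python's standard colorsys module (all components are floats in [0, 1]) to move between the two models:

import colorsys

h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)   # pure red
print(h, s, v)                                  # 0.0 1.0 1.0
# Halving the saturation "adds white", one of the easy HSV adjustments:
print(colorsys.hsv_to_rgb(h, s * 0.5, v))       # (1.0, 0.5, 0.5), a pink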
At each polygon vertex, we obtain a normal vector by averaging the surface normals of all polygons sharing that vertex, as shown in
fig:
Thus, for any vertex position V, we acquire the unit vertex normal with the calculation
NV = (Σk Nk) / |Σk Nk|
Once we have the vertex normals, we can determine the intensity at the vertices from a lighting model.
The following figures demonstrate the next step: interpolating intensities along the polygon edges. For each scan line, the intensities
at the intersection of the scan line with a polygon edge are linearly interpolated from the intensities at the edge endpoints. For
example: in the figure, the polygon edge with endpoint vertices at positions 1 and 2 is intersected by the scan line at point 4. A fast
method for obtaining the intensity at point 4 is to interpolate between intensities I1 and I2 using only the vertical displacement of
the scan line:
I4 = [(y4 - y2) / (y1 - y2)] I1 + [(y1 - y4) / (y1 - y2)] I2
Similarly, the intensity at the right intersection of this scan line (point 5) is interpolated from the intensity values at vertices 2 and
3. Once these bounding intensities are established for a scan line, an interior point (such as point P in the previous fig) is
interpolated from the bounding intensities at points 4 and 5 as
IP = [(x5 - xP) / (x5 - x4)] I4 + [(xP - x4) / (x5 - x4)] I5
Phong Shading: A more accurate method for rendering a polygon surface is to interpolate the normal vector and then apply the
illumination model to each surface point. This method developed by Phong Bui Tuong is called Phong Shading or normal vector
Interpolation Shading. It displays more realistic highlights on a surface and greatly reduces the Mach band effect.
A polygon surface is rendered using Phong shading by carrying out the following steps:
Interpolation of the surface normal along a polygon edge between two vertices, as shown in fig:
Incremental methods are used to evaluate normals between scan lines and along each scan line. At each pixel position along a scan
line, the illumination model is applied to determine the surface intensity at that point.
Intensity calculations using an approximated normal vector at each point along the scan line produce more accurate results than the
direct interpolation of intensities, as in Gouraud shading. The trade-off, however, is that Phong shading requires considerably more
calculations.
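A minimal Python sketch (function names are mine, not the text's) of the Gouraud interpolation equations above:

def lerp(a, b, t):
    # Linear interpolation between a and b for t in [0, 1].
    return a + (b - a) * t

def edge_intensity(y, y1, i1, y2, i2):
    # Intensity where the scan line at height y crosses the edge (y1,i1)-(y2,i2).
    return lerp(i1, i2, (y - y1) / (y2 - y1))

# Intensities at the left and right edge crossings of one scan line...
i4 = edge_intensity(3.0, 1.0, 0.2, 6.0, 0.9)
i5 = edge_intensity(3.0, 2.0, 0.4, 7.0, 1.0)
# ...then an interior pixel is interpolated between the two crossings.
print(lerp(i4, i5, 0.5))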
Ans: CMY Model: This stands for cyan-magenta-yellow and is used for hardcopy devices. In contrast to color on the monitor, the
color in printing acts subtractively, not additively. A printed color that looks red absorbs the other two components, G and B, and
reflects R. Thus its (internal) color is G+B = CYAN. Similarly R+B = MAGENTA and R+G = YELLOW. Thus the C-M-Y
coordinates are just the complements of the R-G-B coordinates:
C = 1 - R, M = 1 - G, Y = 1 - B.
If we want to print a red looking color (i.e. with R-G-B coordinates (1,0,0)) we have to use C-M-Y values of (0,1,1). Note that
mixing all three inks in practice gives a muddy dark tone rather than true black, so a separate black ink is added as a fourth
color. This is the CMYK-model. Its coordinates are obtained from those of the CMY-model by
K = min(C, M, Y), C' = C - K, M' = M - K, and Y' = Y - K.
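These relations amount to a few lines of code; a minimal Python sketch (not the text's own code):

def rgb_to_cmyk(r, g, b):
    # C = 1-R, M = 1-G, Y = 1-B, then pull out the common black component K.
    c, m, y = 1 - r, 1 - g, 1 - b
    k = min(c, m, y)
    return c - k, m - k, y - k, k

print(rgb_to_cmyk(1, 0, 0))        # red  -> (0.0, 1.0, 1.0, 0.0)
print(rgb_to_cmyk(0.5, 0.5, 0.5))  # gray -> (0.0, 0.0, 0.0, 0.5)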
HLS Model: HLS color model A color model that defines colors by the three parameters hue (H), lightness (L),
and saturation (S). It was introduced by Tektronix Inc. Hue lies on a circle, saturation increases from center to edge of this circle,
lightness goes from black to white. This model uses the same hue plane as the HSV model, but it replaces value (V) by an
extended lightness axis so that the maximum color gamut is at L=0.5 and decreases in each direction towards white (L=1) and
black (L=0). The HLS color model is represented by a double hexagonal cone, with white at the top apex and black at the bottom.
Ambient light: a constant background intensity that illuminates every surface equally. The ambient term is computed per color
channel by scaling the ambient intensity with the surface's ambient-reflection coefficient, e.g. for the green and blue channels:
Ig = Iag kag
Ib = Iab kab
Diffuse Reflection
The light that gets reflected from the object's surface in all directions when a light source illuminates its surface. It is based on
Lambert's law for light reflected off a matte surface:
I = Ia ka + Ip kd (N·L), if N·L > 0
I = Ia ka, otherwise (the diffuse term drops to 0 when the surface faces away from the light)
Specular reflection
The glare seen when a light source is mirrored on the surface of a shiny object. In the Phong model the specular contribution is
Ip ks (R·V)^n, where n is the Phong constant (specular-reflection exponent): a large n gives a small, sharp highlight, a small n a
broad one.
Illumination:
Calculate once for red, once for green and once for blue.
Computational considerations: the cosines in the model come from dot products,
cos θ = (N·L) / (||N||·||L||) and cos α = (R·V) / (||R||·||V||),
so keeping N, L, R and V normalized avoids the divisions. The diffuse term requires cos(θ) ≥ 0.
Backface culling:
If (V·N) < 0 then the polygon cannot be seen.
In/Out Coloring:
If (V·N) ≥ 0 then the object is drawn with color.out;
else the object is drawn with color.in, with (outward) normal -N.
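Putting the ambient, diffuse and specular terms together, here is a minimal Python/numpy sketch of the intensity for one color channel (function and variable names are my own, not the notes'):

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong_intensity(Ia, ka, Ip, kd, ks, n, N, L, V):
    # N: surface normal, L: direction to the light, V: direction to the viewer.
    N, L, V = normalize(N), normalize(L), normalize(V)
    I = Ia * ka                              # ambient term
    ndotl = N @ L
    if ndotl > 0:                            # the "0, otherwise" clamp
        I += Ip * kd * ndotl                 # Lambertian diffuse term
        R = 2 * ndotl * N - L                # mirror reflection of L about N
        rdotv = R @ V
        if rdotv > 0:
            I += Ip * ks * rdotv ** n        # Phong specular highlight
    return I

N = np.array([0.0, 0.0, 1.0])
L = np.array([0.0, 1.0, 1.0])
V = np.array([0.0, 0.0, 1.0])
print(phong_intensity(0.2, 0.3, 1.0, 0.7, 0.5, 20, N, L, V))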
Painters Algorithm
Sort polygons/surfaces by "distance" from the viewpoint (the from-point).
Then paint the polygons/surfaces starting with the one furthest away first.
The "distance" of a polygon can be measured at its centroid or at its nearest vertex (see the sketch below).
[Figure: triangulated open-box example, showing a paint order that works (OK) and one that fails (Not OK).]
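A minimal Python sketch of the ordering step (draw_polygon is a hypothetical stand-in for the actual rasterizer; the viewer is assumed at the origin looking along +z, so a larger z means further away):

def centroid_depth(polygon):
    # Average z of the polygon's vertices (points are (x, y, z) tuples).
    return sum(p[2] for p in polygon) / len(polygon)

def painters_algorithm(polygons, draw_polygon):
    # Furthest polygons are drawn first, nearer ones paint over them.
    for poly in sorted(polygons, key=centroid_depth, reverse=True):
        draw_polygon(poly)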
Light Source
Point light source: the direction from a surface point Q toward the light is
L = P - Q
where the light position P is given in world coordinates.
Ans: CMYK Color Model: Stands for "Cyan Magenta Yellow Black." These are the four basic colors used for printing color
images. Unlike RGB (red, green, blue), which is used for creating images on your computer screen, CMYK colors are
"subtractive." This means the colors get darker as you blend them together. Since RGB colors are used for light, not pigments, the
colors grow brighter as you blend them or increase their intensity.
Technically, adding equal amounts of pure cyan, magenta, and yellow should produce black. However, because of impurities in the
inks, true black is difficult to create by blending the colors together. This is why black (K) ink is typically included with the three
other colors. The letter "K" is used to avoid confusion with blue in RGB.
YIQ Color Model: This is used for color TV. Here Y is the luminance (the only component necessary for B&W TV). The
conversion from RGB to YIQ is given (approximately) by
Y = 0.299 R + 0.587 G + 0.114 B
I = 0.596 R - 0.275 G - 0.321 B
Q = 0.212 R - 0.523 G + 0.311 B
The eye is more sensitive to luminance than to color information, so for NTSC TV about 4 MHz of bandwidth are assigned to Y,
1.5 MHz to I and 0.6 MHz to Q.
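Python's standard colorsys module implements this conversion (with slightly different rounded coefficients); a quick sanity check that pure white maps to full luminance and zero chrominance:

import colorsys
print(colorsys.rgb_to_yiq(1.0, 1.0, 1.0))   # (1.0, 0.0, 0.0)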
Q.6 Write Short note on : (i) Color Gamut (ii) CIE chromaticity Diagram (iii) Complementary Color
Ans: (i) Color Gamut : The term colour gamut refers to the range of colours a device can reproduce; the larger or wider the gamut,
the more rich, saturated colours are available. As colour gamuts become smaller it is generally these rich saturated colours that are
the first to suffer, a phenomenon technically referred to as clipping. This clipping phenomenon is most apparent when converting
from RGB to CMYK, with many of the rich saturated colours that were available in RGB no longer being available in CMYK.
Display devices like monitors also have gamuts or ranges of colour they can reproduce. So in order to accurately preview your
images you would ideally like your display gamut to be at least as large, if not larger, than your printer’s gamut, otherwise clipping
will be occurring in your preview.
(ii) CIE Chromaticity Diagram: The negative values in the representation of colors by R-G-B values are unpleasant. Thus the
Commission Internationale de l'Éclairage (CIE) defined in 1931 another base in terms of the (virtual) primaries X, Y (the
luminous-efficiency function) and Z, which allows all visible colors to be matched as linear combinations with positive
coefficients only. The so-called chromaticity values x = X/(X+Y+Z), y = Y/(X+Y+Z) and z = Z/(X+Y+Z) normalize away the
total luminous energy. The visible chromatic values in this coordinate system form a horseshoe-shaped region, with the
spectrally pure colors on the curved boundary. Warning: brown is orange-red at very low luminance (hence it is not shown in
this diagram).
(iii) Complementary Color : Complementary colors are two colors that are on opposite sides of the color wheel. As an artist,
knowing which colors are complementary to one another can help you make good color decisions. For instance, complementaries
can make each other appear brighter, they can be mixed to create effective neutral hues, or they can be blended together for shadows.
Unit 6:
Short Answers: (2 Marks Each)
Q. 1 What is animation?
Ans: Animation refers to movement on the screen of a display device created by displaying a sequence of still images. Animation
is the technique of designing, drawing, making layouts and preparing photographic series which are integrated into multimedia
and gaming products.
2. Entertainment
4. Advertising
5. Presentation
Q. 3 What is illusion?
Ans: An illusion is a distortion of the senses, which can reveal how the human brain normally organizes and interprets sensory
stimulation. Though illusions distort our perception of reality, they are generally shared by most people.
Ans: Ray tracing is a rendering technique for generating an image by tracing the path of light as pixels in an image plane and
simulating the effects of its encounters with virtual objects.
Ans: A fractal is a never-ending pattern. Fractals are infinitely complex patterns that are self-similar across different scales. They
are created by repeating a simple process over and over in an ongoing feedback loop. Driven by recursion, fractals are images of
dynamic systems – the pictures of Chaos.
Ans:
2. ANTICIPATION
This movement prepares the audience for a major action the character is about to perform, such as, starting to run, jump or change
expression. A dancer does not just leap off the floor. A backwards motion occurs before the forward action is executed. The
backward motion is the anticipation. A comic effect can be done by not using anticipation after a series of gags that used
anticipation. Almost all real action has major or minor anticipation, such as a pitcher's wind-up or a golfer's backswing. Feature
animation is often less broad than short animation unless a scene requires it to develop a character's personality.
3. STAGING
A pose or action should clearly communicate to the audience the attitude, mood, reaction or idea of the character as it relates to the
story and continuity of the story line. The effective use of long, medium, or close-up shots, as well as camera angles, also helps in
telling the story. There is a limited amount of time in a film, so each sequence, scene and frame of film must relate to the overall
story. Do not confuse the audience with too many actions at once. Use one action clearly stated to get the idea across, unless you are
animating a scene that is to depict clutter and confusion. Staging directs the audience's attention to the story or idea being told. Care
must be taken in background design so it isn't obscuring the animation or competing with it due to excess detail behind the
animation. Background and animation should work together as a pictorial unit in a scene.
7. ARCS
All actions, with few exceptions (such as the animation of a mechanical device), follow an arc or slightly circular path. This is
especially true of the human figure and the action of animals. Arcs give animation a more natural action and better flow. Think of
natural movements in terms of a pendulum swinging. All arm movements, head turns and even eye movements are executed on
arcs.
8. SECONDARY ACTION
This action adds to and enriches the main action and adds more dimension to the character animation, supplementing and/or
reinforcing the main action. Example: A character is angrily walking toward another character. The walk is forceful, aggressive, and
forward leaning. The leg action is just short of a stomping walk. The secondary action is a few strong gestures of the arms working
with the walk. Also, the possibility of dialogue being delivered at the same time with tilts and turns of the head to accentuate the
walk and dialogue, but not so much as to distract from the walk action. All of these actions should work together in support of one
another. Think of the walk as the primary action and arm swings, head bounce and all other actions of the body as secondary or
supporting action.
9. TIMING
Expertise in timing comes best with experience and personal experimentation, using the trial and error method in refining technique.
The basics are: more drawings between poses slow and smooth the action. Fewer drawings make the action faster and crisper. A
variety of slow and fast timing within a scene adds texture and interest to the movement. Most animation is done on twos (one
drawing photographed on two frames of film) or on ones (one drawing photographed on each frame of film). Twos are used most of
the time, and ones are used during camera moves such as trucks, pans and occasionally for subtle and quick dialogue animation.
Also, there is timing in the acting of a character to establish mood, emotion, and reaction to another character or to a situation.
Studying movement of actors and performers on stage and in films is useful when animating human or animal characters. This
frame by frame examination of film footage will aid you in understanding timing for animation. This is a great way to learn from
others.
10. EXAGGERATION
Exaggeration is not extreme distortion of a drawing or extremely broad, violent action all the time. It is like a caricature of facial
features, expressions, poses, attitudes and actions. Action traced from live-action film can be accurate, but stiff and mechanical. In
feature animation, a character must move more broadly to look natural. The same is true of facial expressions, but the action should
not be as broad as in a short cartoon style. Exaggeration in a walk or an eye movement or even a head turn will give your film more
appeal. Use good taste and common sense to keep from becoming too theatrical and excessively animated.
Ans: (1) Storyboard Layout-> A storyboard is a graphic organizer that plans a narrative. Storyboards are a powerful way to
visually present information; the linear direction of the cells is perfect for storytelling, explaining a process, and showing the
passage of time. At their core, storyboards are a set of sequential drawings to tell a story. By breaking a story into linear, bite-sized
chunks, it allows the author to focus on each cell separately, without distraction.
(2) Object Definition: In this step the object to be animated is defined. The object is animated during a first period on the basis of
at least one position of the object in a first animation and of a first part of a second animation; it is then animated during a second
period on the basis of the second part of the second animation.
(3) Key frame specification-> A keyframe is a frame where we define changes in animation. Every frame is a keyframe when we
create frame by frame animation. When someone creates a 3D animation on a computer, they usually don’t specify the exact
position of any given object on every single frame. They create keyframes.
Keyframes are important frames during which an object changes its size, direction, shape or other properties. The computer then
figures out all the in-between frames and saves an extreme amount of time for the animator.
(4) In between frame-> Inbetweening is the process of creating transitional frames between two separate objects in order to show
the appearance of movement and evolution of the first object into the second object. It is a common technique used in many types of
animation. The frames between the key frames (the first and last frames of the animation) are called “inbetweens” and they help
make the illusion of fluid motion.
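A minimal Python sketch (the function name is mine) of this inbetweening for a single attribute, the 2D position of an object, linearly interpolated between two keyframes:

def inbetween(p0, p1, n_frames):
    # Yield n_frames positions from keyframe p0 to keyframe p1 (2D points).
    for f in range(n_frames):
        t = f / (n_frames - 1)
        yield (p0[0] + t * (p1[0] - p0[0]),
               p0[1] + t * (p1[1] - p0[1]))

for pos in inbetween((0, 0), (10, 5), 5):
    print(pos)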
Ans: Ray tracing is a rendering technique for generating an image by tracing the path of light as pixels in an image plane and
simulating the effects of its encounters with virtual objects. The technique is capable of producing a very high degree of visual
realism, usually higher than that of typical scanline rendering methods, but at a greater computational cost. This makes ray tracing
best suited for applications where taking a relatively long time to render a frame can be tolerated, such as in still images and film
and television visual effects, and more poorly suited for real-time applications such as video games where speed is critical. Ray
tracing is capable of simulating a wide variety of optical effects, such as reflection and refraction, scattering, and dispersion
phenomena (such as chromatic aberration).
cast a ray from the eye through the pixel
for every object in the scene:
    find intersections with the ray
    keep the intersection if it is the closest so far
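To make the inner step concrete, here is a small self-contained Python sketch (all names are mine) of a closest-hit test against spheres, solving |o + t d - c|^2 = r^2 for the ray parameter t:

import math

def ray_sphere(o, d, center, radius):
    # Return the smallest t > 0 where ray o + t*d hits the sphere, or None.
    oc = [o[i] - center[i] for i in range(3)]
    a = sum(x * x for x in d)
    b = 2 * sum(d[i] * oc[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer of the two roots
    return t if t > 0 else None

# Closest-hit loop: keep the smallest positive t over all objects.
spheres = [((0, 0, 5), 1.0), ((0, 0, 9), 2.0)]
hits = [t for s in spheres if (t := ray_sphere((0, 0, 0), (0, 0, 1), *s)) is not None]
print(min(hits))   # 4.0 -- the near surface of the first sphere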
Ans:
Ans: 1. Morphing: Morphing is an animation function used to transform an object's shape from one form to another. It is one of
the most complicated transformations. This function is commonly used in movies, cartoons, advertisements, and computer games.
The process of Morphing involves three steps:
1. In the first step, one initial image and another final image are added to the morphing application as shown in fig: the 1st and
4th objects are considered as key frames.
2. The second step involves the selection of key points on both the images for a smooth transition between two images as
shown in 2nd object.
3. In the third step, the key point of the first image transforms to a corresponding key point of the second image as shown in
3rd object of the figure.
2. Wrapping: The wrapping function is similar to the morphing function. It distorts only the initial image so that it matches the
final image, and no fade occurs in this function.
3. Tweening: Tweening is the short form of 'inbetweening.' Tweening is the process of generating intermediate frames between
the initial and final images. This function is popular in the film industry.
4. Panning: Usually panning refers to rotation of the camera in a horizontal plane. In computer graphics, panning relates to the
movement of a fixed-size window across an object in a scene. Whichever direction the fixed-size window moves, the object
appears to move in the opposite direction, as shown in fig:
If the window moves in a backward direction, then the object appears to move in the forward direction; and if the window moves
in a forward direction, then the object appears to move in a backward direction.
5. Zooming: In zooming, the window is kept fixed on an object and its size is changed; the object then also appears to change in
size. When the window is made smaller about a fixed center, the object inside the window appears enlarged. This feature is known
as Zooming In.
When we increase the size of the window about the fixed center, the object inside the window appears smaller. This feature is
known as Zooming Out.
6. Fractals: The fractal function is used to generate a complex picture by using iteration. Iteration means the repetition of a single
formula again and again with slightly different values based on the previous iteration result. These results are displayed on the
screen in the form of the display picture.
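A minimal Python sketch of this iteration idea, drawing a coarse character-based picture of the Mandelbrot set by repeating the single formula z = z*z + c:

def escape_count(c, max_iter=30):
    z = 0j
    for n in range(max_iter):
        z = z * z + c                 # the single formula, repeated
        if abs(z) > 2:                # diverged: c is outside the set
            return n
    return max_iter

# Render a coarse text "display picture" of the set.
for y in range(-10, 11):
    row = ""
    for x in range(-20, 11):
        n = escape_count(complex(x / 10, y / 10))
        row += "#" if n == 30 else " .:-=+*%"[min(n, 7)]
    print(row)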
Q.6 Write short note on : (i) space filling curves (ii) Grammar based models (iii) turtle graphics
Ans: (i) space filling curves-> a space-filling curve is a curve whose range contains the entire 2-dimensional unit square (or more
generally an n-dimensional unit hypercube). Because Giuseppe Peano (1858–1932) was the first to discover one, space-filling
curves in the 2-dimensional plane are sometimes called Peano curves, but that phrase also refers to the Peano curve, the specific
example of a space-filling curve found by Peano.
(ii) Grammar based models-> Grammar based models are primarily useful for simulating plant development. ... Particle
systems are used to simulate fire, clouds, water, etc. They are primarily useful for animation, but can also create static objects.
(iii) turtle graphics-> In computer graphics, turtle graphics are vector graphics using a relative cursor (the "turtle") upon a
Cartesian plane. Turtle graphics is a key feature of the Logo programming language.
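The two ideas combine naturally; the following Python sketch (using only the standard turtle module; the grammar rule, depth and step size are my own illustrative choices) rewrites an axiom with a grammar and has the relative cursor draw the resulting quadratic Koch curve:

import turtle

def l_system(axiom, rules, depth):
    # Repeatedly rewrite every symbol by its grammar rule (if it has one).
    for _ in range(depth):
        axiom = "".join(rules.get(ch, ch) for ch in axiom)
    return axiom

commands = l_system("F", {"F": "F+F-F-F+F"}, depth=3)

t = turtle.Turtle()
t.speed(0)
for ch in commands:
    if ch == "F":
        t.forward(5)       # move the relative cursor forward, drawing a line
    elif ch == "+":
        t.left(90)
    elif ch == "-":
        t.right(90)
turtle.done()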