
ARYA INSTITUTE OF ENGINEERING & TECHNOLOGY

Department of Computer Science Engineering & IT


MODEL PAPER SOLUTIONS
(B. Tech. V Semester 2020- 2021)
5CS4-04 Computer Graphics & Multimedia
Unit 1:
Short Answers: (2 Marks Each)
Q. 1 What are the applications of Computer Graphics?

Ans: 1. Education and training

2. Use in biology

3. Computer generated maps

4. Architect

5. Entertainment

6. Computer art

7. Presentation Graphics

8. Animation

9. Printing technology

10. Visualization

Q. 2 Define Computer Graphics.

Ans: Computer graphics are pictures and films created using computers. Usually, the term refers to computer-generated image
data created with the help of specialized graphical hardware and software. It is a vast and recently developed area of computer
science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often
abbreviated as CG, though sometimes erroneously referred to as computer-generated imagery (CGI).

Q. 3 What is difference between computer graphics and image processing?

Ans: Computer graphics traditionally refers to the process of creating images from abstract models. A computer game, for
example, might internally keep track of Mario as a large list of points, where each point has three numbers representing its (x, y, z)
coordinates. Then, given the coordinates of the camera and the direction it is facing, the computer calculates the color at each row
and column in the final image of Mario that you see on your screen.

Image processing refers to the process of starting with an existing image and refining it in some way to obtain another image. For
example, if you take a picture with your camera, you would use an image processing algorithm to try to make the colors more
vibrant, remove the blur, or increase the resolution. The output of an image processing algorithm is another image.

Q. 4 What is the difference between raster scan and random scan?

Ans: Raster scan and random scan are the mechanisms used in displays for rendering the picture of an object on the screen of the
monitor. The main difference lies in how the picture is drawn: in a raster scan, the electron beam sweeps the entire screen, one line
at a time from top to bottom, whereas in a random scan, the electron beam is guided only to those regions of the screen where the
picture actually lies.

Q.5 What do you understand by I/O Devices?


Ans: The term I/O is used to describe any program, operation or device that transfers data to or from a computer and to or from
a peripheral device. Every transfer is an output from one device and an input into another. Devices such as keyboards and mice
are input-only devices, while devices such as printers are output-only. A writable CD-ROM is both an input and an output device.

Q.6 Define Pixel and Pixel Value.

Ans: In digital imaging, a pixel or picture element is a physical point in a raster image, or the smallest addressable element in
an all points addressable display device; so it is the smallest controllable element of a picture represented on the screen.

Each of the pixels that represent an image stored inside a computer has a pixel value which describes how bright that pixel is,
and/or what color it should be. In the simplest case of binary images, the pixel value is a 1-bit number indicating either foreground
or background. For grayscale images, the pixel value is a single number that represents the brightness of the pixel. The most
common pixel format is the byte image, where this number is stored as an 8-bit integer giving a range of possible values from 0 to
255. Typically zero is taken to be black and 255 is taken to be white; values in between make up the different shades of gray.
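The 8-bit grayscale convention above can be illustrated with a short Python sketch (not part of the original answer; the helper name is illustrative):

```python
# Illustrative helper for 8-bit grayscale pixel values:
# 0 is black, 255 is white, and everything between is a shade of gray.

def shade_name(pixel_value):
    """Return a coarse description of an 8-bit grayscale pixel value."""
    if not 0 <= pixel_value <= 255:
        raise ValueError("8-bit pixel values must lie in the range 0..255")
    if pixel_value == 0:
        return "black"
    if pixel_value == 255:
        return "white"
    return "gray"

print(shade_name(0), shade_name(128), shade_name(255))
```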

Descriptive Answers: (5 to 20 Marks)

Q. 1 Explain working of CRT (Cathode Ray Tube) with suitable example.

Ans: Operation of CRT

Cathode Ray Tube (CRT) is a computer display screen used to display output as a standard composite video signal. The
working of a CRT depends on the movement of an electron beam which moves back and forth across the back of the screen. The source
of the electron beam is the electron gun; the gun is located in the narrow, cylindrical neck at the extreme rear of the CRT and
produces a stream of electrons through thermionic emission. Usually, a CRT has a fluorescent screen to display the output signal. A
simple CRT is shown below.

The operation of a CRT monitor is basically very simple. A cathode ray tube consists of one or more electron guns, possibly
internal electrostatic deflection plates, and a phosphor target. A color CRT has three electron beams, one for each primary color
(Red, Green, and Blue), as shown in the figure. The electron beam produces a tiny, bright visible spot when it strikes the
phosphor-coated screen. In every monitor device the entire front area of the tube is scanned repetitively and systematically in a
fixed pattern called a raster. An image (raster) is displayed by scanning the electron beam across the screen. The phosphor begins
to fade after a short time, so the image needs to be refreshed continuously. Thus the CRT produces images from the three primary
colors. A refresh rate of about 50 Hz is used to eliminate flicker by refreshing the screen.

Q. 2 Differentiate between beam penetration and shadow methods.

Ans:

Where Used: The Beam Penetration method is used with random scan systems to display color; the Shadow Mask method is used
with raster scan systems to display color.

Colors: Beam Penetration can display only four colors (red, green, orange, and yellow); Shadow Mask can display millions of colors.

Color Dependency: In Beam Penetration, fewer colors are available because the color depends on the speed of the electron beam;
in Shadow Mask, millions of colors are available because the color depends on the type of ray.

Cost: Beam Penetration is less expensive compared to Shadow Mask; Shadow Mask is more expensive than other methods.

Picture Quality: Picture quality is poor with the Beam Penetration method; Shadow Mask gives realism in the picture, with shadow
effect and millions of colors.

Resolution: Beam Penetration gives high resolution; Shadow Mask gives low resolution.

Criteria: In the Beam Penetration method, the displayed color depends on how far the electron beam excites the outer red layer
and then the green layer; in the Shadow Mask method there are no such criteria for producing colors. Shadow Mask is used in
computers, color TVs, etc.

Q. 3 Explain various applications of computer graphics in detail.

Ans: Application of Computer Graphics

1. Education and Training: Computer-generated models of physical, financial, and economic systems are often used as educational
aids. Models of physical systems, physiological systems, population trends, or equipment can help trainees understand the operation
of the system.

For some training applications, particular systems are designed. For example Flight Simulator.

Flight Simulator: It helps in giving training to the pilots of airplanes. These pilots spend much of their training not in a real aircraft
but on the ground at the controls of a Flight Simulator.

Advantages:

1. Fuel Saving
2. Safety
3. Ability to familiarize the training with a large number of the world's airports.

2. Use in Biology: Molecular biologist can display a picture of molecules and gain insight into their structure with the help of
computer graphics.

3. Computer-Generated Maps: Town planners and transportation engineers can use computer-generated maps which display data
useful to them in their planning work.

4. Architect: Architects can explore alternative solutions to design problems at an interactive graphics terminal. In this way, they
can test many more solutions than would be possible without the computer.

5. Presentation Graphics: Example of presentation Graphics are bar charts, line graphs, pie charts and other displays showing
relationships between multiple parameters. Presentation Graphics is commonly used to summarize

o Financial Reports
o Statistical Reports
o Mathematical Reports
o Scientific Reports
o Economic Data for research reports
o Managerial Reports
o Consumer Information Bulletins
o And other types of reports
6. Computer Art: Computer graphics are also used in the field of commercial arts. They are used to generate television and
advertising commercials.

7. Entertainment: Computer Graphics are now commonly used in making motion pictures, music videos and television shows.

8. Visualization: Computer graphics are used by scientists, engineers, medical personnel, and business analysts for the study of
large amounts of information.

9. Educational Software: Computer Graphics is used in the development of educational software for making computer-aided
instruction.

10. Printing Technology: Computer Graphics is used for printing technology and textile design.

Q. 4 Explain the functions of display processor in raster scan display. Compare the merits and demerits of raster and vector
devices.

Ans: Raster Scan: In a raster scan system, the electron beam is swept across the screen, one row at a time from top to bottom. As
the electron beam moves across each row, the beam intensity is turned on and off to create a pattern of illuminated spots.

Picture definition is stored in memory area called the Refresh Buffer or Frame Buffer. This memory area holds the set of
intensity values for all the screen points. Stored intensity values are then retrieved from the refresh buffer and “painted” on the
screen one row (scan line) at a time as shown in the following illustration.

Each screen point is referred to as a pixel (picture element) or pel. At the end of each scan line, the electron beam returns to the
left side of the screen to begin displaying the next scan line.

Random Scan (Vector Scan): In this technique, the electron beam is directed only to the part of the screen where the picture is to be
drawn rather than scanning from left to right and top to bottom as in raster scan. It is also called vector display, stroke-writing
display, or calligraphic display.

Picture definition is stored as a set of line-drawing commands in an area of memory referred to as the refresh display file. To
display a specified picture, the system cycles through the set of commands in the display file, drawing each component line in turn.
After all the line-drawing commands are processed, the system cycles back to the first line command in the list.

Random-scan displays are designed to draw all the component lines of a picture 30 to 60 times each second.

Q.5 Explain the following terms in context of display devices

(i) Resolution (ii) Flickering (iii) Interlacing (iv) Refreshing

Ans: (i) Resolution-- In computers, resolution is the number of pixels (individual points of color) contained on a display monitor,
expressed in terms of the number of pixels on the horizontal axis and the number on the vertical axis. The sharpness of the image on
a display depends on the resolution and the size of the monitor. The same pixel resolution will be sharper on a smaller monitor and
gradually lose sharpness on larger monitors because the same number of pixels is being spread out over a larger number of inches.

A given computer display system will have a maximum resolution that depends on its physical ability to focus light (in which case
the physical dot size - the dot pitch - matches the pixel size) and usually several lesser resolutions. For example, a display system
that supports a maximum resolution of 1280 by 1024 pixels may also support 1024 by 768, 800 by 600, and 640 by 480 resolutions.
Note that on a given size monitor, the maximum resolution may offer a sharper image but be spread across a space too small to read
well.

(ii) Flickering --- Flickering is the display of one image over the top of another in rapid succession. The result of this is screen
flicker, where one image can briefly be seen before another replaces it.

(iii) Interlacing-- Interlacing (also known as interleaving) is a method of encoding a bitmap image such that a person who has
partially received it sees a degraded copy of the entire image. When communicating over a slow communications link, this is often
preferable to seeing a perfectly clear copy of one part of the image, as it helps the viewer decide more quickly whether to abort or
continue the transmission. Interlacing is a form of incremental decoding, because the image can be loaded incrementally. Another
form of incremental decoding is progressive scan. In progressive scan the loaded image is decoded line by line, so instead of
becoming incrementally clearer it becomes incrementally larger. The main difference between the interlace concept in bitmaps and
in video is that even progressive bitmaps can be loaded over multiple frames. For example, an interlaced GIF is a GIF image that
seems to arrive on your display like an image coming through a slowly opening Venetian blind. A fuzzy outline of an image is
gradually replaced by seven successive waves of bit streams that fill in the missing lines until the image arrives at its full resolution.
Interlaced graphics were once widely used in web design and, before that, in the distribution of graphics files over bulletin board
systems and other low-speed communications methods. The practice is much less common today, as common broadband internet
connections allow most images to be downloaded to the user's screen nearly instantaneously, and interlacing is usually an inefficient
method of encoding images.

(iv) Refreshing ---- The refresh rate (most commonly the "vertical refresh rate" or "vertical scan rate" for cathode ray tubes) is the
number of times in a second that display hardware updates its buffer. This is distinct from the measure of frame rate in that the
refresh rate includes the repeated drawing of identical frames, while frame rate measures how often a video source can feed an entire
frame of new data to a display. For example, most movie projectors advance from one frame to the next one 24 times each second,
but each frame is illuminated two or three times before the next frame is projected, using a shutter in front of the lamp. As a result,
the movie projector runs at 24 frames per second but has a 48 or 72 Hz refresh rate. On cathode ray tube (CRT) displays, increasing
the refresh rate decreases flickering, thereby reducing eye strain. However, if a refresh rate is specified that is beyond what is
recommended for the display, damage to the display can occur. For computer programs or telemetry, the term is also applied to how
frequently a datum is updated with a new external value from another source.

Q.6 Write short Note on :

(i) Joystick (ii) Scanner (iii) Light pen (iv) Trackball

Ans: (i) Joystick: A joystick is also a pointing device which is used to move the cursor position on a monitor screen. It is a stick
having a spherical ball at both its lower and upper ends. The lower spherical ball moves in a socket. The joystick can be moved in
all four directions.

The function of a joystick is similar to that of a mouse. It is mainly used in Computer Aided Design (CAD) and playing computer
games.

(ii) Scanner : Scanner is an input device which works more like a photocopy machine. It is used when some information is
available on a paper and it is to be transferred to the hard disc of the computer for further manipulation.

Scanner captures images from the source which are then converted into the digital form that can be stored on the disc. These images
can be edited before they are printed.

(iii)Light pen : Light pen is a pointing device which is similar to a pen. It is used to select a displayed menu item or draw pictures
on the monitor screen. It consists of a photocell and an optical system placed in a small tube.

When the light pen's tip is moved over the monitor screen and the pen button is pressed, its photocell sensing element detects the
screen location and sends the corresponding signal to the CPU.
(iv) Trackball: A trackball is an input device that is mostly used in notebook or laptop computers, instead of a mouse. It is a ball
which is half inserted into the device, and by moving fingers over the ball, the pointer can be moved.

Since the whole device is not moved, a track ball requires less space than a mouse. A track ball comes in various shapes like a ball, a
button and a square.

Unit 2:
Short Answers: (2 Marks Each)
Q. 1 What is Line? Write equations for line.

Ans: An important topic of high school algebra is "the equation of a line." This means an equation in x and y whose solution set is a
line in the (x,y) plane.

The most popular form in algebra is the "slope-intercept" form

y = mx + b.

This in effect uses x as a parameter and writes y as a function of x: y = f(x) = mx+b. When x = 0, y = b and the point (0,b) is the
intersection of the line with the y-axis.

Thinking of a line as a geometrical object and not the graph of a function, it makes sense to treat x and y more evenhandedly. The
general equation for a line (normal form) is

ax + by = c,

with the stipulation that at least one of a or b is nonzero. This can easily be converted to slope-intercept form by solving for y:

y = (-a/b)x + c/b,

except for the special case b = 0, when the line is parallel to the y-axis.
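The conversion between the two forms can be checked with a small Python sketch (illustrative, not from the original answer):

```python
def to_slope_intercept(a, b, c):
    """Convert the normal form ax + by = c into (m, k) with y = m*x + k.
    Raises for b == 0, the special case of a line parallel to the y-axis."""
    if b == 0:
        raise ValueError("b = 0: the line is parallel to the y-axis")
    return (-a / b, c / b)

# 2x + 4y = 8  ->  y = -0.5x + 2
m, k = to_slope_intercept(2, 4, 8)
print(m, k)
```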

Q. 2 How does DDA differ from Bresenham's line algorithm?

Ans:

Efficiency: DDA is low; Bresenham is high.

Calculations involved: DDA's are complex; Bresenham's are simple.

Speed: DDA is comparatively slower; Bresenham is faster.

Operations used: DDA uses multiplication and division; Bresenham uses addition and subtraction.

Arithmetic computation values: DDA uses floating point; Bresenham uses integers.

Precision: DDA is low; Bresenham is high.

Cost: DDA is expensive; Bresenham is moderate or relatively cheaper.

Optimization: Not provided in DDA; provided in Bresenham.

Q. 3 Define anti aliasing.

Ans: Antialiasing is a technique used in computer graphics to remove the aliasing effect. The aliasing effect is the appearance of
jagged edges or "jaggies" in a rasterized image (an image rendered using pixels). The problem of jagged edges technically occurs
due to distortion of the image when scan conversion is done with sampling at a low frequency, which is also known as
undersampling. Aliasing occurs when real-world objects which consist of smooth, continuous curves are rasterized using pixels.

Q. 4 What is polygon? What are the different types of polygon?

Ans: A polygon is any 2-dimensional shape formed with straight lines. Triangles, quadrilaterals, pentagons, and hexagons are all
examples of polygons. The name tells you how many sides the shape has. For example, a triangle has three sides, and a
quadrilateral has four sides. So, any shape that can be drawn by connecting three straight lines is called a triangle, and any shape
that can be drawn by connecting four straight lines is called a quadrilateral.

Shape # of Sides

Triangle 3

Square 4

Rectangle 4

Quadrilateral 4

Pentagon 5

Hexagon 6

Heptagon 7

Octagon 8

Nonagon 9

Decagon 10
n-gon n sides

Q.5 Define Scan Conversion.

Ans: It is the process of representing graphics objects as a collection of pixels. The graphics objects are continuous; the pixels
used are discrete. Each pixel can be in either the on or off state.

Q.6 Define 4-connected and 8-connected Approach.

Ans:

4-Connected Polygon

In this technique 4-connected pixels are used, as shown in the figure. We set the pixels above, below, to the right,
and to the left of the current pixel, and this process continues until we find a boundary with a different color.

8-Connected Polygon

In this technique 8-connected pixels are used, as shown in the figure. We set pixels above, below, and to the right and left of
the current pixel, as in the 4-connected technique. In addition, we also set the pixels on the diagonals, so that the entire area
around the current pixel is covered. This process continues until we find a boundary with a different color.
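The 4-connected and 8-connected fills described above can be sketched as one Python routine (a minimal illustration, not from the original answer; an iterative queue is used to avoid deep recursion):

```python
from collections import deque

def flood_fill(grid, x, y, new_color, connectivity=4):
    """Fill the region of grid containing (x, y) with new_color.
    connectivity=4 uses the above/below/left/right neighbours;
    connectivity=8 adds the four diagonal neighbours."""
    old_color = grid[y][x]
    if old_color == new_color:
        return grid
    offsets = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    if connectivity == 8:
        offsets += [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    queue = deque([(x, y)])
    while queue:
        cx, cy = queue.popleft()
        if 0 <= cy < len(grid) and 0 <= cx < len(grid[0]) and grid[cy][cx] == old_color:
            grid[cy][cx] = new_color          # colour the pixel
            for dx, dy in offsets:            # then visit its neighbours
                queue.append((cx + dx, cy + dy))
    return grid
```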

Descriptive Answers: (5 to 20 Marks)


Q. 1 What steps are required to scan convert a circle using the midpoint algorithm? Also, derive the equation of the decision
variable with the help of a neat diagram.

Ans: Drawing a circle on the screen is a little more complex than drawing a line. There are two popular algorithms for generating a
circle: Bresenham's Algorithm and the Midpoint Circle Algorithm. These algorithms are based on the idea of determining the
subsequent points required to draw the circle. Let us discuss the algorithms in detail.

The equation of a circle is x^2 + y^2 = r^2, where r is the radius.

We cannot display a continuous arc on the raster display. Instead, we have to choose the nearest pixel position to complete the arc.

From the following illustration, you can see that we have put the pixel at location (X, Y) and now need to decide where to put the
next pixel: at N (X+1, Y) or at S (X+1, Y-1).

This can be decided by the decision parameter d.

If d <= 0, then N(X+1, Y) is to be chosen as next pixel.

If d > 0, then S(X+1, Y-1) is to be chosen as the next pixel.


Algorithm

Step 1 − Get the coordinates of the center of the circle and radius, and store them in x, y, and R respectively. Set P=0 and Q=R.

Step 2 − Set decision parameter D = 3 – 2R.

Step 3 − Repeat through step-8 while P ≤ Q.

Step 4 − Call Draw Circle (X, Y, P, Q).

Step 5 − Increment the value of P.

Step 6 − If D < 0 then D = D + 4P + 6.


Step 7 − Else set Q = Q - 1, D = D + 4(P-Q) + 10.

Step 8 − Call Draw Circle (X, Y, P, Q).

Draw Circle Method(X, Y, P, Q).

Call Putpixel (X + P, Y + Q).

Call Putpixel (X - P, Y + Q).

Call Putpixel (X + P, Y - Q).

Call Putpixel (X - P, Y - Q).

Call Putpixel (X + Q, Y + P).

Call Putpixel (X - Q, Y + P).

Call Putpixel (X + Q, Y - P).

Call Putpixel (X - Q, Y - P).

Mid Point Algorithm

Step 1 − Input radius r and circle center (xc, yc), and obtain the first point on the circumference of the

circle centered on the origin as

(x0, y0) = (0, r)

Step 2 − Calculate the initial value of the decision parameter as

P0 = 5/4 – r (See the following description for simplification of this equation.)

f(x, y) = x^2 + y^2 - r^2 = 0

f(xi - 1/2 + e, yi + 1)
= (xi - 1/2 + e)^2 + (yi + 1)^2 - r^2

= (xi - 1/2)^2 + (yi + 1)^2 - r^2 + 2(xi - 1/2)e + e^2

= f(xi - 1/2, yi + 1) + 2(xi - 1/2)e + e^2 = 0

Let di = f(xi - 1/2, yi + 1) = -2(xi - 1/2)e - e^2

Thus,

If e < 0 then di > 0, so choose point S = (xi - 1, yi + 1).

di+1 = f(xi - 1 - 1/2, yi + 1 + 1) = ((xi - 1/2) - 1)^2 + ((yi + 1) + 1)^2 - r^2

= di - 2(xi - 1) + 2(yi + 1) + 1

= di + 2(yi+1 - xi+1) + 1

If e >= 0 then di <= 0, so choose point T = (xi, yi + 1).

di+1 = f(xi - 1/2, yi + 1 + 1)

= di + 2yi+1 + 1

The initial value of di is

d0 = f(r - 1/2, 0 + 1) = (r - 1/2)^2 + 1^2 - r^2

= 5/4 - r {1 - r can be used if r is an integer}

When point S = (xi - 1, yi + 1) is chosen then

di+1 = di - 2xi+1 + 2yi+1 + 1

When point T = (xi, yi + 1) is chosen then

di+1 = di + 2yi+1 + 1

Step 3 − At each XK position, starting at K = 0, perform the following test:

If PK < 0 then the next point on the circle centered on (0,0) is (XK+1, YK) and

PK+1 = PK + 2XK+1 + 1

Else

PK+1 = PK + 2XK+1 + 1 – 2YK+1

where 2XK+1 = 2XK + 2 and 2YK+1 = 2YK - 2.

Step 4 − Determine the symmetry points in the other seven octants.

Step 5 − Move each calculated pixel position (X, Y) onto the circular path centered on (XC, YC) and plot the coordinate
values:

X = X + XC, Y = Y + YC

Step 6 − Repeat steps 3 through 5 until X >= Y.
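The circle-drawing steps above (the D = 3 - 2R formulation) can be sketched in Python as follows; this is an illustrative implementation, not code from the paper:

```python
def midpoint_circle(xc, yc, r):
    """Return the set of pixels on a circle of radius r centred at (xc, yc),
    using the integer decision parameter D = 3 - 2R from the algorithm above."""
    points = set()
    p, q = 0, r
    d = 3 - 2 * r
    while p <= q:
        # plot the eight symmetric points, one per octant
        for dx, dy in [(p, q), (-p, q), (p, -q), (-p, -q),
                       (q, p), (-q, p), (q, -p), (-q, -p)]:
            points.add((xc + dx, yc + dy))
        p += 1
        if d < 0:
            d += 4 * p + 6
        else:
            q -= 1
            d += 4 * (p - q) + 10
    return points

pixels = midpoint_circle(0, 0, 3)   # 16 pixels approximating a radius-3 circle
```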

Q. 2 Explain Bresenham’s Line Algorithm with example.

Ans: Bresenham’s Line Generation

This algorithm is used for scan converting a line. It was developed by Bresenham. It is an efficient method because it involves only
integer addition and subtraction. These operations can be performed very rapidly, so lines can be generated quickly.

In this method, the next pixel selected is the one which has the least distance from the true line.

The method works as follows:

Assume a pixel P1'(x1', y1'), then select subsequent pixels as we work our way to the right, one pixel position at a time in the
horizontal direction toward P2'(x2', y2').

Once a pixel is chosen at any step


The next pixel is

1. Either the one to its right (lower-bound for the line)


2. Or the one to its right and up (upper-bound for the line)

The line is best approximated by those pixels that fall the least distance from the path between P1', P2'.

To choose the next one between the bottom pixel S and top pixel T:
If S is chosen
we have xi+1 = xi + 1 and yi+1 = yi
If T is chosen
we have xi+1 = xi + 1 and yi+1 = yi + 1

The actual y coordinate of the line at x = xi + 1 is


y = m(xi + 1) + b

The distance from S to the actual line in y direction


s = y-y i

The distance from T to the actual line in y direction


t = (y i +1)-y

Now consider the difference between these 2 distance values


s -t

When (s-t) < 0 ⟹ s < t

the closest pixel is S.

When (s-t) ≥ 0 ⟹ s ≥ t

the closest pixel is T.

This difference is
s - t = (y - yi) - [(yi + 1) - y]
= 2y - 2yi - 1

Substituting m by △y/△x and introducing the decision variable


di = △x (s - t)

di = △x (2m(xi + 1) + 2b - 2yi - 1)


= 2△y·xi - 2△x·yi + 2△y + △x(2b - 1)
di = 2△y·xi - 2△x·yi + c

where c = 2△y + △x(2b - 1)

We can write the decision variable di+1 for the next step as
di+1 = 2△y·xi+1 - 2△x·yi+1 + c
di+1 - di = 2△y(xi+1 - xi) - 2△x(yi+1 - yi)

Since xi+1 = xi + 1, we have


di+1 = di + 2△y - 2△x(yi+1 - yi)

Special Cases

If the chosen pixel is the top pixel T (i.e., di ≥ 0) ⟹ yi+1 = yi + 1


di+1 = di + 2△y - 2△x

If the chosen pixel is the bottom pixel S (i.e., di < 0) ⟹ yi+1 = yi


di+1 = di + 2△y

Finally, we calculate d1:
d1 = △x[2m(x1 + 1) + 2b - 2y1 - 1]
d1 = △x[2(mx1 + b - y1) + 2m - 1]

Since mx1 + b - y1 = 0 and m = △y/△x, we have


d1 = 2△y - △x
Advantage:

1. It involves only integer arithmetic, so it is simple.

2. It avoids the generation of duplicate points.

3. It can be implemented using hardware because it does not use multiplication and division.

4. It is faster as compared to DDA (Digital Differential Analyzer) because it does not involve floating point calculations like DDA
Algorithm.

Disadvantage:

1. This algorithm is meant for basic line drawing only; initializing is not a part of Bresenham's line algorithm. So to draw smooth
lines, you should look into a different algorithm.

Bresenham's Line Algorithm:

Step1: Start Algorithm

Step2: Declare variables x1, x2, y1, y2, d, i1, i2, dx, dy

Step3: Enter the values of x1, y1, x2, y2


Where x1, y1 are the coordinates of the starting point
And x2, y2 are the coordinates of the ending point

Step4: Calculate dx = x2 - x1


Calculate dy = y2 - y1
Calculate i1 = 2*dy
Calculate i2 = 2*(dy - dx)
Calculate d = i1 - dx

Step5: Consider (x, y) as the starting point and xend as the maximum possible value of x.
If dx < 0
Then x = x2
y = y2
xend = x1
If dx > 0
Then x = x1
y = y1
xend = x2

Step6: Generate point at (x, y) coordinates.

Step7: Check if whole line is generated.


If x >= xend
Stop.

Step8: Calculate co-ordinates of the next pixel


If d < 0
Then d = d + i1
If d ≥ 0
Then d = d + i2
Increment y = y + 1

Step9: Increment x = x + 1
Step10: Draw a point of latest (x, y) coordinates

Step11: Go to step 7

Step12: End of Algorithm

Example: Starting and Ending position of the line are (1, 1) and (8, 5). Find intermediate points.

Solution: x1 = 1
y1 = 1
x2 = 8
y2 = 5
dx = x2 - x1 = 8 - 1 = 7
dy = y2 - y1 = 5 - 1 = 4
I1 = 2*∆y = 2*4 = 8
I2 = 2*(∆y - ∆x) = 2*(4 - 7) = -6
d = I1 - ∆x = 8 - 7 = 1

x y d=d+I1 or I2

1 1 d+I2 =1+(-6)=-5

2 2 d+I1 =-5+8=3

3 2 d+I2 =3+(-6)=-3

4 3 d+I1 =-3+8=5

5 3 d+I2 =5+(-6)=-1

6 4 d+I1 =-1+8=7

7 4 d+I2 =7+(-6)=1

8 5
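The algorithm and the worked example above can be checked with a short Python sketch (illustrative; it handles the gentle-slope, left-to-right case used in the example):

```python
def bresenham_line(x1, y1, x2, y2):
    """Bresenham's line for 0 <= slope <= 1, drawn left to right,
    using the d, i1, i2 formulation from the algorithm above."""
    dx, dy = x2 - x1, y2 - y1
    i1, i2 = 2 * dy, 2 * (dy - dx)
    d = i1 - dx
    points = [(x1, y1)]
    x, y = x1, y1
    while x < x2:
        x += 1
        if d >= 0:          # top pixel T: step up and add i2
            y += 1
            d += i2
        else:               # bottom pixel S: keep y and add i1
            d += i1
        points.append((x, y))
    return points

# reproduces the table above for the line from (1, 1) to (8, 5)
print(bresenham_line(1, 1, 8, 5))
```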
Q. 3 What is aliasing? Explain different types of antialiasing techniques.

Ans: In the line drawing algorithms, we have seen that all rasterized locations do not match the true line, and we have to select
the optimum raster locations to represent a straight line. This problem is severe on low resolution screens. On such screens a line
appears like a stair-step, as shown in the figure below. This effect is known as aliasing. It is dominant for lines having gentle and
sharp slopes.

The aliasing effect can be reduced by adjusting intensities of the pixels along the line. The process of adjusting intensities of the
pixels along the line to minimize the effect of aliasing is called antialiasing.

The aliasing effect can be minimized by increasing the resolution of the raster display. By increasing the resolution and making it
twice the original one, the line passes through twice as many columns of pixels and therefore has twice as many jags, but each jag
is half as large in the x and y directions.
As shown in the figure above, the line looks better at twice the resolution, but this improvement comes at the price of quadrupling
the cost of memory, memory bandwidth, and scan-conversion time. Thus increasing resolution is an expensive method for reducing
the aliasing effect.

With raster systems that are capable of displaying more than two intensity levels (color and gray scale), we can apply antialiasing
methods to modify pixel intensities. By appropriately varying the intensities of pixels along the line or object boundaries, we can
smooth the edges to lessen the stair-step or jagged appearance.

Antialiasing methods are basically classified as :-

Supersampling or Postfiltering:-

Supersampling or postfiltering is the process by which aliasing effects in graphics are reduced by increasing the frequency of the
sampling grid and then averaging the results down. This process means calculating a virtual image at a higher spatial resolution
than the frame store resolution and then averaging down to the final resolution. It is called postfiltering as the filtering is carried
out after sampling.

Supersampling is basically a three stage process:

1. A continuous image I(x, y) is sampled at n times the frame resolution. This is a virtual image.
2. The virtual image is then lowpass filtered.
3. The filtered image is then resampled at the final frame resolution.
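The averaging-down stage of supersampling can be sketched in Python for a 2x-supersampled grayscale image; this is an illustrative fragment, not code from the source:

```python
def downsample_2x(virtual_image):
    """Average each 2x2 block of a 2x-supersampled grayscale image
    (a list of equal-length rows with even dimensions) into one final pixel."""
    out = []
    for y in range(0, len(virtual_image), 2):
        row = []
        for x in range(0, len(virtual_image[0]), 2):
            block_sum = (virtual_image[y][x] + virtual_image[y][x + 1] +
                         virtual_image[y + 1][x] + virtual_image[y + 1][x + 1])
            row.append(block_sum / 4)   # box-filter average of the block
        out.append(row)
    return out
```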

Area sampling or Prefiltering:-

In this antialiasing method pixel intensity is determined by calculating the areas of overlap of each pixel with the objects to be
displayed. Antialiasing by computing area is referred to as Area sampling or Prefiltering. A modification to Bresenham's
algorithm was developed by Pitteway and Watkinson. In this algorithm, each pixel is given intensity depending on the area of
overlap of the pixel and the line. So, due to the blurring effect along the line edges, the effect of anti-aliasing is not very prominent,
although it still exists. For sampling shapes other than polygons, this can be very computationally intensive.
Q. 4 Write DDA in algorithmic form. Also explain the algorithm with the help of suitable example.

Ans: DDA stands for Digital Differential Analyzer. It is an incremental method of scan conversion of line. In this method
calculation is performed at each step but by using results of previous steps.

Suppose at step i, the pixel is (xi, yi).

The equation of the line for step i:


yi = m·xi + b ......................equation 1

The next value will be


yi+1 = m·xi+1 + b .................equation 2

m = ∆y/∆x
yi+1 - yi = ∆y .......................equation 3
xi+1 - xi = ∆x ......................equation 4
yi+1 = yi + ∆y
∆y = m·∆x
yi+1 = yi + m·∆x
∆x = ∆y/m
xi+1 = xi + ∆x
xi+1 = xi + ∆y/m

Case 1: When |m| < 1 (assume that x1 < x2)


x = x1, y = y1, set ∆x = 1
yi+1 = yi + m, x = x + 1
until x = x2

Case 2: When |m| > 1 (assume that y1 < y2)


x = x1, y = y1, set ∆y = 1

xi+1 = xi + 1/m, y = y + 1
until y = y2

Advantage:
1. It is a faster method than the method of direct use of the line equation.

2. This method does not use multiplication.

3. It allows us to detect the change in the value of x and y, so plotting of the same point twice is not possible.

4. This method gives an overflow indication when a point is repositioned.

5. It is an easy method because each step involves just two additions.

Disadvantage:

1. It involves floating point additions, and rounding off is done. Accumulation of round-off error causes accumulation of error.

2. Rounding off operations and floating point operations consume a lot of time.

3. It is more suitable for generating a line using software, but less suited for hardware implementation.

DDA Algorithm:

Step 1: Start the algorithm.

Step 2: Declare x1, y1, x2, y2, dx, dy as integers and x, y, xinc, yinc as floating-point variables.

Step 3: Enter the values of x1, y1, x2, y2.

Step 4: Calculate dx = x2 - x1

Step 5: Calculate dy = y2 - y1

Step 6: If ABS (dx) > ABS (dy)

Then step = ABS (dx)
Else step = ABS (dy)

Step 7: xinc = dx / step

yinc = dy / step
assign x = x1
assign y = y1

Step 8: Set pixel (x, y)

Step 9: x = x + xinc

y = y + yinc
Set pixel (Round (x), Round (y))

Step 10: Repeat step 9 until x = x2

Step 11: End the algorithm.

Example: If a line is drawn from (2, 3) to (6, 15) with the use of DDA, how many points will be needed to generate such a line?

Solution: P1 (2, 3), P2 (6, 15)

x1 = 2
y1 = 3
x2 = 6
y2 = 15
dx = 6 - 2 = 4
dy = 15 - 3 = 12

m = dy/dx = 12/4 = 3

Since |m| > 1, we set ∆y = 1 and for each unit step in y calculate the next value of x as x = x + 1/m = x + 1/3. The number of steps is max(|dx|, |dy|) = 12, so 12 steps (13 points, counting the starting point) are needed to generate the line.
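The steps above can be sketched in runnable Python (an illustrative implementation; the function name and the list-of-pixels representation are mine, not from the paper):

```python
# Minimal DDA line rasterizer following the steps above.
def dda_line(x1, y1, x2, y2):
    """Return the list of pixels produced by the DDA algorithm."""
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))          # step count = larger coordinate span
    xinc, yinc = dx / steps, dy / steps    # per-step increments
    x, y = float(x1), float(y1)
    points = [(round(x), round(y))]
    for _ in range(steps):
        x += xinc
        y += yinc
        points.append((round(x), round(y)))
    return points

# Worked example from the text: (2, 3) to (6, 15) needs 12 steps, i.e. 13 pixels.
pixels = dda_line(2, 3, 6, 15)
print(len(pixels))               # 13
print(pixels[0], pixels[-1])     # (2, 3) (6, 15)
```

Note how the rounding in each step is exactly the source of the accumulated round-off error listed under the disadvantages.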


Q.5 Draw ellipse using midpoint algorithm.

Ans: An ellipse is defined as the locus of a point that moves in a plane in such a manner that the ratio of its distance from a
fixed point (the focus) to its distance from a fixed straight line (the directrix) in the same plane is always constant, and this
constant is less than unity.

If the distances from any point P = (x, y) on the ellipse to the two foci are labeled d1 and d2, then the general equation of the ellipse
can be stated as d1 + d2 = constant.

Expressing the distances d1 and d2 in terms of the focal coordinates F1 and F2, we have Ax² + By² + Cxy + Dx + Ey + F = 0, where A, B,
C, D, E, and F are evaluated in terms of the focal coordinates and the dimensions of the major and minor axes of the ellipse.

Midpoint ellipse algorithm

The midpoint ellipse method is applied throughout the first quadrant in two parts. Now let us take the start position at (0,ry) and
step along the ellipse path in clockwise order throughout the first quadrant.

Ellipse function can be defined as:

f_ellipse(x, y) = ry²x² + rx²y² - rx²ry²

According to this, the following properties hold:

1. f_ellipse(x, y) < 0 means (x, y) is inside the ellipse boundary.
2. f_ellipse(x, y) > 0 means (x, y) is outside the ellipse boundary.
3. f_ellipse(x, y) = 0 means (x, y) is on the ellipse boundary.

Initial decision parameter

In region 1, the initial value of the decision parameter is obtained from the starting position (0, ry):

p1₀ = ry² - rx²ry + (1/4)rx²

When we enter region 2, the initial position is taken as the last position selected in region 1, and the initial decision parameter
in region 2 is then:

p2₀ = ry²(x₀ + 1/2)² + rx²(y₀ - 1)² - rx²ry²

ALGORITHM

1. Take the input and the ellipse centre, and obtain the first point on an ellipse centered on the origin as (x₀, y₀) = (0, ry).
2. Now calculate the initial decision parameter in region 1 as:
p1₀ = ry² - rx²ry + (1/4)rx²
3. At each xk position in region 1, perform the following task. If p1k < 0, then the next point along the ellipse centered
on (0, 0) is (xk+1, yk),
i.e. p1k+1 = p1k + 2ry²xk+1 + ry²
Otherwise, the next point is (xk+1, yk - 1),
i.e. p1k+1 = p1k + 2ry²xk+1 - 2rx²yk+1 + ry²
Repeat this step until 2ry²x >= 2rx²y.
4. Now calculate the initial value in region 2 using the last point (x₀, y₀) calculated in region 1 as:
p2₀ = ry²(x₀ + 1/2)² + rx²(y₀ - 1)² - rx²ry²
5. At each yk position in region 2, starting at k = 0, perform the following task. If p2k > 0, the next point along the ellipse
centered on (0, 0) is (xk, yk - 1),
i.e. p2k+1 = p2k - 2rx²yk+1 + rx²
Otherwise, the next point is (xk+1, yk - 1),
i.e. p2k+1 = p2k + 2ry²xk+1 - 2rx²yk+1 + rx²
Repeat until y = 0.
6. Now determine the symmetric points in the other three quadrants.
7. Plot the coordinate values as: x = x + xc, y = y + yc.
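The two-region procedure above can be sketched in Python (an illustrative implementation; plotting is replaced by collecting pixel coordinates in a set, and the helper names are mine):

```python
def midpoint_ellipse(rx, ry, xc=0, yc=0):
    """Pixel set of an ellipse via the midpoint algorithm sketched above."""
    points = set()

    def plot(x, y):
        # four-way symmetry, shifted to the centre (xc, yc)
        for sx in (1, -1):
            for sy in (1, -1):
                points.add((xc + sx * x, yc + sy * y))

    x, y = 0, ry
    # Region 1: p1_0 = ry^2 - rx^2*ry + rx^2/4
    p1 = ry * ry - rx * rx * ry + rx * rx / 4.0
    while 2 * ry * ry * x < 2 * rx * rx * y:
        plot(x, y)
        x += 1
        if p1 < 0:
            p1 += 2 * ry * ry * x + ry * ry
        else:
            y -= 1
            p1 += 2 * ry * ry * x - 2 * rx * rx * y + ry * ry
    # Region 2: p2_0 from the last region-1 point
    p2 = ry * ry * (x + 0.5) ** 2 + rx * rx * (y - 1) ** 2 - rx * rx * ry * ry
    while y >= 0:
        plot(x, y)
        y -= 1
        if p2 > 0:
            p2 += rx * rx - 2 * rx * rx * y
        else:
            x += 1
            p2 += 2 * ry * ry * x - 2 * rx * rx * y + rx * rx
    return points

pts = midpoint_ellipse(8, 6)
print((8, 0) in pts and (0, 6) in pts)   # True: axis endpoints are plotted
```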

Q.6 Explain flood fill algorithm with suitable example.

Ans: Flood Fill Algorithm:

In this method, a point or seed which is inside the region is selected. This point is called a seed point. Then a four-connected
or eight-connected approach is used to fill the region with the specified color.

The flood fill algorithm has many characteristics similar to boundary fill, but this method is more suitable for filling regions with
multi-colored boundaries. When the boundary is of many colors and the interior is to be filled with one color, we use this algorithm.
In the flood fill algorithm, we start from a specified interior point (x, y) and reassign all pixel values that are currently set to a
given interior color with the desired fill color. Using either a 4-connected or 8-connected approach, we then step through pixel
positions until all interior points have been repainted.

Disadvantages:
1. It is a very slow algorithm.
2. It may fail for large polygons.
3. Choosing the initial pixel requires knowledge of the surrounding pixels.

Algorithm:
Procedure floodfill (x, y, fill_color, old_color: integer)
{
If (getpixel (x, y) = old_color)
{
setpixel (x, y, fill_color);
floodfill (x+1, y, fill_color, old_color);
floodfill (x-1, y, fill_color, old_color);
floodfill (x, y+1, fill_color, old_color);
floodfill (x, y-1, fill_color, old_color);
}
}
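The recursive procedure above can be sketched in Python on a plain list-of-lists "image" (an assumed representation, not a specific graphics API). An explicit stack replaces the recursion, which avoids hitting the recursion limit on large regions:

```python
# 4-connected flood fill; any hashable value works as a colour.
def flood_fill(grid, x, y, fill_color):
    old_color = grid[y][x]
    if old_color == fill_color:
        return
    stack = [(x, y)]                 # explicit stack instead of recursion
    while stack:
        cx, cy = stack.pop()
        if (0 <= cy < len(grid) and 0 <= cx < len(grid[0])
                and grid[cy][cx] == old_color):
            grid[cy][cx] = fill_color
            stack.extend([(cx + 1, cy), (cx - 1, cy),
                          (cx, cy + 1), (cx, cy - 1)])

image = [[0, 0, 1],
         [1, 0, 1],
         [1, 1, 1]]
flood_fill(image, 0, 0, 7)
print(image)   # [[7, 7, 1], [1, 7, 1], [1, 1, 1]] -- the connected 0-region becomes 7
```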
Unit 3:
Short Answers: (2 Marks Each)
Q. 1 Define 2-Dimensional Transformation.

Ans: Transformation means changing some graphics into something else by applying rules. We can have various types of
transformations such as translation, scaling up or down, rotation, shearing, etc. When a transformation takes place on a 2D plane, it
is called 2D transformation.

Transformations play an important role in computer graphics to reposition the graphics on the screen and change their size or
orientation.

Q. 2 Define Line clipping and polygon clipping.

Ans: Clipping a point against a given window is very easy. Consider the following figure, where the rectangle indicates the window.
Point clipping tells us whether the given point (X, Y) is within the given window or not; this is decided using the
minimum and maximum coordinates of the window.

Line Clipping

The concept of line clipping is same as point clipping. In line clipping, we will cut the portion of line which is outside of window
and keep only the portion that is inside the window.

Polygon Clipping

A polygon can also be clipped by specifying the clipping window. The Sutherland-Hodgman polygon clipping algorithm is used for
polygon clipping. In this algorithm, all the vertices of the polygon are clipped against each edge of the clipping window.

First the polygon is clipped against the left edge of the clipping window to get new vertices of the polygon. These new vertices are
used to clip the polygon against the right edge, top edge, and bottom edge of the clipping window.

Q. 3 What is viewing transformation?

Ans: The viewing transformation is the operation that maps a perspective view of an object in world coordinates into a physical
device's display space. In general, this is a complex operation, which is best grasped by the typical computer graphics
technique of dividing it into a concatenation of simpler operations.

Q. 4 What do you understand by interior and exterior clipping?

Ans: Clipping means identifying the portions of a scene that are inside (or outside) a specified region. Examples include rendering
multiple viewports on a device, or deciding how much of a game's world the player can see.

In other words, clipping is the process of removing the graphics parts either inside or outside the given region.

Interior clipping removes the parts outside the given window, and exterior clipping removes the parts inside the given window.

Q.5 What is the difference between window and viewport?

Ans: Window

The window defines a rectangular area in world coordinates; it can be defined with the GWINDOW statement. The window can be
defined to be larger than, the same size as, or smaller than the actual range of the data values, depending on whether we want to
show all of the data or only part of the data.

Viewport

The viewport defines, in normalized coordinates, a rectangular area on the display device where the image of the data appears;
it is defined with the GPORT command. Thus we can have the graph take up the entire display device, or show it in only a portion,
say the upper-right part.

Q.6 Why are homogeneous coordinates used for transformation computation in computer graphics?

Ans: They simplify and unify the mathematics used in graphics:

 They allow you to represent translations with matrices.


 They allow you to represent the division by depth in perspective projections.
The first one is related to affine geometry. The second one is related to projective geometry.
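The first point can be illustrated with a small sketch (the helper names are mine): in homogeneous form (x, y, 1), a translation, which is not a linear map in Cartesian coordinates, becomes a single 3x3 matrix product.

```python
# Translation as a matrix product in homogeneous coordinates.
def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def translation(tx, ty):
    return [[1, 0, tx],
            [0, 1, ty],
            [0, 0, 1]]

p = [4, 5, 1]                         # the point (4, 5) in homogeneous coordinates
q = mat_vec(translation(3, -2), p)
print(q)                              # [7, 3, 1] -> the translated point (7, 3)
```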

Descriptive Answers: (5 to 20 Marks)

Q. 1 Discuss the composite transformation matrices for two successive translations and scaling.

Ans:
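The composite matrices did not survive extraction here. As a hedged sketch (the helper names matmul, T, S are mine), the two composition properties can be stated and checked numerically: two successive translations add their offsets, T(t1)·T(t2) = T(t1 + t2), and two successive scalings multiply their factors, S(s1)·S(s2) = S(s1·s2).

```python
# Verify the composite transformation properties with 3x3 homogeneous matrices.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def T(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def S(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

t_ok = matmul(T(1, 2), T(3, 4)) == T(4, 6)     # translations are additive
s_ok = matmul(S(2, 3), S(4, 5)) == S(8, 15)    # scalings are multiplicative
print(t_ok, s_ok)                              # True True
```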

Q. 2 Show that the composition of two rotations is additive by concatenating the matrix representations for R(θ1) and
R(θ2)to obtain: R(θ1). R(θ2)= R(θ1+θ2).

Ans: The rotation matrix is

R(θ) = | cos θ  -sin θ |
       | sin θ   cos θ |

Concatenating two rotations,

R(θ1)·R(θ2) = | cos θ1 cos θ2 - sin θ1 sin θ2   -(sin θ1 cos θ2 + cos θ1 sin θ2) |
              | sin θ1 cos θ2 + cos θ1 sin θ2     cos θ1 cos θ2 - sin θ1 sin θ2  |

By the angle-sum identities cos(θ1 + θ2) = cos θ1 cos θ2 - sin θ1 sin θ2 and sin(θ1 + θ2) = sin θ1 cos θ2 + cos θ1 sin θ2, this matrix is exactly R(θ1 + θ2). Hence R(θ1)·R(θ2) = R(θ1 + θ2), i.e. the composition of two rotations is additive.
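The additivity R(θ1)·R(θ2) = R(θ1 + θ2) can also be cross-checked numerically (an illustrative sketch, not part of the original answer; R follows the standard 2x2 rotation matrix [[cos θ, -sin θ], [sin θ, cos θ]]):

```python
import math

def R(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t1, t2 = 0.3, 1.1
prod, direct = matmul2(R(t1), R(t2)), R(t1 + t2)
ok = all(abs(prod[i][j] - direct[i][j]) < 1e-12
         for i in range(2) for j in range(2))
print(ok)   # True
```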

Q. 3 Explain with suitable example Cohen Sutherland Line Clipping Technique. Write this technique in Algorithmic form
also.

Ans: Cohen-Sutherland Line Clipping

The Cohen-Sutherland line clipping algorithm quickly detects and dispenses with two common and trivial cases. To clip a line, we
need to consider only its endpoints. If both endpoints of a line lie inside the window, the entire line lies inside the window. It
is trivially accepted and needs no clipping. On the other hand, if both endpoints of a line lie entirely to one side of the window, the
line must lie entirely outside of the window. It is trivially rejected and needs to be neither clipped nor displayed.

Inside-Outside Window Codes

To determine whether endpoints are inside or outside a window, the algorithm sets up a half-space code for each endpoint. Each
edge of the window defines an infinite line that divides the whole space into two half-spaces, the inside half-space and the outside
half-space, as shown below.
As you proceed around the window, extending each edge and defining an inside half-space and an outside half-space, nine regions
are created - the eight "outside" regions and the one "inside" region. Each of the nine regions associated with the window is assigned
a 4-bit code to identify the region. Each bit in the code is set to either a 1(true) or a 0(false). If the region is to the left of the
window, the first bit of the code is set to 1. If the region is to the top of the window, the second bit of the code is set to 1. If to
the right, the third bit is set, and if to the bottom, the fourth bit is set. The 4 bits in the code then identify each of the nine regions
as shown below.

For any endpoint (x, y) of a line, a code can be determined that identifies the region in which the endpoint lies. The code's bits are set
according to the following conditions:

The sequence for reading the codes' bits is LRBT (Left, Right, Bottom, Top).

Once the codes for each endpoint of a line are determined, the logical AND operation of the codes determines if the line is
completely outside of the window. If the logical AND of the endpoint codes is not zero, the line can be trivially rejected. For
example, if an endpoint had a code of 1001 while the other endpoint had a code of 1010, the logical AND would be 1000, which
indicates the line segment lies outside of the window. On the other hand, if the endpoints had codes of 1001 and 0110, the logical
AND would be 0000, and the line could not be trivially rejected.

The logical OR of the endpoint codes determines if the line is completely inside the window. If the logical OR is zero, the line can
be trivially accepted. For example, if the endpoint codes are 0000 and 0000, the logical OR is 0000 - the line can be trivially accepted.
If the endpoint codes are 0000 and 0110, the logical OR is 0110 and the line cannot be trivially accepted.

Algorithm

The Cohen-Sutherland algorithm uses a divide-and-conquer strategy. The line segment's endpoints are tested to see if the line can be
trivially accepted or rejected. If the line cannot be trivially accepted or rejected, an intersection of the line with a window edge is
determined and the trivial reject/accept test is repeated. This process is continued until the line is accepted.

To perform the trivial acceptance and rejection tests, we extend the edges of the window to divide the plane of the window into the
nine regions. Each endpoint of the line segment is then assigned the code of the region in which it lies.

1. Given a line segment with endpoints P1 = (x1, y1) and P2 = (x2, y2):


2. Compute the 4-bit codes for each endpoint.

If both codes are 0000 (the bitwise OR of the codes yields 0000), the line lies completely inside the window: pass the endpoints to
the draw routine.

If both codes have a 1 in the same bit position (bitwise AND of the codes is not 0000), the line lies outside the window. It
can be trivially rejected.

3. If a line cannot be trivially accepted or rejected, at least one of the two endpoints must lie outside the window and the line
segment crosses a window edge. This line must be clipped at the window edge before being passed to the drawing routine.
4. Examine one of the endpoints that lies outside the window, say P1. Read P1's 4-bit code in order: Left-to-Right, Bottom-to-Top.
5. When a set bit (1) is found, compute the intersection I of the corresponding window edge with the line from P1 to P2.
Replace P1 with I and repeat the algorithm.

Before Clipping

1. Consider the line segment AD.

Point A has an outcode of 0000 and point D has an outcode of 1001. The logical AND of these outcodes is zero; therefore,
the line cannot be trivially rejected. Also, the logical OR of the outcodes is not zero; therefore, the line cannot be trivially
accepted. The algorithm then chooses D as the outside point (its outcode contains 1's). By our testing order, we first use the
top edge to clip AD at B. The algorithm then recomputes B's outcode as 0000. With the next iteration of the
algorithm, AB is tested and is trivially accepted and displayed.
2. Consider the line segment EI

Point E has an outcode of 0100, while point I's outcode is 1010. The results of the trivial tests show that the line can neither
be trivially rejected nor accepted. Point E is determined to be an outside point, so the algorithm clips the line against the
bottom edge of the window. Now line EI has been clipped to line FI. Line FI is tested and cannot be trivially accepted
or rejected. Point F has an outcode of 0000, so the algorithm chooses point I as an outside point since its outcode is 1010.
The line FI is clipped against the window's top edge, yielding a new line FH. Line FH cannot be trivially accepted or
rejected. Since H's outcode is 0010, the next iteration of the algorithm clips against the window's right edge, yielding
line FG. The next iteration of the algorithm tests FG, and it is trivially accepted and displayed.

After Clipping

After clipping the segments AD and EI, the result is that only the line segments AB and FG can be seen in the window.
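The outcode computation and the trivial accept/reject tests above can be sketched as follows (an illustrative implementation; the bit values assigned to left/right/bottom/top are an assumption, since any consistent assignment works):

```python
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8   # assumed bit assignment

def outcode(x, y, xmin, ymin, xmax, ymax):
    code = 0
    if x < xmin: code |= LEFT
    elif x > xmax: code |= RIGHT
    if y < ymin: code |= BOTTOM
    elif y > ymax: code |= TOP
    return code

def classify(p1, p2, win):
    c1 = outcode(*p1, *win)
    c2 = outcode(*p2, *win)
    if c1 | c2 == 0:
        return "trivial accept"        # both endpoints inside
    if c1 & c2 != 0:
        return "trivial reject"        # both on the same outside side
    return "needs clipping"

win = (0, 0, 10, 10)                      # xmin, ymin, xmax, ymax
print(classify((2, 2), (8, 8), win))      # trivial accept
print(classify((-5, 12), (-1, 15), win))  # trivial reject
print(classify((-5, 5), (5, 5), win))     # needs clipping
```

The "needs clipping" case is where the algorithm would compute edge intersections and repeat, as described in steps 4 and 5.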

Q. 4 Explain flood fill algorithm and boundary fill algorithm.

Ans: Boundary-fill Algorithm

This is an area-filling algorithm. It is used where we have to do interactive painting in computer graphics, where interior
points are easily selected. If we have a specified boundary in a single color, then the fill algorithm proceeds pixel by pixel until the
boundary color is encountered. This method is called the boundary-fill algorithm.

In this, generally two methods are given that are:

1. 4-connected:
First an interior pixel inside the boundary is selected; then, with reference to that pixel, the adjacent pixels are filled:
top, bottom, left and right.
2. 8-connected:
This is the best way of filling the color correctly in the interior of the area defined, and it is used to fill more complex figures. Here
the four diagonal pixels are also included along with the reference interior pixel (in addition to the top, bottom, left and right pixels).

Problem with boundary fill

1. It may not fill regions correctly if some interior pixels are already displayed in the fill color.
2. In 4-connected mode there is a problem: sometimes it does not fill a corner pixel, as it checks only the adjacent positions of the
given pixel.

Algorithm for boundary fill (4-connected)


Boundary fill (x, y, fill, boundary)
1) Initialize the boundary color of the region, and the variable fill with the fill color.
2) Let (x, y) be the interior pixel. Take a variable called current and assign it the pixel value:
current = getpixel(x, y)
3) If current is not equal to boundary and current is not equal
to fill, then set pixel (x, y, fill) and call
boundary fill 4(x+1, y, fill, boundary)
boundary fill 4(x-1, y, fill, boundary)
boundary fill 4(x, y+1, fill, boundary)
boundary fill 4(x, y-1, fill, boundary)

4) End.
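The 4-connected boundary fill above can be sketched in runnable Python on a plain list-of-lists "image" with integer colours (an assumed representation, not a specific graphics API):

```python
# 4-connected boundary fill: recurse until the boundary colour is hit.
def boundary_fill4(grid, x, y, fill, boundary):
    if not (0 <= y < len(grid) and 0 <= x < len(grid[0])):
        return
    current = grid[y][x]
    if current != boundary and current != fill:
        grid[y][x] = fill
        boundary_fill4(grid, x + 1, y, fill, boundary)
        boundary_fill4(grid, x - 1, y, fill, boundary)
        boundary_fill4(grid, x, y + 1, fill, boundary)
        boundary_fill4(grid, x, y - 1, fill, boundary)

# A 0-region enclosed by boundary colour 9:
img = [[9, 9, 9, 9],
       [9, 0, 0, 9],
       [9, 0, 0, 9],
       [9, 9, 9, 9]]
boundary_fill4(img, 1, 1, fill=5, boundary=9)
print(img[1][1], img[2][2])   # 5 5
```

Note the single difference from flood fill: the recursion stops at the boundary colour rather than at anything other than the old interior colour.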
Flood-fill Algorithm

By this algorithm, we can recolor an area that is not defined within a single color boundary. In this, we can paint such areas by
replacing a color instead of searching for a boundary color value. This whole approach is termed as flood fill algorithm. This
procedure can also be used to reduce the storage requirement of the stack by filling pixel spans.

Algorithm for Flood fill algorithm


floodfill4 (x, y, fillcolor, oldcolor: integer)

begin
if getpixel (x, y) = oldcolor then
begin
setpixel (x, y, fillcolor)
floodfill4 (x+1, y, fillcolor, oldcolor)
floodfill4 (x-1, y, fillcolor, oldcolor)
floodfill4 (x, y+1, fillcolor, oldcolor)
floodfill4 (x, y-1, fillcolor, oldcolor)
end
end.

Q.5 Explain Polygon Clipping Methods.

Ans: Polygon Clipping

A set of connected lines is considered a polygon; polygons are clipped against the window: the portion inside the
window is kept as it is, and the outside portions are clipped away. Polygon clipping has to deal with several cases. Usually a polygon
is clipped against the four edges of the boundary of the clip rectangle. The clip boundary determines the visible and invisible regions
of the polygon, and each polygon edge falls into one of four cases:
• Edge is wholly inside the clip window – save the endpoint
• Edge exits the clip window – save the intersection
• Edge is wholly outside the clip window – nothing to save
• Edge enters the clip window – save the endpoint and the intersection
A convex polygon and a convex clipping area are given. The task is to clip the polygon edges using the Sutherland-Hodgman
algorithm. Input is in the form of vertices of the polygon in clockwise order.
Examples:
Input : Polygon : (100,150), (200,250), (300,200)

Clipping Area : (150,150), (150,200), (200,200),

(200,150) i.e. a Square

Output : (150, 162) (150, 200) (200, 200) (200, 174)

Example 2
Input : Polygon : (100,150), (200,250), (300,200)
Clipping Area : (100,300), (300,300), (200,100)
Output : (242, 185) (166, 166) (150, 200) (200, 250) (260, 220)
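Example 1 can be reproduced with a sketch of Sutherland-Hodgman restricted to an axis-aligned clip rectangle (an illustrative simplification of the general convex clip polygon; helper names are mine). With floating-point arithmetic the output is (150, 162.5) and (200, 175), which matches the integer output above up to rounding; the (200, 174) in the text presumably comes from integer truncation.

```python
# Sutherland-Hodgman: clip the polygon against each window edge in turn.
def clip_polygon(poly, xmin, ymin, xmax, ymax):
    def clip_edge(pts, inside, intersect):
        out = []
        for i, cur in enumerate(pts):
            prev = pts[i - 1]
            if inside(cur):
                if not inside(prev):
                    out.append(intersect(prev, cur))  # entering: save intersection
                out.append(cur)                       # inside: save endpoint
            elif inside(prev):
                out.append(intersect(prev, cur))      # exiting: save intersection
        return out

    def x_cut(x0):
        def f(p, q):
            t = (x0 - p[0]) / (q[0] - p[0])
            return (x0, p[1] + t * (q[1] - p[1]))
        return f

    def y_cut(y0):
        def f(p, q):
            t = (y0 - p[1]) / (q[1] - p[1])
            return (p[0] + t * (q[0] - p[0]), y0)
        return f

    poly = clip_edge(poly, lambda p: p[0] >= xmin, x_cut(xmin))
    poly = clip_edge(poly, lambda p: p[0] <= xmax, x_cut(xmax))
    poly = clip_edge(poly, lambda p: p[1] >= ymin, y_cut(ymin))
    poly = clip_edge(poly, lambda p: p[1] <= ymax, y_cut(ymax))
    return poly

# Example 1 from the text:
result = clip_polygon([(100, 150), (200, 250), (300, 200)], 150, 150, 200, 200)
rounded = sorted({(round(x), round(y)) for x, y in result})
print(rounded)   # [(150, 162), (150, 200), (200, 175), (200, 200)]
```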

Q.6 Explain Liang-Barsky line clipping algorithm with example.

Ans: Liang-Barsky Algorithm

The Liang-Barsky algorithm is a line clipping algorithm. This algorithm is more efficient than Cohen –Sutherland line clipping
algorithm and can be extended to 3-Dimensional clipping. This algorithm is considered to be the faster parametric line-clipping
algorithm. The following concepts are used in this clipping:
1. The parametric equation of the line.
2. The inequalities describing the range of the clipping window which is used to determine the intersections between the line
and the clip window.
The parametric equation of a line can be given by,

x = x1 + t(x2 - x1)
y = y1 + t(y2 - y1)
where t is between 0 and 1.

Then, writing the point-clipping conditions in the parametric form:

xwmin <= x1 + t(x2 - x1) <= xwmax

ywmin <= y1 + t(y2 - y1) <= ywmax
The above 4 inequalities can be expressed as,

t·pk <= qk
where k = 1, 2, 3, 4 (corresponding to the left, right, bottom, and top boundaries, respectively).

The p and q are defined as:

p1 = -(x2 - x1), q1 = x1 - xwmin (Left Boundary)
p2 = (x2 - x1), q2 = xwmax - x1 (Right Boundary)
p3 = -(y2 - y1), q3 = y1 - ywmin (Bottom Boundary)
p4 = (y2 - y1), q4 = ywmax - y1 (Top Boundary)

When the line is parallel to a view window boundary, the p value for that boundary is zero.
When pk < 0, as t increases the line goes from the outside to the inside (entering).
When pk > 0, the line goes from the inside to the outside (exiting).
When pk = 0 and qk < 0, the line is trivially invisible because it is outside the view window.
When pk = 0 and qk > 0, the line is inside the corresponding window boundary.
Using the following conditions, the position of line can be determined:

CONDITION                POSITION OF LINE

pk = 0                   parallel to the clipping boundaries

pk = 0 and qk < 0        completely outside the boundary

pk = 0 and qk >= 0       inside the parallel clipping boundary

pk < 0                   line proceeds from outside to inside

pk > 0                   line proceeds from inside to outside


Parameters t1 and t2 can be calculated; they define the part of the line that lies within the clip rectangle.
When
1. pk < 0, maximum(0, qk/pk) is taken.
2. pk > 0, minimum(1, qk/pk) is taken.
If t1 > t2, the line is completely outside the clip window and can be rejected. Otherwise, the endpoints of the clipped line are
calculated from the two values of the parameter t.
Algorithm –
1. Set tmin = 0, tmax = 1.
2. Calculate the values of t (t(left), t(right), t(top), t(bottom)):
(i) If t < tmin, ignore it and move to the next edge;
(ii) otherwise, separate the t values into entering or exiting values using the sign of pk;
(iii) if t is an entering value, set tmin = t; if t is an exiting value, set tmax = t.
3. If tmin < tmax, draw a line from (x1 + tmin(x2 - x1), y1 + tmin(y2 - y1)) to (x1 + tmax(x2 - x1), y1 + tmax(y2 - y1)).
4. If the line crosses over the window, (x1 + tmin(x2 - x1), y1 + tmin(y2 - y1)) and (x1 + tmax(x2 - x1), y1 + tmax(y2 - y1)) are the
intersection points of the line and the window edges.
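The p/q formulation above can be sketched in runnable Python (an illustrative implementation; the function name is mine):

```python
# Liang-Barsky: clip a segment against the rectangle [xmin, xmax] x [ymin, ymax].
def liang_barsky(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    dx, dy = x2 - x1, y2 - y1
    p = [-dx, dx, -dy, dy]                     # left, right, bottom, top
    q = [x1 - xmin, xmax - x1, y1 - ymin, ymax - y1]
    t_min, t_max = 0.0, 1.0
    for pk, qk in zip(p, q):
        if pk == 0:                            # parallel to this boundary
            if qk < 0:
                return None                    # outside: trivially invisible
        else:
            t = qk / pk
            if pk < 0:                         # entering: raise t_min
                t_min = max(t_min, t)
            else:                              # exiting: lower t_max
                t_max = min(t_max, t)
    if t_min > t_max:
        return None                            # completely outside
    return ((x1 + t_min * dx, y1 + t_min * dy),
            (x1 + t_max * dx, y1 + t_max * dy))

a = liang_barsky(-5, 5, 15, 5, 0, 0, 10, 10)   # crosses the window
b = liang_barsky(20, 20, 30, 30, 0, 0, 10, 10) # entirely outside
print(a)   # ((0.0, 5.0), (10.0, 5.0))
print(b)   # None
```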
Unit 4:
Short Answers: (2 Marks Each)
Q. 1 Define 3D Transformation.

Ans: 3D graphics components are now a part of almost every personal computer and, although traditionally intended for graphics-
intensive software such as games, they are increasingly being used by other applications.

Geometric transformations play a vital role in generating images of three-dimensional objects. With the help of these
transformations, the location of objects relative to others can be easily expressed. Sometimes the viewpoint changes rapidly, or
objects move in relation to each other; for this, a number of transformations can be carried out repeatedly.

Q. 2 What are the advantages of homogeneous coordinates?

Ans: (a) Simpler formulas,

(b) Fewer special cases,

(c) Unification and extension of concepts,

(d) Duality

Q. 3 Define parallel and perspective projection?

Ans:

Perspective projection:
1. If the COP (Centre of Projection) is located at a finite point in 3-space, the result is a perspective projection.
2. Perspective projection represents or draws objects in a way that resembles the real thing.

Parallel projection:
1. If the COP is located at infinity, all the projectors are parallel and the result is a parallel projection.
2. Parallel projection is used for drawing objects when perspective projection cannot be used.

Q. 4 Write properties of Bezier Curve.

Ans: Properties of Bezier Curves

Bezier curves have the following properties −


 They generally follow the shape of the control polygon, which consists of the segments joining the control points.

 They always pass through the first and last control points.

 They are contained in the convex hull of their defining control points.

 The degree of the polynomial defining the curve segment is one less than the number of defining polygon points. Therefore,
for 4 control points, the degree of the polynomial is 3, i.e. a cubic polynomial.

 A Bezier curve generally follows the shape of the defining polygon.

 The direction of the tangent vector at the end points is same as that of the vector determined by first and last segments.

 The convex hull property for a Bezier curve ensures that the polynomial smoothly follows the control points.

 No straight line intersects a Bezier curve more times than it intersects its control polygon.

 They are invariant under an affine transformation.

 Bezier curves exhibit global control: moving a control point alters the shape of the whole curve.

 A given Bezier curve can be subdivided at a point t=t0 into two Bezier segments which join together at the point
corresponding to the parameter value t=t0.

Q.5 What is curve interpolation?

Ans: Curve interpolation is a method of constructing new data points within the range of a discrete set of known data points.

Q.6 Define interpolation spline and approximation spline.

Ans: Interpolation - all points of the basic figure are located on the created figure, which is called an interpolation curve segment.

Approximation - all points of the basic figure need not be located on the created figure, which is called an approximation curve segment.

Descriptive Answers: (5 to 20 Marks)

Q. 1 Differentiate B-Spline with Bezier curves. Also differentiate between image space methods and object space method of
visible surface detection.

Ans: Bezier Curves

The Bezier curve was developed by the French engineer Pierre Bézier. These curves can be generated under the control of other points:
approximate tangents through the control points are used to generate the curve. The Bezier curve can be represented mathematically as

C(t) = Σ (i = 0 to n) Pi Bi,n(t)

where Pi is the set of control points and Bi,n(t) represents the Bernstein polynomials, which are given by

Bi,n(t) = C(n, i) (1 - t)^(n-i) t^i

where n is the polynomial degree, i is the index, t is the variable, and C(n, i) is the binomial coefficient.

The simplest Bezier curve is the straight line from the point P0 to P1. A quadratic Bezier curve is determined by three control
points. A cubic Bezier curve is determined by four control points.
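The Bernstein form above can be evaluated directly (an illustrative sketch; math.comb gives the binomial coefficient C(n, i)):

```python
from math import comb

def bezier_point(control_points, t):
    """Evaluate the Bezier curve with the given control points at parameter t."""
    n = len(control_points) - 1
    x = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * px
            for i, (px, _) in enumerate(control_points))
    y = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * py
            for i, (_, py) in enumerate(control_points))
    return (x, y)

ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]   # cubic: four control points
print(bezier_point(ctrl, 0))      # (0, 0)  -> passes through first control point
print(bezier_point(ctrl, 1))      # (4, 0)  -> passes through last control point
print(bezier_point(ctrl, 0.5))    # (2.0, 1.5)
```

The endpoint outputs illustrate the property that the curve always passes through the first and last control points.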

Properties of Bezier Curves

Bezier curves have the following properties −

They generally follow the shape of the control polygon, which consists of the segments joining the control points.

They always pass through the first and last control points.

They are contained in the convex hull of their defining control points.

The degree of the polynomial defining the curve segment is one less than the number of defining polygon points. Therefore,
for 4 control points, the degree of the polynomial is 3, i.e. a cubic polynomial.

A Bezier curve generally follows the shape of the defining polygon.

The direction of the tangent vector at the end points is same as that of the vector determined by first and last segments.

The convex hull property for a Bezier curve ensures that the polynomial smoothly follows the control points.

No straight line intersects a Bezier curve more times than it intersects its control polygon.

They are invariant under an affine transformation.

Bezier curves exhibit global control: moving a control point alters the shape of the whole curve.

A given Bezier curve can be subdivided at a point t=t0 into two Bezier segments which join together at the point
corresponding to the parameter value t=t0.

B-Spline Curves

The Bezier-curve produced by the Bernstein basis function has limited flexibility.
First, the number of specified polygon vertices fixes the order of the resulting polynomial which defines the curve.
The second limiting characteristic is that the value of the blending function is nonzero for all parameter values over the
entire curve.
The B-spline basis contains the Bernstein basis as the special case. The B-spline basis is non-global.

A B-spline curve is defined as a linear combination of control points Pi and B-spline basis functions Ni,k(t):

C(t) = Σ (i = 0 to n) Pi Ni,k(t),   n >= k - 1,   t ∈ [tk-1, tn+1]

Where,

{Pi : i = 0, 1, 2, ..., n} are the control points,

k is the order of the polynomial segments of the B-spline curve. Order k means that the curve is made up of piecewise
polynomial segments of degree k - 1,

the Ni,k(t) are the "normalized B-spline blending functions". They are described by the order k and

by a non-decreasing sequence of real numbers normally called the "knot sequence":

ti : i = 0, ..., n + k

The Ni,k functions are described as follows:

Ni,1(t) = 1 if t ∈ [ti, ti+1), and 0 otherwise,

and if k > 1,

Ni,k(t) = ((t - ti) / (ti+k-1 - ti)) · Ni,k-1(t) + ((ti+k - t) / (ti+k - ti+1)) · Ni+1,k-1(t)

with

t ∈ [tk-1, tn+1)
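The recursion above (the Cox-de Boor recursion) can be sketched in Python; the 0/0 convention is handled by skipping zero-denominator terms (an illustrative implementation with names of my choosing):

```python
def bspline_basis(i, k, t, knots):
    """N_{i,k}(t) for order k (degree k-1) over the given knot sequence."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    result = 0.0
    d1 = knots[i + k - 1] - knots[i]
    if d1 > 0:
        result += (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k] - knots[i + 1]
    if d2 > 0:
        result += (knots[i + k] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return result

# Uniform knots, order k = 3 (quadratic), n + 1 = 4 control points.
knots = [0, 1, 2, 3, 4, 5, 6]
t = 2.5
total = sum(bspline_basis(i, 3, t, knots) for i in range(4))
print(total)   # 1.0 -- the basis functions sum to 1 inside [t_{k-1}, t_{n+1}]
```

The printed sum illustrates the first property listed below: for any parameter value in the valid range, the B-spline basis functions sum to 1.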

Properties of B-spline Curve

B-spline curves have the following properties −

The sum of the B-spline basis functions for any parameter value is 1.

Each basis function is positive or zero for all parameter values.


Each basis function has precisely one maximum value, except for k = 1.

The maximum order of the curve is equal to the number of vertices of the defining polygon.

The degree of the B-spline polynomial is independent of the number of vertices of the defining polygon.

B-spline allows the local control over the curve surface because each vertex affects the shape of a curve only over a range
of parameter values where its associated basis function is nonzero.

The curve exhibits the variation diminishing property.

The curve generally follows the shape of defining polygon.

Any affine transformation can be applied to the curve by applying it to the vertices of defining polygon.

The curve lies within the convex hull of its defining polygon.

Given a 3D object and a viewing specification, we wish to determine which lines or surfaces of the object are visible, so that we can
display only the visible lines or surfaces. This process is known as hidden-surface or hidden-line elimination, or visible-surface
determination. The hidden-line or hidden-surface algorithm determines the lines, edges, surfaces or volumes that are visible or
invisible to an observer located at a specific point in space. These algorithms are broadly classified according to whether they
deal with object definitions directly or with their projected images. These two approaches are called object-space methods (or
object-precision methods) and image-space methods, respectively. When we view a picture containing non-transparent objects and
surfaces, we cannot see objects that lie behind objects closer to the eye. We must remove these hidden surfaces to get a realistic
screen image. The identification and removal of these surfaces is called the hidden-surface problem.

There are two approaches for removing hidden surface problems − Object-Space method and Image-space method.

Object-space method:- Object-space method is implemented in the physical coordinate system in which objects are described. It
compares objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should
label as visible. Object-space methods are generally used in line-display algorithms.

Image-space method:- The image-space method is implemented in the screen coordinate system in which the objects are viewed. In
an image-space algorithm, visibility is decided point by point at each pixel position on the view plane. Most hidden line/surface
algorithms use image-space methods.

Q. 2 Explain 3D Transformation with suitable example.

Ans: Translation

It is the movement of an object from one position to another. Translation is done using translation vectors; there are three
vectors in 3D instead of two, in the x, y, and z directions. Translation in the x-direction is represented by Tx, in the
y-direction by Ty, and in the z-direction by Tz.
If a point P having coordinates (x, y, z) is translated, then after translation its coordinates will be (x1, y1, z1), where
Tx, Ty, Tz are the translation vectors in the x, y, and z directions respectively:

x1 = x + Tx
y1 = y + Ty
z1 = z + Tz

Three-dimensional transformations are performed by transforming each vertex of the object. If an object has five corners, then the
translation is accomplished by translating all five points to new locations. Figure 1 shows the translation of a point;
figure 2 shows the translation of the cube.

Matrix for translation


Matrix representation of point translation

The point shown in the figure is (x, y, z). It becomes (x1, y1, z1) after translation. Tx, Ty, and Tz are the translation vectors.
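In homogeneous coordinates the translation above becomes a single 4x4 matrix multiply; a minimal NumPy sketch (the sample point and translation vector are my own values):

```python
import numpy as np

def translation_matrix(tx, ty, tz):
    """Homogeneous 4x4 translation matrix."""
    return np.array([[1, 0, 0, tx],
                     [0, 1, 0, ty],
                     [0, 0, 1, tz],
                     [0, 0, 0, 1]], dtype=float)

# Translate the point (1, 2, 3) by (5, -1, 2).
p = np.array([1, 2, 3, 1])            # homogeneous coordinates: (x, y, z, 1)
p1 = translation_matrix(5, -1, 2) @ p
print(p1[:3])                         # [6. 1. 5.]
```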

Scaling

Scaling is used to change the size of an object. The size can be increased or decreased. Three scaling factors are required:
Sx, Sy, and Sz.

Sx=Scaling factor in x- direction


Sy =Scaling factor in y-direction
Sz=Scaling factor in z-direction

Matrix for Scaling


Scaling of the object relative to a fixed point

Following are steps performed when scaling of objects with fixed point (a, b, c). It can be represented as below:

1. Translate fixed point to the origin


2. Scale the object relative to the origin
3. Translate object back to its original position.

Note: If all scaling factors are equal (Sx = Sy = Sz), the scaling is called uniform. If scaling is done with different scaling
factors, it is called differential scaling.

In figure (a) the point (a, b, c) is shown, and the object to be scaled is shown in steps in figures (b), (c), and (d).
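The three steps above compose into a single matrix T(a, b, c) · S(Sx, Sy, Sz) · T(-a, -b, -c); a minimal sketch (the helper names and sample values are my own):

```python
import numpy as np

def translate(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

def scale(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

def scale_about(sx, sy, sz, a, b, c):
    # 1) translate fixed point to origin, 2) scale, 3) translate back
    return translate(a, b, c) @ scale(sx, sy, sz) @ translate(-a, -b, -c)

m = scale_about(2, 2, 2, 1, 1, 1)          # double the size about (1, 1, 1)
p = m @ np.array([2, 2, 2, 1.0])
print(p[:3])                                # [3. 3. 3.] -- the fixed point itself stays put
```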
Rotation

It is the movement of an object through an angle. Movement can be anticlockwise or clockwise. 3D rotation is complex compared to
2D rotation. For 2D we describe only the angle of rotation, but for 3D both the angle of rotation and the axis of rotation are
required. The axis can be x, y, or z.

Following figures shows rotation about x, y, z- axis


Following figure show rotation of the object about the Y axis

Following figure show rotation of the object about the Z axis


Reflection

It is also called a mirror image of an object. For reflection, an axis or plane of reflection is selected. Three-dimensional reflections
are similar to those in two dimensions. Reflection is 180° about the given axis. For reflection, a plane is selected (xy, xz, or yz). The following
matrices show reflection with respect to each of these three planes.

Reflection relative to XY plane


Reflection relative to YZ plane

Reflection relative to ZX plane


Shearing

It is a change in the shape of the object. It is also called deformation. In 2D, the change can be in the x-direction, the y-direction, or both
directions. If shear occurs in both directions, the object will be distorted. But in 3D, shear can occur in all three directions.

Matrix for shear


Q. 3 Explain 3D Display methods in details .

Ans: To obtain a display of a three-dimensional scene that has been modeled in world coordinates, we must first set up a coordinate
reference for the "camera". This coordinate reference defines the position and orientation of the plane of the camera film, which is
the plane we want to use to display a view of the objects in the scene. Object descriptions are then transferred to the camera reference
coordinates and projected onto the selected display plane. We can then display the objects in wireframe (outline) form, or
we can apply lighting and surface-rendering techniques to shade the visible surfaces.
PARALLEL PROJECTION

In a parallel projection, parallel lines in the world-coordinate scene are projected into parallel lines on the two-dimensional display
plane.

Perspective Projection

Another method for generating a view of a three-dimensional scene is to project points to the display plane along
converging paths. This causes objects farther from the viewing position to be displayed smaller than objects of the same size that are
nearer to the viewing position. In a perspective projection, parallel lines in a scene that are not parallel to the display plane are
projected into converging lines.

DEPTH CUEING

A simple method for indicating depth with wireframe displays is to vary the intensity of objects according to their distance
from the viewing position. Lines closest to the viewing position are displayed with the highest intensities, and lines farther away are displayed with
decreasing intensities.

Visible Line and Surface Identification

We can also clarify depth relationships in a wireframe display by identifying visible lines in some way. The simplest method is to
highlight the visible lines or to display them in a different color. Another technique, commonly used for engineering drawings, is to
display the nonvisible lines as dashed lines. Another approach is to simply remove the nonvisible lines.

Surface Rendering

Added realism is attained in displays by setting the surface intensity of objects according to the lighting conditions in the scene and
according to assigned surface characteristics. Lighting specifications include the intensity and positions of light sources and the
general background illumination required for a scene. Surface properties of objects include degree of transparency and how rough or
smooth the surfaces are to be. Procedures can then be applied to generate the correct illumination and shadow regions for the scene.

Exploded and Cutaway View

Exploded and cutaway views of such objects can then be used to show the internal structure and relationship of the object
parts.

Three-Dimensional and Stereoscopic View

Three-dimensional views can be obtained by reflecting a raster image from a vibrating flexible mirror. The vibrations of
the mirror are synchronized with the display of the scene on the CRT. As the mirror vibrates, the focal length varies so that
each point in the scene is projected to a position corresponding to its depth.
Stereoscopic devices present two views of a scene: one for the left eye and the other for the right eye.

Q. 4 Explain Composite Transformation matrix representation for 3D objects.

Ans: 3D Transforms

 3D space: add a Z coordinate (X, Y, Z)


o 3D homogeneous coordinates: 4 dimensions
 Text and OpenGL use a Right-handed rotation system: grab the Z axis with right hand, and curl fingers from +ve X axis to
+ve Y axis: thumb points out at you, which is direction of +ve Z axis
o implies that +ve Z axis points at you when facing the screen with +ve Y pointing up and +ve X pointing right

3D transforms

 All transformations directly extend to 3D


 Translation:

| 1 0 0 dx |
T(dx,dy,dz) = | 0 1 0 dy |
| 0 0 1 dz |
| 0 0 0 1 |

 Scaling:

| sx 0 0 0 |
S(sx,sy,sz) = | 0 sy 0 0 |
| 0 0 sz 0 |
| 0 0 0 1|

 Rotation:

|1 0 0 0|
Rx(A) = | 0 cos A -sin A 0 |
| 0 sin A cos A 0 |
|0 0 0 1|

| cos A 0 sin A 0 |
Ry(A) = | 0 1 0 0|
| -sin A 0 cos A 0 |
| 0 0 0 1|

| cos A -sin A 0 0 |
Rz(A) = | sin A cos A 0 0 |
| 0 0 1 0|
| 0 0 0 1|
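The axis rotation matrices above can be sketched directly; a minimal NumPy version covering Rx and Rz (the test point is my own choice):

```python
import numpy as np

def rot_x(a):
    """Homogeneous rotation by angle a (radians) about the X axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0,  0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0,  0, 1.0]])

def rot_z(a):
    """Homogeneous rotation by angle a (radians) about the Z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1.0]])

# Rotating (1, 0, 0) by 90 degrees about Z gives (0, 1, 0).
p = rot_z(np.pi / 2) @ np.array([1, 0, 0, 1.0])
print(np.round(p[:3]))     # [0. 1. 0.]
```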

3D transforms

 2D rotation is given by the Z-axis rotation matrix


 Note that the rotation matrix about the Y axis uses "different" signs
o reason: when you look down Y axis, you have this:

z' = z cos A - x sin A

x' = z sin A + x cos A (as before)

--> but when you put these in matrix form, the X and Z are in reversed order, hence the sign change

 All these transformation matrices have inverses similar to 2D case

3D transforms

 Composing 3D transforms works the same as in 2D: write each transformation matrix in the order in which the transformation sequence is
done
o translations & rotations on the same axis are additive, while scaling is multiplicative
o however, note that rotations about different axes are NOT commutative!
 general transform:

| r11 r12 r13 tx |


M = | r21 r22 r23 ty |
| r31 r32 r33 tz |
| 0 0 0 1|

 one trick: the inverse of the top-left 3 x 3 rotation submatrix is its transpose

| r11 r12 r13 |


R = | r21 r22 r23 |
| r31 r32 r33 |

| r11 r21 r31 |


R^(-1) = R^T = | r12 r22 r32 |
| r13 r23 r33 |

3D transform properties

 lines are preserved


 parallelism is preserved
 proportional distances are preserved
 (volume after transformation) / (volume before) = | det M |
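The last property (volume ratio = |det M|) can be checked numerically; a short sketch using a uniform scale by 2, which should multiply volume by 8:

```python
import numpy as np

# Homogeneous 4x4 uniform scaling by 2 in each axis.
m = np.diag([2.0, 2.0, 2.0, 1.0])

# Volume ratio after transformation = |det of the top-left 3x3 submatrix|.
ratio = abs(np.linalg.det(m[:3, :3]))
print(ratio)    # 8.0
```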

Q.5 What do you mean by projection? Differentiate between parallel and perspective projections.

Ans: Projection

It is the process of converting a 3D object into a 2D representation. It is also defined as the mapping or transformation of the object onto a
projection plane or view plane. The view plane is the display surface.

S.NO. PERSPECTIVE PROJECTION vs. PARALLEL PROJECTION

1. If the COP (Centre of Projection) is located at a finite point in 3-space, the result is a perspective projection; if the COP is located at infinity, all the projectors are parallel and the result is a parallel projection.

2. Perspective projection is used for representing or drawing objects that resemble the real thing; parallel projection is used for drawing objects when perspective projection cannot be used.

3. Perspective projection represents objects in a three-dimensional way; parallel projection is much like seeing objects through a telescope, letting parallel light rays into the eyes, which produces visual representations without depth.

4. In perspective projection, objects that are far away appear smaller and objects that are near appear bigger; parallel projection does not create this effect.

5. While parallel projection may be best for architectural drawings, in cases wherein measurements are necessary it is better to use perspective projection.

6. Perspective projections require a distance between the viewer and the target point; in parallel projection the centre of projection is at infinity, while in perspective projection the centre of projection is at a finite point.

7. Types of perspective projection: one-point, two-point, and three-point perspective. Types of parallel projection: orthographic and oblique.
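The key distinction (finite vs. infinite centre of projection) can be sketched by projecting a point onto the view plane; d is an assumed viewer distance, not from the text:

```python
def parallel_project(x, y, z):
    # Orthographic parallel projection: drop z; projectors are parallel.
    return (x, y)

def perspective_project(x, y, z, d=1.0):
    # COP at the origin, view plane at z = d: similar triangles give x' = d*x/z.
    return (d * x / z, d * y / z)

# The same point moved farther away shrinks under perspective but not parallel.
print(parallel_project(2, 2, 5), parallel_project(2, 2, 10))        # (2, 2) (2, 2)
print(perspective_project(2, 2, 5), perspective_project(2, 2, 10))  # (0.4, 0.4) (0.2, 0.2)
```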

Q.6 Write Short note on : (i) View Plane (ii) Viewing Pipeline (iii) Spline (iv) Axonometric Projection

Ans: (i) View Plane : view plane The plane onto which an object is projected in a parallel or perspective projection.

(ii) Viewing Pipeline: The viewing pipeline in 3 dimensions is almost the same as the 2D viewing pipeline. Only after the
definition of the viewing direction and orientation (i.e., of the camera) is an additional projection step done, which is the reduction
of 3D data onto a projection plane.

(iii)Spline : A spline curve is a mathematical representation for which it is easy to build an interface that will allow a user to design
and control the shape of complex curves and surfaces. The general approach is that the user enters a sequence of points, and a curve
is constructed whose shape closely follows this sequence. The points are called control points. A curve that actually passes through
each control point is called an interpolating curve; a curve that passes near to the control points but not necessarily through them is
called an approximating curve.
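As an illustration of an approximating curve, a cubic Bezier defined by four control points can be evaluated with de Casteljau's algorithm (a generic sketch; the control points are my own values):

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = list(points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]
print(de_casteljau(ctrl, 0.0))   # (0.0, 0.0) -- endpoints are interpolated
print(de_casteljau(ctrl, 0.5))   # (2.0, 1.5) -- passes near, not through, the inner points
```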

(iv) Axonometric Projection: The three types of axonometric projection are isometric projection, dimetric projection, and trimetric
projection, depending on the exact angle at which the view deviates from the orthogonal. Typically in axonometric drawing, as in
other types of pictorials, one axis of space is shown as the vertical.
In isometric projection, the most commonly used form of axonometric projection in engineering drawing, the direction of viewing
is such that the three axes of space appear equally foreshortened, and there is a common angle of 120° between them. As the
distortion caused by foreshortening is uniform, the proportionality between lengths is preserved, and the axes share a common scale;
this eases the ability to take measurements directly from the drawing. Another advantage is that 120° angles are easily constructed
using only a compass and straightedge.
In dimetric projection, the direction of viewing is such that two of the three axes of space appear equally foreshortened, of which
the attendant scale and angles of presentation are determined according to the angle of viewing; the scale of the third direction is
determined separately. Dimensional approximations are common in dimetric drawings.
In trimetric projection, the direction of viewing is such that all three axes of space appear unequally foreshortened. The
scale along each of the three axes and the angles among them are determined separately as dictated by the angle of viewing.
Dimensional approximations in trimetric drawings are common, and trimetric perspective is seldom used in technical drawings.
Unit 5:
Short Answers: (2 Marks Each)
Q. 1 Define Light and Light Source.

Ans: Light is electromagnetic radiation within a certain portion of the electromagnetic spectrum. The word usually refers
to visible light, which is the portion of the spectrum that can be perceived by the human eye

Light Source: (a) Point Source-> Sun, Bulb

(b) Distributed Source-> Tube light

Q. 2 What are the parameters of an illumination model?

Ans: (a) The light source parameters:

• Positions

• Electromagnetic Spectrum

• Shape

(b) The surface parameters

• Position

• Reflectance properties

• Position of nearby surfaces

(c) The eye (camera) parameters

• Position

• Sensor spectrum sensitivities

Q. 3 Define illumination with example.

Ans: Illumination models are used to generate the color of an object's surface at a given point on that surface. The factors that
govern the illumination model determine the visual representation of that surface. Due to the relationship defined in the model
between the surface of the objects and the lights affecting it, illumination models are also called shading models or lighting
models.

Q. 4 Define Halftone Methods.

Ans: Halftone is the reprographic technique that simulates continuous-tone imagery through the use of dots, varying either in size
or in spacing, thus generating a gradient-like effect. "Halftone" can also be used to refer specifically to the image that is produced
by this process.

Q.5 Define Properties of Light.

Ans:

 Light travels in straight line.


 It is made of electric and magnetic vectors that oscillate mutually perpendicular.
 Direction of wave propagation is perpendicular to both electric and magnetic vectors.
 It is made of photons. These photons have rest mass zero and they travel with a speed of 3 × 10^8 m/s.
 It does not need material medium for propagation.
 It is transverse in nature.
 It shows the phenomena of reflection, refraction, diffraction and polarisation.
 Light rays carry no charge and hence they don't deviate in electric and magnetic fields.
Q.6 What is color model?

Ans: Color Model: Primary Colors -> Sets of colors that can be combined to make a useful range of colors

Color Gamut -> Set of all colors that we can produce from the primary colors.

Complementary Colors-> Pairs of colors which, when combined in the right proportions, produce white. Example, in the RGB
model: red & cyan , green & magenta , blue & yellow.

Descriptive Answers: (5 to 20 Marks)

Q. 1 Explain the RGB color model and compare it with the HSV color model.

Ans: The RGB color model is one of the most widely used color representation methods in computer graphics. It uses a color
coordinate system with three primary colors:

R(red), G(green), B(blue)

Each primary color can take an intensity value ranging from 0 (lowest) to 1 (highest). Mixing these three primary colors at different
intensity levels produces a variety of colors. The collection of all the colors obtained by such a linear combination of red, green and
blue forms the cube shaped RGB color space.

HSV Color Model: Hue, Saturation, and Value (HSV) is a color model that is often used in place of the RGB color model in
graphics and paint programs. In this color model, a color is specified and then white or black is added to easily make color
adjustments. HSV may also be called HSB (short for hue, saturation, and brightness).
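The two models describe the same colors, so conversion between them is straightforward; Python's standard colorsys module implements it:

```python
import colorsys

# Pure red in RGB (all channels in [0, 1]).
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h, s, v)                        # 0.0 1.0 1.0  (hue 0 = red, fully saturated, full value)

# Round-trip back to RGB recovers the original color.
print(colorsys.hsv_to_rgb(h, s, v))   # (1.0, 0.0, 0.0)
```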

Q. 2 Explain Gouraud shading and compare it with Phong shading.


Ans: Gouraud Shading: Renders the polygon surface by linearly interpolating intensity values across the surface.

Gouraud Shading Algorithm:

1. Determine the normal at each polygon vertex

2. Apply an illumination model to each vertex to calculate the vertex intensity

3. Linearly interpolate the vertex intensities over the surface polygon

At each polygon vertex, we obtain a normal vector by averaging the surface normals of all polygons sharing that vertex, as shown in
the figure:

Thus, for any vertex position V, we acquire the unit vertex normal with the calculation N_V = (Σ Nk) / |Σ Nk|, the normalized sum of the normals Nk of the surrounding polygons.

Once we have the vertex normals, we can determine the intensity at the vertices from a lighting model.

The following figures demonstrate the next step: interpolating intensities along the polygon edges. For each scan line, the intensities
at the intersection of the scan line with a polygon edge are linearly interpolated from the intensities at the edge endpoints. For
example: in the figure, the polygon edge with endpoint vertices at positions 1 and 2 is intersected by the scan line at point 4. A fast method
for obtaining the intensity at point 4 is to interpolate between intensities I1 and I2 using only the vertical displacement of the scan
line.
Similarly, the intensity at the right intersection of this scan line (point 5) is interpolated from the intensity values at vertices 2 and
3. Once these bounding intensities are established for a scan line, an interior point (such as point P in the previous figure) is
interpolated from the bounding intensities at points 4 and 5.
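The interpolation steps above reduce to simple linear blends; a minimal sketch (the vertex intensities and scan-line positions are made-up values, not from the text):

```python
def lerp(i_a, i_b, t):
    """Linear interpolation between two intensities, t in [0, 1]."""
    return i_a + t * (i_b - i_a)

# Edge intensities at the scan line (points 4 and 5 in the text),
# interpolated vertically along each polygon edge.
i4 = lerp(0.2, 0.8, 0.5)     # between vertices 1 and 2 -> 0.5
i5 = lerp(0.8, 0.4, 0.25)    # between vertices 2 and 3 -> 0.7

# Interior point P, interpolated horizontally between points 4 and 5.
print(lerp(i4, i5, 0.5))     # ~ 0.6
```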

Phong Shading: A more accurate method for rendering a polygon surface is to interpolate the normal vector and then apply the
illumination model to each surface point. This method, developed by Phong Bui Tuong, is called Phong shading or normal-vector
interpolation shading. It displays more realistic highlights on a surface and greatly reduces the Mach-band effect.

A polygon surface is rendered using Phong shading by carrying out the following steps:

1. Determine the average unit normal vector at each polygon vertex.


2. Linearly interpolate the vertex normals over the surface of the polygon.
3. Apply an illumination model along each scan line to calculate projected pixel intensities for the surface points.

Interpolation of the surface normal along a polygon edge between two vertices is shown in the figure:
Incremental methods are used to evaluate normals between scan lines and along each scan line. At each pixel position along a scan
line, the illumination model is applied to determine the surface intensity at that point.

Intensity calculations using an approximated normal vector at each point along the scan line produce more accurate results than the
direct interpolation of intensities, as in Gouraud shading. The trade-off, however, is that Phong shading requires considerably more
calculations.

Q. 3 Explain CMY and HLS models.

Ans: CMY Model: This stands for cyan-magenta-yellow and is used for hardcopy devices. In contrast to color on the monitor, color
in printing acts subtractively, not additively. A printed color that looks red absorbs the other two components (green and blue) and
reflects red. Thus its (internal) color is G + B = CYAN. Similarly R + B = MAGENTA and R + G = YELLOW. Thus the C-M-Y
coordinates are just the complements of the R-G-B coordinates:

(C, M, Y) = (1, 1, 1) - (R, G, B)

If we want to print a red-looking color (i.e. with R-G-B coordinates (1, 0, 0)) we have to use C-M-Y values of (0, 1, 1). Note that

cyan absorbs red, and similarly magenta absorbs green and yellow absorbs blue; hence magenta plus yellow absorbs all but red.

Black corresponds to (C, M, Y) = (1, 1, 1), which should in principle absorb R, G, and B.


But in practice this will appear as some dark gray. So in order to be able to produce better contrast, printers often use black (K) as a fourth

color. This is the CMYK model. Its coordinates are obtained from those of the CMY model by

K = min(C, M, Y), C' = C - K, M' = M - K, Y' = Y - K.
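The complement-plus-undercolor-removal rule above can be sketched in a few lines:

```python
def rgb_to_cmyk(r, g, b):
    """RGB in [0, 1] -> CMYK via complements and K = min(C, M, Y)."""
    c, m, y = 1 - r, 1 - g, 1 - b    # CMY is the complement of RGB
    k = min(c, m, y)                 # undercolor removal: pull out black
    return (c - k, m - k, y - k, k)

print(rgb_to_cmyk(1, 0, 0))   # (0, 1, 1, 0)  -- red needs no black ink
print(rgb_to_cmyk(0, 0, 0))   # (0, 0, 0, 1)  -- pure black uses K only
```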

HLS Model: HLS color model A color model that defines colors by the three parameters hue (H), lightness (L),
and saturation (S). It was introduced by Tektronix Inc. Hue lies on a circle, saturation increases from center to edge of this circle,
lightness goes from black to white. This model uses the same hue plane as the HSV model, but it replaces value (V) by an
extended lightness axis, so that the maximum color gamut is at L = 0.5 and decreases in each direction towards white (L = 1) and
black (L = 0). The HLS color model is represented by a double hexagonal cone, with white at the top apex and black at the bottom.

Q. 4 Explain basic illumination models.


Ans: Three Components to Illumination of Objects
 Ambient - total of all the light bouncing around the scene.
 Diffuse reflection - matte surfaces.
 Specular reflection - mirror effect.

Ambient

Ia = (Iar, Iag, Iab) - intensity of the ambient light.

ka = (kar, kag, kab) - ambient reflection coefficient.

Ia is a property of the scene and

ka is a material property (varies with object).

Illumination (due to ambient): I = Ia ka, componentwise (Iar kar, Iag kag, Iab kab).

Diffuse Reflection
The light that gets reflected from the object's surface in all directions when a light source illuminates its surface. Based on
Lamberts Law for reflected light off a matte surface.

Ip - point light source's intensity; kd - material's diffuse-reflection coefficient


Illumination: I = Ip kd cos(θ)

Assumptions: 0 ≤ θ ≤ π/2; otherwise the diffuse contribution is zero.

Normalized vectors ||N|| = ||L|| = 1


cos(θ) = (N·L)

Ambient and Diffuse


Illumination:

I = Iaka + Ip kd (N·L)

(N·L) = { (N·L), if (N·L) ≥ 0

0, otherwise

Specular reflection
The glare seen when a light source is mirrored on the surface of a shiny object (Phong model).

ks - specular reflection coefficient

n - Phong constant

Illumination:

I = Ia ka + Ip kd cos(θ) + ks cos^n(α)

Calculate once for red, once for green and once for blue.

Computational considerations

cos(θ) = (L·N) / (||L|| ||N||) (dot product)

cos(α) = (R·V) / (||R|| ||V||) (dot product)

Diffuse requires: cos(θ) ≥ 0

Specular requires: cos(α) ≥ 0 and cos(θ) ≥ 0

Calculating the Reflection Vector


R = 2N(N·L) - L
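Putting the ambient, diffuse, and specular terms together with R = 2N(N·L) - L gives a per-channel intensity. A sketch with made-up coefficients; note it uses the common grouping that places both reflectance terms under the light intensity Ip:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(n, l, v, ia=0.1, ip=1.0, ka=1.0, kd=0.6, ks=0.3, shininess=8):
    """Single-channel Phong illumination; n, l, v are unit vectors."""
    n_dot_l = max(np.dot(n, l), 0.0)       # diffuse term, clamped at zero
    r = 2 * n * np.dot(n, l) - l           # reflection of L about N
    r_dot_v = max(np.dot(r, v), 0.0)       # specular term, clamped at zero
    return ia * ka + ip * (kd * n_dot_l + ks * r_dot_v ** shininess)

n = normalize(np.array([0.0, 0.0, 1.0]))   # surface facing the viewer
l = normalize(np.array([0.0, 0.0, 1.0]))   # light shining straight on
v = n                                      # viewer straight on
print(round(phong(n, l, v), 3))            # 1.0 = 0.1 (ambient) + 0.6 (diffuse) + 0.3 (specular)
```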

Backface culling (and inside/outside coloring)

Backface culling:
If (V·N) < 0 then the polygon cannot be seen.
In/Out coloring:
If (V·N) ≥ 0 then the object is colored color.out,
else the object is colored color.in with (outward) normal -N.

Painter's Algorithm
Sort polygons/surfaces by "distance" from the viewpoint.
Then paint the polygons/surfaces starting with the one furthest away first.
"distance" = centroid, or minimum vertex?
Triangulate Open Box Example
OK
Not OK

Light Source
Point:

L = P - Q
P given in world coordinates.

Q.5 Explain CMYK and YIQ Color Models.

Ans: CMYK Color Model: Stands for "Cyan Magenta Yellow Black." These are the four basic colors used for printing color
images. Unlike RGB (red, green, blue), which is used for creating images on your computer screen, CMYK colors are
"subtractive." This means the colors get darker as you blend them together. Since RGB colors are used for light, not pigments, the
colors grow brighter as you blend them or increase their intensity.

Technically, adding equal amounts of pure cyan, magenta, and yellow should produce black. However, because of impurities in the
inks, true black is difficult to create by blending the colors together. This is why black (K) ink is typically included with the three
other colors. The letter "K" is used to avoid confusion with blue in RGB.
YIQ Color Model: This is used for color TV. Here Y is the luminance (the only component necessary for B&W TV). The
conversion from RGB to YIQ is given by

Y = 0.299 R + 0.587 G + 0.114 B
I = 0.596 R - 0.274 G - 0.322 B
Q = 0.211 R - 0.523 G + 0.312 B

for standard NTSC RGB phosphors with chromaticity values

    R    G    B
x  0.67 0.21 0.14
y  0.33 0.71 0.08

The advantage of this model is that more bandwidth can be assigned to the Y component (luminance), to which the human eye is

more sensitive than to color information. So for NTSC TV there are 4 MHz assigned to Y, 1.5 MHz to I, and 0.6 MHz to Q.
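Python's standard colorsys module implements a version of this conversion (note it uses the slightly rounded luminance weights 0.30/0.59/0.11 rather than the matrix above):

```python
import colorsys

# Green carries most of the luminance, matching the large G weight in Y.
y, i, q = colorsys.rgb_to_yiq(0.0, 1.0, 0.0)
print(round(y, 2))       # 0.59

# White is full luminance with no chrominance.
y, i, q = colorsys.rgb_to_yiq(1.0, 1.0, 1.0)
print(round(y, 2), round(i, 6), round(q, 6))
```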

Q.6 Write Short note on : (i) Color Gamut (ii) CIE chromaticity Diagram (iii) Complementary Color

Ans: (i) Color Gamut: The term colour gamut refers to the range of colours a device can reproduce; the larger or wider the gamut,
the more rich saturated colours are available. As colour gamuts become smaller, it is generally these rich saturated colours that are the
first to suffer, a phenomenon technically referred to as clipping. This clipping phenomenon is most apparent when converting from
RGB to CMYK, with many of the rich saturated colours that were available in RGB no longer being available in CMYK.

Display devices like monitors also have gamuts or ranges of colour they can reproduce. So in order to accurately preview your
images you would ideally like your display gamut to be at least as large, if not larger, than your printer’s gamut, otherwise clipping
will be occurring in your preview.

(ii) CIE Chromaticity Diagram:

The negative values in the representation of color by R-G-B values are unpleasant. Thus the Commission Internationale de
l'Éclairage (CIE) defined in 1931 another basis in terms of (virtual) primaries X, Y (the luminous-efficiency function), and Z,
which allows all visible colors to be matched as linear combinations with positive coefficients only (the so-called CHROMATICITY

VALUES X, Y, Z), see


Figure: CIE 1931 primaries

Normalization by X + Y + Z gives new coordinates x = X/(X+Y+Z) and y = Y/(X+Y+Z) (and z = 1 - x - y), which are independent of the

luminous energy. The visible chromatic values in this coordinate system form a horseshoe-shaped region, with the
spectrally pure colors on the curved boundary. Warning: brown is orange-red at very low luminance (hence it is not shown in this

diagram). Standard white light (approximately sunlight) is located at a point near x = y = 1/3.


Figure: CIE 1931 Chromaticity Diagram

(iii) Complementary Color : Complementary colors are two colors that are on opposite sides of the color wheel. As an artist,
knowing which colors are complementary to one another can help you make good color decisions. For instance, complementaries
can make each other appear brighter, they can be mixed to create effective neutral hues, or they can be blended together for shadows.

Unit 6:
Short Answers: (2 Marks Each)
Q. 1 What is animation?

Ans: Animation refers to movement on the screen of a display device created by displaying a sequence of still images.
Animation is the technique of designing, drawing, making layouts, and preparing photographic series which are integrated into
multimedia and gaming products.

Q. 2 What are applications of animation?

Ans: 1. Education and Training

2. Entertainment

3. Computer Aided Design (CAD)

4. Advertising

5. Presentation

Q. 3 What is illusion?
Ans: An illusion is a distortion of the senses, which can reveal how the human brain normally organizes and interprets sensory
stimulation. Though illusions distort our perception of reality, they are generally shared by most people

Q. 4 What is Ray Tracing?

Ans: Ray tracing is a rendering technique for generating an image by tracing the path of light through pixels in an image plane and
simulating the effects of its encounters with virtual objects.

Q.5 Define sequence of Animation.

Ans: (1) Storyboard layout

(2) Object definitions

(3) Key frame specification

(4) Generation of in-between frames

Q.6 Define fractals images.

Ans: A fractal is a never-ending pattern. Fractals are infinitely complex patterns that are self-similar across different scales. They
are created by repeating a simple process over and over in an ongoing feedback loop. Driven by recursion, fractals are images of
dynamic systems – the pictures of Chaos.
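The "repeating a simple process over and over" idea can be sketched with the Mandelbrot iteration z → z² + c (an escape-time test; the iteration cap is an arbitrary choice of mine):

```python
def in_mandelbrot(c, max_iter=100):
    """Return True if c appears to stay bounded under the recursion z -> z*z + c."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # escaped: c is certainly not in the set
            return False
    return True

print(in_mandelbrot(0 + 0j))   # True  -- the origin never escapes
print(in_mandelbrot(1 + 1j))   # False -- escapes after two iterations
```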

Descriptive Answers: (5 to 20 Marks)

Q. 1 Explain Principles of Animation.

Ans:

1. SQUASH AND STRETCH


This action gives the illusion of weight and volume to a character as it moves. Also squash and stretch is useful in animating
dialogue and doing facial expressions. How extreme the use of squash and stretch is, depends on what is required in animating the
scene. Usually it's broader in a short style of picture and subtler in a feature. It is used in all forms of character animation, from a
bouncing ball to the body weight of a person walking. This is the most important element you will be required to master and will be
used often.

2. ANTICIPATION
This movement prepares the audience for a major action the character is about to perform, such as, starting to run, jump or change
expression. A dancer does not just leap off the floor. A backwards motion occurs before the forward action is executed. The
backward motion is the anticipation. A comic effect can be done by not using anticipation after a series of gags that used
anticipation. Almost all real action has major or minor anticipation, such as a pitcher's wind-up or a golfer's back swing. Feature
animation is often less broad than short animation unless a scene requires it to develop a character's personality.

3. STAGING
A pose or action should clearly communicate to the audience the attitude, mood, reaction, or idea of the character as it relates to the
story and continuity of the story line. The effective use of long, medium, or close-up shots, as well as camera angles, also helps in
telling the story. There is a limited amount of time in a film, so each sequence, scene, and frame of film must relate to the overall
story. Do not confuse the audience with too many actions at once. Use one action clearly stated to get the idea across, unless you are
animating a scene that is to depict clutter and confusion. Staging directs the audience's attention to the story or idea being told. Care
must be taken in background design so it isn't obscuring the animation or competing with it due to excess detail behind the
animation. Background and animation should work together as a pictorial unit in a scene.

4. STRAIGHT AHEAD AND POSE TO POSE ANIMATION


Straight ahead animation starts at the first drawing and works drawing to drawing to the end of a scene. You can lose size, volume,
and proportions with this method, but it does have spontaneity and freshness. Fast, wild action scenes are done this way. Pose to
pose is more planned out and charted, with key drawings done at intervals throughout the scene. Size, volumes, and proportions are
controlled better this way, as is the action. The lead animator will turn charting and keys over to his assistant. An assistant can be
better used with this method so that the animator doesn't have to draw every drawing in a scene. An animator can do more scenes
this way and concentrate on the planning of the animation. Many scenes use a bit of both methods of animation.
5. FOLLOW THROUGH AND OVERLAPPING ACTION
When the main body of the character stops all other parts continue to catch up to the main mass of the character, such as arms, long
hair, clothing, coat tails or a dress, floppy ears or a long tail (these follow the path of action). Nothing stops all at once. This is
follow through. Overlapping action is when the character changes direction while his clothes or hair continues forward. The
character is going in a new direction, to be followed, a number of frames later, by his clothes in the new direction. "DRAG," in
animation, for example, would be when Goofy starts to run, but his head, ears, upper body, and clothes do not keep up with his legs.
In features, this type of action is done more subtly. Example: When Snow White starts to dance, her dress does not begin to move
with her immediately but catches up a few frames later. Long hair and animal tail will also be handled in the same manner. Timing
becomes critical to the effectiveness of drag and the overlapping action.

6. SLOW-OUT AND SLOW-IN


As an action starts, we have more drawings near the starting pose, one or two in the middle, and more drawings near the next pose.
Fewer drawings make the action faster and more drawings make the action slower. Slow-ins and slow-outs soften the action,
making it more life-like. For a gag action, we may omit some slow-outs or slow-ins for shock appeal or the surprise element. This
will give more snap to the scene.
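Slow-in/slow-out corresponds to a non-linear easing of the animation parameter; a common smoothstep sketch (the particular easing function is my choice, not from the text):

```python
def ease_in_out(t):
    """Smoothstep easing: clusters samples near t=0 and t=1 (slow-in, slow-out)."""
    return t * t * (3 - 2 * t)

# Frame spacing: small steps near the poses, a big step through the middle.
samples = [round(ease_in_out(i / 4), 3) for i in range(5)]
print(samples)    # [0.0, 0.156, 0.5, 0.844, 1.0]
```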

7. ARCS
All actions, with few exceptions (such as the animation of a mechanical device), follow an arc or slightly circular path. This is
especially true of the human figure and the action of animals. Arcs give animation a more natural action and better flow. Think of
natural movements in terms of a pendulum swinging. All arm movement, head turns, and even eye movements are executed on
arcs.

8. SECONDARY ACTION
This action adds to and enriches the main action and adds more dimension to the character animation, supplementing and/or
reinforcing the main action. Example: A character is angrily walking toward another character. The walk is forceful, aggressive, and
forward leaning. The leg action is just short of a stomping walk. The secondary action is a few strong gestures of the arms working
with the walk. Also, the possibility of dialogue being delivered at the same time with tilts and turns of the head to accentuate the
walk and dialogue, but not so much as to distract from the walk action. All of these actions should work together in support of one
another. Think of the walk as the primary action and arm swings, head bounce and all other actions of the body as secondary or
supporting action.

9. TIMING
Expertise in timing comes best with experience and personal experimentation, using the trial and error method in refining technique.
The basics are: more drawings between poses slow and smooth the action. Fewer drawings make the action faster and crisper. A
variety of slow and fast timing within a scene adds texture and interest to the movement. Most animation is done on twos (one
drawing photographed on two frames of film) or on ones (one drawing photographed on each frame of film). Twos are used most of
the time, and ones are used during camera moves such as trucks, pans and occasionally for subtle and quick dialogue animation.
Also, there is timing in the acting of a character to establish mood, emotion, and reaction to another character or to a situation.
Studying movement of actors and performers on stage and in films is useful when animating human or animal characters. This
frame-by-frame examination of film footage will aid you in understanding timing for animation. This is a great way to learn from
the others.

10. EXAGGERATION
Exaggeration is not extreme distortion of a drawing or extremely broad, violent action all the time. It's like a caricature of facial
features, expressions, poses, attitudes and actions. Action traced from live action film can be accurate, but stiff and mechanical. In
feature animation, a character must move more broadly to look natural. The same is true of facial expressions, but the action should
not be as broad as in a short cartoon style. Exaggeration in a walk or an eye movement or even a head turn will give your film more
appeal. Use good taste and common sense to keep from becoming too theatrical and excessively animated.

11. SOLID DRAWING


The basic principles of drawing (form, weight, volume, solidity and the illusion of three dimensions) apply to animation as they do
to academic drawing. The way you draw cartoons, you draw in the classical sense, using pencil sketches and drawings for
reproduction of life. You transform these into color and movement, giving the characters the illusion of three- and four-dimensional
life. The third dimension is movement in space; the fourth dimension is movement in time.
12. APPEAL
A live performer has charisma. An animated character has appeal. Appealing animation does not mean just being cute and cuddly.
All characters have to have appeal whether they are heroic, villainous, comic or cute. Appeal, as you will use it, includes an easy
to read design, clear drawing, and personality development that will capture and involve the audience's interest. Early cartoons
were basically a series of gags strung together on a main theme. Over the years, the artists have learned that to produce a feature
there was a need for story continuity, character development and a higher quality of artwork throughout the entire production. Like
all forms of storytelling, the feature has to appeal to the mind as well as to the eye.

Q. 2 Describe the Design of animation sequence.

Ans: (1) Storyboard Layout-> A storyboard is a graphic organizer that plans a narrative. Storyboards are a powerful way to
visually present information; the linear direction of the cells is perfect for storytelling, explaining a process, and showing the
passage of time. At their core, storyboards are a set of sequential drawings to tell a story. By breaking a story into linear, bite-sized
chunks, it allows the author to focus on each cell separately, without distraction.

(2) Object Definition: Each object that participates in the animation is defined in terms of basic shapes, such as polygons or
splines. Along with its shape, the associated movements of each object are specified, so that the object can later be positioned and
transformed in every frame of the sequence.

(3) Key frame specification-> A keyframe is a frame where we define changes in animation. Every frame is a keyframe when we
create frame by frame animation. When someone creates a 3D animation on a computer, they usually don’t specify the exact
position of any given object on every single frame. They create keyframes.

Keyframes are important frames during which an object changes its size, direction, shape or other properties. The computer then
figures out all the in-between frames, saving an enormous amount of time for the animator.

(4) In-between frames-> Inbetweening is the process of creating transitional frames between two key images in order to show the
appearance of movement and the evolution of the first image into the second. It is a common technique used in many types of
animation. The frames between the key frames are called "inbetweens", and they help create the illusion of fluid motion.
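The key-frame and inbetweening steps can be sketched as linear interpolation between two keyframes; the property names and values below are illustrative, not from the source.

```python
def inbetween(key_a, key_b, n):
    """Generate n in-between frames by linearly interpolating two keyframes.
    Keyframes are dicts mapping a property name to a numeric value."""
    frames = []
    for i in range(1, n + 1):
        t = i / (n + 1)                      # fraction of the way to key_b
        frames.append({k: key_a[k] + (key_b[k] - key_a[k]) * t
                       for k in key_a})
    return frames

key1 = {"x": 0.0, "y": 0.0, "scale": 1.0}
key2 = {"x": 10.0, "y": 4.0, "scale": 2.0}
tweens = inbetween(key1, key2, 4)
```

The animator specifies only `key1` and `key2`; the four intermediate frames are computed automatically, which is exactly the time saving described above.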

Q. 3 Explain Ray Tracing? Explain basic ray tracing algorithm.

Ans: Ray tracing is a rendering technique for generating an image by tracing the path of light as pixels in an image plane and
simulating the effects of its encounters with virtual objects. The technique is capable of producing a very high degree of visual
realism, usually higher than that of typical scanline rendering methods, but at a greater computational cost. This makes ray tracing
best suited for applications where taking a relatively long time to render a frame can be tolerated, such as in still images and film
and television visual effects, and more poorly suited for real-time applications such as video games where speed is critical. Ray
tracing is capable of simulating a wide variety of optical effects, such as reflection and refraction, scattering, and dispersion
phenomena (such as chromatic aberration).

Basic Ray Tracing Algorithm

for every pixel:
    cast a ray from the eye through the pixel
    for every object in the scene:
        find the intersections of the ray with the object
        keep the intersection if it is the closest so far
    compute the color at the closest intersection point
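A minimal sketch of the inner step in Python, assuming normalized ray directions: for each pixel's ray, this ray-sphere test returns the closest positive intersection parameter t, which the outer loops would then compare across all objects in the scene. The sphere and ray values are illustrative.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Nearest positive ray parameter t for a ray-sphere hit, or None.
    The direction vector is assumed to be normalized."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    lx, ly, lz = ox - center[0], oy - center[1], oz - center[2]
    b = 2.0 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0     # smaller root = nearer hit
    return t if t > 0 else None

# A ray cast from the eye along +z toward a unit sphere centered at z = 5.
hit = intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```

A full ray tracer would run this test per object, keep the smallest positive t, and shade the corresponding intersection point.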


Q. 4 Explain Computer animation tools.

Ans:

Tool                        Rating  Free/Paid            2D & 3D    Keyframe  Motion   Beginner's
                                                         supported  timeline  capture  support
-------------------------------------------------------------------------------------------------
Blender                     4       Free                 Yes        Yes       Yes      Yes
Cinema 4D Studio            4.5     Paid (has trial)     Yes        Yes       Yes      No
Aurora 3D Animation Maker   4.5     Paid                 Yes        Yes       No       Yes
Autodesk Maya               4       Paid (has trial)     Yes        Yes       Yes      Yes
Mixamo                      3.5     Paid (free sign-up)  Yes        Yes       No       Yes

Q.5 Explain morphing and tweening.

Ans: 1. Morphing: Morphing is an animation function used to transform an object's shape from one form into another. It is one of
the most complicated transformations. This function is commonly used in movies, cartoons, advertisements, and computer games.
The process of Morphing involves three steps:

1. In the first step, an initial image and a final image are added to the morphing application, as shown in the figure; the 1st and
4th objects are considered key frames.
2. The second step involves the selection of key points on both the images for a smooth transition between two images as
shown in 2nd object.

3. In the third step, the key point of the first image transforms to a corresponding key point of the second image as shown in
3rd object of the figure.
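The third step above can be sketched as a linear blend of corresponding key points: at t = 0 the shape matches the first image, at t = 1 the second. The triangle coordinates below are illustrative.

```python
def morph_keypoints(src_pts, dst_pts, t):
    """Interpolate corresponding key points between two images.
    t = 0 gives the source shape, t = 1 the destination shape."""
    return [((1 - t) * sx + t * dx, (1 - t) * sy + t * dy)
            for (sx, sy), (dx, dy) in zip(src_pts, dst_pts)]

src = [(0, 0), (4, 0), (2, 3)]   # key points on the initial image
dst = [(0, 0), (4, 0), (2, 6)]   # matching key points on the final image
mid = morph_keypoints(src, dst, 0.5)
```

Rendering a sequence of frames for increasing t, combined with a cross-dissolve of the pixel colors, produces the smooth transition described above.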

2. Warping: The warping function is similar to morphing. It distorts only the initial image so that it matches the final
image, and no fade occurs in this function.

3. Tweening: Tweening is the short form of 'inbetweening.' Tweening is the process of generating intermediate frames between the
initial and final key images. This function is popular in the film industry.
4. Panning: Usually panning refers to rotation of the camera in a horizontal plane. In computer graphics, panning relates to the
movement of a fixed-size window across the objects in a scene. Whichever direction the fixed-size window moves, the objects
appear to move in the opposite direction, as shown in fig:

If the window moves in a backward direction, the object appears to move in the forward direction; if the window moves in a
forward direction, the object appears to move in a backward direction.

5. Zooming: In zooming, the window is fixed over an object and its size is changed; the object then also appears to change in size.
When the window is made smaller about a fixed center, the object inside the window appears enlarged. This feature is known
as Zooming In.

When we increase the size of the window about the fixed center, the object inside the window appears smaller. This feature is
known as Zooming Out.
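Both panning and zooming reduce to the window-to-viewport mapping: moving the window pans the scene, and shrinking it about a fixed center magnifies it. A minimal sketch, with illustrative rectangle values:

```python
def window_to_viewport(xw, yw, window, viewport):
    """Map a world point from the window to the screen viewport.
    window and viewport are (xmin, ymin, xmax, ymax) rectangles."""
    wx0, wy0, wx1, wy1 = window
    vx0, vy0, vx1, vy1 = viewport
    sx = (vx1 - vx0) / (wx1 - wx0)   # horizontal scale factor
    sy = (vy1 - vy0) / (wy1 - wy0)   # vertical scale factor
    return (vx0 + (xw - wx0) * sx, vy0 + (yw - wy0) * sy)

view = (0, 0, 100, 100)
full = window_to_viewport(6, 5, (0, 0, 10, 10), view)        # whole scene
zoom = window_to_viewport(6, 5, (2.5, 2.5, 7.5, 7.5), view)  # smaller window
```

With the smaller window, the same world point lands farther from the viewport center, i.e. the object appears enlarged (zoomed in).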
6. Fractals: Fractal Function is used to generate a complex picture by using Iteration. Iteration means the repetition of a single
formula again and again with a slightly different value based on the previous iteration's result. These results are displayed on the
screen in the form of a picture.
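The "single formula repeated with slightly different values" idea is exactly how the Mandelbrot set is drawn; a minimal sketch of the per-pixel iteration (the sample points and iteration limit are illustrative):

```python
def mandelbrot_iters(c, max_iter=50):
    """Iterate the single formula z -> z*z + c; return how many iterations
    pass before |z| exceeds 2, or max_iter if z stays bounded."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i          # escaped: point is outside the set
    return max_iter           # bounded: point is (likely) inside the set

inside = mandelbrot_iters(0 + 0j)    # stays bounded
outside = mandelbrot_iters(2 + 2j)   # escapes immediately
```

Mapping each screen pixel to a complex number c and coloring it by the returned iteration count produces the familiar fractal picture.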

Q.6 Write short note on : (i) space filling curves (ii) Grammar based models (iii) turtle graphics

Ans: (i) space filling curves-> a space-filling curve is a curve whose range contains the entire 2-dimensional unit square (or more
generally an n-dimensional unit hypercube). Because Giuseppe Peano (1858–1932) was the first to discover one, space-filling
curves in the 2-dimensional plane are sometimes called Peano curves, but that phrase also refers to the Peano curve, the specific
example of a space-filling curve found by Peano.
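A practical relative, the Hilbert curve, is commonly generated by mapping a 1-D index to 2-D coordinates; the sketch below follows the standard bit-manipulation algorithm, with the curve order chosen for illustration.

```python
def hilbert_d2xy(order, d):
    """Map a 1-D index d to (x, y) on a 2^order x 2^order Hilbert curve."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                 # rotate/reflect the quadrant
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Visit every cell of an 8x8 grid along the order-3 curve.
path = [hilbert_d2xy(3, d) for d in range(64)]
```

Consecutive indices always map to grid-adjacent cells, which is why such curves are also used for cache-friendly traversal of images.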

(ii) Grammar based models-> Grammar-based models are primarily useful for simulating plant development. Particle
systems are used to simulate fire, clouds, water, etc. They are primarily useful for animation, but can also create static objects.
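The best-known grammar-based model is the L-system (Lindenmayer system), which grows a structure by repeatedly rewriting a string; a minimal sketch using Lindenmayer's original algae grammar:

```python
def lsystem(axiom, rules, depth):
    """Expand an L-system by applying rewrite rules `depth` times in parallel."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's algae model: A -> AB, B -> A.
algae = lsystem("A", {"A": "AB", "B": "A"}, 5)
```

The resulting string is then interpreted graphically (each symbol becoming a branch segment or turn), which is how plant-like structures are drawn; string lengths here grow as Fibonacci numbers.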

(iii) turtle graphics-> In computer graphics, turtle graphics are vector graphics using a relative cursor (the "turtle") upon a
Cartesian plane. Turtle graphics is a key feature of the Logo programming language.
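The relative-cursor idea can be sketched without a display: the minimal class below (an illustrative stand-in for Logo or Python's `turtle` module) keeps a position and heading and records the path traced by forward/left commands.

```python
import math

class Turtle:
    """Minimal headless turtle: a relative cursor on the Cartesian plane."""
    def __init__(self):
        self.x = self.y = 0.0
        self.heading = 0.0            # degrees, 0 = +x axis
        self.path = [(0.0, 0.0)]      # vertices visited so far

    def forward(self, dist):
        rad = math.radians(self.heading)
        self.x += dist * math.cos(rad)
        self.y += dist * math.sin(rad)
        self.path.append((self.x, self.y))

    def left(self, angle):
        self.heading = (self.heading + angle) % 360

# Draw a unit square with four forward/left pairs.
t = Turtle()
for _ in range(4):
    t.forward(1)
    t.left(90)
```

Because every command is relative to the current position and heading, the same four-line loop draws the square no matter where the turtle starts, which is the defining property of turtle graphics.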
