Block 1
Computer Graphics
1.0 Introduction
1.1 Objectives
1.2 What is Computer Graphics?
1.3 Applications
1.3.1 Presentation Graphics
1.3.2 Painting and Drawing
1.3.3 Photo Editing
1.3.4 Scientific Visualisation
1.3.5 Image Processing
1.3.6 Education, Training, Entertainment and CAD
1.3.7 Simulations
1.3.8 Animation and Games
1.4 Graphics Hardware
1.4.1 Input and Output Devices
1.4.2 Display Devices
1.5 Summary
1.6 Solutions/Answers
1.0 INTRODUCTION
Early man used drawings to communicate even before he learnt to talk, write, or
count. Incidentally, these ancient hieroglyphics (picture-writings) communicate as
well today as they must have done thousands of years ago. This fully supports the
saying that “A picture is worth a thousand words”, and in the era of computers we can
add to it, or we may as well revise it, to say “A computer is worth a million
pictures!”; so you can estimate the power of the computer as a communication
system.
Now, with the advances in computer hardware and software, graphics has come full
circle and more and more people are teaching and learning, communicating and
sharing their ideas through the medium of graphics. By graphics, we mean any
sketch, drawing, special artwork or other material that pictorially depicts an object or a
process or otherwise conveys information, as a supplement to or instead of written
descriptions, and the utilisation of computers to accomplish such tasks leads to a new
discipline of computer graphics. Traditionally, graphics has referred to engineering
drawings of buildings, bridges, machine parts etc. and scientific drawings such as x-y
curves, network and process flowcharts. In recent decades, graphics has ventured into
industrial design, advertising and other artistic endeavours. During the last few years,
even newspapers and periodicals aimed at the common man have begun to utilise
graphics to present quantitative news such as election results and production
statistics. Computer graphics can do all this and more. In fact, the power and easy
availability of computer graphics have increased the use of pictures to replace and
augment words to describe, educate, or inform a wide variety of audiences, on a wide
variety of subjects.
In this unit, we shall concentrate on the graphic capabilities and potential of the
digital computer. We will discuss the meaning of the term graphics and its types, and
also the hardware used for the practical application of graphics in different walks of
life. The software, however, will be discussed in Block 4 of this course.
1.1 OBJECTIVES
The term Graphics loosely means “graphical tricks”. Every image or picture is in
fact a graph, and when different mathematical tricks are used to manipulate some
change in its properties, such as shape, size, motion etc., with the help of computers,
the representation is nothing but computer graphics. So we can say that
“Computer Graphics (CG) is the field of visual computing, where one utilises
computers both to generate visual images synthetically and to integrate or alter
visual and spatial information sampled from the real world.” Or, “Computer
Graphics is the pictorial representation and manipulation of data by a computer.” Or,
“Computer Graphics refers to any sketch, drawing, special artwork or other material
generated with the help of a computer to pictorially depict an object or a process or
otherwise convey information, as a supplement to or instead of written descriptions”.
Computer graphics is a complex and diversified field. A picture is the fundamental
cohesive concept in computer graphics. Each picture consists of points called pixels
(picture elements). If we consider a complex picture, then a complex database of
pixels must be maintained, and hence complex algorithms are required to access it.
These complex databases contain data organised in various data structures such as
ring structures, B-trees etc.
In our earlier course CS-60, we learned mathematical tricks to do jugglery with
the graphs that result from different functions; now let us learn how to juggle with
graphics using computers. There are many algorithms which can be implemented
to produce graphical effects on the screen through the several graphical tools, based
on different languages, that are available in the market.
We will discuss the various classes of computer graphics mentioned above in the
following sections of this unit.
The most familiar and useful class of computer graphics involves movies and video
games. Movies generally need graphics that are indistinguishable from physical
reality, whereas video games need graphics that can be generated quickly enough to
be perceived as smooth motion. These two needs are incompatible, but they define
two ways of communication between users and computations. In video games, the
user interacts with the graphics in real time (interactive computer graphics), whereas
passive computer graphics offers the user no option to interact with or alter the
graphics, e.g., movies. We will discuss details about these types of graphic
systems in Block 4.
1.3 APPLICATIONS
Once you get into computer graphics, you’ll hear about all kinds of applications that
do all kinds of things. This section will discuss not only the applications but also the
software suitable for that type of application, so it is necessary to give you an
understanding of what various applications do. While working on a project, you may
need images, brochures, a newsletter, a PowerPoint presentation, poster, DVD etc.
Thus, the question arises: what software do I need to get my job done? This section
will help to straighten all of that out in your head. Hopefully, if you know what does
what, you won’t waste money duplicating purchases, and when other designers or
co-workers are talking shop, you’ll know what is going on.
Graphic design applications are basically broken down on the basis of a few
considerations. The first consideration is, “Is your project for print, or for the web?”
When we say web, what we really mean is monitor-based publishing; that is, you are
going to see your work on a computer screen, a television, or a big-screen
projector. So, as you read through this section, whenever we say “web based”, we
mean monitor based. Beyond print and web, here are the various categories that
various applications fit into: image manipulation; vector graphics; page layout;
web site development; presentation software; video editing; DVD production;
animation and interactivity etc. If you are creating, or
learning to create graphic design, computer art, or maybe “Digital Media” is the term
that we should use, then it’s a good thing to understand the function of each
application. There are many applications in the market and most of them are
expensive. A few of the various application areas that are influenced by Computer
graphics are:
• Presentation Graphics
• Painting and Drawing
• Photo Editing
• Scientific Visualisation
• Image Processing
• Education, Training, Entertainment and CAD
• Simulations
• Animation and Games
1.3.1 Presentation Graphics
The moment you represent yourself, your company, your product or your research
paper, simply standing and speaking is quite ineffective. In such a situation, where
no one stands with you, your ultimate companions are the slides, which carry some
information either in the form of text, charts, graphs etc. and which make your
presentation effective. If you think more deeply, these are nothing but curves
(text/graphs/charts) used in some sequence, and to create such things graphics is the
ultimate option. This application of graphics is known as presentation graphics,
which can be done very effectively through computers now-a-days. There is software
which helps you present yourself and your content effectively. Such application
software is known as Presentation Graphics software – software that shows
information in the form of a slide show (a slideshow is a display of a series of chosen
images, done for artistic or instructional purposes. Slideshows are conducted by a
presenter using an apparatus which could be a computer or a projector).
Programs that help users create presentations such as visual aids, handouts,
and overhead slides – processing artwork, graphics, and text to produce a series of
‘slides’ which help speakers get their message across – are presentation graphics
software.
We may say that presentation graphics is more than just a PowerPoint presentation,
because it includes any type of slide presentation, bar chart, pie chart, graph and
multimedia presentation. The key advantage of this software is that it helps you show
an abstract representation of your work.
Note: There is some software, such as Canvas, that improves the presentations created
through PowerPoint or Keynote software. Although these software packages
contain a lot of handy features, they lack many vector and image creation
capabilities, therefore creating a need for a graphic/illustration program.
Scientists, engineers, and other technically-oriented professionals often call
upon Canvas and its host of vector, image editing, and text features to create
the exciting visual components for their presentation projects.
General questions that strike many graphic designers, students, and engineers rushing
to import their illustrations and images into presentations concern the choice of
technique and file format. Let us discuss in brief the suitability of the technique
(vector or bitmap), and the file format appropriate to the creation of a better presentation.
Resolution
Images for a presentation can be produced using either of two techniques:
1) Vector, and
2) Bitmap.
You may wonder which of these is the better format when exporting to presentation
software such as PowerPoint, Keynote or Impress. The truth is that different
situations call for different methods, but here are some things to look out for.
Vectors are objects that are defined by anchor points and paths, while
bitmapped graphics are digital images composed of pixels. The advantage of using
vector graphics is that they are size independent, meaning that they can be resized
with no loss in quality. Bitmapped graphics, on the other hand, provide a richer depth
of colour but are size dependent and appear at the stated 72 dpi size.
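To make the size-independence point concrete, here is a minimal Python sketch (the triangle, the scale factor and the tiny checkerboard image are made-up values for illustration): a vector shape is rescaled exactly by transforming its anchor points, while a bitmap can only be resampled pixel by pixel, which is why enlargement looks blocky.

```python
# Scaling a vector shape vs. a bitmap (illustrative sketch).

def scale_vector(points, factor):
    """A vector shape is a list of anchor points; scaling is exact."""
    return [(x * factor, y * factor) for (x, y) in points]

def scale_bitmap(pixels, factor):
    """A bitmap is a grid of pixel values; scaling resamples (here,
    nearest neighbour), so enlarged images look blocky."""
    h, w = len(pixels), len(pixels[0])
    new_h, new_w = int(h * factor), int(w * factor)
    return [[pixels[int(r / factor)][int(c / factor)] for c in range(new_w)]
            for r in range(new_h)]

triangle = [(0, 0), (4, 0), (2, 3)]          # vector: three anchor points
print(scale_vector(triangle, 2.5))           # exact coordinates, any factor

tiny_image = [[0, 255], [255, 0]]            # bitmap: 2x2 checkerboard
for row in scale_bitmap(tiny_image, 2):      # 4x4 blocky enlargement
    print(row)
```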
File format
Say we want an image of a fly whose wings are partially transparent; representing
that in our presentation will be problematic if the proper file format is not used. The
choice of file format depends on the software that you may be using. Two cases for
the same situation are discussed below:
• The right file format that will allow us to create a transparent background
in Keynote presentation. Even though Keynote could import all common file
formats such as GIF, JPG, and BMP, there is one format that will work
particularly well which is .PSD. Using .PSD (Photoshop format) we are able to
easily place a transparent image, even partially transparent sections of the image,
such as the wings of the fly, as well as retain their properties.
• The right file format that will allow us to create a transparent background
in PowerPoint. Even though PowerPoint could import all common file formats
such as GIF, JPG, and BMP, there are two particular file formats that will work
exceptionally well: TIFF and PNG. Using TIFF (Tagged-Image File Format) or
PNG (Portable Network Graphics), we can quickly and easily remove the unwanted
background in PowerPoint, a feature not available with the other file formats
mentioned.
Note: TIFF or PNG: TIFF has been around longer than PNG, which was originally
designed to replace GIF on the Web. PowerPoint works well with both these
files when creating transparent backgrounds but generally PNG creates smaller
file sizes with no loss of quality.
1.3.2 Painting and Drawing
When we talk about graphics, we mean pictures, and pictures can be either
illustrations or photographs. If you want to get graphics into a Web page or
multimedia presentation, you either have to create them in some kind of graphics
application by drawing or painting them right there in the application, or bring
them into the application via a digital camera or scanner, and then edit and save
them in a form suitable to your medium.
Many software applications offer a variety of features for creating and editing
pictures on the computer. Even multimedia authoring and word processing programs
include some simple features for drawing on the computer.
So, painting and drawing applications in computer graphics allow the user to create
and edit images; the basic difference between them is as follows:
Drawing in a software application means using tools that create “objects,” such as
squares, circles, lines or text, which the program treats as discrete units. If you draw a
square in PowerPoint, for example, you can click anywhere on the square and move it
around or resize it. It’s an object, just like typing the letter “e” in a word processor.
That is, a drawing program allows a user to position standard shapes (also called
symbols, templates, or objects), which can be edited by applying translation, rotation
and scaling operations to them.
Painting functions, on the other hand, don’t create objects. If you look at a computer
screen, you’ll see that it’s made up of millions of tiny dots called pixels. You’ll see
the same thing in a simpler form if you look at the colour comics in the Sunday
newspaper — lots of dots of different colour ink that form a picture. Unlike a
drawing function, a paint function changes the colour of individual pixels based on
the tools you choose. In a photograph of a person’s face, for example, the colours
change gradually because of light, shadow and complexion. You need a paint
function to create this kind of effect; there’s no object that you can select or move the
way you can with the drawn square. That is, a painting program allows the user to paint
arbitrary swaths using brushes of various sizes, shapes, colours and patterns. Many
painting programs also allow the placement of predefined shapes such as rectangles
and polygons on the canvas. Any part of the canvas can be edited at the pixel level.
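The contrast can be sketched in a few lines of Python (the class, function and canvas size below are invented purely for illustration): a drawn square remains a discrete object that can be moved or resized later, whereas painting merely overwrites values in a pixel grid, leaving no object to select afterwards.

```python
# "Drawing": the square remains an editable object.
class Square:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
    def move(self, dx, dy):          # later edits just change parameters
        self.x += dx
        self.y += dy

# "Painting": colours are written directly into a pixel grid.
def paint_square(canvas, x, y, size, colour):
    for row in range(y, y + size):
        for col in range(x, x + size):
            canvas[row][col] = colour   # only pixels change; no object exists

shape = Square(2, 2, 3)
shape.move(4, 0)                        # the object can still be repositioned

canvas = [[0] * 10 for _ in range(10)]  # 10x10 canvas of background pixels
paint_square(canvas, 2, 2, 3, 7)        # afterwards, only pixel values remain
```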
The reason why the differences are important is that, as noted earlier, many different
kinds of programs offer different kinds of graphics features at different levels of
sophistication, but they tend to specialise in one or the other. For example:
1) Many word processors, like Word, offer a handful of simple drawing functions.
They aren’t that powerful, but if all you need is a basic illustration made up of
simple shapes to clarify a point, they’re fine.
2) Composition programs – page layout, presentation, multimedia authoring and Web
development applications – can be quite complex, but their main purpose is
composition, not image creation or editing.
That is, they allow you to create or import text and graphics and, perhaps, sound,
animation and video.
Most of the graphics features in these types of programs are limited to drawing
functions because they assume that you will do more complex work in a program
dedicated to other functions (e.g., writing in a word processor, editing photos in a
paint program), then import your work to arrange the different pieces in the
composition program. (Some multimedia authoring systems, however, also offer
painting and drawing functions.)
By the way, the differences in composition programs are mainly in the form of
their output: Page layout programs, such as PageMaker and QuarkXPress, are for
composing printed pages; presentation and multimedia authoring programs, such
as PowerPoint and HyperStudio, are for slide shows and computer displays; and
Web development applications, like Netscape Composer, are for, well, Web
pages.
Some popular page layout applications are:
a) QuarkXPress (Quark)
b) PageMaker (Adobe)
c) InDesign (Adobe)
d) Publisher (Microsoft)
The queen of page layout is QuarkXPress, owned by Quark, and InDesign, owned by
Adobe, is the king; finally, there is Microsoft Publisher, which is very easy to use.
6) Name some software packages that are suitable for page layout tasks, such as a
multipage menu for a restaurant.
……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………
………
1.3.3 Photo Editing

Photo-editing programs are paint programs – it’s just that they include many
sophisticated functions for altering images and for controlling aspects of the image,
like light and colour balance.
For the most part, any paint program can open and display a digital photo image, but
it will probably not offer the range and depth of features that a true photo-editing
program like Photoshop does.
Note:
FrontPage is known for writing lots of code that you don’t need. GoLive is great, but
I have never met a person who uses it.
We listed Netscape Composer basically because it is free; it’s not a bad product for
free. We teach a lot of people “Intro to web design”, and if they don’t have any
software to make web pages, and if they don’t want to learn HTML, we show them
Composer.
1.3.4 Scientific Visualisation

It is difficult for the human brain to make sense of the large volume of numbers
produced by a scientific computation. Numerical and statistical methods are useful
for solving this problem. Visualisation techniques are another approach for
interpreting large data sets, providing insights that might be missed by statistical
methods. The pictures they provide are a vehicle for thinking about the data.
Thus, we can say that scientific visualisation is part of a scientist’s tool kit: it helps to
stimulate insight into and understanding of any scientific issue, thus helping not only
in solving or analysing it but also in producing appropriate presentations of it. This
concept of scientific visualisation fits well with modelling and simulation.
Figure 1 describes the steps for visualisation of any scientific problem under
consideration; these steps are followed recursively to visualise any complex situation.
Problem → Model → Geometry → Image

Figure 1: Steps for visualisation of any scientific problem
1.3.5 Image Processing

Modern digital technology has made it possible to manipulate multi-dimensional
signals with systems that range from simple digital circuits to advanced parallel
computers. The goal of this manipulation can be divided into three categories: image
processing (image in, image out), image analysis (image in, measurements out) and
image understanding (image in, high-level description out).
We will focus on the fundamental concepts of image processing. We can only make a
few introductory remarks about image analysis here, as to go into details would be
beyond the scope of this unit. Image understanding requires an approach that differs
fundamentally from the theme of this section. Further, we will restrict ourselves to
two-dimensional (2D) image processing although, most of the concepts and
techniques that are to be described can be extended easily to three or more
dimensions.
We begin with certain basic definitions. An image defined in the “real world” is
considered to be a function of two real variables, for example, a(x,y) with a as the
amplitude (e.g., brightness) of the image at the real coordinate position (x,y). An
image may be considered to contain sub-images sometimes referred to as
regions-of-interest, ROIs, or simply regions. This concept reflects the fact that
images frequently contain collections of objects each of which can be the basis for a
region. In a sophisticated image processing system it should be possible to apply
specific image processing operations to selected regions. Thus, one part of an image
(region) might be processed to suppress motion blur while another part might be
processed to improve colour rendition.
The amplitudes of a given image will almost always be either real numbers or integer
numbers. The latter is usually a result of a quantisation process that converts a
continuous range (say, between 0 and 100%) to a discrete number of levels. In certain
image-forming processes, however, the signal may involve photon counting which
implies that the amplitude would be inherently quantised. In other image forming
procedures, such as magnetic resonance imaging, the direct physical measurement
yields a complex number in the form of a real magnitude and a real phase.
Let us discuss details of digitization. The 2D continuous image a(x,y) is divided into
N rows and M columns. The intersection of a row and a column is termed a pixel. The
value assigned to the integer coordinates [m,n] with {m=0,1,2,...,M –1} and
{n=0,1,2,...,N –1} is a[m,n]. In fact, in most cases a(x,y) – which we might consider
to be the physical signal that impinges on the face of a 2D sensor – is actually a
function of many variables including depth (z), colour (λ), and time (t). The effect of
digitisation is shown in Figure 2.
Figure 2: Digitisation of a continuous image – rows and columns of pixels, with the pixel at coordinates [m=10, n=3] indicated
The image shown in Figure 2 has been divided into N = 30 rows and M = 30 columns
for digitisation of the continuous image. The value assigned to every pixel (for
example, the pixel at coordinates [m=10, n=3]) is the average brightness in the pixel
rounded to the nearest integer value. The process of representing the amplitude of the
2D signal at a given coordinate as an integer value with L different gray levels is
usually referred to as amplitude quantisation or simply quantisation.
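As a concrete illustration of sampling and quantisation, here is a small Python sketch; the continuous image function, the 30 × 30 grid and the choice of L = 256 gray levels are assumptions chosen only for this example.

```python
import math

def a(x, y):
    """A stand-in continuous image a(x, y) with amplitude in [0, 1]."""
    return 0.5 + 0.5 * math.sin(x) * math.cos(y)

M, N, L = 30, 30, 256          # M columns, N rows, L gray levels

def digitise(f, M, N, L, width=10.0, height=10.0):
    """Sample f over a width x height region into N rows and M columns,
    then quantise each sample to an integer in 0..L-1."""
    image = []
    for n in range(N):                      # row index
        row = []
        for m in range(M):                  # column index
            x = (m + 0.5) * width / M       # sample at the pixel centre
            y = (n + 0.5) * height / N
            row.append(round(f(x, y) * (L - 1)))
        image.append(row)
    return image

a_mn = digitise(a, M, N, L)
print(a_mn[3][10])                          # value of the pixel at [m=10, n=3]
```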
Certain tools are central to the processing of digital images. These include
mathematical tools such as convolution, Fourier analysis, and statistical descriptions,
and manipulative tools such as chain codes and run codes (a small convolution sketch
is given after the list below). These tools are worked with at a very low level; in
general, we use software to process images with the help of computers. Some of the
categories of image processing software, with their respective examples and features,
are listed below:
1) Graphics Image Processing: The most commonly used software is: Photoshop.
Features:
• Most common image processing software.
• Focuses on creating a pretty picture.
• Usually limited to popular graphics formats such as: TIFF, JPEG, GIF
• Best suited for working with RGB (3-band) images.
• Does not treat an image as a “map”.
2) Geographic Information Systems (GIS): The most commonly used software is:
ArcMap.
Features:
• Works within a geographic context.
• Great for overlaying multiple vector and raster layers.
• It has somewhat limited analysis capability, although these limitations are being
reduced.
• More common than remote sensing software.
3) Remote Sensing Image Processing:
Features:
• Best suited for satellite imagery.
• Uses geo-spatial information.
• Easily works with multi-spectral data.
• Provides analysis functions commonly used for remote sensing applications.
• Often easy to use but it helps to be familiar with remote sensing.
4) Numerical Analysis Packages: Commonly used software is: MatLab.
Features:
• Focus usually on numeric processing.
• Programming or mathematical skills usually helpful.
• Used to build more user-friendly applications.
5) Image Viewers:
Features:
• Image display, roam, zoom.
• Image enhancement.
• Simple image processing.
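Since convolution was named above as a central tool, here is the small convolution sketch promised earlier, written in Python; the 3 × 3 averaging kernel and the tiny test image are made-up values, and border pixels are simply left unchanged for brevity.

```python
def convolve2d(image, kernel):
    """Convolve a 2D image with a small odd-sized kernel.
    Border pixels that the kernel does not fully cover are left unchanged."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oy, ox = kh // 2, kw // 2
    out = [row[:] for row in image]
    for y in range(oy, ih - oy):
        for x in range(ox, iw - ox):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    acc += kernel[ky][kx] * image[y + ky - oy][x + kx - ox]
            out[y][x] = acc
    return out

blur = [[1 / 9.0] * 3 for _ in range(3)]      # 3x3 averaging (blur) kernel
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
for row in convolve2d(img, blur):
    print([round(v, 2) for v in row])
```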
Note:
1) Images are the final product of most processes in computer graphics. The ISO
(International Standards Organization) defines computer graphics as the sum total
of methods and techniques for converting data for a graphics device by computer.
In other words, computer graphics is the conversion of data into images, also
called visualisation:
Data → Image
1.3.6 Education, Training, Entertainment and CAD

CAD (or CADD) is an acronym that, depending on who you ask, can stand for:
• Computer Aided Design.
• Computer Aided Drafting.
• Computer Assisted Design.
• Computer Assisted Drafting.
• Computer Assisted Design and Drafting.
• Computer Aided Design and Drafting.
In general, CAD stands for Computer-Aided Design. In CAD, interactive graphics
is used to design components and systems of mechanical, electrical, and electronic
devices. A CAD system is, in fact, a combination of hardware and software that
enables engineers and architects to design everything from furniture to airplanes. In
addition to the software, CAD systems require a high-quality graphics monitor; a
mouse, light pen or digitising tablet for drawing; and a special printer or plotter for
printing design specifications.
CAD systems allow an engineer to view a design from any angle with the push of a
button and to zoom in or out for close-ups and long-distance views. In addition, the
computer keeps track of design dependencies so that when the engineer changes one
value, all other values that depend on it are automatically changed accordingly.
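The idea of design dependencies can be sketched in a few lines of Python; the rectangular plate, its driving dimensions and the derived quantities below are invented purely for illustration.

```python
class Plate:
    """A toy parametric part: derived values are recomputed from the
    driving dimensions, so changing one value updates its dependents."""
    def __init__(self, length, width, hole_diameter):
        self.length = length
        self.width = width
        self.hole_diameter = hole_diameter

    @property
    def area(self):                      # depends on length and width
        return self.length * self.width

    @property
    def hole_clearance(self):            # depends on width and hole size
        return (self.width - self.hole_diameter) / 2

plate = Plate(length=100.0, width=40.0, hole_diameter=10.0)
print(plate.area, plate.hole_clearance)  # 4000.0 15.0

plate.width = 60.0                       # the engineer changes one value...
print(plate.area, plate.hole_clearance)  # ...dependent values follow: 6000.0 25.0
```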
Generally, we use CAD as a tool for imparting education and training to engineers,
so that they can produce beautifully carved and engineered pieces in bulk with the
same finish and perfection. A few terms are used repeatedly along with CAD, namely
CAM and CNC. Let us discuss what CAD/CAM and CAD/CNC (or NC) are.
CAD/CAM software uses CAD drawing tools to describe geometries used by the
CAM portion of the program to define a tool path that will direct the motion of a
machine tool to machine the exact shape that was drawn on the computer. Let us
discuss the terms in brief.
Note:
• Numerically-Controlled Machines: Before the development of Computer-aided
design, the manufacturing world adopted tools controlled by numbers and letters
to fill the need for manufacturing complex shapes in an accurate and repeatable
manner. During the 1950s these Numerically-Controlled machines used the
existing technology of paper tapes with regularly spaced holes punched in them
(think of the paper roll that makes an old-fashioned player piano work, but only
one inch wide) to feed numbers into controller machines that were wired to the
motors positioning the work on machine tools. The electro-mechanical nature of
the controllers allowed digital technologies to be easily incorporated as they were
developed. NC tools immediately raised automation of manufacturing to a new
level once feedback loops were incorporated (the tool tells the computer where it
is, while the computer tells it where it should be).
Now-a-days many new machine tools incorporate CNC technologies. These tools
are used in every conceivable manufacturing sector. CNC technology is related
to Computer Integrated Manufacturing (CIM), Computer Aided Process
Planning (CAPP) and other technologies such as Group Technology (GT) and
Cellular Manufacturing. Flexible Manufacturing Systems (FMS) and Just-In-
Time Production (JIT) are made possible by Numerically-Controlled Machines.
The development of Computer-aided design had little effect on CNC initially due to
the different capabilities and file formats used by drawing and machining programs.
However, as CAD applications such as SolidWorks and AutoCad incorporate CAM
intelligence, and as CAM applications such as MasterCam adopt sophisticated CAD
tools, both designers and manufacturers are now enjoying an increasing variety of
capable CAD/CAM software. Most CAD/CAM software was developed for product
development and the design and manufacturing of components and moulds, but they
are being used by architects with greater frequency. Thus, a CAD program introduces
the concept of real-world measurement. For example, a car or building can be drawn
as if it were life-size, and later arranged into sheets and printed on paper at any
desired scale.
Note:
1) CAD (or CADD) stands for Computer-Aided Design and Drafting. It differs from
both “paint” and “draw” programs in that (i.e., CAD) measurement is central to
its abilities. Whereas a “paint” program lets you manipulate each pixel in an array
of pixels that makes up an image, a “draw” program goes a step further – its image
is composed of separate entities or objects, such as circles, lines, etc., and it may
provide facilities to group these into larger objects.
2) Is CAD used only for technical drawings? No. While true-scale, structurally valid
drawings are the reason for CAD’s existence, its use is as diverse as our
customers’ imaginations. For instance, it may be used for:
(a) page layout, web graphics (when scaling and relationships are important to an
image, making the image in CAD and exporting it as a bitmap for touchup
and conversion can be very productive),
(b) visually accessed databases (imagine a map with detail where you can zoom
into an area and edit textual information “in place” and you can then see what
other items of interest are “in the neighborhood” - our program’s ability to
work very rapidly with large drawings is a real plus here),
(c) sign layout, laser-cutting patterns for garment factories, schematic design
(where CAD’s symbol library capabilities come in handy), and printed-circuit
board layout (this was the application for which our first CAD program was
created, in 1977).
Software packages for CAD applications typically provide the designer with a multi-
window environment. Animations are often used in CAD applications; real-time
animations using wire-frame displays on a video monitor are useful for testing the
performance of a vehicle or a system. The wire-frame display allows the user to
study the interior of the vehicle and its behaviour. When the study of behaviour is
completed, realistic visual models with surface rendering are used for background
scenes and realistic display.
There are many CAD software applications available in the market, from many
different vendors, and many more than we can mention here.
These applications replicate the old drafting board as a means to draw and create
designs. As CAD applications run on computers they provide a great deal more
functionality than a drafting board, and a great deal more complexity. The lines and
text created in CAD are vectors. This means that their shapes and positions are
described in mathematical terms. These vectors are stored on computer systems in
CAD files.
There are a great many different file formats for CAD. Most CAD applications
produce their own proprietary file format. The CAD applications from AutoDesk Inc.
are used widely. As a result their DWG format is very common. Many other CAD
applications from other vendors can produce and open DWG files, as well as their
own proprietary formats. CAD data is often exchanged using DXF format.
Note: The DWG file format is a CAD vector format developed by Autodesk
and created by their AutoCAD application. DXF is also a CAD vector
format. It is designed to allow the exchange of vector information between
different CAD applications. Most CAD applications can save to and read
from DXF format.
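As a rough illustration of what “vector information” in an exchange file looks like, the sketch below writes a single LINE entity using a minimal ASCII DXF-style layout of group-code/value pairs. This is a simplified assumption of the format rather than a complete DXF writer; real applications usually expect additional header and table sections.

```python
def dxf_line(x1, y1, x2, y2, layer="0"):
    """Return a minimal DXF-style ENTITIES section containing one LINE.
    Each entity is a sequence of (group code, value) pairs on separate lines."""
    pairs = [
        (0, "SECTION"), (2, "ENTITIES"),
        (0, "LINE"), (8, layer),          # 8 = layer name
        (10, x1), (20, y1),               # 10/20 = start point x/y
        (11, x2), (21, y2),               # 11/21 = end point x/y
        (0, "ENDSEC"), (0, "EOF"),
    ]
    return "\n".join(f"{code}\n{value}" for code, value in pairs)

with open("line.dxf", "w") as f:          # hypothetical output file
    f.write(dxf_line(0.0, 0.0, 100.0, 50.0))
```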
When CAD drawings are sent to printers the format commonly used is HPGL. HPGL
files typically have the extension .plt.
Note: The HPGL file format is a vector format developed by Hewlett Packard for
driving plotters. The file extensions used include .plt, .hpg, .hp2, .pl2 and
sometimes .prn. However, the use of the .prn extension is not an absolute
indicator that the file contains HPGL code. They are often referred to as ‘plot
files’. Trix Systems offers several options for handling HPGL and the later
HPGL2 file formats.
Check Your Progress 2
1) What is photo editing? What software is used for image editing?
……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………
………
1.3.7 Simulations
To simulate something physical, you will first need to create a mathematical model,
which represents that physical object. Models can take many forms including
declarative, functional, constraint, spatial or multimodel. A multimodel is a model
containing multiple integrated models each of which represents a level of granularity
for the physical system. The next task, once a model has been developed, is to
execute the model on a computer – that is, you need to create a computer program
which steps through time while updating the state and event variables in your
mathematical model. There are many ways to “step through time”. You can, for
instance, leap through time using event scheduling or you can employ small time
increments using time slicing. You can also execute (i.e., simulate) the program on a
massively parallel computer. This is called parallel and distributed simulation. For
many large-scale models, this is the only feasible way of getting answers back in a
reasonable amount of time.
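As a tiny illustration of stepping through time with time slicing, the following Python sketch simulates a falling ball using a fixed time increment; the time step, the value of gravity and the initial state are assumptions chosen just for the example.

```python
DT = 0.01        # time slice in seconds (assumed)
G = -9.81        # gravitational acceleration in m/s^2

def simulate_fall(height, duration):
    """Step through time, updating the state variables (position, velocity)
    of a ball dropped from the given height."""
    t, y, v = 0.0, height, 0.0
    while t < duration and y > 0.0:
        v += G * DT              # update velocity for this time slice
        y += v * DT              # update position for this time slice
        t += DT
    return t, y, v

t, y, v = simulate_fall(height=20.0, duration=5.0)
print(f"after {t:.2f} s: height {max(y, 0.0):.2f} m, speed {abs(v):.2f} m/s")
```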
You may want to know why we do simulation at all. Is there any other way to do
these tasks? To discuss these issues, let us briefly consider the cases in which
simulation is essential. There are many methods of modelling systems which do not
involve simulation but which involve the solution of a closed-form system (such as a
system of linear equations); let us not go into those methods, as they are not part of
our current discussion.
Simulation is essential in cases such as the following:
1) The model is very complex with many variables and interacting components;
2) The underlying variables relationships are nonlinear;
3) The model contains random variates;
4) The model output is to be visual as in a 3D computer animation.
The Advantage of Simulation is that – even for easily solvable linear systems – a
uniform model execution technique can be used to solve a large variety of systems
without resorting to a “bag of tricks” where one must choose special-purpose and
sometimes arcane solution methods to avoid simulation. Another important aspect of
the simulation technique is that one builds a simulation model to replicate the actual
system. When one uses the closed-form approach, the model is sometimes twisted to
suit the closed-form nature of the solution method rather than to accurately represent
the physical system. A harmonious compromise is to tackle system modeling with a
hybrid approach using both closed-form methods and simulation. For example, we
might begin to model a system with closed-form analysis and then proceed later with
a simulation. This evolutionary procedure is often very effective.
Note: Monte Carlo methods are a widely used class of computational algorithm for
simulating the behaviour of various physical and mathematical systems. They
are distinguished from other simulation methods (such as molecular
dynamics) by being stochastic, that is non-deterministic in some manner –
usually by using random numbers – as opposed to deterministic algorithms.
Because of the repetition of algorithms and the large number of calculations
involved, Monte Carlo is a method suited to calculation using a computer,
utilising many techniques of computer simulation. Further, Monte Carlo
algorithm is a numerical Monte Carlo method used to find solutions to
mathematical problems (which may have many variables) that cannot easily
be solved, for example, by integral calculus, or other numerical methods. For
many types of problems, its efficiency relative to other numerical methods
increases as the dimensions of the problem increases.
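A classic toy example of a Monte Carlo method, sketched here in Python (the sample count is arbitrary), estimates π by drawing random points in the unit square and counting how many fall inside the quarter circle.

```python
import random

def estimate_pi(samples=100_000):
    """Monte Carlo estimate of pi: the fraction of random points in the
    unit square that land inside the quarter circle approaches pi/4."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi())   # close to 3.14159...; accuracy improves with more samples
```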
1.3.8 Animation and Games

In our childhood, we have all seen the flip books of cricketers which came free along
with some soft drink, where several pictures of the same person in different batting or
bowling actions are sequentially arranged on separate pages, such that when we flip
the pages of the book the picture appears to be in motion. This was a flipbook
(several papers of the same size with an individual drawing on each paper so the
viewer could flip through them). It is a simple application of the basic principle of
physics called persistence of vision. This low tech animation was quite popular in the
1800s when the persistence of vision (which is 1/16th of a second) was discovered.
This discovery led to some more interesting low tech animation devices like the
zoetrope, wheel of life, etc. Later, building on many basic mathematics and physics
principles, research was conducted which allowed us to generate 2D/3D
animations. In Units 1 and 2 of Block 2 we will study the transformations involved in
computer graphics but you will notice that all transformations are related to space and
not to time. Here lies the basic difference between animation and graphics. The
difference is that animation adds to graphics the dimension of time which vastly
increases the amount of information to be transmitted, so some methods are used to
handle this vast information and these methods are known as animation methods
which are classified as:
First Method: In this method, the artist creates a succession of cartoon frames,
which are then combined into a film.
Second Method: Here, physical models are positioned and an image of them is
recorded; the model is then moved for the next image to be recorded, and this
process is continued. Thus, the historical approach to animation has classified
computer animation into two main categories:
(a) Computer-Assisted Animation, in which the computer assists the traditional
frame-by-frame process, for example by generating the in-between frames; and
(b) Computer Generated Animation is the animation presented via film or video,
which is again based on the concept of persistence of vision because the eye-
brain assembles a sequence of images and interprets them as a continuous
movement and if the rate of change of pictures is quite fast then it induces the
sensation of continuous motion.
This motion specification for computer-generated animation is further divided into 2
categories:
Low-Level Techniques (Motion Specific): these techniques are used to fully control
the motion of any graphic object in an animation scene. They are also referred to
as motion-specific techniques because we specify the motion of the graphic
object in the scene. Techniques such as interpolation and approximation are used in
the motion specification of a graphic object. Low-level techniques are used when the
animator has a fairly specific idea of the exact motion that s/he wants.
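The most common low-level technique, interpolation between two keyframes, can be sketched in a few lines of Python; the keyframe positions, the number of frames and the 24 fps assumption are invented for the example.

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def inbetween(key_start, key_end, frames):
    """Generate the in-between positions for an object moving from
    key_start to key_end over the given number of frames."""
    (x0, y0), (x1, y1) = key_start, key_end
    return [(lerp(x0, x1, f / (frames - 1)), lerp(y0, y1, f / (frames - 1)))
            for f in range(frames)]

# Move an object from (0, 0) to (100, 50) over 24 frames (one second at 24 fps).
for frame, pos in enumerate(inbetween((0, 0), (100, 50), 24)):
    print(frame, pos)
```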
High Level Techniques (Motion Generalised) are techniques used to describe the
general motion behaviour of any graphic object. These techniques are algorithms or
models used to generate motion using a set of rules or constraints. The animator sets
up the rules of the model, or chooses an appropriate algorithm, and selects initial
values or boundary values. The system is then set into motion and the motion of the
objects is controlled by the algorithm or model. This approach often relies on fairly
sophisticated computation such as, vector algebra and numerical techniques among
others.
So, the animation concept can be defined as: A time based phenomenon for
imparting visual changes in any scene according to any time sequence. The visual
changes could be incorporated through the Translation of the object, scaling of the
object, or change in colour, transparency, surface texture etc.
Before the advent of computer animation, all animation was done by hand, which
involved an enormous amount of work. You may have an idea of the amount of work
by considering that each second of animation film contains 24 frames. Then,
one can imagine the amount of work in creating even the shortest of animated films.
Without going into details of traditional methods let us categorise computer
animation technique. Computer animation can be categorised in two ways:
Passive Computer Animations: which offer no option for the user to interact, e.g.,
movies. (The use of computer graphics today, however, is largely interactive.) Frame
animation is non-interactive animation and is generally used in generating cartoon
movies.
animation method, i.e., it is animation inside a rectangular frame. It is similar to
cartoon movies: a sequence of frames that follow each other at a fast rate, fast enough
to convey fluent motion. It is typically pre-compiled and non-interactive. The frame
is typically rectangular and non-transparent. Frame animation with transparency
information is also referred to as “cel” animation. In traditional animation, a cel is a
sheet of transparent acetate on which a single object (or character) is drawn. We will
discuss this in Block 4.
There are various software which are used to generate computer animations. Some of
them are:
• Flash: Learning MacroMedia’s Flash can be quite complex, but you can do
almost anything with it. You can develop presentations, websites, portions of
websites, games, or full-length feature, animated cartoons.
You can import just about anything into Flash. You can drop in images of almost
any file format, video clips, sounds and more. It is generally a 2D program.
• Poser: Poser, by Curious Labs, creates complex 3D models that you can view
from any angle, distance or perspective. You can make the model look like any
body you want it to. For instance, if you wanted to make a model that looks just
like your Grandmother, you would do it in Poser (the learning curve is vast).
Taking that to another level, you could then animate your Grandmother and make
her run down a picture of a beach.
There are many more software packages related to animation; we will discuss them
in Unit 1 of Block 4.
1.4 GRAPHICS HARDWARE
No matter which advanced graphics software you are working with, if your output
device is not good, or the hardware handling that software is not good, then the
ultimate result will not be as good as it could be. In other words, hardware also
dominates the world of graphics. So, let us discuss some hardware devices which
help us to work with graphics packages.
1.4.1 Input and Output Devices

Input and output devices are quite important for any software because an
inappropriate selection of the concerned hardware may produce erroneous
results or may process data of some other format. So, in the following sections, we
discuss some of the input and output devices, such as:
• Touch Panel
• Light Pens
• Graphics Tablet
• Plotters
• Film Recorders.
Touch Panels
Touch panels allow displayed object or screen positions to be selected with the touch
of the finger and is also known as Touch Sensitive Screens (TSS). A typical
application of touch panels is for the selection of processing options that are
represented with graphical icons. Touch input can be recorded using optical,
electrical or acoustical methods.
Optical touch panels employ a line of infra-red LEDs (light emitting diodes) along
one vertical edge and along one horizontal edge of the frame. The opposite vertical
and horizontal edges contain light detectors. These detectors are used to record the
beams that are interrupted when the panel is touched. The two crossing
beams that are interrupted identify the horizontal and vertical coordinates of the
screen position selected.
Light Pen
Light pen is a pointing device. It has a light sensitive tip which is excited when the
light is emitted and an illuminated point on the screen comes in its field of view.
Unlike other devices which have associated hardware to track the device and
determine x and y values, the light pen needs software support (some kind of tracking
program). Pointing operations are easily programmed for light pens.
Figure 4 shows two typical applications of a light pen. It has a light sensitive tip and
a photocell mounted in a pen-like case. If the light pen is pointed at an item on the
screen it generates information from which the item can be identified by the program.
When the light pen senses an illuminated phosphor, it interrupts the display
processor’s interpreting of the display file. The processor’s instruction register tells
which instruction in the display file was being executed. By identifying the
instruction responsible for the illuminated point, the machine can discover which
object the pen is pointing to.
A light pen is an event driven device. The processor has to wait till it comes across
an illuminated point on the screen to obtain any information. The keyboard is another
typical example of an event driven device. The processor has to wait for a key to be
pressed before it can determine what the user wants to input. Event driven devices
can be handled in two ways as follows:
(a) Polling: The status of each device is periodically checked in a repetitive manner
by a polling loop. When an event occurs, the loop is exited and the corresponding
event is handled by executing some special event-handling routine or task. Again
the polling continues. The disadvantage is that the processor has to wait in an idle
state until some event occurs. Data entered can be lost if an event occurs at a time
when the main program is not in its polling loop.
(b) Interrupts: An alternative to polling is the interrupt feature. The device sends an
interrupt signal to the processor when an event occurs. The processor breaks from
its normal execution and executes some special interrupt-handling routine or task.
After the task is complete the control returns to the main program. To handle
situations when more than one event occurs, different priorities are assigned to
tasks so that higher priority tasks may interrupt tasks of lower priority.
Several events may occur before the program is ready for them. When more than one
event occurs, the associate information is entered into the event queue. A polling loop
can be employed to check the status of the event queue. The event queue can then
pass input data from the polling task to the main program in the correct order. The
main program takes events off the head of the queue and invokes the appropriate
process. The devices need not be checked repeatedly for occurrence of events.
Devices can interrupt even with the processor being unaware of it.
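The polling-with-event-queue idea can be sketched in Python as follows; the device class, the queue and the event names are all hypothetical, since a real system would read hardware status registers or rely on operating system interrupts.

```python
from collections import deque

event_queue = deque()          # events wait here until the main program is ready

class Keyboard:
    """A stand-in event-driven device; poll() reports a pending event, if any."""
    def __init__(self, pending):
        self.pending = list(pending)
    def poll(self):
        return self.pending.pop(0) if self.pending else None

devices = [Keyboard(["KEY_A"]), Keyboard(["KEY_ENTER"])]

def polling_loop():
    """Check each device in turn; queue any event for the main program."""
    for device in devices:
        event = device.poll()
        if event is not None:
            event_queue.append(event)

for _ in range(3):             # the main program polls repeatedly...
    polling_loop()
    while event_queue:         # ...and handles events in arrival order
        print("handling", event_queue.popleft())
```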
Two kinds of light pen interrupts may occur. If the user points the pen at an item on
the screen to select it, as in Figure 4(a), a selection interrupt occurs. If the user is
positioning with the pen, as in Figure 4(b) a pattern called tracking pattern in
displayed along the pen’s movement and tracking interrupts occur when the pen sees
the tracking pattern.
Modified versions of the light pen may also be used to draw lines, read barcodes, or
do transformation operations on objects on the screen (or on a tablet).
Graphics Tablet
Before going into details on the graphic tablet, we need to know what we mean by
tablet in computer terminology because, in other disciplines, the word tablet carries
different meanings. In terms of computer science “Tablet is a special flat surface
with a mechanism for indicating positions on it, normally used as a locator”. This
small digitiser is used for interactive work on a graphics workstation. Actually this
device is essential when someone wants to do free hand drawing or to trace any solid
geometrical shape. So a graphic tablet is a drawing tablet used for sketching new
images or tracing old ones. Or we may say that a graphics tablet is a computer input
device that allows one to hand-draw images and graphics, similar to the way one
draws images with a pencil on paper. Or a Graphics tablet is a computer peripheral
device that allows one to hand-draw images directly into a computer, generally through an
imaging program. A graphics tablet consists of a flat surface upon which the user may
‘draw’ an image using an attached pen-like drawing apparatus with which the user
contacts the surface of the tablet. This apparatus is categorised into two types, known
as pen (or stylus) and puck (a flat block with cross-hairs and some switch keys),
which may be wired or wireless. Often mistakenly called a mouse, the puck is
officially the “tablet cursor.” The image drawn or traced generally does not appear on
the tablet itself but rather is displayed on the computer monitor.
The tablet and a hand-held pointer in the form of a stylus (pen) or puck, can serve one
or more of these three functions:
(i) For selecting positions (on a drawing or on a menu) on the screen by moving the
stylus on the tablet, in a sense using the stylus and tablet as pen on paper.
(ii) For issuing a command or to input a parameter by pressing the stylus at specified
pre-programmed locations of a menu on the tablet.
(iii) For digitising the location on a drawing or map placed on the tablet with the
stylus or puck.
This device is more accurate and efficient than a light pen. These are two types in
use:
(a) Voltage or Electro-Magnetic Field Tablet and Pointer: This has a grid of
wires, embedded in the tablet surface, with different voltages or magnetic fields
corresponding to different coordinates. Intermediate positions within a cell can
also be interpolated.
(b) Acoustic or Sonic (Radio-Wave) Tablet and Pointer: The sound of a spark at
the tip of the stylus is picked up by strip microphones along two edges of the
tablet. From the arrival time of the sound pulse at the microphones, the
perpendicular distances of the stylus tip from the two axes are known. The
acoustic method suffers from its inherent noisiness as well as its susceptibility to
interference from other noise.
A combination of electric pulses and time-delay detection by a sensor in the stylus,
called an electro-acoustic tablet, is also available.
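For the acoustic (sonic) tablet, the stylus position follows directly from the arrival times of the spark’s sound at the two edge microphones. A minimal Python sketch is given below; the speed of sound and the example arrival times are assumed values.

```python
SPEED_OF_SOUND = 343_000.0   # mm per second in air (approximate, assumed)

def stylus_position(time_to_x_axis_mic, time_to_y_axis_mic):
    """Convert sound arrival times (seconds) at the two strip microphones
    into perpendicular distances from the tablet's two axes (millimetres)."""
    y = SPEED_OF_SOUND * time_to_x_axis_mic   # distance from the x-axis strip
    x = SPEED_OF_SOUND * time_to_y_axis_mic   # distance from the y-axis strip
    return x, y

# Example: the spark is heard after 0.44 ms and 0.29 ms respectively.
print(stylus_position(0.00029, 0.00044))      # ~ (150.9, 99.5) mm
```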
A tablet typically operates in one of two modes:
1) Digitiser (Absolute) Mode: each position on the tablet surface maps to a fixed,
corresponding position on the screen.
2) Mouse Mode: the screen pointer moves relative to any starting position on the
tablet surface, just like a mouse.
When drawing or tracing on the tablet, a series of x-y coordinates (vector graphics)
are created, either as a continuous stream of coordinates, or as end points. Further the
drawings created or traced on tablets are stored as mathematical line segments; and
these features of tablets help to produce, tablet computers, tablet PCs and pen tablets.
Note: Objects are drawn with a pen (or stylus) or puck, but are traced with the puck
only.
Pen Tablet: A digitiser tablet that is specialised for handwriting and hand marking.
LCD-based tablets emulate the flow of ink as the tip touches the surface and pressure
is applied. Non-display tablets display the handwriting on a separate computer screen.
Plotters
1) Plotters print their output by moving a pen across the surface of a piece of paper.
This means that plotters are restricted to line art, rather than raster graphics as
with other printers. They can draw complex line art, including text, but do so
very slowly because of the mechanical movement of the pen.
2) Another difference between the plotter and the printer is that the printer is aimed
primarily at printing text. Thus, a printer can generate a page of output quickly,
but this is not the case with the line art of a plotter.
Film Recorders
A film recorder is a graphical output device for transferring digital images to
photographic film. Higher-quality film recorders, called LVT (Light Value Transfer)
recorders, use a laser to write the image directly onto the film, one pixel at a time.
This method is better suited to
print to large-format media such as poster-size prints. In any case, the exposed film is
developed and printed by regular photographic chemical processing. Self-developing
(polaroid) film can be used for immediate feedback.
Film recorders are used in digital printing to generate master negatives for offset and
other bulk printing processes. They are also used to produce the master copies of
movies that use computer animation or other special effects based on digital image
processing. For preview, archiving, and small-volume reproduction, film recorders
have been rendered obsolete by modern printers that produce photographic-quality
hardcopies directly on plain paper.
Film recorders were also commonly used to produce slides for slide projectors; but
this need is now largely met by video projectors that project images straight from a
computer to a screen.
Film recorders were among the earliest computer graphics output devices. Nowadays,
film recorders are primarily used in the motion picture film-out process.

Check Your Progress 3
2) What are touch panels? Discuss different touch panels that are currently available
for use?
……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………
………
1.4.2 Display Devices

As the importance of input and output devices has been discussed above, let us now
focus specifically on display devices, which present the output to the end user, who
may not be a technically sound client. If the output display is appealing then your
creation will definitely receive a word of appreciation; otherwise, you may be at the
receiving end. Hence, it is pertinent to discuss some of the display devices next.
Picture Information (PI) → Display Controller (DC) → Display Device (DD)
Actually, the picture information is conveyed through a stream of electrons (e–)
whose flow is controlled by the display controller (the control is as per the
information supplied by the picture); the controlled electron flow produces
scintillations on the screen of the display device and the image is formed. The display
is known as a refreshing display because the display is continuously recreated by the
continuous striking of electrons on the screen at the same points (i.e., the same image
is redrawn 40-50 times per second, i.e., continuous refreshing occurs).
Figure 8: CRT – the electron gun emits a beam that is steered by horizontal and vertical deflection onto the phosphor-coated screen (picture information → display controller → display device)
For a proper image, we also need to place a “Digital to Analog Converter” (DAC – an
electronic circuit for digital to analog conversion) between the display controller and
the display device:

PI → DC → DAC → DD
There are two kinds of refresh monitors, namely, the Random Scan and Raster Scan,
which will be described separately.
Note:
1) In a Random Scan System, the Display buffer stores the picture information.
Further, the device is capable of producing pictures made up of lines but not of
curves. Thus, it is also known as “Vector display device or Line display device or
Calligraphic display device”.
2) In a Raster Scan System, the Frame buffer stores the picture information, which
is the bit plane (with m rows and n columns).
Because of this type of storage, the system is capable of producing realistic images,
but the limitation is that the line segments may not appear to be smooth.
The original CRT, developed in the late fifties and early sixties, created charts and
pictures, line by line on the tube surface in any (random) order or direction given, in
a vectorial fashion. The electron beam was moved along the particular direction and
for the particular length of the line as specified. For this reason, this type of device
was known as a Vector, Calligraphic or Stroke display (because it drew lines just as our
ancestors drew letters like pictures, stroke by stroke). The process was similar to a
hand-sketch or pen-plot.
The display by this system is called Line Drawing Display. The sequence operates
through the following stages, illustrated in Figure 10:
1) Graphics Commands
2) Display-File Translator
3) Display-File Program
4) Display (File) Processor
5) VDU
The process is quite similar to the way a person or a plotter draws a line segment, the
pen being moved along a certain direction for a certain length, then changing
direction or lifting the pen and moving to a new point, and so on.
For instance, the image of a hut shown in Figure 11 would be traced as five line
segments, the left and right roof slopes, the left and right walls, and the floor line.
Efficiency is achieved by minimising wasted motion as would happen if a line
segment starts at a point different from the end point of the previous segment –
equivalent to picking up the pen and putting it down at another point to draw the
next segment. Thus, it would be better to trace the hut as ABCDE, rather than as CB,
CD, BA, DE, and AE, as a draftsman might draw.
Figure 11: Vector scan display of the hut – (a) traced as CB, CD, BA, DE, AE; (b) traced as ABCDEA (dashed: beam off, solid: beam on)
Current day screen display is also based on CRT technology, except that instead of
displaying the picture tracing along one vector after another, the image is displayed
as a collection of phosphor dots of regular shape arranged in a matrix form. These
regular shapes are the pixels (picture elements) and cover the entire screen. The
pixels could be (as in earlier times) rectangular, round, or (as is common now)
square.
A pixel is the smallest unit addressable by the computer and is made up of a number
of smaller dots, comprising the smallest phosphor particles in the CRT coating.
However, in this text, we shall use the word “dot” synonymously with “pixel”.
The reason the original CRT did not become popular with computer users was that
the refresh procedure required a large memory and high speed, and the equipment
to provide these was too expensive. It had to yield to the cheaper storage tube
display.
The electron beam covers the display area horizontally, row by row, from top to
bottom in a sweeping motion called Raster Scan, each dot lighting up with the
intensity and shade of gray or a colour as directed by the Display Controller. Each
complete sweep from top left to bottom right of the screen is one complete cycle,
called the Refresh Cycle.
When viewed from a distance, all the dots together make up the effect of a picture,
whether it is a scene from a serial as in the TV or a drawing in computer graphics.
The picture looks smooth or coarse-grained depending on the screen resolution, that
is the number of dots. Typically, on a computer monitor, there are 640 dots in the
horizontal direction and 200 to 480 dots in the vertical direction. (The home TV is
much finer grained, about three times or more, the scanning being done with analog
signals for an entire line at a time, as against the digital information for each dot in
the computer monitor). Today’s monitors can have resolutions of 1024 by 768 or
even higher.
Even with the advent of raster scan, the concept of vector (random scan) graphics has
not been completely eliminated. Certain standard geometric shapes such as straight-
line segments, circles and ellipses are built into compilers and packages as equations,
and developed into pixel graphics for the particular size and parameters specified.
Similarly, and more recently, even many fonts in text typography have been reduced
to equations so that the input letters, number etc. are really drawn from computed
lines, as a form of vector graphics. Before going into more details on the raster scan
display device, let us first discuss what Raster Scan is. It is an action very similar to
that of a dot matrix printer, or even a typewriter used to print a decorative border
around a message, one line at a time. The image grows from top to bottom, one line
at a time, and is complete only when the last line is printed.
The Raster Scan Proceeds as follows: Starting from the top left corner of the
screen, the electron gun scans (sweeps) horizontally from left to right, one scan line,
that is, one row at a time, jumping (without tracing the line) to the left end of the next
lower row until the bottom right corner is reached. Then it jumps (again without
tracing) to the top left corner and starts again, finishing one complete refresh cycle.
Figure 12 shows the track of a raster cycle.
This technology became very cost-effective, even inexpensive, and because of the
availability of large memory and of its high refresh speed, it has become very
popular. It takes us back to the refresh-type displays, and especially caters to the
interactive and dynamic nature of modern-day computer graphics.
The main disadvantage of the raster scan is the jagged nature of the lines, arising
from the fact that the pixels are aligned along regular rows and columns, and points
on a line will not, in general, fall on the exact centres of the pixels. But the
advantages far outweigh this disadvantage and further developments have diminished
this jaggedness problem as well. Hence, almost all the monitors used today have
raster display, using pixels, and all subsequent discussions in this text will be
concerned with this last type.
All of the preceding characteristics of raster scan will apply to any image on the
screen, whether it is graphics proper in the sense of a drawing, or it is a character
(letter, number or other symbol).
Three components are necessary for the raster scan display. They are:
1) The Frame Buffer which is also called the Refresh Buffer or Bitmap. It is the
refresh storage area in the digital memory, in which the matrix (array) of intensity
values and other parameters (called attributes) of all the pixels making up the
image are stored in binary form.
2) The display device, which converts the electrical signals into visible images,
namely the VDU.
3) The Display Controller, the interface that transmits the contents of the frame
buffer to the VDU in a form compatible with the display device, a certain number
of (30 or more) times a second. The display controller also adjusts and makes
allowances for the differences in the operating speeds of the various devices
involved, and may also generate line segments and text characters.
Creating points, lines, characters, as well as filling in areas with shades or colours are
all accomplished by this scan technique, known as the Frame Buffer Display. A
common method for storing characters is to store the pixel information for the entire
matrix (5*7 to 9*14, horizontal to vertical) assigned to represent a character.
1) Graphics Commands
2) Display Processor (Scan Conversion)
3) Frame Buffer
4) Display Controller
5) VDU
Host CPU (Graphics Commands) → Display Processor (Scan Conversion) → Frame Buffer → Display Controller → Visual Display Unit
Frame Buffers
The display system cycles through the refresh buffer, row-by-row at speeds of 30
or 60 times per second to produce the image on the display. The intensity values
picked up from the frame buffer are routed to the Digital/Analog converter which
produces the necessary deflection signals to generate the raster scan. A flicker-free
image is produced by interlacing: all odd-numbered scan lines are displayed first,
from top to bottom, and then all even-numbered scan lines are displayed. The
effective refresh rate to produce the picture thus becomes much greater than 30 Hz.
Figure 14: A bit plane in the frame buffer feeding the Display Controller (DC) and DAC. If the information stored in the frame buffer is 1 then the corresponding pixel is made bright on the screen, and if it is zero then no brightness appears (0 → OFF, 1 → ON), so the image obtained on the screen is discrete.
Different kinds of memory have been used in frame buffers: The earliest type of
frame buffers used drums and disks with rotational frequency compatible to the rate
of refresh. Such frame buffers were called rotating-memory frame buffers. But the
relatively lower costs of integrated-circuit shift registers saw the rotating-memory
frame buffer being replaced by the shift-register frame buffers.
A frame buffer can be constructed with a number of shift registers, each representing
one column of pixels on the screen. Each shift register contributes one bit per
horizontal scan line. However, changing a given spot on the screen is not very easy
with shift registers. So they are not suitable for interactive applications.
Modern frame buffers use random-access integrated-circuit memory, where the pixel
intensities are represented by 1, 2, 4, 8, 16 or 24 bits. Encoding text and simple
images does not require more than, say, 8 bits per pixel, but to produce a good quality
coloured image more than 8 bits – something like 24 bits – are required.
One of the best methods to encode coloured pictures involves the use of a colour
map. The pixel values in the frame buffer are treated as addresses of a look-up-table,
which has entries for every pixel’s red, green and blue components. The entry value
is used to control the intensity or colour on the CRT; each of the colour components
can be defined to high precision providing accurate control over the colours
displayed.
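As a rough sketch of this look-up-table idea (the map size, the colour entries and the printing stand-in for the display path are illustrative assumptions, not the actual hardware), each frame-buffer value is used only as an index, and the RGB triple stored at that index is what finally drives the display:

#include <stdio.h>

/* Hypothetical 4-entry colour map: each frame-buffer value (0..3)
   indexes an RGB triple with 8 bits of precision per component.   */
typedef struct { unsigned char r, g, b; } RGB;

static const RGB colour_map[4] = {
    {  0,   0,   0},    /* 0 -> black */
    {255,   0,   0},    /* 1 -> red   */
    {  0, 255,   0},    /* 2 -> green */
    {255, 255, 255}     /* 3 -> white */
};

int main(void)
{
    unsigned char frame_buffer[2][3] = {    /* a tiny 2 x 3 "screen" */
        {0, 1, 2},
        {3, 1, 0}
    };

    /* The display controller would read each pixel value, look it up in
       the colour map and send the RGB intensities on to the DAC.        */
    for (int y = 0; y < 2; y++)
        for (int x = 0; x < 3; x++) {
            RGB c = colour_map[frame_buffer[y][x]];
            printf("pixel(%d,%d) -> R=%3d G=%3d B=%3d\n", x, y, c.r, c.g, c.b);
        }
    return 0;
}

Changing a single entry of the colour map changes every pixel that stores that index, which is why colour-map manipulation is such a cheap way of altering a displayed image.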
Another type of frame buffer is the multiple-plane frame buffer, where the frame
buffer can be treated as consisting of several frames or planes, each containing
information (intensity and/or colour) values of a separate image. An 8-bit per pixel
frame buffer can be made to represent a single image with 8 bits of intensity
precision, or it can represent two images each with 4-bit intensity precision, or eight
black and white images with 1-bit intensity precision each. A variety of image mixing
can be done. For example, in animation systems, several moving objects can be
displayed as separate planes.
Note:
1) In a frame buffer, information storage starts from top left corner and goes till the
bottom right corner.
2) Using this type of buffer we can draw curves too.
3) In order to draw live images of objects of day-to-day life, enormous picture
information is required, and this is the limitation.
4) How do we vary the intensity of a point (bright point) in a raster scan display device?
(Bit planes holding the picture information for position (i, j) feed the Display Controller (DC) and DAC, which drive the screen.)
Here, the picture information is stored in the form of bit planes (each bit plane is a
full m x n array holding one bit of the picture information for every pixel). For the
brightest intensity, all 3 bit planes (taking 3 planes, say, for simplicity) should have 1
as the respective information in the frame buffer; if the information is zero then the
intensity will be the least. Some of the intensity combinations are discussed below:
Bit planes   A B C
             0 0 0   – minimum intensity of a point
             0 0 1
             0 1 0   – the pixel value is 1 only in the 2nd bit plane
             0 1 1
             1 0 0
             1 0 1
             1 1 0
             1 1 1   – maximum intensity of a point
Maximum intensity of a pixel implies that the value for that point on each bit plane is
1, which generates maximum intensity (to have a slightly lower intensity, any one of
the bit planes is given the information 0 for that point, and so on). For colour, three
m x n bit planes – one each for R, G and B – hold the picture information for every
position (i, j):
      R G B
So,   0 0 0   – no display
      0 0 1   – blue display
      0 1 0   – green display
      0 1 1   – green-blue mix display, and so on.
Similarly, for more colours we should have more bit planes and hence more numbers
of digits signify more colour combinations.
If we have more bit planes for each colour, say 3 bit planes for R, 3 for G and 3 for B
(each bit plane of size m x n), then each pixel is described by a 9-digit binary number
and, through the respective 1s and 0s, we can control the colour intensity too. Here,
the number of possible combinations is 2⁹, i.e., 2³ levels each for R, G and B. Thus we
can generate light red, light green, or a combination of light red, dark green and
medium blue, and so on.
Example 1: What is the number of memory bits required for a 3 bit-plane frame
buffer for a 512 x 512 raster?
Solution: 3 x 512 x 512 = 786,432 bits.
Example 2: What is the refresh rate in a 512 x 512 raster if pixels are accessed at the
rate of 200 nanoseconds?
Solution: Individual pixel access takes 200 x 10⁻⁹ seconds, so accessing all 512 x 512
pixels requires about 0.0524 seconds per frame, i.e., a refresh rate of roughly 19
frames per second.
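The two examples reduce to simple arithmetic, and the short C sketch below (using only the numbers given in the examples) works them out – the memory requirement of the 3 bit-plane buffer and the time, and hence rate, at which the 512 x 512 raster can be refreshed:

#include <stdio.h>

int main(void)
{
    int planes = 3, width = 512, height = 512;
    double access_time = 200e-9;                  /* 200 ns per pixel access */

    long bits = (long)planes * width * height;    /* Example 1               */
    double frame_time = (double)width * height * access_time;  /* Example 2  */

    printf("Memory required : %ld bits\n", bits);           /* 786432 bits   */
    printf("Time per frame  : %.4f seconds\n", frame_time); /* about 0.0524  */
    printf("Refresh rate    : %.1f frames per second\n", 1.0 / frame_time);
    return 0;
}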
Plasma Panel
It is an inherent memory device. Images can be written onto the surface point by
point and they remain stable, without flicker, for a long time after being intensified.
An inert mixture of noble gases (neon and xenon) is filled and sealed between
two glass sheets with fine-mesh gold-wire electrodes attached to their inner faces.
The particles of the gas behave in a bi-stable manner, that is stable at two levels
(on/off). When voltage is applied across the electrodes the gas molecules get ionised
and are activated giving rise to a glow.
Advantage
Disadvantage
LCD
Liquid Crystal Display (LCD) is a type of display used in digital watches and many
portable computers. These displays work with polarised ambient (outside-source)
light and consist of a liquid crystal layer (a material which polarises light when a
voltage is applied to it), with a conductive coating for the voltage application, set
between two transparent glass or plastic plates, and a polarising film on one side
which shows the excited portions of the liquid crystal as dark dots or lines. (The
seven-segment display of most digital watches is an example of an LCD using
lines.) This technology is now applied to Data
watches is an example of LCD by lines). This technology is now applied to Data
Projectors to project computer generated or stored images directly on to big screens
in auditoriums.
LCD displays utilise two sheets of polarising material with a liquid crystal solution
between them. An electric current passed through the liquid causes the crystals to
align so that light cannot pass through them. Each crystal, therefore, is like a shutter,
either allowing light to pass through or blocking the light.
Monochrome LCD images usually appear as blue or dark gray images on top of a
grayish-white background. Colour LCD displays use two basic techniques for
producing colour: Passive matrix is the less expensive of the two technologies. The
other technology, called Thin Film Transistor (TFT) or active-matrix, produces
colour images that are as sharp as traditional CRT displays, but the technology is
expensive. Recent passive-matrix displays using new CSTN and DSTN technologies
produce sharp colours rivaling active-matrix displays. Most LCD screens used in
notebook computers are backlit, or transmissive, to make them easier to read.
1.5 SUMMARY
In this unit, we have discussed the conceptual meaning of computer graphics, with its
application in various fields right from presentation graphics to animation and games.
We have also discussed the variety of software and the respective file formats used
in various applications of computer graphics. We then discussed the working of
various input and output devices. Finally, we covered display devices, including
refreshing display devices (random scan and raster scan), plasma panels and LCD
displays.
1.6 SOLUTIONS/ANSWERS
• Presentation Graphics
• Painting and Drawing
• Photo Editing
• Scientific Visualisation
• Image Processing
• Digital Art
• Education, Training, Entertainment and CAD
• Simulation
• Animation and games.
3) TIFF or PNG: TIFF has been around longer than PNG, which was originally
designed to replace GIF on the Web. PowerPoint works well with both of these
files when creating transparent backgrounds but generally PNG creates smaller
file sizes with no loss of quality.
4)
Drawing: A drawing application uses tools that create "objects", such as squares,
circles, lines or text, which the program treats as discrete units. If you draw a square
in PowerPoint, for example, you can click anywhere on the square and move it around
or resize it. It is an object, just like the letter "e" typed in a word processor; i.e., a
drawing program allows a user to position standard shapes (also called symbols,
templates or objects) which can be edited by translation, rotation and scaling
operations.

Painting: Painting functions, on the other hand, don't create objects. If you look at a
computer screen, you'll see that it is made up of millions of tiny dots called pixels.
You'll see the same thing in a simpler form if you look at the colour comics in the
Sunday newspaper – lots of dots of different coloured ink that form a picture. Unlike
a drawing function, a paint function changes the colour of individual pixels based on
the tools you choose. In a photograph of a person's face, for example, the colours
change gradually because of light, shadow and complexion; you need a paint function
to create this kind of effect, and there is no object that you can select or move the way
you can with the drawn square. That is, a painting program allows the user to paint
arbitrary swaths using brushes of various sizes, shapes, colours and patterns. Most
painting programs also allow placement of predefined shapes such as rectangles and
polygons on the canvas, and any part of the canvas can be edited at the pixel level.
The reason why the differences are important is that, as noted earlier, many different
kinds of programs offer different kinds of graphics features at different levels of
sophistication, but they tend to specialise in one or the other.
5) To create posters, brochures, business cards, stationery, coffee mug designs,
cereal boxes, candy wrappers, orange juice gallon jugs, cups, or anything else
you see in print. Most designers will use vectorised programs to make these
things come to life. Vectors are wonderful because they print extremely well, and
you can scale them up to make them large, or scale them down to make them
small, and there is no distortion. Adobe Illustrator is the King of Vector
Programs. In Adobe Illustrator, you can create a 12 foot, by 12 foot document.
7) No. Other examples of software are Apple's Keynote, OpenOffice.org Impress
(StarOffice, by Sun Microsystems), Microsoft PowerPoint and (for multimedia
presentations, incorporating moving pictures and sounds) Macromedia Director.
Custom graphics can also be created in other programs such as Adobe Photoshop
or Adobe Illustrator.
1) The photo-editing stream involves programs which are not just paint programs –
they include many sophisticated functions for altering images and for controlling
aspects of the image, like light and colour balance. Some of the professionally
used software packages for photo editing are Photoshop (Adobe), Fireworks
(Macromedia), Corel Photo-Paint (Corel), etc.
(e) Web-based Services: Commonly used software example is: Protected Area
Archive.
4) No. While true-scale, structurally valid drawings are the reason for CAD's
existence, its use is as diverse as our customers' imaginations.
(a) Page layout, web graphics (when scaling and relationships are important to
an image, making the image in CAD and exporting it as a bitmap for touchup
and conversion can be very productive).
(b) Visually accessed databases (imagine a map with details where you can zoom
into an area and edit textual information “in place” and you can then see what
other items of interest are “in the neighbourhood” – our program’s ability to
work very rapidly with large drawings is a real plus here).
(c) Sign layout, laser-cutting patterns for garment factories, schematic design
(where CAD's symbol library capabilities come in handy), and printed-circuit
board layout (this was the application for which our first CAD program was
created, in 1977).
5) The DWG file format is a CAD vector format developed by Autodesk and
created by their AutoCAD application. DXF is also a CAD vector format. It is
designed to allow the exchange of vector information between different CAD
applications. Most CAD applications can save to and read from DXF format.
When CAD drawings are sent to printers the format commonly used is HPGL.
HPGL files typically have the extension .plt.
The HPGL file format is a vector format developed by Hewlett Packard for
driving plotters. The file extensions used include .plt, .hpg, .hp2, .pl2 and
sometimes .prn. However, the use of the .prn extension is not an absolute
indicator that the file contains an HPGL code. They are often referred to as ‘plot
files’. Trix Systems offers several options for handling HPGL and the later
HPGL2 file formats.
Check Your Progress 3
1) Computer simulation is the discipline of designing a model of an actual or
theoretical physical system, executing the model on a digital computer, and
analysing the execution output. Simulation embodies the principle of “learning
by doing” – to learn about the system we must first build a model of some sort
and then operate the model. Simulation is often essential in the following cases:
• the model is very complex with many variables and interacting components;
• the relationships among the underlying variables are nonlinear;
• the model contains random variates;
• the model output is to be visualised, as in a 3D computer animation.
The Advantage of Simulation is that – even for easily solvable linear systems – a
uniform model execution technique can be used to solve a large variety of systems
without resorting to a “bag of tricks” where one must choose special-purpose and
sometimes arcane solution methods to avoid simulation. Another important aspect of
the simulation technique is that one builds a simulation model to replicate the actual
system. When one uses the closed-form approach, the model is sometimes twisted to
suit the closed-form nature of the solution method rather than to accurately represent
the physical system. A harmonious compromise is to tackle system modeling with a
hybrid approach using both closed-form methods and simulation. For example, we
might begin to model a system with closed-form analysis and then proceed later with
a simulation. This evolutionary procedure is often very effective.
1) A graphic tablet is a drawing tablet used for sketching new images or tracing old
ones. Or we can say that a graphics tablet is a computer input device that allows
one to hand-draw images and graphics, similar to the way one draws images with
a pencil on paper. Or, a graphics tablet is a computer peripheral device that allows
one to hand-draw images directly into a computer, generally through an imaging
program.
Difference between pen and puck: Objects are drawn with a pen (or stylus) or
puck, but are traced with the puck only.
2) Touch panels allow displayed objects or screen positions to be selected with the
touch of a finger; they are also known as Touch Sensitive Screens (TSS). The four
types of touch panels commonly in use are Electrical, Electro-Mechanical, Optical
and Acoustic touch panels.
(a) Plotters print their output by moving a pen across the surface of a piece of
paper. This means that plotters are restricted to line art, rather than raster
graphics as with other printers. They can draw complex line art, including
text, but do so very slowly because of the mechanical movement of the pen.
(b) Another difference between a plotter and a printer is that the printer is aimed
primarily at printing text; a printer generates a full page of output at a time,
which is not the case with the line art on a plotter.
1) Refreshing display devices are those display devices in which the picture is
continuously refreshed; the general refresh rate is 40 to 60 times per second
(for example, the CRT). There are two kinds of refreshing display devices, namely
the random scan and the raster scan; the limitation is that these devices are not
competent to provide smooth, good quality images.
2) In Random Scan system the Display buffer stores the picture information, further
the device is capable of producing pictures made up of lines but not of curves.
Thus, it is also known as “Vector display device or Line display device or
Calligraphic display device".
In Raster Scan system the Frame buffer stores the picture information which is
the bit plane (with m rows and n columns) because of this type of storage the
system is capable of producing realistic images, but the limitation is that the line
segments may not appear to be smooth.
UNIT 2 GRAPHIC PRIMITIVES
Structure Page Nos.
2.1 Introduction 46
2.2 Objectives 46
2.3 Points and Lines 46
2.4 Line Generation Algorithms 48
2.4.1 DDA Algorithm 49
2.4.2 Bresenham's Line Generation Algorithm 54
2.5 Circle-Generation Algorithms 59
2.5.1 Properties of Circle 59
2.5.2 Mid Point Circle Generation Algorithm 61
2.6 Polygon Filling Algorithm 63
2.7 Summary 68
2.8 Solutions/Answers 68
2.1 INTRODUCTION
In unit 1 of this block, we have discussed refreshing display devices and its types that
are Random Scan display devices and Raster Scan display devices. We have also
discussed the limitations of these display devices. In Raster Scan display device,
which we have discussed in the previous unit, the picture information is stored in the
frame buffer, the concept of frame buffer conveys that the information for the image
to be projected on the screen is stored in the form of 0s and 1s, making respective
pixels activate and deactivate on the screen, and it is this concept itself which
contributes to the discreteness in the picture under display. So, in this unit we are
going to extend our discussion to the algorithms which are responsible for the actual
display of geometries like lines, circles, etc., on display devices, such that there is a
decrease in discreteness and, hence, a more realistic finished image is the outcome.
2.2 OBJECTIVES
After going through this unit, you should be able to:
(i.e., the scan line numbers) start at bottom and increase upwards.
Figure 1: Scan lines – an array of square pixels on the display surface, with x running from 0 to 9 and the scan-line numbers y from 0 to 8; pixels A, B and C are marked
Figure 1 shows the array of square pixels on the display surface. The coordinates of
pixel A are (0, 0), of B (2, 2) and of C (6, 7). A coordinate position (6.26, 7.25) is
represented by C, whereas (2.3, 2.5) is represented by B, because in order to plot a
pixel on the screen we need to round off the coordinates to the nearest integer.
Further, it is this rounding off which leads to distortion of any graphic image.
2.4 LINE GENERATION ALGORITHMS
• In Unit 1, we discussed the case of the frame buffer, where information about the
image to be projected on the screen is stored in an m x n matrix in the form of 0s
and 1s; the 1s stored in the m x n matrix positions are brightened on the screen
and the 0s are not brightened, and each such section, which may or may not be
brightened, is known as a pixel (picture element). This information of 0s and 1s
gives the required pattern on the output screen, i.e., for the display of information.
In such a buffer, the screen is also in the form of an m x n matrix, where each
section or niche is a pixel (i.e., we have m x n pixels to constitute the output).
(Picture line information – a diagonal pattern of 1s in the frame buffer – is passed through the Display Controller (DC) and produces the corresponding line of bright pixels on the output screen.)
Now, it is to be noted that the creation of a line is not restricted to the above pattern,
because sometimes the line may have a slope and intercept such that its information
has to be stored in more than one section of the frame buffer; so, in order to draw or
approximate such a line, two or more pixels have to be made ON. Thus, the line
information in the frame buffer is displayed as a stair; this effect of having two or
more pixels ON to approximate a line between two points, say A and B, is known as
the Staircase effect. The concept is shown below in Figure 4.
Figure 4: Staircase effect
So, from Figure 4, it should be clear that when a line to be drawn is simply described
by its end points, it can be plotted by making close approximations of the pixels
which best suit the line, and this approximation is responsible for the staircase effect,
which misrepresents the geometry of the line stored in the frame buffer as a stair.
This defect, known as the Staircase effect, is prominent in the DDA line generation
algorithm; to reduce the defect, the Bresenham line generation algorithm was
introduced. We are going to discuss the DDA (Digital Differential Analyser)
algorithm and the Bresenham line generation algorithm next.
In Figure 5, there are two lines. Line 1 (slope<1) and line 2 (slope>1). Now let us
discuss the general mechanism of construction of these two lines with the DDA
algorithm. As the slope of the line is a crucial factor in its construction, let us consider
the algorithm in two cases depending on the slope of the line whether it is > 1 or < 1.
Case 1: slope (m) of line is < 1 (i.e., line 1): In this case to plot the line we have to
move the direction of pixel in x by 1 unit every time and then hunt for the pixel value
of the y direction which best suits the line and lighten that pixel in order to plot the
line.
So, in Case 1 i.e., 0 < m < 1 where x is to be increased then by 1 unit every time and
proper y is approximated.
Case 2: slope (m) of line is > 1 (i.e., line 2) if m > 1 i.e., case of line 2, then the most
appropriate strategy would be to move towards the y direction by 1 unit every time
and determine the pixel in x direction which best suits the line and get that pixel
lightened to plot the line.
So, in Case 2, i.e., (infinity) > m > 1 where y is to be increased by 1 unit every time
and proper x is approximated.
Assumption: The line generation through DDA is discussed only for the Ist Quadrant,
if the line lies in any other quadrant then apply respective transformation (generally
reflection transformation), such that it lies in Ist Quadrant and then proceed with the
algorithm, also make intercept Zero by translational transformation such that (xi, yi)
resolves to (xi, mxi + c) or (xi, mxi) and similar simplification occurs for other cases of
line generation. The concept and application of transformations is discussed in
Block 2 of this course.
Note:
1) If, in case 1, we plot the line the other way round, i.e., moving in the y direction
by 1 unit every time and then hunting for the x-direction pixel which best suits
the line, then every time we look for the x pixel we will get more than one choice
of pixel, which enhances the staircase-effect defect in line generation.
Additionally, from Figure 5, you may notice that for line 1 the vertical span is
quite small in comparison to the horizontal span. Thus, fewer pixels would be
available if we increased Y in unit steps and approximated X, but more pixels will
be available if we increase X in unit steps and approximate Y (this choice also
reduces the staircase-effect distortion in line generation, since more motion is to
be made along the x-axis).
When (infinity) > m > 1 increment Y in unit steps and approximate X, simplify
(xi, yi) accordingly.
Consider a line to be generated from (X0, Y0) to (X1, Y1) , Assume that X1>X0 is true
else swap (X0, Y0) and (X1, Y1). Now, if |X1 − X0| > |Y1 − Y0|, that means the slope (m)
of the line is < 1; hence, case 1 for line generation is applicable. Thus, we need to
increment X in unit steps and approximate Y. So, from X0 to X1 , xi is incremented in
unit steps in the horizontal direction, now for these unit steps the satisfying value of Y
can be estimated by the general equation of line y = mx + c.
Similarly, for case 2, let us sum up our discussion on DDA algorithm for both cases.
(Figure: a line segment running from (x0, y0) to (x1, y1))
For instance, in a given line the end points are (X0, Y0) and (X1, Y1). Using these end
points we find the slope (m = Y1 – Y0/ X1 – X0) and check that the value of m lies
between 0 and 1 or is > 1. If 0 < m < 1 then case 1 applies, else, case 2 applies.
For case 1, increase x by one Unit every time, for case 2 increase y by one Unit every
time and approximate respective values of y and x.
xi+1 = xi + 1 ----------------------(1)
yi+1 = yi + m ----------------------(2)
Equations (1) and (2) imply that to approximate the line for case 1 we have to move
along the x direction by 1 unit to get the next value of x, and we have to add the
slope m to the initial y value to get the next value of y.
Now, using the starting point (x0, y0) in the above equations (1) and (2) we go for xi
and yi (i = 1, 2, ……n) and put colour to the pixel to be lightened.
x ← X0
y ← Y0
m ← (Y1 – Y0) / (X1 – X0)
while (x <= X1) do
{   put-pixel (x, round(y), colour)
    x ← x + 1        (new x-value = old x-value + 1)
    y ← y + m        (new y-value = old y-value + m)
}
i.e., x1 = x0 + 1, y1 = y0 + m, and at (x1, y1) we put pixel (x1, round(y1), colour);
similarly, x2 = x1 + 1, y2 = y1 + m, and at (x2, y2) we put pixel (x2, round(y2), colour),
and so on.
Case 2: slope (m) of line is > 1 (i.e., line 2): Same as case 1 but, this time, the y
component is increased by one unit each time and appropriate x component is to be
selected. To do the task of appropriate selection of x component we use the equation
of Line: y = mx+c.
⇒ xi+1 = xi + 1/m
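Both cases can be folded into one loop by always stepping along the longer span; the short C sketch below is such a DDA (first quadrant only, as assumed above), where put_pixel is a hypothetical stand-in that simply prints the chosen pixel:

#include <stdio.h>
#include <math.h>

/* Hypothetical output routine: a real system would set a frame-buffer bit. */
static void put_pixel(int x, int y) { printf("(%d, %d)\n", x, y); }

/* DDA line generation from (x0, y0) to (x1, y1) for a non-negative slope. */
void dda_line(int x0, int y0, int x1, int y1)
{
    double dx = x1 - x0, dy = y1 - y0;
    /* Case 1 (0 < m <= 1): step x by 1;  Case 2 (m > 1): step y by 1. */
    int steps = (fabs(dx) >= fabs(dy)) ? (int)fabs(dx) : (int)fabs(dy);
    double x_inc = dx / steps;            /* 1 in case 1, 1/m in case 2 */
    double y_inc = dy / steps;            /* m in case 1, 1 in case 2   */
    double x = x0, y = y0;

    for (int i = 0; i <= steps; i++) {
        put_pixel((int)lround(x), (int)lround(y));
        x += x_inc;
        y += y_inc;
    }
}

int main(void)
{
    dda_line(2, 4, 9, 9);    /* the line of Example 1 below */
    return 0;
}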
Example 1: Draw line segment from point (2, 4) to (9, 9) using DDA algorithm.
Solution: m = (y1 − y0)/(x1 − x0) = (9 − 4)/(9 − 2) = 5/7, i.e., 0 < m < 1
C = y1 − m x1 = 9 − (5/7) × 9 = (63 − 45)/7 = 18/7
1) x1 = x0 + 1 = 3
   y1 = y0 + m = 4 + 5/7 = (28 + 5)/7 = 33/7 = 4 5/7
   put pixel (x1, round(y1), colour), i.e., put on (3, 5)
2) x2 = x1 + 1 = 3 + 1 = 4
   y2 = y1 + m = 33/7 + 5/7 = 38/7 = 5 3/7
   put on (4, 5)
Similarly go on till (9, 9) is reached.
4) How does the resolution of a system affect graphic display?
………………………………………………………………………………………
………………………………………………………………………………………
………………………………………………………………………………………
5) Modify the DDA algorithm for negative sloped lines ; discuss both the cases i.e.,
slope > 1 and 0 < slope < 1.
………………………………………………………………………………………
………………………………………………………………………………………
………………………………………………………………………………………
6) Using DDA algorithm draw line segments from point (1,1) to (9,7).
………………………………………………………………………………………
………………………………………………………………………………………
………………………………………………………………………………………
Figure 8: Bresenham line generation – lines l1 and l2 sampled over pixel columns 10 to 14 and scan lines 10 to 12
Sampling at unit x distance in Figure 8, we need to decide which of the two possible
pixel positions is closer to the line path at each sample step.
For l1: we need to decide, at the next sample position, whether to plot the pixel at
position (11, 11) or at (11, 12).
Similarly, for l2: the next pixel has to be (11, 13) or (11, 12). The choice of pixel to
be made for drawing the line is given by Bresenham, by testing the sign of an integer
parameter whose value is proportional to the difference between the separations of
the two pixel positions from the actual line path. In this section, we will discuss the
Bresenham line drawing algorithm for +ve slope (0 < m < 1). If the slope is negative,
then use a reflection transformation to transform the line segment with negative slope
into a line segment with positive slope. Now, let us discuss the generation of the line
again in two situations, as in the case of DDA line generation.
Case 1: Scan Conversion of Line with the slope (0 < m < 1)
Here the pixel positions along the line path are determined by sampling at unit x
intervals, i.e., starting from the initial point (x0, y0) of a given line, we step to each
successive column (i.e., x-position) and plot the pixel whose scan-line y value is
closest to the line path. Assume we proceed in this fashion up to the kth step. The
process is shown in Figure 9. Assuming we have determined the pixel at (xk, yk). We
need to decide which pixel is to be plotted in column xk+ 1. Our choices are either
(xk +1, yk) or (xk + 1, yk + 1).
Figure 9: Scan conversion for 0 < m < 1 – at sampling position xk + 1 the candidate pixels are Si at (xk + 1, yk + 1) and Ti at (xk + 1, yk); d1 and d2 are their vertical separations from the mathematical line y = mx + c
At sampling position Xk + 1 the vertical pixel (or scan line) separation from
mathematical line (y = mx + c) is say d1 and d2.
Now, the y coordinate on the mathematical line at pixel column position Xk + 1 is:
y = m (xk + 1) + c ---------------------(1)
by Figure 9:
d1 = y – yk ---------------------(2)
= m (xk + 1) + c – yk ---------------------(3) (using (1))
pk = ∆x (d1 − d2) ---------------------(6)
Now, a decision parameter pk for the kth step in the line algorithm can be obtained by
rearranging (5) such that it involves only integer calculations. To accomplish this
substitute m = ∆y/∆x where, ∆y and ∆x ⇒ vertical and horizontal separations of the
end point positions.
Note: sign of pk is same as sign of d1 – d2 since it is assumed that ∆x > 0
[As in Figure 9, d1 < d2, i.e., (d1 – d2) is –ve, i.e., pk is –ve, so pixel Ti is the more
appropriate choice; otherwise pixel Si is the appropriate choice.]
i.e., (1) if pk < 0 choose Ti, so the next pixel choice after (xk, yk) is (xk + 1, yk);
else (2) if pk > 0 choose Si, so the next pixel choice after (xk, yk) is (xk + 1, yk + 1).
This first parameter p0 is determined by using equation (7), and (8) at the starting
pixel position (x0, y0) with m evaluated as ∆y /∆x (i.e., intercept c = 0)
p0 = 0 – 0 + b = 2∆y + ∆x (2 * 0 – 1) = 2∆y – ∆x
p0 = 2∆y – ∆x -----------------------(10 A)
Summary:
(Bresenham line drawing algorithm for +ve slope ( 0 < m < 1).
• Calculate the constants 2∆y, 2∆y – 2∆x, ∆y and ∆x once for each line to be scan
converted, so that the arithmetic involves only integer addition/subtraction of these
constants.
while (x0 <= x1) do
{   puton (x0, y0)
    if (pi > 0) then
    {   x0 ← x0 + 1;
        y0 ← y0 + 1;
        pi+1 ← pi + 2 (∆y – ∆x);
    }
    if (pi < 0) then
    {   x0 ← x0 + 1;
        y0 ← y0;
        pi+1 ← pi + 2∆y;
    }
}
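The pseudocode translates almost directly into C. The sketch below assumes 0 < m < 1 and x0 < x1, as in the discussion above, and uses a hypothetical put_pixel that merely prints the chosen pixel; running it with the end points of Example 2 that follows reproduces the hand trace:

#include <stdio.h>

static void put_pixel(int x, int y) { printf("(%d, %d)\n", x, y); }

/* Bresenham line generation for slope 0 < m < 1: integer arithmetic only. */
void bresenham_line(int x0, int y0, int x1, int y1)
{
    int dx = x1 - x0, dy = y1 - y0;
    int p  = 2 * dy - dx;                  /* initial decision parameter p0 */

    put_pixel(x0, y0);
    while (x0 < x1) {
        x0++;
        if (p < 0) {                       /* choose Ti: stay on scan line yk  */
            p += 2 * dy;
        } else {                           /* choose Si: move up one scan line */
            y0++;
            p += 2 * (dy - dx);
        }
        put_pixel(x0, y0);
    }
}

int main(void)
{
    bresenham_line(20, 10, 25, 14);        /* end points of Example 2 below */
    return 0;
}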
Note:
Example 2: Draw line segment joining (20, 10) and (25, 14) using Bresenham line
generation algorithm.
Solution: m = (y1 − y0)/(x1 − x0) = (14 − 10)/(25 − 20) = 4/5 < 1
As m = ∆y/∆x = 4/5 ⇒ ∆y = 4 and ∆x = 5
→ plot point (20, 10)
pi = 2∆y – ∆x
i = 1: p1 = 2 × 4 – 5 = 3
as p1 > 0 so x0 ← 21; y0 ← 11
now plot (21,11)
i = 2 as p1 > 0
∴p2 = p1 + 2(∆y – ∆x)
= 3 + 2 (4 – 5) = 3 – 2 = 1
p2 > 0 so x0 ← 22; y0 ← 12
plot (22,12)
i = 3 as p2 > 0
∴p3 = p2 + 2 (∆y – ∆x) = 1 + 2 (4 – 5) = – 1
p3 < 0 ∴x0 ← 23
y0 ← 12
plot (23, 12)
i = 4 as p3 < 0
∴ p4 = p3 + 2∆y = –1 + 2 × 4 = 7
∴ x0 ← 24; y0 ← 13
plot (24, 13)
i = 5: as p4 > 0, ∴ p5 = p4 + 2(∆y – ∆x) = 7 – 2 = 5, so x0 ← 25; y0 ← 14, and we plot (25, 14), which is the end point of the line.
Example 3: Illustrate the Bresenham line generation algorithm by digitising the line
with end points (20, 10) and (30, 18).
Solution: m = (y2 − y1)/(x2 − x1) = ∆y/∆x = (18 − 10)/(30 − 20) = 8/10 = 0.8 -------------------(1)
⇒ ∆y = 8 and ∆x = 10 -------------------(2)
Plot the initial point (x0, y0) = (20, 10) in the frame buffer; now determine successive
pixel positions along the line path from the decision parameter values.
2) Illustrate the Bresenham line generation algorithm by digitising the line with end
points (15, 5) and (25,13).
………………………………………………………………………………………
………………………………………………………………………………………
Instead of using Bresenham circle generation (i.e., the incremental approach based on
decision parameters), we could use basic properties of the circle for its generation.
Figure 8: Property of a circle – every point on the circle lies at distance r from the centre (xc, yc)
A circle is defined as the set of points, or the locus of all the points, which lie at a
given distance r from the centre (xc, yc). The distance relationship can be expressed
using the Pythagoras theorem in Cartesian coordinates as
(x − xc)² + (y − yc)² = r² ----------------------(1)
Figure 9: Circle generation (Cartesian coordinate system) – a point (x, y) at distance r from the centre (xc, yc)
Now, using (1) we can calculate points on the circumference by stepping along x-axis
from xc–r to xc + r and calculating respective y values for each position.
y = yc ± √(r² − (x − xc)²) ----------------------(2)
Generation on the basis of polar coordinates (r and θ)
(Figure: a point (x, y) on the circle at angle θ and distance r from the centre (xc, yc))
Expressing the circle equation in parametric polar form yields the following equations
x = xc + r cos θ
y = yc + r sin θ --------------------------(3)
Using a fixed angular step size we can plot a circle with equally spaced points along
the circumference.
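A direct, if computationally expensive (see the note below), way to use equation (3) is sketched here; the angular step of 1/r and the printing put_pixel routine are arbitrary illustrative choices:

#include <stdio.h>
#include <math.h>

static void put_pixel(int x, int y) { printf("(%d, %d)\n", x, y); }

/* Plot a circle of radius r centred at (xc, yc) from its parametric polar form. */
void circle_polar(int xc, int yc, double r)
{
    const double PI = 3.14159265358979;
    double step = 1.0 / r;                  /* roughly one point per pixel */

    for (double theta = 0.0; theta < 2.0 * PI; theta += step)
        put_pixel((int)lround(xc + r * cos(theta)),
                  (int)lround(yc + r * sin(theta)));
}

int main(void)
{
    circle_polar(50, 50, 10.0);
    return 0;
}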
Note:
• Generation of the circle with the help of the two methods mentioned above is not a
good choice, since both require a lot of computation: equation (2) requires
square-root and multiplication calculations, whereas equation (3) requires
trigonometric and multiplication calculations. A more efficient approach is based
on incremental calculation of a decision parameter. One such approach is
Bresenham circle generation (the mid point circle algorithm).
• This Bresenham circle generation (mid point circle algorithm) is similar to line
generation. Sample pixels at Unit x intervals and determine the closest pixel to the
specified circle path at each step.
• For a given radius r and centre position (xc, yc), we first set up our algorithm to
calculate pixel positions around the path of a circle centred at the coordinate origin
(0, 0), i.e., we translate (xc, yc) → (0, 0), and after the generation we do the inverse
translation (0, 0) → (xc, yc); hence each calculated position (x, y) of the circumference
is moved to its proper screen position by adding xc to x and yc to y.
• In Bresenham circle generation (mid point circle algorithm) we calculate points
in an octant/quadrant, and then by using symmetry we find other respective points
on the circumference.
Figure 12: Circle generation approaches – eight-way symmetry: a point (x, y) computed in octant I also gives (y, x), (y, −x), (x, −y), (−x, −y), (−y, −x), (−y, x) and (−x, y) in octants II to VIII (and, similarly, four-way symmetry across the quadrants)
The position of any point (x, y) can be analysed by using the sign of the circle function
fc(x, y) = x² + y² − r² ----------------------(1)
i.e., the decision parameter behaves as follows:
fc(x, y) < 0 if (x, y) is inside the circle boundary
fc(x, y) = 0 if (x, y) is on the circle boundary ---------(2)
fc(x, y) > 0 if (x, y) is outside the circle boundary
The sign tests of fc(x, y) are performed for the mid point between candidate pixels
near the circle path at each sampling step.
Figure 13: Mid point between the candidate pixels (xk + 1, yk) and (xk + 1, yk − 1) at sampling position xk + 1 along the circular path
Figure 13 shows the mid point of two candidate pixel (xk + 1, yk) and (xk +1, yk – 1) at
sampling position xk + 1.
Assuming we have just plotted the (xk, yk) pixel, now, we need to determine the next
pixel to be plotted at (xk + 1, yk) or (xk +1, yk – 1). In order to decide the pixel on the
circles path, we would need to evaluate our decision parameter pk at mid point
between these two pixels i.e.,
pk = fc(xk + 1, yk − 1/2) = (xk + 1)² + (yk − 1/2)² − r² ----------------------(3) (using (1))
If pk < 0 then the midpoint lies inside the circle and the pixel on scan line yk is closer
to the circle boundary; otherwise the mid point is outside or on the circle boundary
and we select the pixel on scan line yk − 1. Successive decision parameters are
obtained by using incremental calculations.
⇒ pk+1 = pk + (2xk+1 + 1) ----------------(6)
Also,
2xk+1 = 2(xk + 1) = 2xk + 2 ------------------(8)
Note:
• At starting position (0, r) the terms in (8) and (9) have value 0 and 2r respectively.
Each successive value is obtained by adding 2 to the previous value of 2x and
subtracting 2 from the previous value of 2y.
• The initial decision parameter is obtained by evaluating the circle function at start
position (x0, y0) = (0, r)
i.e., p0 = fc(1, r − 1/2) = 1 + (r − 1/2)² − r² = 5/4 − r (using (1))
(a) Input the radius r and the circle centre (xc, yc), and obtain the first point on the
circumference of the circle centred on the origin as (x0, y0) = (0, r).
Example 4: Given a circle of radius r = 10, determine positions along the circle octant
in the first quadrant from x = 0 to x = y.
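A C sketch of the midpoint circle step summarised above follows; it generates the first octant (x = 0 to x = y) of a circle centred at the origin, which is what Example 4 asks for with r = 10, and the eight-way symmetry of Figure 12 would supply the remaining points (put_pixel is again a hypothetical printing stand-in, and p0 = 5/4 − r is rounded to the integer 1 − r):

#include <stdio.h>

static void put_pixel(int x, int y) { printf("(%d, %d)\n", x, y); }

/* Midpoint circle generation: first octant of a circle of radius r
   centred at the origin; the other octants follow by symmetry.     */
void midpoint_circle_octant(int r)
{
    int x = 0, y = r;
    int p = 1 - r;                   /* integer form of p0 = 5/4 - r */

    put_pixel(x, y);                 /* start position (0, r)        */
    while (x < y) {
        x++;
        if (p < 0) {                 /* midpoint inside: keep scan line y    */
            p += 2 * x + 1;
        } else {                     /* midpoint outside: step down to y - 1 */
            y--;
            p += 2 * x + 1 - 2 * y;
        }
        put_pixel(x, y);
    }
}

int main(void)
{
    midpoint_circle_octant(10);      /* prints (0,10) (1,10) ... (7,7) */
    return 0;
}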
equivalent gray scale. The most general classification appears to be through following
algorithms:
We restrict our discussion to scan line polygon fill algorithm, although we will discuss
the others in brief at the end of this section.
In this, the information for a solid body is stored in the frame buffer and using that
information all pixels i.e., of both boundary and interior region are identified and, are
hence plotted.
Here we are going to do scan conversion of solid areas where the region is bounded
by polygonal lines. Pixels are identified for the interior of the polygonal region and
are then filled (plotted) with the predefined colour.
Let us discuss the algorithm briefly, then we will go into details on the same.
This algorithm checks and alters the attributes (i.e., characteristics and parameters) of
the pixels along the current raster scan line. As soon as it crosses over from the
outside to the inside of a boundary of the specified polygon it starts resetting the
colour (or gray) attribute. In effect filling the region along that scan line. It changes
back to the original attribute when it crosses the boundary again. Figure 14 shows
variations of this basic idea.
Figure 14: Concept of scan line polygon filling – a polygon with vertices A, B, C, D crossed by successive scan lines
So as to understand Scan Line Polygon Fill Algorithm in detail consider Figure 15:
(Figure 15: scan lines yk+1, yk, yk−1, yk−2, … running from y max down to y min, with consecutive pixels xk, xk+1, xk+2, xk+3, … along each scan line)
Here, in Figure 15,
• yk, yk – 1,….. → are scan lines from top to bottom (value of y at top or scan line at
top has maximum value and the scan line at bottom has minimum y value).
• xk, xk + 1…..→are consecutive pixels on a scan line i.e., (xk,yk),
(xk+1,yk ),…………⇒ a sea of pixels for scan lines yk, yk – 1, yk – 2, ….., yi and for
the chosen scan line we find pixels satisfying the condition as mentioned above.
• For a given scan line, let us assume that it intersects edges of the given polygonal
region at the points P1, P2, …, Pk. Sort these points P1, P2, …, Pk in terms of the
X-coordinates in increasing order.
(a) Scan line passes through the edges in between shown by point a in Figure 16.
(b) Scan line passes through the vertex of the polygon whose neighbouring vertices
lie completely on one side of the scan line (point b, point c in Figure 16).
(c) Scan line passes through some bottom vertex of polygon whose neighbouring
vertices lie on both sides of the scan line.
(Figure 16: a polygon cut by three scan lines – a maximum-y scan line through the vertices at point b with intersections 1′ and 2′, a middle scan line through point a with intersections 1, 2, 3, 4, and a minimum-y scan line through the bottom vertex at point c with intersections 1″, 2″, 3″)
In case (a), i.e., when a scan line intersects an edge at a point which is not a vertex of
the edge (as for points 1, 2, 3 and 4), that point of intersection is considered as a
single point and is taken as an ON/OFF point depending on the requirement.
Therefore, in this case the intersection points are taken as 1, 2, 3, 4: the colour of the
pixels between intersection points 1 and 2 is replaced with the filling colour, the
colour of the pixels between points 2 and 3 is left unchanged, and the colour of the
pixels between 3 and 4 is again changed to the filling colour.
In cases (b) and (c): first consider a scan line that intersects an edge E1 at a vertex V1,
i.e., a vertex point of the edge E1, whose y-coordinate is greater (or less) than the
y-coordinate values of the other two vertex points V2 and V3, where, for example,
edge E1 has end vertices V1, V2 and edge E2 has end vertices V1, V3. That is, the
vertices V2 and V3 either both lie completely above V1 or both lie completely below
it. In this case, the point of intersection, i.e., V1, is counted two times. For example,
the vertices 1′ and 2′ are such that their y-coordinate values are greater than the
y-coordinate values of their neighbouring vertices and therefore they are counted
twice, giving 1′1′ 2′2′; i.e., the colour of the pixels between 1′ and 1′ is replaced with
the filling colour, the colour of the pixels between 1′ and 2′ is left unchanged, and the
colour of the pixels between 2′ and 2′ is changed again.
Now assume that the scan line passes through a vertex point V1 of the polygonal
region having neighbouring vertices V0 and V2, i.e., let E1 and E2 be two edges of
the polygon with V0, V1 and V1, V2 respectively as their end vertices. Suppose the
vertex V1 is such that the y-coordinate of the neighbouring vertex V0 is greater than
the y-coordinate of V1, but the y-coordinate of the neighbouring vertex V2 is less
than the y-coordinate of V1. In this case, the intersection point V1 is taken as a single
point and the algorithm remains the same as above.
The point of intersection is to be considered two times, i.e., 1″ 2″ 2″ 3″: 1″ 2″ ⇒ make
pixels ON from 1″ to 2″; 2″ 2″ ⇒ do not make pixels ON; 2″ 3″ ⇒ make pixels ON
from 2″ to 3″.
2) Sort the points on scan line w.r.t x value. Get the respective pixels ON.
(Figure: the points (yi, xj), (yi, xj+1), (yi, xj+2), …, (yi, xj+n) on a scan line yi)
Sum up: To fill a polygonal image we have to identify all edges and vertices of
intersections while moving from scan line ymax to ymin, for each scan line. Then check
for the case and perform the algorithm accordingly.
Figure 18: Scan line polygon filling (a recursive approach) – scan lines yi+1, yi, yi−1 between y = y max and y = y min, with edge intersections xi−1, xi
For a scan line y = yi identify the edges and use the data of the point of intersection
(xi ,yi.). Say x0 is point on scan line y0 and x1 is point on scan line y1. Now between
(x0, y0) and (x1, y1) we find slope (this all is to find recursive way to scan a figure).
Since m = (y1 − y0)/(x1 − x0) = (yi+1 − yi)/(xi+1 − xi)
⇒ m (xi+1 − xi) = (yi+1 − yi) --------------(1)
(Since, moving from top to bottom, we move from ymax to ymin, we have yi+1 − yi = −1.)
⇒ xi+1 = xi − (1/m)
Using the expressions for yi+1 and xi+1 we can find the consecutive intersection points
x for the respective scan lines yi. If xi comes out to be a real number, it is rounded to
the nearest integer.
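A simplified C sketch of the scan-line fill described above is given below. It handles only the ordinary case (a): for each scan line it computes the x-intersections with the polygon edges, sorts them and fills between alternate pairs; the vertex special cases and a real frame buffer are left out, and fill_span is a hypothetical stand-in that just reports the span:

#include <stdio.h>
#include <stdlib.h>

typedef struct { double x, y; } Point;

static void fill_span(int y, int x_left, int x_right)
{   printf("scan line %d: fill x = %d .. %d\n", y, x_left, x_right); }

static int cmp_double(const void *a, const void *b)
{   double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0); }

/* Scan-line fill of a polygon given by n vertices (ordinary case only). */
void scanline_fill(const Point *v, int n, int y_min, int y_max)
{
    double xs[64];                           /* intersections on one scan line */

    for (int y = y_max; y >= y_min; y--) {   /* move from ymax down to ymin    */
        int k = 0;
        for (int i = 0; i < n; i++) {
            Point a = v[i], b = v[(i + 1) % n];
            /* Does edge (a, b) cross this scan line? The half-open test keeps */
            /* a vertex shared by two edges from being counted twice here.     */
            if ((a.y <= y && b.y > y) || (b.y <= y && a.y > y))
                xs[k++] = a.x + (y - a.y) * (b.x - a.x) / (b.y - a.y);
        }
        qsort(xs, k, sizeof(double), cmp_double); /* sort w.r.t. x value       */
        for (int i = 0; i + 1 < k; i += 2)        /* fill alternate pairs      */
            fill_span(y, (int)(xs[i] + 0.5), (int)(xs[i + 1] + 0.5));
    }
}

int main(void)
{
    Point tri[3] = { {1, 1}, {9, 1}, {5, 8} };    /* a sample triangle         */
    scanline_fill(tri, 3, 1, 8);
    return 0;
}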
Note: Although Flood fill algorithms are not part of this block let me briefly
describe Flood Fill Algorithms. The algorithm starts with the coordinates of a
specified point (known as the “seed” pixel) inside the polygon to be filled,
and proceeds outwards from it, testing the adjacent pixels around it in orderly
fashion until the spreading wave reaches the boundary in every direction, as
suggested in Figure 19.
Figure 19: Flood filling method – the fill spreads outwards from the seed pixel until it reaches the boundary in every direction
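A minimal recursive 4-connected fill in the spirit of this note might look as follows; the tiny character-array "screen", the boundary marker '#' and the fill marker '*' are all simplifying assumptions for illustration:

#include <stdio.h>

#define W 8
#define H 6

/* A tiny "screen": '#' marks the region boundary, '.' is unfilled. */
static char screen[H][W + 1] = {
    "........",
    ".######.",
    ".#....#.",
    ".#....#.",
    ".######.",
    "........"
};

/* Spread out from the seed in 4 directions until a boundary pixel
   ('#') or an already filled pixel ('*') is met.                   */
static void flood_fill(int x, int y)
{
    if (x < 0 || x >= W || y < 0 || y >= H) return;
    if (screen[y][x] == '#' || screen[y][x] == '*') return;
    screen[y][x] = '*';
    flood_fill(x + 1, y);
    flood_fill(x - 1, y);
    flood_fill(x, y + 1);
    flood_fill(x, y - 1);
}

int main(void)
{
    flood_fill(3, 3);                      /* seed pixel inside the rectangle */
    for (int y = 0; y < H; y++) puts(screen[y]);
    return 0;
}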
Filling may be solid (painted with a single colour), or it may be patterned or tiled
(filled with different patterns).
Fill options in standard packages such as MS-Word are grouped generally under the
heads: (a) Gradient, variations from light to dark of specified colour in specified
directions; (b) Texture; (c) Pattern (i.e., Hatching); and (d) Picture, of the user’s
choice. Some of these will be discussed later.
2) Given a circle radius r = 5 determine positions along the circle octants in
1st Quadrant from x = 0 to x = y?
………………………………………………………………………………………
………………………………………………………………………………………
………………………………………………………………………………………
3) Distinguish between Scan line polygon fill and Seed fill or Flood fill algorithm?
………………………………………………………………………………………
………………………………………………………………………………………
………………………………………………………………………………………
2.7 SUMMARY
In this unit, we have discussed the basic graphic primitives, that is, the point, line and
circle; we have also discussed both the theoretical and practical aspects of the
different algorithms related to their generation. At the end of the unit, we emphasised
the scan line and seed fill (flood fill) types of polygon filling algorithms. The filling
algorithms are quite important as they allow you to quickly fill colours into the
graphics created by you.
2.8 SOLUTIONS/ANSWERS
Check Your Progress 1
1) Graphic primitives are the fundamental graphic objects which can be united in any
number and way to produce a new object or image.
2) Due to low resolution and improper approximation of pixel positions the images
are generally distorted and the Zig-Zags (i.e., distortions) produced in the display
of straight line or any other graphic primitive is the staircase effect.
4) In a high resolution system the adjacent pixels are so closely spaced that the
approximated line-pixels lie very close to the actual line path and hence the
plotted lines appear to be much smoother — almost like straight lines drawn on
paper. In a low resolution system, the same approximation technique causes lines
to be displayed with a "stairstep appearance", i.e., not smooth.
5)
(Figure: lines L1 (slope magnitude < 1) and L2 (slope magnitude > 1) with negative slope, running from an initial point to a final point, and their reflections L1′ and L2′ about the axis)
Aliter: you will understand this hint after going through Block 2. In the figure shown,
say L1 and L2 are lines with negative slope, with magnitudes < 1 and > 1 respectively.
For the generation of such a line, you may take the reflection of the line about the
concerned axis and follow the usual algorithm of line generation. After that, you may
apply the inverse reflection transformation, and this provides the generation of the
line with negative slope.
⇒ m = (7 − 1)/(9 − 1) = 6/8
C = y1 − m x1 = 7 − (6/8) × 9 = 1/4
1) x1 = x0 + 1 = 2
   y1 = y0 + m = 1 + (6/8) = 7/4
1) The Bresenham line generation algorithm is better than DDA because it provides a
better approximation of the pixel positions to be turned on for the display of any
graphic primitive, and it works with integer arithmetic only; this reduces the
staircase effect. For more details refer to Section 2.4.2.
2) Here we are using the Bresenham line generation algorithm by digitising the line
with end points (15, 5) and (25,13).
m = (y2 − y1)/(x2 − x1) = ∆y/∆x = (13 − 5)/(25 − 15) = 8/10 = 0.8 -------------------(1)
⇒ ∆y = 8 and ∆x = 10 -------------------(2)
plot initial point (x0, y0) = (15, 5) in the frame buffer now determine successive
pixel positions along line path from decision parameters value (15, 5).
1) No, there is no need to generate the full circumference of the circle using the
algorithm. We may generate it in a quadrant or octant only and then use it to
produce the rest of the circumference by taking reflections along the respective
axis.
UNIT 3 2-D VIEWING AND CLIPPING
Structure Page Nos.
3.1 Introduction 71
3.2 Objectives 72
3.3 Point Clipping 73
3.4 Line Clipping 73
3.4.1 Cohen Sutherland Line Clippings 73
3.4.2 Cyrus-Beck Line Clipping Algorithm 84
3.5 Polygon Clipping 90
3.5.1 Sutherland-Hodgman Algorithm 91
3.6 Windowing Transformations 93
3.7 Summary 97
3.8 Solutions/Answers 98
3.1 INTRODUCTION
In the earlier two units of this block, we discussed the basic components of computer
graphics, i.e., the software and hardware used for graphic systems along with the
graphic primitives used to create graphic images. Now, we are in a position to discuss
technicalities related to the display of any graphic primitive or image. Let us begin
with, the concept of clipping, for both point and line clipping, followed by the
concerned algorithms used to perform line clipping. We shall then examine the
concept and algorithm related to polygon clipping, and end with a discussion on
window to viewport transformations.
Clipping may be described as the procedure that identifies the portions of a picture
that lie inside a specified region (and therefore should be drawn) and those that lie
outside it (and hence should not be drawn). The algorithms that perform the job of
clipping are called clipping algorithms, and they come in various types, such as:
• Point Clipping
• Line Clipping
• Polygon Clipping
• Text Clipping
• Curve Clipping
Further, there are a wide variety of algorithms that are designed to perform certain
types of clipping operations, some of which will be discussed in this unit.
There are various other algorithms such as, Liang – Barsky Line clipping,
Weiler-Atherton Polygon Clipping, that are quite efficient in performing the task of
clipping images. But, we will restrict our discussion to the clipping algorithms
mentioned earlier.
Before going into the details of point clipping, let us look at some basic terminologies
used in the field of clipping, such as, window and viewport.
Window may be described as the world coordinate area selected for display.
Viewport may be described as the area on a display device on which the window is
mapped.
So, it is the window that specifies what is to be shown or displayed whereas viewport
specifies where it is to be shown or displayed.
Specifying these two coordinates, i.e., window and viewport coordinates and then the
transformation from window to viewport coordinates is very essential from the point
of view of clipping.
Note:
• Assumption: That the window and viewport are rectangular. Then only, by
specifying the maximum and the minimum coordinates i.e., (Xwmax, Ywmax) and
(Xwmin, Ywmin) we can describe the size of the overall window or viewport.
• Window and viewport are not restricted to rectangular shapes only; they could be
of any other shape (convex or concave or both).
Figure 1: Clipping along with viewing transformation – an object in the world-coordinate window bounded by (Wxmin, Wymin) and (Wxmax, Wymax) is mapped to the device-coordinate viewport bounded by (Vxmin, Vymin) and (Vxmax, Vymax)
3.2 OBJECTIVES
After going though this unit, you should be able to:
• explain the concept of clipping,
• examine how line clipping is performed,
• understand and implement the algorithms that work behind the concept of line
clipping,
• explain how polygon clipping is performed,
• understand and implement the algorithms that work behind the concept of
polygon clipping, and
• describe window to viewport transformation.
3.3 POINT CLIPPING
The window boundary coordinates (Xwmin, Ywmin) and (Xwmax, Ywmax) are
sufficient to specify the window size, and any point (X, Y) which is to be shown or
displayed should satisfy the following inequalities; otherwise, the point will not be
visible. Thus, whether or not the point will be clipped can be decided on the basis of
the following inequalities:
Xwmin ≤ X ≤ Xwmax
Ywmin ≤ Y ≤ Ywmax
Figure 2: Point Clipping – a window bounded by Xwmin, Xwmax, Ywmin and Ywmax
It is to be noted that (Xwmax, Ywmax) and (Xwmin, Ywmin) can be either world
coordinate window boundary or viewport boundary. Further, if any one of these four
inequalities is not satisfied, then the point is clipped (not saved for display).
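The point-clipping test is just these four comparisons; a tiny C predicate might look like this (the parameter names are arbitrary):

#include <stdio.h>

/* Returns 1 if (x, y) lies inside the window (point kept), 0 if it is clipped. */
int point_visible(double x, double y,
                  double xw_min, double xw_max,
                  double yw_min, double yw_max)
{
    return (xw_min <= x && x <= xw_max &&
            yw_min <= y && y <= yw_max);
}

int main(void)
{
    printf("%d\n", point_visible(5, 5, 0, 10, 0, 10));    /* 1: visible */
    printf("%d\n", point_visible(15, 5, 0, 10, 0, 10));   /* 0: clipped */
    return 0;
}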
3.4 LINE CLIPPING
3.4.1 Cohen Sutherland Line Clippings
This algorithm is quite interesting. The clipping problem is simplified by dividing the
area surrounding the window region into four segments Up, Down, Left, Right
(U, D, L, R), and the assignment of the numbers 1 and 0 to the respective segments
helps in positioning the regions surrounding the window. How this positioning of
regions is performed can be well understood by considering Figure 3.
The meaning of the UDLR code to specify the location of region with respect to
window is:
1st bit ⇒ Up(U) ; 2nd bit ⇒ Down(D) ;3rd bit ⇒ Left(L) ; 4th bit ⇒ Right(R),
Now, to perform line clipping for various line segments, which may reside fully or
partially inside the window region, or may not even lie in the window region, we use
the tool of logical ANDing between the UDLR codes of the points lying on the line.
Note:
• UDLR code of window is 0000 always and w.r.t. this will create bit codes of
other regions.
• A line segment is visible if both the UDLR codes of the end points of the line
segment equal to 0000 i.e. UDLR code of window region. If the resulting code is
not 0000 then, that line segment or section of line segment may or may not be
visible.
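The UDLR codes and the two trivial tests described above can be sketched in Python as follows; the string representation of the code and the helper names are one possible choice, not prescribed by this unit:

    def udlr_code(x, y, xw_min, yw_min, xw_max, yw_max):
        # Bits in the order used in this unit: Up, Down, Left, Right.
        up    = '1' if y > yw_max else '0'
        down  = '1' if y < yw_min else '0'
        left  = '1' if x < xw_min else '0'
        right = '1' if x > xw_max else '0'
        return up + down + left + right

    def trivially_visible(code_p, code_q):
        # Case 1: both end points lie inside the window (code 0000).
        return code_p == '0000' and code_q == '0000'

    def trivially_invisible(code_p, code_q):
        # Case 2: the logical AND of the two codes is non-zero.
        return (int(code_p, 2) & int(code_q, 2)) != 0

A point inside the window gets the code 0000, a point above and to the right of the window gets 1001, and so on, matching the region codes of Figure 3.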
Now, let us study how this clipping algorithm works. For the sake of simplicity we
will tackle all the cases with the help of example lines l1 to l5 shown in Figure 4.
Each line segment represents a case.
Figure 4: Example lines l1 to l5 drawn against the window (region code 0000) and the surrounding regions (codes 1010, 1000, 1001, 0010, 0001, 0110, 0100, 0101)
Note that in Figure 4, line l1 is completely visible, l2 and l3 are completely invisible, and l4 and l5 are partially visible. We will discuss these outcomes as three separate cases.
Case 1: (Trivial acceptance case) If the UDLR bit codes of the end points P, Q of a given line are both 0000, then the line is completely visible. This is the case here, as the end points a and b of line l1 have codes a (0000) and b (0000). If this trivial acceptance test fails, then the line segment PQ is passed on to Case 2.
Case 2: (Trivial rejection case) If the logical intersection (AND) of the bit codes of the end points P, Q of the line segment is ≠ 0000, then the line segment is not visible and is rejected.
Note that, in Figure 4, line 2 is completely above the window, whereas line 3 is neither entirely above nor entirely below, nor entirely on the left-hand side or the right-hand side of the window. We use logical ANDing to test the non-visibility of a line segment.
So, to test the visibility of lines 2 and 3 we need to calculate the logical intersection of the end point codes for line 2 and line 3.
Line l2: the bit codes of the end points are 1010 and 1000; their logical intersection is (1010) ^ (1000) = 1000. As the logical intersection ≠ 0000, line 2 is rejected as invisible.
Line l3: the end points have bit codes 0010 and 0101, and their logical intersection 0010 ^ 0101 = 0000; yet, from Figure 4, the line is invisible. Similarly, for line 4 one end point is above the window and the other below it, so the logical intersection is 0000, but the line is only partially visible; the same is true of line 5. These are special cases, and we will discuss them in Case 3.
Figure 5: Cohen-Sutherland line clipping — the non-trivial lines l3, l4 and l5 are clipped at successive intersection points P′, P′′, P′′′ with the window edges
Case 3: Suppose that for the line segment PQ both the trivial acceptance and rejection tests fail (i.e., the conditions of Case 1 and Case 2 do not hold; this is the case for the line segments l3, l4 and l5 shown above in Figure 5). For such non-trivial cases the algorithm proceeds as follows.
Since both the bit codes of the end points P, Q of the line segment cannot be equal to 0000, let us assume that the starting point of the line segment is P, whose bit code is not equal to 0000. For example, for the line segment l5 we choose P to be the end point with bit code 1001. Now, scan the bit code of P from the first bit to the fourth bit and find the position at which the bit value 1 appears for the first time. For the line segment l5 it appears at the very first position. If the bit value 1 occurs at the first position, then intersect the line segment with the Up edge of the window and assign the first bit value of the point of intersection as 0. Similarly, if the bit value 1 first occurs at the second position, then intersect the line segment PQ with the Down edge of the window, and so on. This point of intersection may be labelled P′. Clearly the line segment PP′ is outside the window and is therefore rejected, and the new line segment considered for clipping will be P′Q. The coordinates of P′ and its remaining new bit values are computed. Now, taking P′ as the new P, we again have a line segment PQ, which is again referred to Case 1 for clipping.
Geometrical study of the above type of clipping (this helps to find the point of intersection of line PQ with any window edge).
Let (x1, y1) and (x2, y2) be the coordinates of P and Q respectively, and let m = (y2 − y1)/(x2 − x1) be the slope of PQ. The bits of the code are assigned as follows:
1) Top/above case: if y1 > ywmax then the 1st bit of the bit code = 1 (signifying above), else 0.
2) Bottom/below case: if y1 < ywmin then the 2nd bit = 1 (signifying below), else 0.
3) Left case: if x1 < xwmin then the 3rd bit = 1 (i.e., left), else 0.
4) Right case: if x1 > xwmax then the 4th bit = 1 (i.e., right), else 0.
The points of intersection of PQ with the window edges are obtained as follows:
1) Top/above edge: putting y = ywmax in the equation of the line, the point of intersection is
   (x1 + (1/m)(ywmax − y1), ywmax).
2) Bottom/below edge: start with y = ywmin and proceed as for the top case. The equation of the line between the intersection point (x′, ywmin) and the point Q(x2, y2) is
   (ywmin − y2) = m(x′ − x2),
   so the point of intersection of PQ with the bottom edge is
   (x2 + (1/m)(ywmin − y2), ywmin).
3) Left edge: putting x = xwmin, the coordinates of the intersection point are
   (xwmin, y1 + m(xwmin − x1)).
4) Right edge: proceed as in the left-edge case but start with x = xwmax, giving
   (xwmax, y1 + m(xwmax − x1)).
A short sketch of these intersection computations follows.
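Assuming PQ is neither horizontal nor vertical (those cases need the obvious special handling), the four intersection formulas can be collected in a small Python sketch; the dictionary keys and the function name are illustrative only:

    def edge_intersections(p, q, xw_min, yw_min, xw_max, yw_max):
        # Intersections of the infinite line through P and Q with the four
        # window edges, using the slope form derived above with P(x1, y1)
        # as the reference point.
        (x1, y1), (x2, y2) = p, q
        m = (y2 - y1) / (x2 - x1)          # slope of PQ
        return {
            'top':    (x1 + (yw_max - y1) / m, yw_max),
            'bottom': (x1 + (yw_min - y1) / m, yw_min),
            'left':   (xw_min, y1 + m * (xw_min - x1)),
            'right':  (xw_max, y1 + m * (xw_max - x1)),
        }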
Figure 7: Steps for Cohen-Sutherland line clipping — the line P1(x1, y1)P2(x2, y2) is tested against the clip boundaries xL (Left), xR (Right), yT (Top) and yB (Bottom)
STEP 1:
Input: xL, xR, yT, yB, P1(x1, y1), P2(x2, y2)
Initialise i = 1
While i <= 2
    if xi < xL then bit 1 of code-Pi = 1 else 0
    if xi > xR then bit 2 of code-Pi = 1 else 0    (the end point codes of the line are set)
    if yi < yB then bit 3 of code-Pi = 1 else 0
    if yi > yT then bit 4 of code-Pi = 1 else 0
    i = i + 1
end while
i = 1
STEP 2:
Initialise j = 1
While j <= 2
    if xj < xL then Cj left = 1 else Cj left = 0
    if xj > xR then Cj right = 1 else Cj right = 0      (set flags according to the position of the
    if yj < yB then Cj bottom = 1 else Cj bottom = 0     line end points w.r.t. the window edges)
    if yj > yT then Cj top = 1 else Cj top = 0
    j = j + 1
end while
STEP 3: If codes of P1and P2 are both equal to zero then draw P1P2 (totally visible)
STEP 4: If logical intersection or AND operation of code –P1 and code –P2 is not
equal to zero then ignore P1P2 (totally invisible)
STEP 5: If code –P1= 0 then swap P1 and P2 along with their flags and set i = 1
for i = 1,
{if C1 left = 1 then
find intersection ( x L , y ' L ) with left edge vide eqn. (C)
assign code to ( x L , y ' L )
P1 = ( x L , y ' L )
end if
i = i + 1;
go to 3
}
for i = 2,
{if C1 right = 1 then
find intersection ( x R , y ' R ) with right edge vide eqn. (D)
assign code to ( x R , y ' R )
P1 = ( x R , y ' R )
end if
i=i+1
go to 3
}
for i = 3
{if C1 bottom = 1 then
find intersection ( x 'B , yB ) with bottom edge vide eqn. (B)
assign code to ( x 'B , yB )
P1 = ( x 'B , yB )
end if
i=i+1
go to 3
}
for i = 4,
{if C1 top = 1 then
find intersection ( x 'T , yT ) vide eqn. (A) with top edge
assign code to ( x 'T , yT )
P1 = ( x 'T , yT )
end if
i=i+1
go to 3
}
end
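Putting Steps 1 to 5 together, a compact Python sketch of the whole Cohen-Sutherland procedure might look as follows. The bit layout, variable names and return convention are choices made here for illustration; the pseudocode above uses its own bit order and flag variables.

    def cohen_sutherland_clip(p, q, xw_min, yw_min, xw_max, yw_max):
        # Returns the visible portion of segment PQ, or None if it is rejected.
        UP, DOWN, LEFT, RIGHT = 8, 4, 2, 1

        def code(x, y):
            c = 0
            if y > yw_max: c |= UP
            if y < yw_min: c |= DOWN
            if x < xw_min: c |= LEFT
            if x > xw_max: c |= RIGHT
            return c

        (x1, y1), (x2, y2) = p, q
        c1, c2 = code(x1, y1), code(x2, y2)
        while True:
            if c1 == 0 and c2 == 0:          # trivial acceptance (Case 1)
                return (x1, y1), (x2, y2)
            if c1 & c2:                      # trivial rejection (Case 2)
                return None
            # Case 3: pick an outside end point and push it to a window edge.
            c = c1 if c1 else c2
            if c & UP:
                x, y = x1 + (x2 - x1) * (yw_max - y1) / (y2 - y1), yw_max
            elif c & DOWN:
                x, y = x1 + (x2 - x1) * (yw_min - y1) / (y2 - y1), yw_min
            elif c & LEFT:
                x, y = xw_min, y1 + (y2 - y1) * (xw_min - x1) / (x2 - x1)
            else:                            # RIGHT
                x, y = xw_max, y1 + (y2 - y1) * (xw_max - x1) / (x2 - x1)
            if c == c1:
                x1, y1, c1 = x, y, code(x, y)
            else:
                x2, y2, c2 = x, y, code(x, y)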
Consider a rectangular clipping window ABCD with A(100, 10), B(160, 10), C(160, 40) and D(100, 40). Using the Sutherland-Cohen clipping algorithm, find the visible portion of the line segments EF, GH and P1P2, where E(50, 0), F(70, 80), G(120, 20), H(140, 80), P1(120, 5) and P2(180, 30).
Figure 8: Example of Cohen-Sutherland line clipping — the window ABCD with the line segments EF, GH and P1P2
As code-P1 <> 0, no swap of P1 and P2 is needed (STEP 5).
for i = 1
{
C1 left (= 0) <> 1 then nothing is done.
i=i+1=2
}
code –P1 <> 0 and code –P2 <> 0
then P1P2 not totally visible.
for i = 2
{
C1 right (= 0) <> 1 then nothing is to be done.
i=i+1=2+1=3
}
code –P1 <> 0 and code –P2 <> 0
then P1P2 not totally visible.
for i = 3
{
C1 bottom = 1, so find the intersection of P1P2 with the bottom edge:
yB = 10
xB = (180 − 120)(10 − 5)/(30 − 5) + 120 = 132
then P1 = (132, 10)
As code-P1 is now 0, swap P1 and P2 along with their flags (STEP 5):
P1 = (180, 30)
P2 = (132, 10)
code-P1 = 0010
code-P2 = 0000
C1 left = 0    C2 left = 0
C1 right = 1   C2 right = 0
C1 bottom = 0  C2 bottom = 0
C1 top = 0     C2 top = 0
Reset i = 1
}
for i = 1
{
C1 left (= 0) <> 1 then nothing is to be done.
i=i+1=1+1=2
}
code –P1 <> 0, and code –P2 <> 0
then P1P2 is not totally visible.
for i = 2
{
C1 right = 1 then find intersection of P1P2 with right edge
xR = 160
yR = (30 – 5)(160 – 120)/(180 – 120) + 5
= 21.667
= 22
then P1 = (160, 22)
x1 > x L then bit 1 of code –P1 = 0 C1 left = 0
x1 = x R then bit 2 of code –P1 = 0 C1 right = 0
y1 > y B then bit 3 of code –P1 = 0 C1 bottom = 0
y1 < yT then bit 4 of code –P1 = 0 C1 top = 0
code –P1 = 0000, i = i + 1 = 2 + 1 = 3
}
As both code-P1 = 0 and code-P2 = 0, the line segment P1P2 is now totally visible. So, the visible portion of the input line P1P2 is P′1P′2, where P′1 = (160, 22) and P′2 = (132, 10).
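As a quick check, running the Python sketch given earlier on this example reproduces the result above:

    # Window A(100, 10) ... C(160, 40); segment P1(120, 5) to P2(180, 30)
    print(cohen_sutherland_clip((120, 5), (180, 30), 100, 10, 160, 40))
    # -> ((132.0, 10), (160, 21.666...)), i.e. P'1 = (132, 10), P'2 ≈ (160, 22)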
Similarly, for EF:
Fleft = 1, Fright = 0, Ftop = 1, Fbottom = 0.
3) Since the codes of E and F are both not equal to zero, the line is not totally visible.
4) The logical intersection of the codes of E and F is not equal to zero (both points lie to the left of the window), so we reject the line EF and declare it totally invisible.
Similarly, for GH:
Hleft = 0, Hright = 0, Htop = 1, Hbottom = 0.
3) Since the codes of G and H are not both equal to zero, the line is not totally visible.
4) The logical intersection of the codes of G and H is equal to zero, so GH cannot be rejected as totally invisible.
5) Since code-G = 0, swap G and H along with their flags and set i = 1.
The conditions of steps 3 and 4 do not hold, so we cannot declare the line GH as totally visible or totally invisible.
The line passing through the points G(120, 20) and H(140, 80) is given by
y − 20 = {(80 − 20)/(140 − 120)}(x − 120)
or, y − 20 = 3x − 360
or, y − 3x = −340
Since the y coordinate of every point on the edge CD is 40, we put y = 40 for the point of intersection P(x, y) of the line GH with the edge CD:
40 − 3x = −340
or, −3x = −380
or, x = 380/3 = 126.66 ≈ 127
You might wonder what a convex window is. In general, on the basis of shape, windows are classified into two categories:
i) Convex shaped windows: windows of a shape such that if we take any two points inside the window, then the line segment joining them never crosses the window boundary. Such windows are referred to as convex shaped windows.
ii) Concave or non-convex shaped windows: windows of a shape such that we can choose a pair of points inside the window for which the line segment joining them crosses the window boundary, or some section of the segment lies outside the window. Such windows are referred to as non-convex or concave shaped windows. A simple convexity test is sketched below.
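A convexity check can be coded by testing that the cross products of consecutive edge vectors all have the same sign. The following Python sketch assumes the vertices are given in order around the boundary; the function name is illustrative:

    def is_convex(vertices):
        # vertices: list of (x, y) tuples in boundary order.
        n = len(vertices)
        signs = set()
        for i in range(n):
            (x0, y0) = vertices[i]
            (x1, y1) = vertices[(i + 1) % n]
            (x2, y2) = vertices[(i + 2) % n]
            cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
            if cross != 0:
                signs.add(cross > 0)
        return len(signs) <= 1      # all turns in the same direction => convex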
Figure 9: A line segment from P(x1, y1) to Q(x2, y2) crossing an edge AB of a convex window
Now, just recall the parametric equation of line segment PQ, which we have studied
in the course CS-60.
It is simply P + t(Q − P), 0 ≤ t ≤ 1.
Therefore, any point (x, y) on PQ is given by (x, y) = (x1, y1) + t(x2 − x1, y2 − y1). ------ (1)
So, varying the parameter t actually generates the line point by point. The range of parameter values that survives clipping identifies the portion of the line lying inside any convex polygonal clipping region having n vertices (lattice points) specified by the user. One such clipping situation is shown in Figure 9(a).
Remark:
• How to specify the window region: a convex polygonal region having n vertices {P0, P1, P2, …, Pn−1, Pn, P0} (lattice points specified by the user) encloses the convex window region. To be specific about the window, we may say that each edge between two consecutive points contributes to the boundary of the window under the condition that, when we traverse each edge in the anticlockwise direction, the window region lies on the left-hand side (LHS) of the edge. This orientation is preserved, and while giving input the user takes care of the orientation to specify a window region of any arbitrary convex shape.
Figure 10(a): A line segment from P to Q crossing the window edges e1, …, e4 at a potentially entering point (PE) and a potentially leaving point (PL)
Say we are moving from point P to point Q, and we know that, while traversing the window edges in the anticlockwise direction, the region lying on the left-hand side is the window region. So, from Figure 10(a), you may notice that at the point PE, as we move from P to Q, we are entering the window region; thus this point of intersection is referred to as a potentially entering point (PE). Now, refer to the other intersection shown in Figure 10(a): here too the movement along the line is generated by varying the parameter t from P to Q and, with respect to the orientation of the window boundary, we are exiting the window region. So, this point of intersection is known as a potentially leaving point (PL).
Potentially entering point (PE): while moving towards the window, the points of intersection between the line PQ and the window edges faced are known as PE.
OR
Figure 10(b): The outward normals N1 and N2 to the intersected edges, and the angle θ between the line PQ and each normal
Alternatively, you may detect whether a point of intersection is a potentially entering point (PE) or a potentially leaving point (PL) by studying the angle between the line to be clipped and the outward normal to the intersected edge at the point of intersection. From Figure 10(b), this is the angle between PQ and N1 or N2. You may notice that, while moving from P to Q, the angle between the line PQ and N1 is obtuse whereas the angle between PQ and N2 is acute; so we may say that if the angle between the line PQ and the outward normal is obtuse, the point of intersection is potentially entering (PE), whereas if it is acute, the point of intersection is potentially leaving (PL).
OR
PE ⇒ Ni · PQ < 0, i.e., Ni · (Q − P) < 0 (angle θ greater than 90°, i.e., obtuse)
PL ⇒ Ni · (Q − P) > 0 (angle θ less than 90°, i.e., acute)
Figure 11: The ith window edge starting at Pi−1(xi−1, yi−1), with its normal Ni
If the ith edge, say from Pi−1 to Pi, has direction vector (s1, s2) and Ni = (n1i, n2i) is a normal to it, then Ni · (s1, s2) = 0
⇒ s1 n1i + s2 n2i = 0
⇒ s1 n1i = −s2 n2i
⇒ n1i = (−s2/s1) n2i
If n2i = 1 then n1i = −s2/s1.
Therefore, Ni = (n1i, n2i) = (−s2/s1, 1) is a NORMAL to the edge.
If (−s2/s1, 1) is the outward normal, then −(−s2/s1, 1), i.e., (s2/s1, −1), is the inward normal.
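In code it is often convenient to avoid the division by s1 and to fix the outward direction by testing against a point known to lie inside the window, for instance the centroid of its vertices. This is one possible approach, not the only one; the unit itself relies on the anticlockwise orientation of the edges. A minimal Python sketch:

    def outward_normal(p_prev, p_cur, interior_point):
        # Normal to the edge Pi-1 -> Pi, oriented away from a point inside the
        # window.  (Up to sign and scale this is the (-s2/s1, 1) vector above.)
        s1, s2 = p_cur[0] - p_prev[0], p_cur[1] - p_prev[1]
        n = (s2, -s1)                               # one of the two perpendiculars
        to_inside = (interior_point[0] - p_prev[0], interior_point[1] - p_prev[1])
        if n[0] * to_inside[0] + n[1] * to_inside[1] > 0:
            n = (-n[0], -n[1])                      # flip so that it points outward
        return n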
Now, we have learned some basic things which will be required in the working of the
algorithm, so, we can start using the concepts learned so far to clip the line segment
passing through the window.
P + t (Q – P) ; 0 ≤ t ≤ 1
Now, to find the point of intersection of line PQ and ith edge we need to find the
appropriate value of parameter t, for that we firstly find normal to the ith edge. Say
Ni is the normal to ith edge (Pi-1 to Pi), then to find the appropriate parameter t, we
should determine the dot product of the vector from Pi – 1 to the point of intersection
and the normal Ni. Let us do this exercise.
The vector joining edge point Pi – 1 and point of intersection of the line PQ with the
ith edge for some t will be given by:
{[P + t(Q − P)] − Pi−1}
As Ni is normal to the ith edge , so, the dot product of the vector joining edge point
Pi – 1 and point of intersection of the line PQ with the ith edge for some t and Ni will be
given by:
{[P + t(Q − P)] − Pi−1} · Ni = 0 ---------------- (1)
⇒ t = −(P − Pi−1) · Ni / [(Q − P) · Ni] ---------------- (2)
Using this value of t from (2) in parametric form of line we will get the point of
intersection of line PQ and ith edge.
Note: From the above discussion of PE and PL we know that Ni · (Q − P) < 0 ⇒ PE point and Ni · (Q − P) > 0 ⇒ PL point. If the value of t calculated using (2) lies in the interval 0 to 1 (in magnitude), then the point of intersection is considered; otherwise it is not considered. A sketch of this computation is given below.
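A direct transcription of equation (2), with the PE/PL classification read off from the sign of the denominator, might look like this in Python (the function name and the None convention for a parallel edge are illustrative):

    def intersection_parameter(p, q, edge_point, normal):
        # t = -(P - Pi-1).Ni / ((Q - P).Ni), as in equation (2).
        # The sign of the denominator Ni.(Q - P) classifies the intersection:
        # negative -> potentially entering (PE), positive -> potentially leaving (PL).
        px, py = p; qx, qy = q; ex, ey = edge_point; nx, ny = normal
        denom = (qx - px) * nx + (qy - py) * ny
        if denom == 0:
            return None        # PQ is parallel to this edge
        return -((px - ex) * nx + (py - ey) * ny) / denom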
• Firstly, find all the points of intersection of the line segment PQ with the edges of the polygonal window and classify them either as PE or PL points. Also determine the value of the parameter t, using equation (2), for the respective PE's and PL's.
Or
If the normals to the respective edges are not given, then first find the normal to every edge of the window; then determine the value of the parameter t, using equation (2), for each point of intersection between the line segment and a window edge, and on the basis of the value of t mark the point as PE or PL, provided the value of t lies between 0 and 1 in magnitude.
• Secondly, out of the various values of t for the PE's determine the maximum value, say tmax. Similarly, out of the various values of t for the PL's determine the minimum value, say tmin. Note that for clipping to be possible tmax < tmin.
• Finally, vary the parameter t from tmax to tmin and determine the clipped line as the outcome.
Figure 12: Steps of Cyrus-Beck clipping — lines 1, 2 and 3 against the window edges e1 to e4, with their potentially entering (PE1, PE2) and potentially leaving (PL1, PL2) intersection points
1) Point 1 is potentially entering (PE1) because, as we move along PQ, the LHS of e1 is the window and hence it seems that we are entering the window.
2) Point 2 is again potentially entering (PE2), by the same argument as for point 1.
3) Point 3 is potentially leaving (PL1) because, as we move along PQ past it, the window (the LHS of e2) is left behind and hence it seems that we are leaving the window.
4) Similarly, point 4 is also potentially leaving (PL2).
For the other lines, using the same logic as for line 1, we find their PL and PE points. Now, it is to be noted that for each point of intersection we have some value of t.
As the portion visible to us is the part of the line lying inside the window, for line 1 the visible portion is from PE2 to PL1, and for these points there are corresponding values of t. With this initialisation let us study the following cases:
Case 1: Say we get a new value of tE (i.e., the value of the parameter t for any potentially entering (PE) point); we update tmax as tmax = max{tmax, tE}. The initial value tmax = 0 is taken.
Case 2: Say we get a new value of tL (i.e., the value of the parameter t for any potentially leaving (PL) point); then tmin is updated as tmin = min{tmin, tL}. The initial value tmin = 1 is taken.
Finally, when the values of t for all the edges have been found, if tmax < tmin then the line is visible, else it is not. The visible portion of the line is from
P + tmax(Q − P) to P + tmin(Q − P), i.e.,
P + t(Q − P) such that tmax ≤ t ≤ tmin.
tmax < tmin (draw line)
tmax ≥ tmin (reject line)
A sketch of the complete Cyrus-Beck procedure, built on the helpers above, is given below.
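Combining the two cases with the helpers sketched earlier gives a complete Cyrus-Beck routine. The centroid trick for orienting the normals and the handling of edges parallel to PQ are implementation choices, not part of this unit's description:

    def cyrus_beck_clip(p, q, window):
        # window: list of vertices of a convex polygon in anticlockwise order.
        # Uses outward_normal() and intersection_parameter() from the sketches above.
        cx = sum(v[0] for v in window) / len(window)
        cy = sum(v[1] for v in window) / len(window)
        t_max, t_min = 0.0, 1.0
        for i in range(len(window)):
            a, b = window[i - 1], window[i]            # the ith edge Pi-1 -> Pi
            n = outward_normal(a, b, (cx, cy))
            denom = (q[0] - p[0]) * n[0] + (q[1] - p[1]) * n[1]
            t = intersection_parameter(p, q, a, n)
            if t is None:
                # Edge parallel to PQ: reject if P lies on the outside of it.
                if (p[0] - a[0]) * n[0] + (p[1] - a[1]) * n[1] > 0:
                    return None
                continue
            if denom < 0:                              # potentially entering (PE)
                t_max = max(t_max, t)
            else:                                      # potentially leaving (PL)
                t_min = min(t_min, t)
        if t_max > t_min:                              # condition of visibility fails
            return None
        point = lambda t: (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
        return point(t_max), point(t_min)

Running this sketch on the clipping window P0(10, 10), P1(20, 10), P2(25, 15), P3(20, 20), P4(10, 15) with the line A(0, 0) to B(30, 25) from the Check Your Progress exercise below gives tmax = 2/5 and tmin = 8/11, i.e., the clipped segment from (12, 10) to (240/11, 200/11), matching the answer worked out in the Solutions/Answers section.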
Example: How does the Cyrus-Beck line clipping algorithm clip a line segment if the window is non-convex?
Solution: Consider Figure 13; here the window is non-convex in shape and PQ is a line segment passing through it. Here too the condition of visibility of the line is tmax < tmin, and the line is visible from P + tmax(Q − P) to P + tmin(Q − P); if tmax ≮ tmin then the line segment is rejected. Now, applying this rule to Figure 13, we find that when the line segment PQ passes through the non-convex window it cuts the edges of the window at 4 points: 1 → PE, 2 → PL, 3 → PE, 4 → PL. In this example, using the algorithm, we reject the whole line segment PQ, but this is not the correct result. The condition of visibility is satisfied in regions 1-2 and 3-4 only, so the line exists there, but in region 2-3 the condition is violated, so the line does not exist there.
Figure 13: Example of Cyrus-Beck clipping against a non-convex window — the segment PQ has intersections PE, PL, PE, PL, and tmax < tmin (visible) holds only on the two sub-segments actually inside the window
……………………………………………………………………………………
……………………………………………………………………………………
…………
2) A clipping window is given by P0(10,10), P1(20,10), P2(25,15), P3(20,20), P4(10,15), P5 = P0. Using the Cyrus-Beck algorithm, clip the line joining A(0,0) and B(30,25).
……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………
………
3) How will you compute the outward normal to a clipping-window edge in the case of the Cyrus-Beck algorithm?
……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………
………
4) Compare the Cohen-Sutherland line clipping algorithm with the Cyrus-Beck line clipping algorithm.
……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………
………
5) Clip the line shown in Figure below using Cohen Sutherland Line clipping
algorithm.
P(x1, y1)
Q (x2, y2)
……………………………………………………………………………………
……………………………………………………………………………………
……………………………………………………………………………………
………
Any polygon of any arbitrary shape can be described with the help of the set of vertices associated with it. When we try to clip the polygon under consideration against a rectangular window, we observe that the coordinates of the polygon vertices satisfy one of the four cases listed in the table shown below; further, it is to be noted that this procedure of clipping can be simplified by clipping the polygon edgewise and not the polygon as a whole. This decomposes the bigger problem into a set of subproblems, which can be handled separately as per the cases listed in the table below. This table describes the cases of the Sutherland-Hodgman polygon clipping algorithm.
Thus, in order to clip polygon edges against a window edge we move from vertex Vi to the next vertex Vi+1 and decide the output vertex according to the four simple tests (cases) listed below:
Table: Cases of the Sutherland-Hodgman Polygon Clipping Algorithm

Case    Vi         Vi+1       Output vertex
1       Inside     Inside     Vi+1
2       Inside     Outside    V'i (intersection of edge ViVi+1 with the clip boundary)
3       Outside    Outside    None
4       Outside    Inside     V'i and Vi+1
In words, the four possible tests listed above to clip any polygon state:
1) If both input vertices are inside the window boundary, then only the 2nd vertex is added to the output vertex list.
2) If the 1st vertex is inside the window boundary and the 2nd vertex is outside, then only the point of intersection of the polygon edge with the window boundary is added to the output vertex list.
3) If both input vertices are outside the window boundary, then nothing is added to the output list.
4) If the 1st vertex is outside the window and the 2nd vertex is inside the window, then both the point of intersection of the polygon edge with the window boundary and the 2nd vertex are added to the output vertex list.
So, we can use the rules cited above to clip a polygon correctly. The polygon must be tested against each edge of the clip rectangle; new edges must be added and existing edges must be discarded, retained or divided. Actually, this algorithm decomposes the problem of polygon clipping against a clip window into identical subproblems, where a subproblem is to clip all polygon edges (pairs of vertices) in succession against a single infinite clip edge. The output is the set of clipped edges, or pairs of vertices, that fall on the visible side with respect to that clip edge. This set of clipped edges (output vertices) is then considered as the input for the next subproblem of clipping against the second window edge. Thus, considering the output of the previous subproblem as the input, each of the subproblems is solved sequentially, finally yielding the vertices that fall on or within the window boundary. These vertices, connected in order, form the shape of the clipped polygon.
For better understanding of the application of the rules given above consider the
Figure 14, where the shaded region shows the clipped polygon.
Figure 14: Sutherland-Hodgman polygon clipping — (a) the polygon V1V2V3V4 to be clipped against the rectangular window; (b) the clipped (shaded) polygon after tests A, B, C and D
Define variables
inVertexArray is the array of input polygon vertices
outVertexArray is the array of output polygon vertices
Nin is the number of entries in inVertexArray
Nout is the number of entries in outVertexArray
n is the number of edges of the clip polygon
ClipEdge[x] is the xth edge of the clip polygon, defined by a pair of vertices
s, p are the start and end points respectively of the current polygon edge
i is the intersection point with a clip boundary
j is the vertex loop counter
Define functions
AddNewVertex(newVertex, Nout, outVertexArray)
    : adds newVertex to outVertexArray and then updates Nout
InsideTest(testVertex, clipEdge[x])
    : checks whether the vertex lies inside the clip edge or not;
      returns TRUE if inside, else returns FALSE
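A minimal Python sketch of the edge-by-edge loop described above, written with plain lists rather than the arrays and counters defined here, could look as follows; the helper implementations are one possible choice, not this unit's own code:

    def sutherland_hodgman_clip(polygon, clip_window):
        # polygon, clip_window: lists of (x, y) vertices; the clip window must be
        # convex and given in anticlockwise order.  Each pass clips every polygon
        # edge against one window edge and feeds the output into the next pass.
        def inside(pt, a, b):
            # InsideTest: pt lies on the window side (left) of the directed edge a->b.
            return (b[0] - a[0]) * (pt[1] - a[1]) - (b[1] - a[1]) * (pt[0] - a[0]) >= 0

        def intersect(s, p, a, b):
            # Intersection of segment s->p with the infinite line through a and b.
            dx, dy = p[0] - s[0], p[1] - s[1]
            ex, ey = b[0] - a[0], b[1] - a[1]
            t = (ex * (s[1] - a[1]) - ey * (s[0] - a[0])) / (ey * dx - ex * dy)
            return (s[0] + t * dx, s[1] + t * dy)

        out = list(polygon)
        for i in range(len(clip_window)):
            a, b = clip_window[i - 1], clip_window[i]
            inp, out = out, []
            for j in range(len(inp)):
                s, p = inp[j - 1], inp[j]                    # current polygon edge s->p
                if inside(p, a, b):
                    if not inside(s, a, b):
                        out.append(intersect(s, p, a, b))    # case 4: add intersection
                    out.append(p)                            # cases 1 and 4: add 2nd vertex
                elif inside(s, a, b):
                    out.append(intersect(s, p, a, b))        # case 2: add intersection only
                # case 3: both outside -> nothing is added
            if not out:
                break
        return out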
Window: a world coordinate area selected for display (i.e., the area of the picture selected for viewing).
Note:
• The window defines what is to be viewed and the viewport defines where it is to be displayed.
• Often the window and viewport are rectangles in standard position, with edges parallel to the coordinate axes. Generalised shapes such as polygons take longer to process, so we are not considering the cases where the window or viewport has a general polygonal or circular shape.
Figure 15: The window, defined in world coordinates, is clipped and its contents mapped to the viewport on the display device
You may notice in Figure 15 that all parts of the picture that lie outside the window are clipped, and the contents that lie inside the window are transferred to device coordinates. Secondly, you may also notice that while the window selects a part of the scene, the viewport displays the selected part at the desired location on the display area. When the window is changed, we see a different part of the scene at the same portion (viewport) of the display. If we change only the viewport, we see the same part of the scene drawn at a different scale or at a different place on the display. By successively increasing or decreasing the size of the window around a part of the scene while the viewport remains fixed, we can get the effect of zooming out or zooming in, respectively, on the displayed part. Mathematically, the viewing transformation can be expressed as
V=W.N
Where,
• V refers to the viewing transformation, which maps a part of the world coordinate scene to device coordinates;
• W refers to the workstation transformation, which maps normalised device coordinates to physical device coordinates;
• N refers to the normalisation transformation, used to map world coordinates to normalised device coordinates.
Figure 16: A point (Xw, Yw) inside the window is mapped to the point (XV, YV) inside the viewport
So as to maintain the same relative placement of the point in the viewport as in the window, we require

(xv − xvmin)/(xvmax − xvmin) = (xw − xwmin)/(xwmax − xwmin)   ---------- (a)
(yv − yvmin)/(yvmax − yvmin) = (yw − ywmin)/(ywmax − ywmin)   ---------- (b)   ---------- (1)

Rearranging equations (a) and (b) of (1) we get the viewport position (xv, yv), i.e.,

xv = xvmin + (xw − xwmin) Sx
yv = yvmin + (yw − ywmin) Sy   ---------- (2)

where
Sx, the scaling factor along the x axis, = (xvmax − xvmin)/(xwmax − xwmin)
Sy, the scaling factor along the y axis, = (yvmax − yvmin)/(ywmax − ywmin)   ---------- (3)
Note, if Sx = Sy then the relative proportions of objects are maintained else the world
object will be stretched or contracted in either x or y direction when displayed on
output device.
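A direct Python sketch of this point mapping, with the window and viewport each given as (xmin, ymin, xmax, ymax) tuples (an illustrative convention, not the unit's notation), is:

    def window_to_viewport(xw, yw, win, view):
        xw_min, yw_min, xw_max, yw_max = win
        xv_min, yv_min, xv_max, yv_max = view
        sx = (xv_max - xv_min) / (xw_max - xw_min)   # scaling factor along x
        sy = (yv_max - yv_min) / (yw_max - yw_min)   # scaling factor along y
        return (xv_min + (xw - xw_min) * sx,
                yv_min + (yw - yw_min) * sy)

Mapping the window corner (xwmin, ywmin) with this function gives the viewport corner (xvmin, yvmin), as expected.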
Step 1 Scaling the window area to the size of the viewport with scale factors sx and sy
w.r.t a fixed point (xwmin, ywmin).
⇒ [T1] = | sx   0    xwmin(1 − sx) |
         | 0    sy   ywmin(1 − sy) |
         | 0    0    1             |
Step 2 Translating the scaled window to the position of the viewport so that,
Δx = xvmin – xwmin and
Δy = yvmin – ywmin
⇒ [T2] = | 1   0   Δx |
         | 0   1   Δy |
         | 0   0   1  |
Composing the two steps, [T] = [T2] · [T1], and substituting the values of Δx and Δy, we finally get

[T] = | sx   0    −sx · xwmin + xvmin |
      | 0    sy   −sy · ywmin + yvmin |
      | 0    0    1                   |      ---------- (1)
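The same matrix can be assembled in code and checked against the point-mapping sketch above; again the tuple convention is just illustrative:

    def viewing_matrix(win, view):
        # Composite window-to-viewport matrix [T] = [T2].[T1] in row-major form.
        xw_min, yw_min, xw_max, yw_max = win
        xv_min, yv_min, xv_max, yv_max = view
        sx = (xv_max - xv_min) / (xw_max - xw_min)
        sy = (yv_max - yv_min) / (yw_max - yw_min)
        return [[sx, 0.0, -sx * xw_min + xv_min],
                [0.0, sy, -sy * yw_min + yv_min],
                [0.0, 0.0, 1.0]]

Applying this matrix to the homogeneous point (xwmin, ywmin, 1) gives (xvmin, yvmin, 1), confirming that the window corner maps to the viewport corner.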
So, it can be said that if the aspect ratio av of the viewport equals the aspect ratio aw
of the window, then sx = sy and no distortion of displayed scene occurs other than
uniform magnification or compression. If av ≠ aw then the displayed scene in the
viewport gets somewhat distorted w.r.t the scene captured by the window.
Figure: The window WXYZ with vertices W(1, 1), X(5, 3), Y(4, 5) and Z(0, 3); the edge WX makes an angle α with the x axis at W
Here, we see that the window edges are not parallel to the coordinate axes. So we will
first rotate the window about W so that it is aligned with the axes.
Now, tan α = (3 − 1)/(5 − 1) = 1/2
⇒ sin α = 1/√5,  cos α = 2/√5
Here, we are rotating the rectangle in the clockwise direction, so α is (−)ve, i.e., −α.
The rotation matrix about W(1, 1) is

[TR,θ]W = |  2/√5    1/√5    1 − 3/√5 |
          | −1/√5    2/√5    1 − 1/√5 |
          |  0       0       1        |
The x extent of the rotated window is the length of WX, which is √(4² + 2²) = 2√5.
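The numbers in this example can be verified with a few lines of Python (the matrix layout mirrors the one written above):

    import math

    c, s = 2 / math.sqrt(5), 1 / math.sqrt(5)       # cos(alpha), sin(alpha)
    # Clockwise rotation by alpha about the pivot W(1, 1):
    T = [[ c,  s, 1 - c - s],                       # third entry = 1 - 3/sqrt(5)
         [-s,  c, 1 - c + s],                       # third entry = 1 - 1/sqrt(5)
         [ 0,  0, 1]]
    # Image of X(5, 3): (1 + 2*sqrt(5), 1), so the rotated edge WX lies along
    # y = 1 with length 2*sqrt(5), in agreement with the x extent found above.
    x, y = 5, 3
    print(T[0][0]*x + T[0][1]*y + T[0][2], T[1][0]*x + T[1][1]*y + T[1][2])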
3.7 SUMMARY
In this unit, we have discussed the concept of clipping along with the various types of clipping and the algorithms that perform them. The unit is quite important from the point of view of achieving realism through computer graphics. It contains algorithms that are easy to implement in any programming language.
3.8 SOLUTIONS/ANSWERS
Check Your Progress 1
1) Rectangular window ABCD is defined such that A = (–1, –2) and C (3, 1). So, to
clip the line segment joining the points P(–20, 0) and Q(20, 30) using generalised
geometrical approach we do the following tasks.
As the point P(−20, 0) lies on the line given by equation (1), it should satisfy equation (1), i.e.,
0 = (3/4)(−20) + c ⇒ c = 15 ------------------ (2)
∴ P′′(−1, 14.25) also lies outside the window region ABCD, so we reject the portion P′P′′ of the line segment.
3) Now consider the line segment P′′Q.
We will get P′′′ if we find the point of intersection of P′′Q with x = xmax = 3.
∴ Using x = xmax = 3 in (3) we get P′′′(3, 17.25), which also lies outside the window region ABCD; hence Q(20, 30) also lies outside. ∴ Reject the whole line segment P′′′Q.
That is how the whole line segment PQ is rejected in small segments.
Now, let us carry out the above steps so as to clip the line AB against the clipping window P0P1P2P3P4.
The clipped line is visible from A + tmax(B − A) to A + tmin(B − A):
A + tmax(B − A) = (0, 0) + 2/5 [(30, 25) − (0, 0)] = (12, 10)
A + tmin(B − A) = (0, 0) + 8/11 [(30, 25) − (0, 0)] = (240/11, 200/11)
Figure (case 1): the edges through Pi−2(xi−2, yi−2), Pi−1(xi−1, yi−1) and Pi(xi, yi), with the angles θ and θ′ marked
100
2-D Viewing and
Clipping
The normal Ni in case 1 is opposite to the normal Ni in case 2, i.e., its direction is reversed. So, if Ni in case 1 is the OUTWARD normal, then Ni in case 2 will be the INWARD normal.
Figure (case 2): the same edges, with the normal drawn at 90° to the edge Pi−1Pi
Similarly ;
A line that does not pass through the clipping region may still have a logical intersection (AND) of its end point codes equal to 0000, implying that the line segment cannot be trivially rejected and must be processed further, even though in fact it is invisible.
5) Using Cohen Sutherland Line clipping algorithm when we clip the line we get the
line shown in Figure below as output.
6) Window regions which are non-convex cannot be handled by the CYRUS-BECK ALGORITHM without modification.
As we know, in the Cyrus-Beck algorithm the line is to be drawn if tmax < tmin, else it is to be rejected; as per equation (1) and the figure, we find that the whole line LM would be rejected, though some portion of LM should be visible. To achieve this, the algorithm has to be modified so as to handle non-convex window regions:
1) The non-convex region is to be divided into a number of convex sub-regions.
2) tmax and tmin are to be found w.r.t. each individual convex window region; hence the line is clipped separately against the different convex window regions.
As per this modification, the window region ABCDE is to be divided into two convex windows.
Check Your Progress 2