SE 1st - 2nd Chapter Material

Download as pdf or txt
Download as pdf or txt
You are on page 1of 58

DISCLAIMER

THIS DOCUMENT CAN NOT BE USED AS A SUBSTITUTE FOR


PRESCRIBED TEXTBOOKS.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 1


SOFTWARE ENGINEERING
WHAT IS SOFTWARE ENGINEEING
The term software engineering is composed of two words, software and engineering.
Software is more than just a program code. A program is an executable code, which
serves some computational purpose. Software is considered to be a collection of
executable programming code, associated libraries and documentations. Software, when
made for a specific requirement is called software product.
Engineering on the other hand, is all about developing products, using well-defined,
scientific principles and methods.
Definition:

Software Engineering is an engineering branch associated with the development of


software product using well-defined scientific principles, methods and procedures. The
outcome of software engineering is an efficient and reliable software product.
IEEE defines: The application of a systematic, disciplined, quantifiable approach to the
development, operation and maintenance of software.

NEED OF SOFTWARE ENGINEERING


The need of software engineering arises because of higher rate of change in user
requirements and environment on which the software is working.
 Large software - It is easier to build a wall than to a house or building, likewise,
as the size of software become large engineering has to step to give it a
scientific process.
 Scalability- If the software process were not based on scientific and engineering
concepts, it would be easier to re-create new software than to scale an existing
one.
 Cost- As hardware industry has shown its skills and huge manufacturing has
lower down the price of computer and electronic hardware. But the cost of
software remains high if proper process is not adapted.
 Dynamic Nature- The always growing and adapting nature of software hugely
depends upon the environment in which the user works. If the nature of

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 2


software is always changing, new enhancements need to be done in the
existing one. This is where software engineering plays a good role.
 Quality Management- Better process of software development provides better
and quality software product.
CHARACTEREISTIC OF SOFTWARE ENGINEERING
A software product can be judged by what it offers and how well it can be used. This
software must satisfy on the following grounds:
 Operational
 Transitional
 Maintenance
Well-engineered and crafted software is expected to have the following characteristics:
Operational
This tells us how well software works in operations. It can be measured on:

 Budget
 Usability
 Efficiency
 Correctness
 Functionality
 Dependability
 Security
 Safety

Transitional
This aspect is important when the software is moved from one platform to another:
 Portability
 Interoperability
 Reusability
 Adaptability
Maintenance
This aspect briefs about how well a software has the capabilities to maintain itself in the
ever- changing environment:
 Modularity

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 3


 Maintainability
 Flexibility
 Scalability

SOFTWARE DEVELOPMENT LIFE CYCLE

LIFE CYCLE MODEL


A software life cycle model (also called process model) is a descriptive and diagrammatic
representation of the software life cycle. A life cycle model represents all the activities
required to make a software product transit through its life cycle phases. It also captures
the order in which these activities are to be undertaken. In other words, a life cycle
model maps the different activities performed on a software product from its
commencement to retirement. Different life cycle models may map the basic
development activities to phases in different ways. During any life cycle phase, more
than one activity may also be carried out.
NEED OF SOFTWARE LIFE CYCLE MODEL
The development team must identify a suitable life cycle model for the particular project
and then follow to it. Without using of a particular life cycle model the development of a
software product would not be in a systematic and disciplined manner. When a software
product is being developed by a team there must be a clear understanding among team
members about when and what to do. Otherwise it would lead to confusion and project
failure. A software life cycle model defines entry and exit criteria for every phase. A
phase can start only if its phase-entry criteria have been satisfied. So without software
life cycle model the entry and exit criteria for a phase cannot be recognized. Without
software life cycle models it becomes difficult for software project managers to monitor
the progress of theproject.

DIFFERENT SOFTWARE LIFE CYCLE MODELS


Many life cycle models have been proposed so far. Each of them has some advantages
as well as some disadvantages. A few important and commonly used life cycle models
are as follows:

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 4


 Classical Waterfall Model
 Iterative Waterfall Model
 Prototyping Model
 Evolutionary Model
 Spiral Model

1. CLASSICAL WATERFALL MODEL


The classical waterfall model is the most obvious way to develop software. Though
the classical waterfall model is elegant and intuitively obvious, it is not a practical
model in the sense that it cannot be used in actual software development projects.
Thus, this model can be considered to be a theoretical way of developing software.
But all other life cycle models are essentially derived from the classical waterfall
model. So, in order to be able to appreciate other life cycle models it is necessary to
learn the classical waterfall model. Classical waterfall model divides the life cycle into
the 06 phases as shown in fig.:

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 5


Feasibility study - The main aim of feasibility study is to determine whether it
would be financially and technically feasible to develop the product.

 At first project managers or team leaders try to have a rough understanding


of what is required to be done by visiting the client side. They study
different input data to the system and output data to be produced by the
system.
 After they have an overall understanding of the problem they investigate
the different solutions that are possible. Then they examine each of the
solutions in terms of what kind of resources required, what would be the
cost of development and what would be the development time for
eachsolution.

 Based on this analysis they pick the best solution and determine whether

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 6


the solution is feasible financially and technically. They check whether the
customer budget would meet the cost of the product and whether they
have sufficient technical expertise in the area of development.

Requirements analysis and specification: - The aim of the requirements analysis


and specification phase is to understand the exact requirements of the customer
and to document them properly. This phase consists of two distinct
activities,namely
 Requirements gathering andanalysis
 Requirementsspecification
The goal of the requirements gathering activity is to collect all relevant
information from the customer regarding the product to be developed.
The requirements analysis activity is begun by collecting all relevant data
regarding the product to be developed from the users of the product and from
the customer through interviews and discussions. It is necessary to identify all
ambiguities and contradictions in the requirements and resolve them through
further discussions with the customer. After all ambiguities, inconsistencies, and
incompleteness have been resolved and all the requirements properly
understood, the requirements specification activity can start. During this activity,
the user requirements are systematically organized into a Software Requirements
Specification (SRS) document. The important components of this document are
functional requirements, the nonfunctional requirements, and the goals of
implementation.

Design: - The goal of the design phase is to transform the requirements specified
in the SRS document into a structure that is suitable for implementation in some
programming language. In technical terms, during the design phase the software
architecture is derived from the SRS document. Two distinctly different
approaches are available: the traditional design approach and the object-oriented
design approach.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 7


 Traditional design approach -Traditional design consists of two different
activities; first a structured analysis of the requirements specification is
carried out where the detailed structure of the problem is examined. This is
followed by a structured design activity. During structured design, the
results of structured analysis are transformed into the softwaredesign.
 Object-oriented design approach -In this technique, various objects that
occur in the problem domain and the solution domain are first identified,
and the different relationships that exist among these objects are
identified. The object structure is further refined to obtain the
detaileddesign.
Coding and unit testing:-The purpose of the coding phase (sometimes called the
implementation phase) of software development is to translate the software
design into source code. Each component of the design is implemented as a
program module. The end-product of this phase is a set of program modules that
have been individually tested. During this phase, each module is unit tested to
determine the correct working of all the individual modules.

Integration and system testing: -Integration of different modules is undertaken


once they have been coded and unit tested. During the integration and system
testing phase, the modules are integrated in a planned manner. The different
modules making up a software product are almost never integrated in one shot.
Integration is normally carried out incrementally over a number of steps. During
each integration step, the partially integrated system is tested and a set of
previously planned modules are added to it. Finally, when all the modules have
been successfully integrated and tested, system testing is carried out. System
testing usually consists of three different kinds of testingactivities:

 α – testing: It is the system testing performed by the development team.


 β –testing: It is the system testing performed by a friendly set of customers.
 Acceptance testing: It is the system testing performed by the customer
himself after the product delivery to determine whether to accept or reject
the deliveredproduct.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 8


System testing is normally carried out in a planned manner according to the
system test plan document. The system test plan identifies all testing-related
activities that must be performed,specifies the schedule of testing, and allocates
resources. It also lists all the test cases and the expected outputs for each test
case.

Maintenance: -Maintenance of a typical software product requires much more


than the effort necessary to develop the product itself. Maintenance involves
performing any one or more of the following three kinds of activities:
 Correcting errors that were not discovered during the product
development phase. This is called correctivemaintenance.
 Improving the implementation of the system, and enhancing the
functionalities of the system according to the customer’s requirements.
This is called perfectivemaintenance.
 Porting the software to work in a new environment. For example, porting
may be required to get the software to work on a new computer platform
or with a new operating system. This is called adaptivemaintenance.

SHORTCOMINGS OF THE CLASSICAL WATERFALL MODEL

The classical waterfall model is an idealistic one since it assumes that no


development error is ever committed by the engineers during any of the life cycle
phases. However, in practical development environments, the engineers do
commit a large number of errors in almost every phase of the life cycle. The
source of the defects can be many: oversight, wrong assumptions, use of
inappropriate technology, communication gap among the project engineers, etc.
These defects usually get detected much later in the life cycle. For example, a
design defect might go unnoticed till we reach the coding or testing phase. Once a
defect is detected, the engineers need to go back to the phase where the defect
had occurred and redo some of the work done during that phase and the

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 9


subsequent phases to correct the defect and its effect on the later phases.
Therefore, in any practical software development work, it is not possible to
strictly follow the classical waterfall model.

2. ITERATIVE WATERFALL MODEL


To overcome the major shortcomings of the classical waterfall model, we come up
with the iterative waterfall model.

Here, we provide feedback paths for error correction as & when detected later in
a phase. Though errors are inevitable, but it is desirable to detect them in the
same phase in which they occur. If so, this can reduce the effort to correct thebug.

The advantage of this model is that there is a working model of the system at a
very early stage of development which makes it easier to find functional or design
flaws. Finding issues at an early stage of development enables to take corrective
measures in a limited budget.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 10


The disadvantage with this SDLC model is that it is applicable only to large and
bulky software development projects. This is because it is hard to break a small
software system into further small serviceable increments/modules.

3. PROTOTYPING MODEL
PROTOTYPE
A prototype is a toy implementation of the system. A prototype usually
exhibits limited functional capabilities, low reliability, and inefficient
performance compared to the actual software. A prototype may produce the
desired results by using a table look-up instead of performing the actual
computations.

NEED FOR A PROTOTYPE IN SOFTWARE DEVELOPMENT


There are several uses of a prototype. An important purpose is to illustrate the
input data formats, messages, reports, and the interactive dialogues to the
customer. This is a valuable mechanism for gaining better understanding of the
customer’s needs:
 how the screens might looklike
 how the user interface wouldbehave
 how the system would produceoutputs

Another reason for developing a prototype is that it is impossible to get the


perfect product in the first attempt. Many researchers and engineers
advocate that if you want to develop a good product you must plan to throw
away the first version. The experience gained in developing the prototype can
be used to develop the finalproduct.
A prototyping model can be used when technical solutions are unclear to the
development team.
A prototype of the actual product is preferred in situations such as:
• User requirements are notcomplete

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 11


• Technical issues are notclear

4. EVOLUTIONARY MODEL
It is also called successive versions model or incremental model. At first, a simple
working model is built. Subsequently it undergoes functional improvements & we
keep on adding new functions till the desired system isbuilt.
Applications:
Large projects where you can easily find modules for incremental
implementation. Often used when the customer wants to start using the
core features rather than waiting for the fullsoftware.
 Also used in object oriented software development because the system can
be easily portioned into units in terms ofobjects.
Advantages:
 User gets a chance to experiment partially developedsystem
 Reduce the error because the core modules get testedthoroughly.
Disadvantages:
 It is difficult to divide the problem into several versions that would be

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 12


acceptable to the customer which can be incrementally implemented
&delivered.

5. SPIRAL MODEL
The Spiral model of software development is shown in fig. The diagrammatic
representation of this model appears like a spiral with many loops. The exact
number of loops in the spiral is not fixed. Each loop of the spiral represents a
phase of the software process. For example, the innermost loop might be
concerned with feasibility study, the next loop with requirements specification,
the next one with design, and so on. Each phase in this model is split into four
sectors (or quadrants) as shown in fig. The following activities are carried out
during each phase of a spiral model.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 13


FIRST QUADRANT (OBJECTIVE SETTING)
 During the first quadrant, it is needed to identify the objectives of the
phase.
 Examine the risks associated with these objectives.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 14


SECOND QUADRANT (RISK ASSESSMENT AND REDUCTION)
 A detailed analysis is carried out for each identified project risk.
 Steps are taken to reduce the risks. For example, if there is a risk that
the requirements are inappropriate, a prototype system may be
developed.
THIRD QUADRANT (DEVELOPMENT AND VALIDATION)
 Develop and validate the next level of the product after resolving the
identified risks.
FOURTH QUADRANT (REVIEW AND PLANNING)
 Review the results achieved so far with the customer and plan the next
iteration around the spiral.
 Progressively more complete version of the software gets built with
each iteration around the spiral.

CIRCUMSTANCES TO USE SPIRAL MODEL

The spiral model is called a meta model since it encompasses all other life cycle
models. Risk handling is inherently built into this model. The spiral model is
suitable for development of technically challenging software products that are
prone to several kinds of risks. However, this model is much more complex than
the other models – this is probably a factor deterring its use in ordinary projects.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 15


DATA FLOW DIAGRAM
Data flow diagram is a graphical representation of data flow in an information
system. It is capable of depicting incoming data flow, outgoing data flow and
stored data. The DFD does not mention anything about how data flows through
the system.

There is a prominent difference between DFD and Flowchart. The flowchart


depicts flow of control in program modules. DFDs depict flow of data in the
system at various levels. DFD does not contain any control or branch elements.

TYPES OF DFD
Data Flow Diagrams are either Logical or Physical.

 Logical DFD - This type of DFD concentrates on the system process and
flow of data in the system. For example in a Banking software system, how
data is moved between different entities.
 Physical DFD - This type of DFD shows how the data flow is actually
implemented in the system. It is more specific and close to the
implementation.

DFD COMPONENTS
DFD can represent Source, destination, storage and flow of data using the following
set of components –

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 16


 Entities - Entities are source and destination of information data. Entities
are represented by rectangles with their respective names.
 Process - Activities and action taken on the data are represented by
Circle or Round- edged rectangles.
 Data Storage - There are two variants of data storage - it can either be
represented as a rectangle with absence of both smaller sides or as an
open-sided rectangle with only one side missing.
 Data Flow - Movement of data is shown by pointed arrows. Data
movement is shown from the base of arrow as its source towards head
of the arrow as destination.
IMPORTANCE OF DFDS IN A GOOD SOFTWARE DESIGN
The main reason why the DFD technique is so popular is probably because of the
fact that DFD is a very simple formalism – it is simple to understand and use.
Starting with a set of high-level functions that a system performs, a DFD model
hierarchically represents various sub-functions. In fact, any hierarchical model is
simple to understand. Human mind is such that it can easily understand any
hierarchical model of a system – because in a hierarchical model, starting with a
very simple and abstract model of a system, different details of the system are
slowly introduced through different hierarchies. The data flow diagramming
technique also follows a very simple set of intuitive concepts and rules. DFD is an
elegant modeling technique that turns out to be useful not only to represent the

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 17


results of structured analysis of a software problem, but also for several other
applications such as showing the flow of documents or items in an organization.

How to create a data flow diagram

Now that you have some background knowledge on data flow diagrams and how
they are categorized, you’re ready to build your own DFD. The process can be
broken down into 5 steps:

1. IDENTIFY MAJOR INPUTS AND OUTPUTS IN YOUR SYSTEM


Nearly every process or system begins with input from an external entity and
ends with the output of data to another entity or database. Identifying such
inputs and outputs gives a macro view of your system—it shows the broadest
tasks the system should achieve. The rest of your DFD will be built on these
elements, so it is crucial to know them early on.

2. BUILD A CONTEXT DIAGRAM


Once you’ve identified the major inputs and outputs, building a context diagram
is simple. Draw a single process node and connect it to related external entities.
This node represents the most general process information undergoes to go from
input to output.
The example below shows how information flows between various entities via an
online community. Data flows to and from the external entities, representing
both input and output. The center node, “online community,” is the general
process.

3. EXPAND THE CONTEXT DIAGRAM INTO A LEVEL 1 DFD


The single process node of your context diagram doesn’t provide much
information—you need to break it down into subprocesses. In your level 1 data
flow diagram, you should include several process nodes, major databases, and all
external entities. Walk through the flow of information: where does the
information start and what needs to happen to it before each data store?

4. EXPAND TO A LEVEL 2+ DFD

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 18


To enhance the detail of your data flow diagram, follow the same process as in
step 3. The processes in your level 1 DFD can be broken down into more specific
subprocesses. Once again, ensure you add any necessary data stores and flows—
at this point you should have a fairly detailed breakdown of your system. To
progress beyond a level 2 data flow diagram, simply repeat this process. Stop
once you’ve reached a satisfactory level of detail.

5. CONFIRM THE ACCURACY OF YOUR FINAL DIAGRAM


When your diagram is completely drawn, walk through it. Pay close attention to
the flow of information: does it make sense? Are all necessary data stores
included? By looking at your final diagram, other parties should be able to
understand the way your system functions. Before presenting your final diagram,
check with co-workers to ensure your diagram is comprehensible.

DATA FLOW DIAGRAM LEVELS


Data flow diagrams are also categorized by level. Starting with the most basic,
level 0, DFDs get increasingly complex as the level increases. As you build your
own data flow diagram, you will need to decide which level your diagram will be.
Level 0 DFDs,also known as context diagrams, are the most basic data flow diagrams.
They provide a broad view that is easily digestible but offers little detail. Level 0 data flow
diagrams show a single process node and its connections to external entities.

Level 1 DFDsare still a general overview, but they go into more detail than a context
diagram. In a level 1 data flow diagram, the single process node from the context

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 19


diagram is broken down into subprocesses. As these processes are added, the diagram
will need additional data flows and data stores to link them together.

Level 2+ DFDssimply break processes down into more detailed subprocesses. In theory,
DFDs could go beyond level 3, but they rarely do. Level 3 data flow diagrams are detailed
enough that it doesn’t usually make sense to break them down further.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 20


Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 21
EXAMPLE: DFD

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 22


OVERVIEW OF QUALITY STANDARDS

Quality standards are defined as documents that provide requirements,


specifications, guidelines, or characteristics that can be used consistently to
ensure that materials, products, processes and services are fit for their purpose.

Standards provide organizations with the shared vision, understanding,


procedures, and vocabulary needed to meet the expectations of their
stakeholders. Because standards present precise descriptions and terminology,
they offer an objective and authoritative basis for organizations and consumers
around the world to communicate and conduct business.

Quality Standards are helpful in:-

 Organize processes
 Improve the efficiency of processes
 Continually improve.

ISO – 9001
ISO 9001 is defined as the international standard that specifies requirements for
a quality management system (QMS). Organizations use the standard to
demonstrate the ability to consistently provide products and services that meet
customer and regulatory requirements. It is the most popular standard in the ISO
9000 series and the only standard in the series to which organizations can certify.
ISO 9001 was first published in 1987 by the International Organization for
Standardization (ISO), an international agency composed of the national
standards bodies of more than 160 countries. The current version of ISO
9001 was released in September 2015.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 23


ISO 9001:2015 applies to any organization, regardless of size or industry. More
than one million organizations from more than 160 countries have applied the
ISO 9001 standard requirements to their quality management systems.

Organizations of all types and sizes find that using the ISO 9001 standard helps
them:
 Organize processes
 Improve the efficiency of processes
 Continually improve.

ISO 9001:2015 covers the following:-


 Requirements for a quality management system, including documented
information, planning and determining process interactions
 Responsibilities of management
 Management of resources, including human resources and an organization’s
work environment
 Product realization, including the steps from design to delivery
 Measurement, analysis, and improvement of the QMS through activities
like internal audits and corrective and preventive action

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 24


SOFTWARE ENGINEERING | CAPABILITY MATURITY MODEL
SEI – CMM MODEL

SEI - stands for ‘Software Engineering Institute.


CMM - stands for ‘Capability Maturity Model
CMM was developed by the Software Engineering Institute (SEI) at Carnegie
Mellon University in 1987.

Different International Standards are as follows:

 It is not a software process model. It is a framework which is used to


analyse the approach and techniques followed by any organization to
develop a software product.
 It also provides guidelines to further enhance the maturity of those
software products.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 25


 It is based on profound feedback and development practices adopted by
the most successful organizations worldwide.
 This model describes a strategy that should be followed by moving through
5 different levels.
 Each level of maturity shows a process capability level. All the levels except
level-1 are further described by Key Process Areas (KPA’s).

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 26


Maturity Level Characterisation Key Process Areas
(KPA)
Initial Ad hoc Process No KPA

Repeatable Basic Project Management Project Planning, Sub –


Process Contract Management,
Software Quality
Assurance
Defined Process Definition Training Programs,
Organization Process
Definition
Managed Process Measurement Quality Management,
Quantitative Management
Optimizing Process Control Defect Prevention,
Technology Change
Management

CMMI is a model that contains the best practices and provides the organizations
with those critical elements that makes their business processes effective.
The Capability Maturity Model (CMM) is a methodology used to develop and
refine an organization's software development process.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 27


METRICS FOR SOFTWARE PROJECT SIZE
ESTIMATION

Accurate estimation of the problem size is fundamental to satisfactory estimation


of effort, time duration and cost of a software project. In order to be able to
accurately estimate the project size, some important metrics should be defined in
terms of which the project size can be expressed. The size of a problem is
obviously not the number of bytes that the source code occupies. It is neither the
byte size of the executable code. The project size is a measure of the problem
complexity in terms of the effort and time required to develop the product.

Currently two metrics are popularly being used widely to estimate size: lines of
code (LOC) and function point (FP).

LINES OF CODE (LOC)


LOC is the simplest among all metrics available to estimate project size. This
metric is very popular because it is the simplest to use. Using this metric, the
project size is estimated by counting the number of source instructions in the
developed program. Obviously, while counting the number of source instructions,
lines used for commenting the code, blank lines and the header lines should
beignored.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 28


Determining the LOC count at the end of a project is a very simple job. However,
accurate estimation of the LOC count at the beginning of a project is very difficult.
In order to estimate the LOC count at the beginning of a project, project managers
usually divide the problem into modules, and each module into submodules and
so on, until the sizes of the different leaf-level modules can be approximately
predicted.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 29


FUNCTION POINT (FP)
Function point metric overcomes many of the shortcomings of the LOC metric.
One of the important advantages of using the function point metric is that it can
be used to easily estimate the size of a software product directly from the
problem specification. This is in contrast to the LOC metric, where the size can be
accurately determined only after the product has fully been developed. The
conceptual idea behind the function point metric is that the size of a software
product is directly dependent on the number of different functions or features it
supports. A software product supporting many features would certainly be of
larger size than a product with less number of features.

Following functionalities are counted while counting the function points of the
system.

 Data Functionality
 Internal Logical Files (ILF)
 External Interface Files (EIF)
 Transaction Functionality
 External Inputs (EI)
 External Outputs (EO)
 External Queries (EQ)

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 30


Now logically if you divide you software application into parts it will always come to
one or more of the 5 functionalities that are mentioned above.

The size of a product in function points (FP) can be expressed as the weighted sum of
these five problem characteristics. The weights associated with the five
characteristics were proposed empirically and validated by the observations over
many projects.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 31


Function point is computed in two steps. The first step is to compute the unadjusted
function point (UFP).

UFP = (NUMBER OF INPUTS)*COMPLEXITY + (NUMBER OF


OUTPUTS)*COMPLEXITY + (NUMBER OF
INQUIRIES)*COMPLEXITY + (NUMBER OF
FILES)*COMPLEXITY + (NUMBER OF
INTERFACES)*COMPLEXITY

Number of inputs: Each data item input by the user is counted. Data inputs should be
distinguished from user inquiries. Inquiries are user commands such as print-account-
balance.

Inquiries are counted separately. It must be noted that individual data items input
by the user are not considered in the calculation of the number of inputs, but a
group of related inputs are considered as a single input. For example, while
entering the data concerning an employee to an employee pay roll software; the
data items name, age, sex, address, phone number, etc. are together considered
as a single input. All these data items can be considered to be related, since they
pertain to a single employee.

Number of outputs: The outputs considered refer to reports printed, screen


outputs, error messages produced, etc. While outputting the number of outputs
the individual data items within a report are not considered, but a set of related
data items is counted as one input.

Number of inquiries: Number of inquiries is the number of distinct interactive


queries which can be made by the users. These inquiries are the user commands
which require specific action by the system.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 32


Number of files: Each logical file is counted. A logical file means groups of logically
related data. Thus, logical files can be data structures or physical files.

Number of interfaces: Here the interfaces considered are the interfaces used to
exchange information with other systems. Examples of such interfaces are data
files on tapes, disks, communication links with other systems etc.

Once the unadjusted function point (UFP) is computed, the technical complexity
factor (TCF) is computed next. TCF refines the UFP measure by considering
fourteen other factors such as
1) Data Communication
2) Distributed Data Processing
3) Performance
4) Heavily Used Configuration
5) Transaction Role
6) Online Data Entry
7) End-User Efficiency
8) Online Update
9) Complex Processing
10) Reusability
11) Installation Ease
12) Operational Ease
13) Multiple Sites
14) Facilitate Change

Each of these 14 factors is assigned from 0 (not present or no influence) to 6


(strong influence) as follows:

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 33


The resulting numbers are summed, yielding the total degree of influence (DI).
Now, TCF is computed as (0.65+0.01*DI). As DI can vary from 0 to 70, TCF can
vary from 0.65 to 1.35. Finally,

FP=UFP*TCF
Question:Consider a project with the following functional units :
 Number of user inputs = 50
 Number of user outputs = 40
 Number of user enquiries = 35
 Number of user files = 06
 Number of external interfaces = 04
Assuming all complexity adjustment factors and weighing factors as average, find the function
points for the project.

Solution:

Thusunadjusted function points (UFP) can be calculated


as:
= (200 + 200 + 140 + 60 + 28)
= 628

TCF = 0.65 + 0.01 x (14 x 3)


= 0.65 + 0.42

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 34


= 1.07

Function Point (FP) = UFP * TCF


Function Point (FP) = UFP * TCF
= 628 x 1.07
= 672

Question:Consider a project with the following functional units :


External Inputs: 4

External Outputs: 3

Internal Logical files: 2

Assuming all complexity adjustment factors and weighing factors as low, find the function
points for the project.

Solution:

Thusunadjusted function points (UFP)calculated as:

= ((4*3) + (3*4) + (2*7) + 0 + 0)


= 12 + 12 + 14
= 38

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 35


COCOMO MODEL

Organic, Semidetached and Embedded software projects


Boehm postulated that any software development project can be classified into one
of the following three categories based on the development complexity: organic,
semidetached, and embedded. In order to classify a product into the identified
categories.

Boehm’s definition of organic, semidetached, and embedded systems are


elaborated below.

Organic: A development project can be considered of organic type, if the project


deals with developing a well understood application program, the size of the
development team is reasonably small, and the team members are experienced in
developing similar types of projects.

Semidetached: A development project can be considered of semidetached type, if


the development consists of a mixture of experienced and inexperienced staff.
Team members may have limited experience on related systems but may be
unfamiliar with some aspects of the system being developed.

Embedded: A development project is considered to be of embedded type, if the


software being developed is strongly coupled to complex hardware, or if the
stringent regulations on the operational procedures exist.

COCOMO
COCOMO (Constructive Cost Estimation Model) was proposed by Boehm.
According to Boehm, software cost estimation should be done through three
stages: Basic COCOMO, Intermediate COCOMO, and Complete COCOMO.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 36


BASIC COCOMO MODEL
The basic COCOMO model gives an approximate estimate of the project parameters.
The basic COCOMO estimation model is given by the following expressions:

EFFORT = a1 x (KLOC) a2 PM
TDEV = b1 x (EFFORT) b2 Months

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 37


• KLOC is the estimated size of the software product expressed in Kilo
Lines of Code,

• a1, a2, b1, b2 are constants for each category of software products,

• Tdev is the estimated time to develop the software, expressed in months,

• Effort is the total effort required to develop the software product,


expressed in person months(PMs).

The effort estimation is expressed in units of person-months (PM). It is the area


under the person-month plot (as shown in fig. below). It should be carefully noted
that an effort of 100 PM does not imply that 100 persons should work for 1 month
nor does it imply that 1 person should be employed for 100 months, but it denotes
the area under the person-month curve (as shown in fig.).

Person-month curve

AccordingtoBoehm,everylineofsourcetextshouldbecalculatedasoneLOCirrespectiveof
the actual number of instructions on that line. Thus, if a single instruction spans
several lines (say
nlines),itisconsideredtobenLOC.Thevaluesofa1 ,a2,b1,b2fordifferentcategoriesofprod
ucts (i.e. organic, semidetached, and embedded) as given by Boehm [1981] are

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 38


summarized below. He derived the above expressions by examining historical data
collected from a large number of actual projects.

ESTIMATION OF DEVELOPMENT EFFORT& DEVELOPMENT TIME


For the three classes of software products, the formulas for estimating the effort
based on the code size are shown below:

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 39


Some insight into the basic COCOMO model can be obtained by plotting the
estimated characteristics for different software sizes. Fig. shows a plot of estimated
effort versus product size. From fig., we can observe that the effort is somewhat
super linear in the size of the software product. Thus, the effort required to
develop a product increases very rapidly with project size.

Effort versus product size

The development time versus the product size in KLOC is plotted in fig. below.
From fig., it can be observed that the development time is a sub linear function
of the size of the product, i.e. when the size of the product increases by two
times, the time to develop the product does not double but rises moderately.
This can be explained by the fact that for larger products, a larger number of
activities which can be carried out concurrently can be identified. The parallel
activities can be carried out simultaneously by the engineers. This reduces the time
to complete the project. Further, from fig., it can be observed that the development
time is roughly the same for all the three categories of products. For example, a
60 KLOC program can be developed in approximately 18 months, regardless of
whether it is of organic, semidetached, or embedded type.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 40


Development time versus size

From the effort estimation, the project cost can be obtained by multiplying the
required effort by the manpower cost per month. But, implicit in this project cost
computation is the assumption that the entire project cost is incurred on account of
the manpower cost alone. In addition to manpower cost, a project would incur

It is important to note that the effort and the duration estimations obtained using the COCOMO
model are called as nominal effort estimate and nominal duration estimate. The term nominal
implies that if anyone tries to complete the project in a time shorter than the estimated duration,
then the cost will increase drastically. But, if anyone completes the project over a longer period
of time than the estimated, then there is almost no decrease in the estimated costvalue.

costs due to hardware and software required for the project and the company
overheads for administration, office space, etc.

EXAMPLE:
Assume that the size of an org organic type software product has been estimated to
be 32,000 lines of source code. Assume that the average salary of software
engineers be Rs. 15,000/- per month. Determine the effort required to develop the
software product and the nominal development time.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 41


From the basic COCOMO estimation formula for organic software:

Effort = 2.4 x (32)1.05= 91 PM


1.05
Effort = 2.4 х = 91PM
(32) 0.38
Nominal development time = 2.5 х (91) = 14months

Cost required to develop the product = 14 х 15,000


= RS. 210,000/-

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 42


PUTNAM RESOURCE ALLOCATION MODEL

The Lawrence Putnam model describes the time and effort requires finishing a software project of a
specified size. Putnam makes a use of a so-called The Norden/Rayleigh Curve to estimate project effort,
schedule & defect rate as shown in fig:

The Putnam model is an empirical software effort estimation model. Empirical models work by
collecting software project data (for example, effort and size) and fitting a curve to the data. Future effort
estimates are made by providing size and calculating the associated effort using the equation which fit the
original data (usually with some error).

Created by Lawrence Putnam, Sr. the Putnam model describes the time and effort required to finish a
software project of specified size. SLIM (Software LIfecycle Management) is the name given by Putnam
to the proprietary suite of tools his company QSM, Inc. has developed based on his model. It is one of the
earliest of these types of models developed, and is among the most widely used. Closely related software
parametric models are Constructive Cost Model (COCOMO), Parametric Review of Information for

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 43


Costing and Evaluation – Software (PRICE-S), and Software Evaluation and Estimation of Resources –
Software Estimating Model (SEER-SEM).

Putnam noticed that software staffing profiles followed the well known Rayleigh distribution. Putnam
used his observation about productivity levels to derive the software equation:

The various terms of this expression are as follows:

K is the total effort expended (in PM) in product development, and L is the product estimate in KLOC .

td correlate to the time of system and integration testing. Therefore, td can be relatively considered as the
time required for developing the product.

Ck Is the state of technology constant and reflects requirements that impede the development of the
program.

Typical values of Ck = 2 for poor development environment

Ck= 8 for good software development environment

Ck = 11 for an excellent environment (in addition to following software engineering principles,


automated tools and techniques are used).

The exact value of Ck for a specific task can be computed from the historical data of the organization
developing it.

Putnam proposed that optimal staff develop on a project should follow the Rayleigh curve. Only a small
number of engineers are required at the beginning of a plan to carry out planning and specification tasks.
As the project progresses and more detailed work are necessary, the number of engineers reaches a peak.
After implementation and unit testing, the number of project staff falls.

EFFECT OF A SCHEDULE CHANGE ON COST

Putnam derived the following expression:

Where, K is the total effort expended (in PM) in the product development

L is the product size in KLOC

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 44


td corresponds to the time of system and integration testing

Ck Is the state of technology constant and reflects constraints that impede the progress of the program

Now by using the above expression, it is obtained that,

For the same product size, C =L3 / Ck3 is a constant.

(As project development effort is equally proportional to project development cost)

From the above expression, it can be easily observed that when the schedule of a project is compressed,
the required development effort as well as project development cost increases in proportion to the fourth
power of the degree of compression. It means that a relatively small compression in delivery schedule can
result in a substantial penalty of human effort as well as development cost.

For example, if the estimated development time is 1 year, then to develop the product in 6 months, the
total effort required to develop the product (and hence the project cost) increases 16 times.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 45


SOFTWARE PROTOTYPING

What is software prototyping ?

It is the process of implementing the presumed software requirements with an intention to learn more
about the actual requirements or alternative design that satisfies the actual set of requirements .

Need for software prototyping


 To assess the set of requirements that makes a productsuccessful in the market
 To test the feasibility without building the whole system.
 To make end-user involved in the design phase.

Benefits of Software Prototyping


 It makes the developers clear about the missing requirements. Lets the developers know what
actually the users want.
 Reduces the loss by bringing the manufacturer to a conclusion weather the system which we
are about to build is feasible or not rather than building the whole system and finding it .
 One can have a working system in before hand.
 It brings the user to get involved in the system design

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 46


THE SOFTWARE PROTOTYPING PROCESS

There is typically a four-step process for prototyping:

1. Identify initial requirements: In this step, the software publisher decides what the software will
be able to do. The publisher considers who the user will likely be and what the user will want
from the product, then the publisher sends the project and specifications to a software designer or
developer.
2. Develop initial prototype: In step two, the developer will consider the requirements as proposed
by the publisher and begin to put together a model of what the finished product might look like.
An initial prototype may be as simple as a drawing on a whiteboard, or it may consist of sticky
notes on a wall, or it may be a more elaborate working model.
3. Review: Once the prototype is developed, the publisher has a chance to see what the product
might look like; how the developer has envisioned the publisher's specifications. In more
advanced prototypes, the end consumer may have an opportunity to try out the product and offer
suggestions for improvement. This is what we know of as beta testing.
4. Revise: The final step in the process is to make revisions to the prototype based on the feedback
of the publisher and/or beta testers.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 47


RISKMANAGEMENT

A software project can be affected by a large variety of risks. In order to be


able to systematically identify the important risks which might affect a
software project, it is necessary to categorize risks into different classes.

There are three main categories of risks which can affect a software project:

1. PROJECTRISKS
Project risks concern varies forms of budgetary, schedule,
personnel, resource, and customer-related problems. An
important project risk is schedule slippage. Since, software is
intangible, it is very difficult to monitor and control a software
project. It is very difficult to control something which cannot be seen.
For any manufacturing project, such as manufacturing of cars, the
project manager can see the product taking shape. He can for
instance, see that the engine is fitted, after that the doors are fitted,
the car is getting painted, etc. Thus he can easily assess the progress
of the work and control it. The invisibility of the product being
developed is an important reason why many software projects suffer
from the risk of schedule slippage.

2. TECHNICALRISKS
Technical risks concern potential design, implementation,
interfacing, testing, and maintenance problems. Technical risks
also include ambiguous specification, incomplete specification,
changing specification, technical uncertainty, and technical
obsolescence. Most technical risks occur due to the
development team’s insufficient knowledge about the
project.

3. BUSINESS RISKS
This type of risks include risks of building an excellent product that
no one wants, losing budgetary or personnel commitments, etc.

RISK ASSESSMENT
The objective of risk assessment is to rank the risks in terms of their
damage causing potential. For risk assessment, first each risk should be
rated in two ways:
• The likelihood of a risk coming true (denoted asr).
• The consequence of the problems associated with that risk (denoted
ass).

Based on these two factors, the priority of each risk can be computed:
p=r*s
Where, p is the priority with which the risk must be handled, r is the
probability of the risk becoming true, and sis the severity of damage caused
due to the risk becoming true. If all identified risks are prioritized, then the
most likely and damaging risks can be handled first and more
comprehensive risk abatement procedures can be designed for these risks.

RISK CONTAINMENT
After all the identified risks of a project are assessed, plans must be made to
contain the most damaging and the most likely risks. Different risks require
different containment procedures. In fact, most risks require ingenuity on
the part of the project manager in tackling the risk.

There are three main strategies to plan for risk containment:

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 49


Avoid the risk- This may take several forms such as discussing with the
customer to change the requirements to reduce the scope of the work,
giving incentives to the engineers to avoid the risk of manpower turnover,
etc.
Transfer the risk- This strategy involves getting the risky component
developed by a third party, buying insurance cover,etc.
Risk reduction- This involves planning ways to contain the damage due to
a risk. For example, if there is risk that some key personnel might leave,
new recruitment may be planned.

RISK LEVERAGE
To choose between the different strategies of handling a risk, the project
manager must consider the cost of handling the risk and the corresponding
reduction of risk. For this the risk leverage of the different risks can be
computed.
Risk leverage is the difference in risk exposure divided by the cost of
reducing the risk. More formally,
RISK LEVERAGE = (RISK EXPOSURE BEFORE REDUCTION – RISK
EXPOSURE AFTER REDUCTION) / (COST OF REDUCTION)

Risk related to schedule slippage

Even though there are three broad ways to handle any risk, but still risk
handling requires a lot of ingenuity on the part of a project manager. As an
example, it can be considered the options available to contain an important
type of risk that occurs in many software projects – that of schedule
slippage. Therefore, these can be dealt with by increasing the visibility of
the software product. Visibility of a software product can be increased by
producing relevant documents during the development process wherever
meaningful and getting these documents reviewed by an appropriate team.

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 50


Milestones should be placed at regular intervals through a software
engineering process to provide a manager with regular indication of
progress. Every phase can be broken down to reasonable-sized tasks and
milestones can be scheduled for these tasks too. A milestone is reached,
once documentation produced as part of a software engineering task is
produced and gets successfully reviewed. Milestones need not be placed for
every activity. An approximate rule of thumb is to set a milestone every 10
to 15days.

SOFTWARE CONFIGURATION MANAGEMENT


The results (also called as the deliverables) of a large software development
effort typically consist of a large number of objects, e.g. source code,
design document, SRS document, test document, user’s manual, etc. These
objects are usually referred to and modified by a number of software
engineers through out the life cycle of the software. The state of all these
objects at any point of time is called the configuration of the software
product. The state of each deliverable object changes as development
progresses and also as bugs are detected and fixed.

RELEASE VS. VERSION VS. REVISION


A new version of a software is created when there is a significant change in
functionality, technology, or the hardware it runs on, etc. On the other hand
a new revision of a software refers to minor bug fix in that software. A new
release is created if there is only a bug fix, minor enhancements to the
functionality, usability, etc.

For example, one version of a mathematical computation package might


run on Unix-based machines, another on Microsoft Windows and so on. As
a software is released and used by the customer, errors are discovered that
need correction. Enhancements to the functionalities of the software may
also be needed. A new release of software is an improved system intended
to replace an old one. Often systems are described as version m, release n;

Software Engineering (ETCS 303) By Dr Shafiqul Abidin, HMRITM, Delhi 51


or simple m.n. Formally, a history relation is version of can be defined
between objects. This relation can be split into two sub relations is revision
of and is variant of.

NECESSITY OF SOFTWARE CONFIGURATION MANAGEMENT


There are several reasons for putting an object under configuration
management. But, possibly the most important reason for configuration
management is to control the access to the different deliverable objects.
Unless strict discipline is enforced regarding updation and storage of
different objects, several problems appear. The following are some of the
important problems that appear if configuration management is notused.

• Inconsistency problem when objects are replicated. Consider a
scenario where every software engineer has a personal copy of an object
(e.g. source code). As each engineer makes changes to his local copy, he is
expected to inform the other engineers, so that changes to interfaces are
uniformly reflected across all modules. However, an engineer often makes
changes to the interfaces in his local copy and forgets to inform his
teammates. This makes the different copies of the object inconsistent.
Finally, when the product is integrated, it does not work. Therefore, when
several team members work on developing an object, it is necessary for
them to work on a single copy of the object, otherwise inconsistency may
arise (a small sketch after this list shows how such drift between copies can
be detected).

• Problems associated with concurrent access. Suppose there is a
single copy of a program module, and several engineers are working
on it. Two engineers may simultaneously carry out changes to
different portions of the same module and, while saving, overwrite
each other's work. Though the problem associated with concurrent access
has been explained here for program code, similar problems occur for any
other deliverable object.



• Providing a stable development environment. When a project is
underway, the team members need a stable environment to make
progress. Suppose somebody is trying to integrate module A with
modules B and C; he cannot make progress if the developer of
module C keeps changing it. This can be especially frustrating if a
change to module C forces him to recompile A. When effective
configuration management is in place, the manager freezes the
objects to form a baseline. When anyone needs any of the objects
under configuration control, he is provided with a copy of the
baseline item. The requester makes changes to his private copy. Only
after the requester is through with all modifications to his private
copy is the configuration updated and a new baseline formed.
This establishes a baseline for others to use and depend on.
A configuration may also be frozen periodically. Freezing a
configuration may involve archiving everything needed to rebuild it
(archiving means copying to a safe place such as a magnetic tape).

• System accounting and maintaining status information. System
accounting keeps track of who made a particular change and when
the change was made.

• Handling variants. The existence of variants of a software product
causes some peculiar problems. Suppose somebody has several
variants of the same module and finds a bug in one of them. Then, the bug
has to be fixed in all versions and revisions. To do this efficiently, he
should not have to fix it separately in each and every version and revision
of the software.
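
As referenced in the first item above, one hypothetical way to notice that replicated copies of an object have drifted apart is to compare a content hash of each engineer's local copy. The sketch below is illustrative only and is no substitute for a configuration management tool; the copies and interface change shown are made up.

# Hypothetical sketch: detect inconsistent replicated copies of an
# object by comparing a hash of each engineer's local copy.
import hashlib

def digest(text):
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

local_copies = {
    "engineer_A": "int add(int a, int b);",
    "engineer_B": "int add(int a, int b);",
    "engineer_C": "long add(long a, long b);",   # interface changed, teammates not informed
}

hashes = {owner: digest(src) for owner, src in local_copies.items()}
if len(set(hashes.values())) > 1:
    print("Copies are inconsistent:", hashes)
else:
    print("All copies agree.")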

SOFTWARE CONFIGURATION MANAGEMENT ACTIVITIES


Normally, a project manager performs the configuration management
activity using an automated configuration management tool. A
configuration management tool provides automated support for overcoming
all the problems mentioned above. In addition, a configuration management
tool helps keep track of the various deliverable objects, so that the project
manager can quickly and unambiguously determine the current state of the
project. The configuration management tool also enables the engineers to
change the various components in a controlled manner.

Configuration management is carried out through two principal activities:


• Configuration identification involves deciding which parts of the
system should be kept track of.
• Configuration control ensures that changes to a system happen
smoothly.

CONFIGURATION IDENTIFICATION
The project manager normally classifies the objects associated with a
software development effort into three main categories: controlled,
precontrolled, and uncontrolled. Controlled objects are those which have already
been put under configuration control; one must follow formal procedures
to change them. Precontrolled objects are not yet under configuration
control, but will eventually be brought under it. Uncontrolled
objects are not and will not be subjected to configuration control.
Controllable objects include both controlled and precontrolled objects.
Typical controllable objects include:
• Requirements specification document
• Design documents
• Tools used to build the system, such as compilers, linkers, lexical analyzers, parsers, etc.
• Source code for each module
• Test cases
• Problem reports

The configuration management plan is written during the project planning
phase, and it lists all controlled objects. The managers who develop the plan
must strike a balance between controlling too much and controlling too
little. If too much is controlled, the overheads due to configuration management
increase to unreasonably high levels. On the other hand, controlling too
little might lead to confusion when something changes.

CONFIGURATION CONTROL
Configuration control is the process of managing changes to controlled
objects. Configuration control is the part of a configuration management
system that most directly affects the day-to-day operations of developers. The
configuration control system prevents unauthorized changes to any controlled object. In
order to change a controlled object such as a module, a developer obtains a private copy
of the module through a reserve operation, as shown in the figure below. Configuration
management tools allow only one person to reserve a module at a time. Once an object
is reserved, no one else is allowed to reserve that module until the reserved module is
restored, as shown in the figure below. Thus, by preventing more than one engineer from
simultaneously reserving a module, the problems associated with concurrent access are solved.



Fig. Reserve and restore operation in configuration control

Changes to any object under configuration control are carried out as
follows. The engineer needing to change a module first obtains a private
copy of the module through a reserve operation. Then, he carries out all the
necessary changes on this private copy. However, restoring the changed
module to the system configuration requires the permission of a change
control board (CCB). The CCB is usually constituted from among the
development team members. For every change that needs to be carried out,
the CCB reviews the changes made to the controlled object and certifies
several things about the change:
1. The change is well-motivated.
2. The developer has considered and documented the effects of the change.
3. The change interacts well with the changes made by other developers.
4. Appropriate people (the CCB) have validated the change, e.g. someone
has tested the changed code and has verified that the change is
consistent with the requirements.

The change control board (CCB) sounds like a group of people.
However, except for very large projects, the functions of the change
control board are normally discharged by the project manager himself or
by some senior member of the development team. Once the CCB reviews
the changes to the module, the project manager updates the old baseline
through a restore operation (as shown in the figure above). A configuration
control tool does not allow a developer to replace an object he has
reserved with his local copy unless he gets authorization from the
CCB. By constraining the developers' ability to replace reserved objects,
a stable environment is achieved. Since a configuration management
tool allows only one engineer to work on a module at any one time,
the problem of accidental overwriting is eliminated. Also, since only the
manager can update the baseline after CCB approval, unintentional
changes are eliminated.
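
The reserve/restore discipline described above can be pictured with a small, hypothetical sketch: at most one engineer may reserve a controlled object at a time, and a restore that updates the baseline is accepted only when a CCB-approval flag is set. This is only an illustration of the idea, not the behaviour of any particular tool; the class, method names, and module are invented for the example.

# Hypothetical sketch of reserve/restore under configuration control.
class ControlledObject:
    def __init__(self, name, baseline_text):
        self.name = name
        self.baseline = baseline_text
        self.reserved_by = None          # at most one engineer at a time

    def reserve(self, engineer):
        if self.reserved_by is not None:
            raise RuntimeError(f"{self.name} already reserved by {self.reserved_by}")
        self.reserved_by = engineer
        return self.baseline             # engineer gets a private copy

    def restore(self, engineer, new_text, ccb_approved):
        if self.reserved_by != engineer:
            raise RuntimeError("only the reserving engineer may restore")
        if not ccb_approved:
            raise RuntimeError("CCB approval required before updating the baseline")
        self.baseline = new_text         # a new baseline is formed
        self.reserved_by = None

mod = ControlledObject("module_A.c", "old code")
copy = mod.reserve("asha")               # second reserve by anyone else would fail here
mod.restore("asha", copy + "\n/* reviewed fix */", ccb_approved=True)
print(mod.baseline)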

CONFIGURATION MANAGEMENT TOOLS


SCCS and RCS are two popular configuration management tools available on most
UNIX systems. SCCS or RCS can be used for controlling and managing different
versions of text files. SCCS and RCS do not handle binary files (i.e. executable files,
documents, files containing diagrams, etc.). SCCS and RCS provide an efficient way of
storing versions that minimizes the amount of occupied disk space. Suppose a module
MOD is present in three versions MOD1.1, MOD1.2, and MOD1.3. Then, SCCS and
RCS store the original module MOD1.1 together with the changes needed to transform
MOD1.1 into MOD1.2 and MOD1.2 into MOD1.3. The changes needed to transform each
baselined file to the next version are stored and are called deltas. The main reason
for storing the deltas rather than the full version files is to save disk space.
The change control facilities provided by SCCS and RCS include the ability to
incorporate restrictions on the set of individuals who can create new versions, and
facilities for checking components in and out (i.e. the reserve and restore operations).
Individual developers check out components and modify them. After they have made all
the necessary changes to a module, and after the changes have been reviewed, they check
the changed module back into SCCS or RCS. Revisions are denoted by numbers in ascending
order, e.g. 1.1, 1.2, 1.3, etc. It is also possible to create variants or revisions of a
component by creating a fork in the development history.
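
The idea of keeping the first version in full and storing only deltas can be sketched with Python's standard difflib. The delta format shown here (SequenceMatcher opcodes) is only an illustration of the principle and is not the actual storage format used by SCCS or RCS; the module contents are invented for the example.

# Illustrative sketch of delta storage: keep the first version in full
# and store only the changes (deltas) needed to rebuild later versions.
from difflib import SequenceMatcher

def make_delta(old_lines, new_lines):
    """Record only the operations needed to turn old_lines into new_lines."""
    ops = SequenceMatcher(None, old_lines, new_lines).get_opcodes()
    return [(tag, i1, i2, new_lines[j1:j2]) for tag, i1, i2, j1, j2 in ops]

def apply_delta(old_lines, delta):
    """Rebuild the newer version from the older one and its delta."""
    result = []
    for tag, i1, i2, new_chunk in delta:
        if tag == "equal":
            result.extend(old_lines[i1:i2])
        else:                      # 'replace', 'insert' or 'delete'
            result.extend(new_chunk)
    return result

mod_1_1 = ["int f(void)", "{", "return 0;", "}"]
mod_1_2 = ["int f(void)", "{", "return 1;", "}"]
mod_1_3 = ["int f(int x)", "{", "return x;", "}"]

delta_12 = make_delta(mod_1_1, mod_1_2)
delta_23 = make_delta(mod_1_2, mod_1_3)

# Only mod_1_1 and the two deltas need to be stored on disk.
rebuilt_1_3 = apply_delta(apply_delta(mod_1_1, delta_12), delta_23)
print(rebuilt_1_3 == mod_1_3)      # True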
