V-Model: Unit#5


Unit#5

V-Model
The V-model is an SDLC model in which the process executes sequentially in a V shape. It is also known as the Verification and Validation model. It is based on associating a testing phase with each corresponding development stage: every development activity has a directly associated testing activity, and the next phase starts only after completion of the previous one.

Verification: It involves static analysis techniques (reviews) done without executing code. It is the process of evaluating each product development phase to check whether the specified requirements are met.

Validation: It involves dynamic analysis techniques (functional and non-functional), i.e. testing done by executing code. Validation is the process of evaluating the software after completion of the development phase to determine whether it meets the customer's expectations and requirements.
So the V-Model contains Verification phases on one side and Validation phases on the other. The two sides are joined by the coding phase at the bottom of the V, which is why it is called the V-Model.
Design Phase:
 Requirement Analysis: This phase involves detailed communication with the customer to understand their requirements and expectations. This stage is also known as Requirement Gathering.
 System Design: This phase covers the system design and the complete hardware and communication setup for developing the product.
 Architectural Design: The system design is broken down further into modules that take up different functionalities. The data transfer and communication between the internal modules and with the outside world (other systems) are clearly defined.
 Module Design: In this phase the system is broken down into small modules. The detailed design of the modules is specified; this is also known as Low-Level Design (LLD).
Testing Phases:
 Unit Testing: Unit Test Plans are developed during the module design phase. These plans are executed to eliminate bugs at the code or unit level.
 Integration Testing: After unit testing is complete, integration testing is performed. In integration testing, the modules are integrated and the system is tested. Integration testing corresponds to the architectural design phase and verifies the communication of the modules among themselves.
 System Testing: System testing tests the complete application with its functionality, interdependencies, and communication. It tests the functional and non-functional requirements of the developed application.
 User Acceptance Testing (UAT): UAT is performed in a user environment that resembles the production environment. It verifies that the delivered system meets the user's requirements and is ready for use in the real world.
ETVX
ETVX stands for Entry-Task-Verification-eXit. IBM introduced the ETVX model in the 1980s. In this model any process is broken down into multiple tasks that are performed linearly. You enter a task only if its entry criteria are met and exit the task when verification is complete and the exit criteria are met.

 Entry Criteria = Working software
 Task = Write the code
 Verification = Run the automated test
 Exit Criteria = Passing the automated test
'E' stands for the entry criteria which must be satisfied before a set of tasks can be performed.
          Ex: Approved SRS/FRS documents are required for the design of the application / for testing.
'T' is the set of tasks / procedures to be performed.
          Ex: Tasks include designing test cases, reviews, etc.
'V' stands for the verification & validation process to ensure that the right tasks are performed.
          Ex: Validate the application as per the requirements and review the testing.
'X' stands for the exit criteria or the outputs of the tasks.
          Ex: The documents prepared, like the test log, defect reports, test summary reports, etc.
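As an illustration (not part of the original ETVX description), the gate logic can be sketched in Python; the criteria and task functions below are hypothetical placeholders.

```python
# Minimal ETVX sketch: a task runs only when its entry criteria hold,
# and completes only when verification passes and the exit criteria hold.
# All criteria/task functions here are hypothetical placeholders.

def run_etvx_step(entry_ok, task, verify, exit_ok):
    if not entry_ok():
        raise RuntimeError("Entry criteria not met - cannot start the task")
    task()                      # T: perform the work (e.g. write the code)
    if not verify():            # V: run the checks (e.g. automated tests)
        raise RuntimeError("Verification failed - rework the task")
    if not exit_ok():           # X: confirm the required outputs exist
        raise RuntimeError("Exit criteria not met - step is not complete")
    return "step complete"

# Example usage with dummy criteria
if __name__ == "__main__":
    print(run_etvx_step(
        entry_ok=lambda: True,          # e.g. approved SRS available
        task=lambda: None,              # e.g. design the test cases
        verify=lambda: True,            # e.g. review passed
        exit_ok=lambda: True,           # e.g. test log produced
    ))
```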

Baseline
Producing software from a specification is like walking on water - it's
easier if it's frozen.       Barry Boehm
A baseline is a reference point in the software development life cycle marked by the completion and formal approval of a set of predefined work products. The objective of a baseline is to reduce a project's vulnerability to uncontrolled change by fixing, and placing under formal change control, various key deliverables (configuration items) at critical points in the development life cycle. Baselines are also used to identify the aggregate of software and hardware components that make up a specific release of a system.
Baseline purpose. The purpose of a baseline is to provide:

 Measurable progress points within the system development lifecycle (if it's baselined it's finished!)
 A basis for change control in subsequent project phases
 A stable reference for future work
 Intermediate and final points for assessing the fitness for purpose of project work products.

Effective baselines have the following characteristics:
 A baseline must be associated with the production and formal approval of a physical deliverable such as a document or hardware/software component
 All items associated with a baseline must be placed under formal change control.

Examples of baselines. Typical baselines include:
 The statement of system requirements (functional baseline)
 High level design (allocated baseline)
 Detailed design (design baseline)
 The software product at the completion of system test (product baseline)
 The software product in its operational environment (operational baseline).

[Figure: Baseline progression and typical baseline components]

Process Assessment and Improvement

What is SPI?
Software Process Improvement (SPI) methodology is defined as a sequence of tasks, tools, and techniques to plan and implement improvement activities to achieve specific goals such as increasing development speed, achieving higher product quality, or reducing costs.
SPI can be considered a process re-engineering or change management project that detects inefficiencies in the software development lifecycle and resolves them to produce a better process. This process should be mapped and aligned with organizational goals and change drivers to have real value to the organization.
SPI mainly consists of four cyclic steps; these steps can be broken down further according to the method and techniques used, but in most cases the process will contain them.
Why are Companies Seeking SPI - The Motivators?
There are many motivators from different perspectives: management, sales, employee, and others. The most common motivators for SPI are:

Standardization and Process Consistency
To have a standard and practical process for software development, mapped to the organization's goals and strategy.
Cost Reduction
To reduce project costs by enhancing the process and eliminating issues, redundancies, and deficiencies.

Competitive Edge
Being certified in CMMI, for example, can give the company a competitive edge and help it gain more sales, because the certification is evidence of a mature software process based on a standard method.

Meeting Targets and Reducing Time to Market
Meeting organizational goals, project delivery, quality standards, valuable products, and professional documentation are outputs of SPI.

Improve Customer Satisfaction
Delivering projects on time, to specification, and with high quality improves customer satisfaction and the sales process.

Job Satisfaction, Responsibilities, and Resource Management
Employees get job satisfaction from producing a good-quality product and knowing what to do, without the extra workload and time spent resolving conflicts or eliminating issues caused by an immature process.

Automation and Autonomy
Introducing tools to automate work, improve quality, and ensure consistency. Moreover, enabling different employees to play different roles in the project.

Proven outcome

What are the Demotivators for SPI?
Like the motivators, demotivators can be viewed from different perspectives. The most common demotivators for SPI, which are strongly correlated, are:

Time Pressure
Because companies are under pressure to deliver projects on time, they find it hard to dedicate time to an SPI project, although this pressure is arguably itself a driver for SPI. SPI takes a long time and is a costly process, but it is necessary if the issues discussed earlier exist.

Budget Constraints
As just mentioned, SPI is a costly process: it needs time and dedicated resources, and not just any resources but people skilled in SPI. You may also need an SPI consultant and to train and orient the staff on the SPI initiative.

Inadequate Metrics
Most small companies do not have metrics to measure and compare their progress, which sometimes makes it impossible to identify and measure the improvements from SPI.

Lack of Management Commitment
This arises mainly when management does not understand the benefit of SPI and does not fully support the change, as well as from other factors such as lack of resources, budget, time, etc.

Staff Turnover
Sometimes a company has high staff turnover, which makes it difficult to embed the SPI culture change and can lead to an endless SPI effort.

Micro Organization
Some organizations are very small and have very few resources; a full SPI initiative will be too big for that kind of company.

Bad Experience and Lack of Evidence for Direct Benefits

ISO (International Organization for Standardization) established a standard known as ISO 9001:2000. This standard follows a plan-do-check-act (PDCA) cycle, which includes the set of activities listed below.
1. Plan: Determines the processes and resources
which are required to develop a quality product
according to the user's satisfaction.
2. Do: Performs activities according to the plan to
create the desired product.
3. Check: Measures whether the activities for
establishing quality management according to the
requirements are accomplished. In addition, it
monitors the processes and takes corrective actions
to improve them.
4. Act: Initiates activities which constantly
improve processes in the organization.
             
Six Sigma
Six Sigma is the process of improving the quality of the output by identifying and eliminating the causes of defects and reducing variability in manufacturing and business processes.

Six Sigma is a highly disciplined process that helps us focus on developing and delivering near-perfect products and services.
The maturity of a manufacturing process can be described by a sigma rating indicating the percentage of defect-free products it creates.

A six sigma process is one in which 99.99966% of all the opportunities to produce some feature of a component are statistically expected to be free of defects (3.4 defective features per million opportunities).
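To make the arithmetic concrete, here is a small Python sketch (illustrative only) that converts a defect count into defects per million opportunities (DPMO) and the corresponding yield; the sample numbers are invented.

```python
# Illustrative DPMO / yield calculation (sample numbers are invented).

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def yield_percent(dpmo_value):
    """Percentage of opportunities that are defect-free."""
    return 100.0 * (1 - dpmo_value / 1_000_000)

# Hypothetical example: 17 defects found in 1,000 units,
# each unit having 5 opportunities for a defect.
d = dpmo(defects=17, units=1_000, opportunities_per_unit=5)
print(f"DPMO  = {d:.1f}")                   # 3400.0
print(f"Yield = {yield_percent(d):.4f}%")   # 99.6600%

# For reference, the six sigma target is 3.4 DPMO, i.e. a 99.99966% yield.
```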
History of Six Sigma
Six Sigma is a set of methods and tools for process improvement. It was introduced by engineer Bill Smith while working at Motorola in 1986. In the 1980s, Motorola was producing Quasar televisions, which were well known, but at the time there were many defects related to picture quality and sound variations.

Using the same raw materials, machinery, and workforce, a Japanese firm took over Quasar television production, and within a few months they were producing Quasar TV sets with far fewer defects. This was achieved by improving management techniques.

Six Sigma was adopted by Bob Galvin, the CEO of Motorola, in 1986 and was registered as a Motorola trademark on December 28, 1993; the company then became a quality leader.

Characteristics of Six Sigma

The characteristics of Six Sigma are as follows:

1. Statistical Quality Control: Six Sigma is derived from the Greek letter σ (sigma), which is used to denote standard deviation in statistics. Standard deviation is used to measure variance, which is an essential tool for measuring non-conformance as far as the quality of output is concerned.
2. Methodical Approach: Six Sigma is not merely a quality improvement strategy in theory; it features a well-defined, systematic approach of application in DMAIC and DMADV, which can be used to improve the quality of production. DMAIC is an acronym for Define-Measure-Analyze-Improve-Control. The alternative method DMADV stands for Define-Measure-Analyze-Design-Verify.
3. Fact and Data-Based Approach: The statistical and methodical aspects of Six Sigma show the scientific basis of the technique. This accentuates an essential element of Six Sigma: it is fact- and data-based.
4. Project and Objective-Based Focus: The Six Sigma process is implemented for an organization's projects, tailored to their specifications and requirements. The process is flexed to suit the requirements and conditions in which the projects operate, to get the best results.
5. Customer Focus: The customer focus is
fundamental to the Six Sigma approach. The quality
improvement and control standards are based on
specific customer requirements.
6. Teamwork Approach to Quality Management: The Six Sigma process requires organizations to get organized when it comes to controlling and improving quality. Six Sigma involves a lot of training depending on the role of an individual in the quality management team.

Six Sigma Methodologies

Six Sigma projects follow two project methodologies:

1. DMAIC
2. DMADV
DMAIC
It specifies a data-driven quality strategy for improving
processes. This methodology is used to enhance an
existing business process.

The DMAIC project methodology has five phases:

1. Define: It covers the process mapping and flow-charting, project charter development, problem-solving tools, and so-called 7-M tools.
2. Measure: It includes the principles of measurement,
continuous and discrete data, and scales of
measurement, an overview of the principle of
variations and repeatability and reproducibility (RR)
studies for continuous and discrete data.
3. Analyze: It covers establishing a process baseline,
how to determine process improvement goals,
knowledge discovery, including descriptive and
exploratory data analysis and data mining tools, the
basic principle of Statistical Process Control (SPC),
specialized control charts, process capability
analysis, correlation and regression analysis, analysis
of categorical data, and non-parametric statistical
methods.
4. Improve: It covers project management, risk
assessment, process simulation, and design of
experiments (DOE), robust design concepts, and
process optimization.
5. Control: It covers process control planning, and using SPC for operational control and pre-control.
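As a small illustration of the process capability analysis mentioned under the Analyze phase, the sketch below computes the Cp and Cpk indices from sample data; the specification limits and measurements are invented for the example.

```python
# Illustrative process capability calculation (Cp / Cpk).
# The specification limits and measurements below are invented.
import statistics

def process_capability(samples, lsl, usl):
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)                 # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)                    # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)   # actual capability (accounts for centering)
    return cp, cpk

measurements = [10.1, 9.9, 10.0, 10.2, 9.8, 10.05, 9.95, 10.1]
cp, cpk = process_capability(measurements, lsl=9.5, usl=10.5)
print(f"Cp  = {cp:.2f}")
print(f"Cpk = {cpk:.2f}")
# A capability of roughly 2.0 (with the mean on target) corresponds to a
# "six sigma" process before the conventional 1.5-sigma shift is applied.
```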

DMADV
It specifies a data-driven quality strategy for designing products and processes. This method is used to create new product or process designs in such a way that they result in more predictable, mature, and defect-free performance.

The DMADV project methodology has five phases:

1. Define: It defines the problem or project goal that needs to be addressed.
2. Measure: It measures and determines the customer's needs and specifications.
3. Analyze: It analyzes the process to meet customer needs.
4. Design: It designs a process that will meet customer needs.
5. Verify: It verifies the design performance and its ability to meet customer needs.

Features of Six Sigma
 Six Sigma's aim is to eliminate waste and inefficiency,
thereby increasing customer satisfaction by delivering what
the customer is expecting.
 Six Sigma follows a structured methodology, and has
defined roles for the participants.
 Six Sigma is a data driven methodology, and requires
accurate data collection for the processes being analyzed.
 Six Sigma is about putting results on Financial Statements.
 Six Sigma is a business-driven, multi-dimensional structured
approach for −
o Improving Processes
o Lowering Defects
o Reducing process variability
o Reducing costs
o Increasing customer satisfaction
o Increased profits
The word Sigma is a statistical term that measures how far a given process deviates from perfection.
The central idea behind Six Sigma: if you can measure how many "defects" you have in a process, you can systematically figure out how to eliminate them and get as close to "zero defects" as possible. Specifically, it means a failure rate of 3.4 parts per million, or 99.99966% perfect.

Key Concepts of Six Sigma

At its core, Six Sigma revolves around a few key concepts.
 Critical to Quality − Attributes most important to the
customer.
 Defect − Failing to deliver what the customer wants.
 Process Capability − What your process can deliver.
 Variation − What the customer sees and feels.
 Stable Operations − Ensuring consistent, predictable
processes to improve what the customer sees and feels.
 Design for Six Sigma − Designing to meet customer needs
and process capability.
Our Customers Feel the Variance, Not the Mean.
So Six Sigma focuses first on reducing process
variation and then on improving the process
capability.

Myths about Six Sigma

There are several myths and misunderstandings surrounding Six Sigma. A few of them are given below −
 Six Sigma is only concerned with reducing defects.
 Six Sigma is a process for production or engineering.
 Six Sigma cannot be applied to engineering activities.
 Six Sigma uses difficult-to-understand statistics.
 Six Sigma is just training.

Benefits of Six Sigma

Six Sigma offers six major benefits that attract companies −
 Generates sustained success
 Sets a performance goal for everyone
 Enhances value to customers
 Accelerates the rate of improvement
 Promotes learning and cross-pollination
 Executes strategic change

Topics remaining:
Process definition techniques
CMMI
Unit 4
What is Software Configuration Management?
Software Configuration Management is defined as a process to systematically
manage, organize, and control the changes in the documents, codes, and
other entities during the Software Development Life Cycle. It is abbreviated as
the SCM process in software engineering. The primary goal is to increase
productivity with minimal mistakes.
SCM is the discipline which:

o Identifies changes
o Monitors and controls changes
o Ensures the proper implementation of changes made to an item
o Audits and reports on the changes made

Why do we need Configuration Management?

The primary reasons for implementing a Software Configuration Management system are:

 Multiple people work on software that is continually being updated
 There may be multiple versions, branches, and authors involved in a software project, and the team may be geographically distributed and working concurrently
 Changes in user requirements, policy, budget, and schedule need to be accommodated
 The software should be able to run on various machines and operating systems
 It helps develop coordination among stakeholders
 The SCM process is also beneficial for controlling the costs involved in making changes to a system
Any change in the software configuration Items will affect the final product.
Therefore, changes to configuration items need to be controlled and
managed.

Tasks in SCM process

 Configuration Identification
 Baselines
 Change Control
 Configuration Status Accounting
 Configuration Audits and Reviews

Configuration Identification:
Configuration identification is a method of determining the scope of the software system; you cannot manage or control something you have not identified. Each item is given a description that contains the CSCI type (Computer Software Configuration Item), a project identifier, and version information.

Activities during this process:

 Identification of configuration items like source code modules, test cases, and requirements specifications
 Identification of each CSCI in the SCM repository, using an object-oriented approach
 The process starts with basic objects which are grouped into aggregate objects; details of what, why, when, and by whom changes are made are recorded
 Every object has its own features that identify it by a name that is distinct from all other objects
 A list of the resources required, such as documents, files, tools, etc.

Example:

Instead of naming a file login.php, it should be named login_v1.2.php, where v1.2 stands for the version number of the file.

Instead of naming a folder "Code", it should be named "Code_D", where D indicates that the code should be backed up daily.
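A minimal sketch of such a naming convention in Python (the helper function and the fields it encodes are illustrative assumptions based on the example above, not a standard):

```python
# Illustrative helper that builds configuration-item file names of the form
# <basename>_v<major>.<minor>.<ext>, e.g. login_v1.2.php.
# The convention itself is the hypothetical one described in the example above.

def ci_filename(basename: str, ext: str, major: int, minor: int) -> str:
    return f"{basename}_v{major}.{minor}.{ext}"

print(ci_filename("login", "php", 1, 2))   # login_v1.2.php
print(ci_filename("login", "php", 1, 3))   # next revision: login_v1.3.php
```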

Baseline:
A baseline is a formally accepted version of a software configuration item. It is
designated and fixed at a specific time while conducting the SCM process. It
can only be changed through formal change control procedures.

Activities during this process:

 Facilitate construction of various versions of an application
 Define and determine mechanisms for managing various versions of these work products
 The functional baseline corresponds to the reviewed system requirements
 Widely used baselines include functional, developmental, and product baselines

In simple words, a baseline means ready for release.

Change Control:
Change control is a procedural method which ensures quality and consistency when changes are made to a configuration object. In this step, a change request is submitted to the software configuration manager.

Activities during this process:

 Control ad-hoc changes to build a stable software development environment; changes are committed to the repository
 The request is checked based on its technical merit, possible side effects, and overall impact on other configuration objects
 It manages changes and makes configuration items available during the software lifecycle

Configuration Status Accounting:

Configuration status accounting tracks each release during the SCM process. This stage involves tracking what each version contains and the changes that lead to that version.

Activities during this process:

 Keeps a record of all the changes made to the previous baseline to reach a new baseline
 Identifies all items that define the software configuration
 Monitors the status of change requests
 Provides a complete listing of all changes since the last baseline
 Allows tracking of progress to the next baseline
 Allows previous releases/versions to be extracted for testing

Configuration Audits and Reviews:

Software configuration audits verify that the software product satisfies the baseline requirements. They ensure that what is built is what is delivered.

Activities during this process:

 Configuration auditing is conducted by auditors by checking that defined processes are being followed and ensuring that the SCM goals are satisfied
 Verifying compliance with configuration control standards, and auditing and reporting the changes made
 SCM audits also ensure that traceability is maintained during the process
 Ensures that changes made to a baseline comply with the configuration status reports
 Validation of completeness and consistency

Participants in the SCM process:

The following are the key participants in SCM:

1. Configuration Manager

 The Configuration Manager is the head responsible for identifying configuration items
 The CM ensures the team follows the SCM process
 He/she needs to approve or reject change requests

2. Developer

 The developer needs to change the code as per standard development activities or change requests, and is responsible for maintaining the configuration of the code
 The developer should check the changes and resolve conflicts

3. Auditor
 The auditor is responsible for SCM audits and reviews.
 Need to ensure the consistency and completeness of release.

4. Project Manager:

 Ensures that the product is developed within a certain time frame
 Monitors the progress of development and recognizes issues in the SCM process
 Generates reports about the status of the software system
 Makes sure that processes and policies are followed for creating, changing, and testing

5. User

The end user should understand the key SCM terms to ensure he has the
latest version of the software

Software Configuration Management Plan

SCM planning begins in the early phases of a project. The outcome of the planning phase is the SCM plan (SCMP), which might be extended or revised during the project.

 The SCMP can follow a public standard like IEEE 828 or an organization-specific standard
 It defines the types of documents to be managed and a document naming scheme, for example Test_v1
 The SCMP defines the person who will be responsible for the entire SCM process and the creation of baselines
 It fixes policies for version management and change control
 It defines the tools which can be used during the SCM process
 It specifies the configuration management database for recording configuration information

Software Configuration Management Tools

Any change management software should have the following three key features:

Concurrency Management:

When two or more tasks happen at the same time, they are known as concurrent operations. Concurrency in the context of SCM means that the same file may be edited by multiple people at the same time.

If concurrency is not managed correctly with SCM tools, it may create many pressing issues.
Version Control:

SCM uses an archiving method to save every change made to a file. With the help of this archiving or save feature, it is possible to roll back to a previous version in case of issues.

Synchronization:

Users can check out more than one file or an entire copy of the repository. The user then works on the needed files and checks the changes back into the repository. They can synchronize their local copy to stay updated with the changes made by other team members.
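The sketch below (an illustration only, not how any particular SCM tool is implemented) shows one simple way concurrency can be detected: remember the revision a file had when it was checked out and refuse a check-in if someone else has committed a newer revision in the meantime.

```python
# Toy model of optimistic concurrency control in a version-control system.
# Not the behaviour of any specific tool - purely illustrative.

class Repository:
    def __init__(self):
        self.revision = 0          # latest committed revision
        self.content = ""

    def checkout(self):
        """Return the current content and the revision it belongs to."""
        return self.content, self.revision

    def checkin(self, new_content, base_revision):
        """Accept the change only if it was based on the latest revision."""
        if base_revision != self.revision:
            raise RuntimeError("Conflict: repository changed since checkout; "
                               "synchronize and merge before checking in")
        self.revision += 1
        self.content = new_content
        return self.revision

repo = Repository()
content_a, rev_a = repo.checkout()   # user A checks out
content_b, rev_b = repo.checkout()   # user B checks out the same revision
repo.checkin(content_a + "A's edit\n", rev_a)      # A commits first -> revision 1
try:
    repo.checkin(content_b + "B's edit\n", rev_b)  # B's base is stale -> conflict
except RuntimeError as err:
    print(err)
```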

Following are popular tools

1. Git: Git is a free and open source tool which helps version control. It is
designed to handle all types of projects with speed and efficiency.

2. Team Foundation Server: Team Foundation is a group of tools and technologies that enable the team to collaborate and coordinate for building a product.

3. Ansible: It is an open source software configuration management tool. Apart from configuration management it also offers application deployment & task automation.

Other tools include CFEngine, Bcfg2, Vagrant, SmartFrog, ClearCase (CC), SaltStack, ClearQuest, Puppet, SVN (Subversion), Perforce, TortoiseSVN, IBM Rational Team Concert, IBM Configuration Management Version Management, Razor, etc.

The SCM system has the following advantages:

 Reduced redundant work
 Effective management of simultaneous updates
 Avoids configuration-related problems
 Facilitates team coordination
 Helps in build management; manages the tools used in builds
 Defect tracking: ensures that every defect has traceability back to its source

Conclusion:
 Configuration Management helps organizations to systematically
manage, organize, and control the changes in the documents, codes,
and other entities during the Software Development Life Cycle.
 The primary goal of the SCM process is to increase productivity with
minimal mistakes
 The main reason behind configuration management process is that
there are multiple people working on software which is continually
updating. SCM helps establish concurrency, synchronization, and
version control.
 A baseline is a formally accepted version of a software configuration
item
 Change control is a procedural method which ensures quality and
consistency when changes are made in the configuration object.
 Configuration status accounting tracks each release during the SCM
process
 Software Configuration audits verify that all the software product
satisfies the baseline needs
 Project manager, Configuration manager, Developer, Auditor, and user
are participants in SCM process
 The SCM process planning begins at the early phases of a project.
 Git, Team Foundation Server, and Ansible are a few popular SCM tools

Configuration Control
(Aliases: change control, change management)
It is not the strongest of the species that survive, nor the most intelligent, but the
ones most responsive to change.        Charles Darwin

Configuration control is an important function of the configuration management discipline. Its purpose is to ensure that all changes to a complex system are performed with the knowledge and consent of management. The scope creep that results from ineffective or nonexistent configuration control is a frequent cause of project failure.
Configuration control tasks include initiating, preparing, analysing, evaluating and authorising proposals for change to a system (often referred to as "the configuration").
Configuration control has four main processes:
1. Identification and documentation of the need for a change in a change request
2. Analysis and evaluation of a change request and production of a change proposal
3. Approval or disapproval of a change proposal
4. Verification, implementation and release of a change.
The Configuration Control Process
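To illustrate the four-step flow above, here is a minimal sketch of a change request moving through those states; the class and state names are invented for the example, not taken from any standard.

```python
# Toy change-request workflow following the four configuration control steps:
# identify/document -> analyse/propose -> approve or disapprove -> implement/verify.
# State and class names are invented for illustration.

class ChangeRequest:
    def __init__(self, description):
        self.description = description
        self.state = "documented"          # step 1: need for change documented

    def analyse(self, impact_notes):
        self.impact_notes = impact_notes   # step 2: analysis produces a change proposal
        self.state = "proposed"

    def decide(self, approved: bool):
        # step 3: the change control board approves or disapproves the proposal
        self.state = "approved" if approved else "rejected"

    def implement_and_verify(self):
        if self.state != "approved":
            raise RuntimeError("Only approved changes may be implemented")
        self.state = "released"            # step 4: implemented, verified and released

cr = ChangeRequest("Add audit logging to the login module")
cr.analyse("Touches two modules; one extra week of effort")
cr.decide(approved=True)
cr.implement_and_verify()
print(cr.state)   # released
```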

Why Configuration Control is Important

Configuration control is an essential component of a project's risk management strategy. For example, uncontrolled changes to software requirements introduce the risk of cost and schedule overruns.
Scenario - Curse of the Feature Creep
A project misses several key milestones and shows no sign of delivering anything.
WHY?

 The customer regularly talks directly to software developers asking them to make
'little changes' without consulting the project manager.
 The developers are keen to show off the new technology they are using. They
slip in the odd 'neat feature' that they know the customer will love.

Solution: Implement configuration control. Document all requests for change and have
them considered by a Configuration Control Board.
Change Control
Change Control is the process of identifying, documenting, approving or rejecting, and
controlling changes to the project baselines (including scope baselines, schedule
baselines, cost baselines, etc.). In other words, it is used to control changes to all aspects
of an approved project plan. An effective Change Control system ensures that:

 Proposed changes are reviewed and their impact is analyzed, prior to approving
or rejecting them.
 All requests and changes are properly documented to provide a clear audit trail.

Difference between Configuration Control and Change Control

Configuration Control and Change Control are distinct in the following ways:

1. Configuration Control addresses the management of the product (or project's deliverables), whereas Change Control addresses the management of the project.
2. Configuration Control manages changes to the product baseline, whereas Change Control manages changes to the project baseline.
3. Configuration Control is applied throughout the lifecycle of the product (concept -> design -> develop/manufacture -> service -> dispose), whereas Change Control is applied during the lifecycle of the project subsequent to establishing the project baselines.

IDENTIFYING ARTIFACTS TO BE CONFIGURED

Figure 3: Artifacts that are updated continuously and are of special interest are put into configuration management. Sources and tests are put into version control; libraries are put into distribution management.

NAMING CONVENTION AND VERSION CONTROL
A naming convention is a set of rules for choosing the character
sequence to be used for identifiers which
denote variables, types, functions, and other entities in source
code and documentation.
Reasons for using a naming convention (as opposed to
allowing programmers to choose any character sequence) include the
following:

 To reduce the effort needed to read and understand source code; [1]
 To enable code reviews to focus on more important issues than
arguing over syntax and naming standards.
 To enable code quality review tools to focus their reporting mainly on
significant issues other than syntax and style preferences.
A consistent and descriptive file naming convention serves many purposes, often related to
branding, information management, and usability. The overall goal of intentional file naming is
to increase readability in file names.
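As a small, purely illustrative example of enforcing such a rule (the snake_case convention and sample names here are just an assumed house style), a check might look like this:

```python
# Illustrative check that identifiers follow an assumed snake_case convention.
# The convention and the sample names are assumptions for the example.
import re

SNAKE_CASE = re.compile(r"^[a-z][a-z0-9_]*$")

def follows_convention(identifier: str) -> bool:
    return bool(SNAKE_CASE.match(identifier))

for name in ["total_count", "TotalCount", "x2", "_hidden"]:
    print(name, "->", follows_convention(name))
# total_count -> True, TotalCount -> False, x2 -> True, _hidden -> False
```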
COCOMO Model
COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e., the number of lines of code. It is a procedural cost estimation model for software projects, often used to reliably predict the various parameters associated with a project such as size, effort, cost, time, and quality. It was proposed by Barry Boehm in 1981 and is based on a study of 63 projects, which makes it one of the best-documented models.
The key parameters which define the quality of any software products, which are also
an outcome of the Cocomo are primarily Effort & Schedule:
 Effort: Amount of labor that will be required to complete a task. It is measured in
person-months units.
 Schedule: Simply means the amount of time required for the completion of the job, which is, of course, proportional to the effort put in. It is measured in units of time such as weeks or months.
Different models of Cocomo have been proposed to predict the cost estimation at
different levels, based on the amount of accuracy and correctness required. All of these
models can be applied to a variety of projects, whose characteristics determine the
value of constant to be used in subsequent calculations. These characteristics
pertaining to different system types are mentioned below.

The necessary steps in this model are:

1. Get an initial estimate of the development effort from an evaluation of thousands of delivered lines of source code (KDLOC).
2. Determine a set of 15 multiplying factors from various attributes of the project.
3. Calculate the effort estimate by multiplying the initial estimate by all the multiplying factors, i.e., multiply the values in step 1 and step 2.

The initial estimate (also called the nominal estimate) is determined by an equation of the form used in the static single-variable models, using KDLOC as the measure of size. To determine the initial effort Ei in person-months, the equation used is of the type shown below:

 Ei = a*(KDLOC)^b

The values of the constants a and b depend on the project type.

In COCOMO, projects are categorized into three types:

1. Organic
2. Semidetached
3. Embedded

1. Organic: A development project is of the organic type if the project deals with developing a well-understood application program, the size of the development team is reasonably small, and the team members are experienced in developing similar types of projects. Examples of this type of project are simple business systems, simple inventory management systems, and data processing systems.

2. Semidetached: A development project is of the semidetached type if the development team consists of a mixture of experienced and inexperienced staff. Team members may have limited experience with related systems but may be unfamiliar with some aspects of the system being developed. Examples of semidetached systems include developing a new operating system (OS), a database management system (DBMS), or a complex inventory management system.

3. Embedded: A development project is of the embedded type if the software being developed is strongly coupled to complex hardware, or if stringent regulations on the operational method exist. Examples: ATM software, air traffic control.

For the three product categories, Boehm provides a different set of expressions to predict the effort (in units of person-months) and the development time from the size estimate in KLOC (kilo lines of code). The effort estimation takes into account the productivity loss due to holidays, weekly days off, coffee breaks, etc.

According to Boehm, software cost estimation should be done through three stages:

1. Basic Model
2. Intermediate Model
3. Detailed Model

1. Basic COCOMO Model: The basic COCOMO model gives an approximate estimate of the project parameters. The following expressions give the basic COCOMO estimation model:

                Effort = a1*(KLOC)^a2 PM
                Tdev = b1*(Effort)^b2 months

Where

KLOC is the estimated size of the software product expressed in kilo lines of code,

a1, a2, b1, b2 are constants for each category of software product,

Tdev is the estimated time to develop the software, expressed in months,

Effort is the total effort required to develop the software product, expressed in person-months (PM).

Estimation of development effort

For the three classes of software products, the formulas for estimating the effort based on
the code size are shown below:

Organic: Effort = 2.4(KLOC)^1.05 PM

Semi-detached: Effort = 3.0(KLOC)^1.12 PM

Embedded: Effort = 3.6(KLOC)^1.20 PM

Estimation of development time

For the three classes of software products, the formulas for estimating the development
time based on the effort are given below:

Organic: Tdev = 2.5(Effort)^0.38 months

Semi-detached: Tdev = 2.5(Effort)^0.35 months

Embedded: Tdev = 2.5(Effort)^0.32 months

Some insight into the basic COCOMO model can be obtained by plotting the estimated characteristics for different software sizes. A plot of estimated effort versus product size shows that the effort is somewhat superlinear in the size of the software product; thus, the effort required to develop a product increases very rapidly with project size.

When the development time is plotted against the product size in KLOC, it can be observed that the development time is a sublinear function of the size of the product, i.e., when the size of the product doubles, the time to develop the product does not double but rises moderately. This can be explained by the fact that for larger products, a larger number of activities that can be carried out concurrently can be identified; these parallel activities can be carried out simultaneously by the engineers, which reduces the time to complete the project. Further, it can be observed that the development time is roughly the same for all three categories of products. For example, a 60 KLOC program can be developed in approximately 18 months, regardless of whether it is of organic, semidetached, or embedded type.

From the effort estimation, the project cost can be obtained by multiplying the required
effort by the manpower cost per month. But, implicit in this project cost computation is the
assumption that the entire project cost is incurred on account of the manpower cost alone.
In addition to manpower cost, a project would incur costs due to hardware and software
required for the project and the company overheads for administration, office space, etc.
It is important to note that the effort and the duration estimations obtained using the
COCOMO model are called a nominal effort estimate and nominal duration estimate. The
term nominal implies that if anyone tries to complete the project in a time shorter than the
estimated duration, then the cost will increase drastically. But, if anyone completes the
project over a longer period of time than the estimated, then there is almost no decrease in
the estimated cost value.

Example1: Suppose a project was estimated to be 400 KLOC. Calculate the effort and
development time for each of the three model i.e., organic, semi-detached & embedded.

Solution: The basic COCOMO equations take the form:

                Effort = a1*(KLOC)^a2 PM
                Tdev = b1*(Effort)^b2 months
                Estimated size of project = 400 KLOC

(i) Organic Mode

                E = 2.4 * (400)^1.05 = 1295.31 PM
                D = 2.5 * (1295.31)^0.38 = 38.07 months

(ii) Semidetached Mode

                E = 3.0 * (400)^1.12 = 2462.79 PM
                D = 2.5 * (2462.79)^0.35 = 38.45 months

(iii) Embedded Mode

                E = 3.6 * (400)^1.20 = 4772.81 PM
                D = 2.5 * (4772.81)^0.32 = 38 months

Example2: A project size of 200 KLOC is to be developed. Software development team has
average experience on similar type of projects. The project schedule is not very tight.
Calculate the Effort, development time, average staff size, and productivity of the project.

Solution: The semidetached mode is the most appropriate, keeping in view the size, schedule, and experience of the development team.

Hence       E = 3.0*(200)^1.12 = 1133.12 PM
                D = 2.5*(1133.12)^0.35 = 29.3 months

            Average staff size = E/D = 1133.12/29.3 ≈ 38.7 persons
            Productivity P = 200,000 LOC / 1133.12 PM ≈ 176 LOC/PM
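For reference, here is a small Python sketch of the basic COCOMO formulas above, using the coefficients given in the text; running it reproduces the numbers of Example 1 to within rounding.

```python
# Basic COCOMO: effort (person-months) and development time (months)
# using the coefficients given above for the three project modes.

COEFFS = {
    # mode: (a1, a2, b1, b2)
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a1, a2, b1, b2 = COEFFS[mode]
    effort = a1 * kloc ** a2          # person-months
    tdev = b1 * effort ** b2          # months
    return effort, tdev

# Example 1 from the text: a 400 KLOC project
for mode in COEFFS:
    effort, tdev = basic_cocomo(400, mode)
    print(f"{mode:12s} effort = {effort:8.2f} PM, tdev = {tdev:5.2f} months")
```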

2. Intermediate Model: The basic COCOMO model assumes that the effort is only a function of the number of lines of code and some constants chosen according to the software system category. The intermediate COCOMO model recognizes additional factors and refines the initial estimate obtained through the basic COCOMO model by using a set of 15 cost drivers based on various attributes of software engineering.
Classification of cost drivers and their attributes: the cost drivers are divided into the following four categories.

(i) Product attributes:
o Required software reliability extent
o Size of the application database
o The complexity of the product

(ii) Hardware attributes:
o Run-time performance constraints
o Memory constraints
o The volatility of the virtual machine environment
o Required turnaround time

(iii) Personnel attributes:
o Analyst capability
o Software engineering capability
o Applications experience
o Virtual machine experience
o Programming language experience

(iv) Project attributes:
o Use of software tools
o Application of software engineering methods
o Required development schedule


Intermediate COCOMO equations:

                E = ai*(KLOC)^bi * EAF
                D = ci*(E)^di

Coefficients for intermediate COCOMO:

Project        ai    bi
Organic        2.4   1.05
Semidetached   3.0   1.12
Embedded       3.6   1.20
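Building on the basic-model sketch above, the intermediate model simply multiplies the nominal effort by the effort adjustment factor (EAF), the product of the selected cost driver ratings; the driver names and ratings used below are hypothetical values chosen for illustration.

```python
# Intermediate COCOMO sketch: nominal effort scaled by the effort adjustment
# factor (EAF), the product of the cost driver multipliers.
# The driver ratings below are hypothetical values chosen for illustration.
from math import prod

def intermediate_effort(kloc, ai, bi, cost_drivers):
    eaf = prod(cost_drivers.values())        # effort adjustment factor
    return ai * kloc ** bi * eaf             # person-months

drivers = {
    "required_reliability": 1.15,   # higher than nominal
    "product_complexity":   1.08,
    "analyst_capability":   0.86,   # strong analysts reduce effort
}
print(f"{intermediate_effort(30, 2.4, 1.05, drivers):.1f} PM")
```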


3. Detailed COCOMO Model: Detailed COCOMO incorporates all the qualities of the standard version with an assessment of the cost drivers' effect on each phase of the software engineering process. The detailed model uses different effort multipliers for each cost driver attribute. In detailed COCOMO, the whole software is divided into multiple modules, and COCOMO is applied to the individual modules to estimate effort, which is then summed.

The six phases of detailed COCOMO are:

1. Planning and requirements
2. System structure
3. Complete structure
4. Module code and test
5. Integration and test
6. Cost constructive model

The effort is determined as a function of the program size estimate, and a set of cost drivers is given for every phase of the software lifecycle.

COCOMO II Model

COCOMO II is the revised version of the original COCOMO (Constructive Cost Model) and was developed at the University of Southern California. It is a model that allows one to estimate the cost, effort, and schedule when planning a new software development activity.
It consists of three sub-models:

1. End User Programming:
Application generators are used in this sub-model. End users write the code by using these application generators.
Example – spreadsheets, report generators, etc.
2. Intermediate Sector:
 (a). Application Generators and Composition Aids –
This category will create largely prepackaged capabilities for user programming.
Their product will have many reusable components. Typical firms operating in this
sector are Microsoft, Lotus,
Oracle, IBM, Borland, Novell.
 (b). Application Composition Sector –
This category is too diversified to be handled by prepackaged solutions. It includes GUIs, databases, and domain-specific components such as financial, medical, or industrial process control packages.
 (c). System Integration –
This category deals with large scale and highly embedded systems.
3. Infrastructure Sector:
This category provides infrastructure for the software development like Operating
System, Database Management System, User Interface Management System,
Networking System, etc.
Stages of COCOMO II:

1. Stage-I:
It supports estimation of prototyping. For this it uses Application Composition
Estimation Model. This model is used for the prototyping stage of application
generator and system integration.
2. Stage-II:
It supports estimation in the early design stage of the project, when less is known about it. For this it uses the Early Design Estimation Model. This model is used in the early design stage of application generators, infrastructure, and system integration.
3. Stage-III:
It supports estimation in the post architecture stage of a project. For this it
uses Post Architecture Estimation Model. This model is used after the completion
of the detailed architecture of application generator, infrastructure, system
integration.
Difference between COCOMO 1 and COCOMO 2
COCOMO 1 Model:
The Constructive Cost Model was first developed by Barry W. Boehm. The model is for
estimating effort, cost, and schedule for software projects. It is also called as Basic
COCOMO. This model is used to give an approximate estimate of the various
parameters of the project. Example of projects based on this model is business system,
payroll management system and inventory management systems.
COCOMO 2 Model:
The COCOMO-II is the revised version of the original Cocomo (Constructive Cost
Model) and is developed at the University of Southern California. This model calculates
the development time and effort taken as the total of the estimates of all the individual
subsystems. In this model, whole software is divided into different modules. Example of
projects based on this model is Spreadsheets and report generator.

Difference between COCOMO 1 and COCOMO 2:

1. COCOMO I is useful in the waterfall model of the software development cycle, whereas COCOMO II is useful in non-sequential, rapid development and reuse models of software.
2. COCOMO I provides estimates of effort and schedule, whereas COCOMO II provides estimates that represent one standard deviation around the most likely estimate.
3. COCOMO I is based upon the linear reuse formula, whereas COCOMO II is based upon the non-linear reuse formula.
4. COCOMO I is based upon the assumption of reasonably stable requirements, whereas COCOMO II is also based upon a reuse model which looks at the effort needed to understand and estimate.
5. In COCOMO I the effort equation's exponent is determined by 3 development modes, whereas in COCOMO II it is determined by 5 scale factors.
6. In COCOMO I development begins with the requirements assigned to the software, whereas COCOMO II follows a spiral type of development.
7. COCOMO I has 3 submodels and 15 cost drivers, whereas COCOMO II has 4 submodels and 17 cost drivers.
8. In COCOMO I the size of software is stated in terms of lines of code, whereas in COCOMO II it is stated in terms of object points, function points, and lines of code.

Functional Requirement
A functional requirement document defines the functionality of a system or one of its
subsystems. It also depends upon the type of software, expected users and the type of
system where the software is used.
Functional user requirements may be high-level statements of what the system should do, but functional system requirements should also describe the system services clearly and in detail.

In software engineering, a functional requirement defines a system or its component. It describes the functions the software must perform. A function is nothing but inputs, behavior, and outputs. It can be a calculation, data manipulation, business process, user interaction, or any other specific functionality which defines what function a system is likely to perform.

Functional software requirements help you to capture the intended behavior of the system. This behavior may be expressed as functions, services, or tasks which the system is required to perform.

Functional Requirement Specifications:

The following are the key fields which should be part of the functional requirements specification document:
 Purpose of the Document
 Scope
 Business Processes
 Functional Requirements
 Data and Integration
 Security Requirements
 Performance
 Data Migration & Conversion

What is a Non-Functional Requirement?

A non-functional requirement defines a quality attribute of a software system. Non-functional requirements represent a set of standards used to judge the specific operation of a system. For example: how fast does the website load?

A non-functional requirement is essential to ensure the usability and effectiveness of the entire software system. Failing to meet non-functional requirements can result in systems that fail to satisfy user needs.

Non-functional requirements allow you to impose constraints or restrictions on the design of the system across the various agile backlogs. For example: the site should load within 3 seconds when the number of simultaneous users is greater than 10,000. The description of non-functional requirements is just as critical as that of functional requirements.

TRACEABILITY

Quality Attributes Workshop


Bottom-Up Estimating
Lance is a construction project manager who needs to change his approach to estimating. He
usually does not spend much time estimating, confident in using his experience with similar
projects as a reference point. However, the last few projects he has worked on have exceeded
the estimates provided, so he has decided it is worth the effort to be more accurate. He decides
to try bottom-up estimating.

Bottom-up estimating is a way to approximate an overall value by approximating values for smaller components and using the sum total of these values as the overall value. In project management, this type of estimating is used to create a schedule or budget. Typically, the project work is broken down, or decomposed, into smaller components and an estimate of duration and cost is assigned to each component. The schedule is determined by aggregating the individual duration estimates, while the budget is determined by aggregating the individual cost estimates.
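A tiny illustration of that aggregation in Python (the work-package names and numbers are invented):

```python
# Bottom-up estimating: the project totals are the sums of the component estimates.
# Work packages, durations (days) and costs below are invented for illustration.

work_packages = {
    "site preparation": {"duration_days": 10, "cost": 15_000},
    "foundation":       {"duration_days": 15, "cost": 40_000},
    "framing":          {"duration_days": 20, "cost": 55_000},
}

total_duration = sum(wp["duration_days"] for wp in work_packages.values())
total_cost = sum(wp["cost"] for wp in work_packages.values())
print(f"Schedule estimate: {total_duration} days")   # 45 days
print(f"Budget estimate:   ${total_cost:,}")         # $110,000
```

Summing durations like this assumes the components run one after another; with real dependencies, the schedule is aggregated along the network of activities rather than by a straight sum.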

In planning to use this new approach, Lance wants to get support from his team, as well as
project stakeholders. He starts with the disadvantages and advantages of using this approach.
From there, he plans to provide them with a specific example using the approach.

Bottom-Up (Dis)Advantages
Lance's decision to adopt bottom-up estimating came after examining the tradeoffs and
determining that the advantages outweigh the disadvantages. The advantage of bottom-up
estimating is that it leads to greater accuracy. This is exactly what Lance needs. The accuracy
results because this approach takes into consideration each component of the project work.
Accuracy is also achieved because the estimates for each component are given by the
individuals responsible for these components: the ones who know the work well.

The primary disadvantage of bottom-up estimating is the time it takes to complete. While other
forms of estimating can use the high-level requirements used to start the project process as a
basis, bottom-up estimating requires low-level components. In order to take into consideration
each component of the project work, these components must first be identified, through
decomposition. This process is long, and can be even more so when a large amount of work or
complex work is involved.

Another disadvantage of bottom-up estimating is that it can be costly. The time spent
decomposing project work is not free. Additionally, the estimation done for each component is
given by the individuals responsible for completing the components. These team members are
typically not involved in the project during the planning phase. Bringing in individuals, especially
if they are contracted, increases the cost of planning, which increases the cost of the project.

In general, bottom-up estimating is not the best choice for projects that do not allow for long
periods of planning or projects that have contracted resources that typically do not start on the
project much earlier than when the work is going to be completed. Lance can use this approach
because he has a devoted project team who can assist with estimates and because the
stakeholders are more concerned with accuracy than speed.
TOP DOWN ESTIMATION
Cost of a New Project
Lisa is a project manager at Dolphin Boat Company, a business that makes boats of all kinds.
The company wants to add a new model to their speedboat line. However, before they commit
to this project, they need to know if it's a worthwhile endeavor.

Lisa has been asked to provide an estimate on how much the project will likely cost, as this will
help the company determine if the project is feasible or if they should just shelve it.

As usual, Lisa uses top-down estimating.


Top-Down Estimating
Top-down estimating is a technique used by upper-level management to estimate the total
cost of a project by using information from a previous, similar project. In other words, she's
going to estimate the cost of the current project based on the last time they introduced a new
boat model. Top-down estimates may also be based on the experiences of those involved in
developing the cost estimate and expert judgement.

Lisa looks at some of her and the company's previous projects. One previous project also
involved building a speedboat that used the same engine and was similar in size to the new
boat the company wishes to build. Jackpot! The cost of the previous project was $100,000, so
Lisa estimates cost of the current project will also be roughly $100,000.

Top-Down Estimating Advantages
PLANNING POKER
It is interesting how the idea of a game can be woven into technical terminology. Poker is one such term. Planning poker is an estimation technique used in agile methodology. It has its origin in an older estimation methodology known as the Delphi method (an estimation technique for sales and marketing). It is an approach where a group of experts come together to analyse the size of the project goals to be accomplished. Just like in a game of poker, the cards are kept face down and the numbers are not spoken by any member; they are revealed when it is each member's turn. The planning poker technique follows the same ideology: individuals contribute their ideas, which generate a consensus-based conclusion.

Anatomy of the poker technique:

A deck of cards is placed in the centre of the table. A numeric metric is used so that an estimation can be made in quantifiable terms. Many approaches are available to implement the poker strategy: numeric sizing (1 to 10), t-shirt sizes (XS, S, M, L, XL, XXL), or the Fibonacci sequence (1, 2, 3, 5, 8, ...). The person heading the team, often termed the facilitator, chooses the method that best serves the purpose. So a metric is chosen by the facilitator, in the form of a quantity representing, say, the number of days to complete the task. Planning poker then begins somewhat like this.
The facilitator disseminates the information about the software component under consideration to the team members. Each person selects a card from the deck. When the last card is chosen, the facilitator initiates the revelation of all the cards, and one by one every member discloses their card. This way a decision is made based on the metric chosen, and an estimate is made for the goal to be achieved. The t-shirt method is appropriate when the parameters have to be set in terms of hours of work and so on. The estimation process is repeated until a final conclusion is reached.

Types of Cards :

 The ZERO card : This card is marked with the symbol 0. It means that very little work is involved in the item.

 The QUESTION MARK card : This card is shown when a member has no idea what to estimate. In such a scenario, the work is discussed further and re-voting is done.

 The INFINITY card : This signifies that the story under consideration is so big that it does not seem feasible to implement; hence further clarification is needed on how things should proceed in the right direction.
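To make the deck and the reveal step concrete, here is a small Python sketch. It is a toy illustration only: the Fibonacci values, the handling of the special cards, and the rule used for declaring consensus are all assumptions made for the example.

```python
import random

# A planning poker deck: Fibonacci-style story points plus the special cards.
DECK = [0, 1, 2, 3, 5, 8, 13, 21, "?", "INF"]

def play_round(team, story):
    """Each member privately picks a card; all cards are revealed together."""
    picks = {member: random.choice(DECK) for member in team}  # stand-in for real judgement
    print(f"Story: {story}")
    for member, card in picks.items():
        print(f"  {member} shows {card}")
    numeric = [c for c in picks.values() if isinstance(c, int)]
    if "?" in picks.values() or "INF" in picks.values() or not numeric:
        return None           # needs more discussion or the story must be split
    if max(numeric) - min(numeric) <= 2:
        return max(numeric)   # spread is small: take the higher value as consensus
    return None               # spread too wide: discuss and re-vote

estimate = play_round(["Asha", "Ben", "Chitra"], "Login screen validation")
print("Consensus estimate:", estimate)
```

A result of None simply means another round of discussion and voting is needed, mirroring the re-voting described above.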

Benefits :

 As the activity is carried out by experts, the estimates are usually quite accurate.

 A controlled working environment is established, measured in quantifiable terms.

 It offers a result-oriented approach.

Challenges faced during Poker Planning Technique :

 Productive discussions : Time is precious, hence the time allocated to planning should be decided before setting up the discussion.

 Divide the whole session into smaller sub-sessions : The planning poker technique may be adopted in iterations. When a new project begins, there have to be frequent discussions regarding the execution of the project, hence the overall plan should be divided into sub-parts.

 Deciding the time to be allocated for every iteration : Further, the available time has to be divided across each subset of the discussion.

Idea behind Poker Planning :


The ideology underlying the poker planning technique in the context of Agile estimation is to bring together the views, ideas, and opinions of people from different areas and combine them. This helps deliver the project within the right time by applying the right approach.

More people + More ideas = Effective estimation plan

Everything that seems to offer an edge over its counterparts has its share of disadvantages as well. Let's have a look at the demerits hidden in the poker planning technique.

Demerits :

 Although a team effort is what it takes to execute planning poker, a few things are often overlooked. The experts share their individual experiences, so the consensus formed tends to have a limited span. Some aspect of the software development process is often compromised, that is, a domain-specific view such as designing, programming or testing. Basically, a person's knowledge is confined to a specific domain, hence the niceties pertaining to each domain do not receive much attention.

 Often a product backlog is prepared by the team to prioritize entries. The product owner needs to assess how difficult it is to realise a task. It may happen that the same backlog is handed over to some other agile team, which may pose a problem, because estimations may vary from one project to another.

 What needs to be estimated has to be clearly defined, otherwise the scope of the estimation may remain undefined somewhere.

Conclusion :
The planning poker technique lays a robust foundation for building an application in planned and quantifiable terms. Interestingly, the technique also acts as a recreational activity: the team members have a candid conversation and thus get a chance to plan the project playfully, which might otherwise be a burdensome task. It is thus among the most effective estimation techniques.

(From Tutorialspoint)
Planning Poker Estimation
Planning Poker is a consensus-based technique for estimating, mostly used to
estimate effort or relative size of user stories in Scrum.
Planning Poker combines three estimation techniques − Wideband Delphi Technique,
Analogous Estimation, and Estimation using WBS.
Planning Poker was first defined and named by James Grenning in 2002 and later
popularized by Mike Cohn in his book "Agile Estimating and Planning", whose
company trademarked the term.

Planning Poker Estimation Technique


In Planning Poker Estimation Technique, estimates for the user stories are derived by
playing planning poker. The entire Scrum team is involved and it results in quick but
reliable estimates.
 Planning Poker is played with a deck of cards. As the Fibonacci sequence is used, the cards
have the numbers 1, 2, 3, 5, 8, 13, 21, 34, etc. These numbers represent the “Story Points”.
Each estimator has a deck of cards. The numbers on the cards should be large enough to
be visible to all the team members, when one of the team members holds up a card.
 One of the team members is selected as the Moderator. The moderator reads the
description of the user story for which estimation is being made. If the estimators have any
questions, product owner answers them.
 Each estimator privately selects a card representing his or her estimate. Cards are not
shown until all the estimators have made a selection. At that time, all cards are
simultaneously turned over and held up so that all team members can see each estimate.
 In the first round, it is very likely that the estimations vary. The high and low estimators
explain the reason for their estimates. Care should be taken that all the discussions are
meant for understanding only and nothing is to be taken personally. The moderator has to
ensure the same.
 The team can discuss the story and their estimates for a few more minutes.
 The moderator can take notes on the discussion that will be helpful when the specific story
is developed. After the discussion, each estimator re-estimates by again selecting a card.
Cards are once again kept private until everyone has estimated, at which point they are
turned over at the same time.
Repeat the process till the estimates converge to a single estimate that can be used for
the story. The number of rounds of estimation may vary from one user story to another.
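The repeat-until-convergence loop in these steps can be illustrated with a short Python sketch. The convergence rule (all cards equal) and the get_estimates placeholder are assumptions made for the example; real teams converge through discussion, not through a function call:

```python
# Sketch of the planning-poker loop: keep re-estimating a story until the
# team's cards converge to a single value.  get_estimates() is a stand-in
# for the real "everyone privately picks a card" step.

def run_planning_poker(story, get_estimates, max_rounds=5):
    for round_no in range(1, max_rounds + 1):
        estimates = get_estimates(story, round_no)     # one card per team member
        print(f"Round {round_no}: {estimates}")
        if len(set(estimates)) == 1:                   # everyone shows the same card
            return estimates[0]
        # Otherwise: the high and low estimators explain their reasoning,
        # the team discusses, and the next round is played.
    return None  # no consensus within the allowed rounds

# Example: estimates narrowing over three rounds (hard-coded for illustration).
rounds = {1: [3, 8, 13], 2: [5, 8, 8], 3: [8, 8, 8]}
story_points = run_planning_poker("Checkout page", lambda s, r: rounds[r])
print("Agreed story points:", story_points)
```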

Benefits of Planning Poker Estimation


Planning poker combines three methods of estimation −
Expert Opinion − In expert opinion-based estimation approach, an expert is asked
how long something will take or how big it will be. The expert provides an estimate
relying on his or her experience or intuition or gut feel. Expert Opinion Estimation
usually doesn’t take much time and is more accurate compared to some of the
analytical methods.
Analogy − Analogy estimation uses comparison of user stories. The user story under
estimation is compared with similar user stories implemented earlier, giving accurate
results as the estimation is based on proven data.
Disaggregation − Disaggregation estimation is done by splitting a user story into
smaller, easier-to-estimate user stories. The user stories to be included in a sprint are
normally in the range of two to five days to develop. Hence, the user stories that would
possibly take longer need to be split into smaller user stories. This approach
also ensures that there would be many stories that are comparable.
Delphi Method is a structured communication technique, originally developed as a
systematic, interactive forecasting method which relies on a panel of experts. The
experts answer questionnaires in two or more rounds. After each round, a facilitator
provides an anonymous summary of the experts’ forecasts from the previous round
with the reasons for their judgments. Experts are then encouraged to revise their
earlier answers in light of the replies of other members of the panel.
It is believed that during this process the range of answers will decrease and the group
will converge towards the "correct" answer. Finally, the process is stopped after a
predefined stop criterion (e.g. number of rounds, achievement of consensus, and
stability of results) and the mean or median scores of the final rounds determine the
results.
Delphi Method was developed in the 1950-1960s at the RAND Corporation.

Wideband Delphi Technique


In the 1970s, Barry Boehm and John A. Farquhar originated the Wideband Variant of
the Delphi Method. The term "wideband" is used because, compared to the Delphi
Method, the Wideband Delphi Technique involved greater interaction and more
communication between the participants.
In the Wideband Delphi Technique, the estimation team comprises the project manager,
moderator, experts, and representatives from the development team, constituting a 3-7
member team. There are two meetings −

 Kickoff Meeting
 Estimation Meeting

Wideband Delphi Technique – Steps


Step 1 − Choose the Estimation team and a moderator.
Step 2 − The moderator conducts the kickoff meeting, in which the team is presented
with the problem specification, a high-level task list, and any assumptions or project
constraints. The team discusses the problem and estimation issues, if any. They
also decide on the units of estimation. The moderator guides the entire discussion,
monitors time and, after the kickoff meeting, prepares a structured document containing
the problem specification, high-level task list, assumptions, and the units of estimation that
were decided. He then forwards copies of this document for the next step.
Step 3 − Each Estimation team member then individually generates a detailed WBS,
estimates each task in the WBS, and documents the assumptions made.
Step 4 − The moderator calls the Estimation team for the Estimation meeting. If any of
the Estimation team members respond saying that the estimates are not ready, the
moderator gives more time and resends the Meeting Invite.
Step 5 − The entire Estimation team assembles for the estimation meeting.
Step 5.1 − At the beginning of the Estimation meeting, the moderator collects the initial
estimates from each of the team members.
Step 5.2 − He then plots a chart on the whiteboard. He plots each member’s total
project estimate as an X on the Round 1 line, without disclosing the corresponding
names. The Estimation team gets an idea of the range of estimates, which initially may
be large.

Step 5.3 − Each team member reads aloud the detailed task list that he/she made,
identifying any assumptions made and raising any questions or issues. The task
estimates are not disclosed.
The individual detailed task lists contribute to a more complete task list when
combined.
Step 5.4 − The team then discusses any doubt/problem they have about the tasks they
have arrived at, assumptions made, and estimation issues.
Step 5.5 − Each team member then revisits his/her task list and assumptions, and
makes changes if necessary. The task estimates also may require adjustments based
on the discussion, which are noted as +N Hrs. for more effort and –N Hrs. for less
effort.
The team members then combine the changes in the task estimates to arrive at the
total project estimate.

Step 5.6 − The moderator collects the changed estimates from all the team members
and plots them on the Round 2 line.
In this round, the range will be narrower compared to the earlier one, as it is more
consensus based.

Step 5.7 − The team then discusses the task modifications they have made and the
assumptions.
Step 5.8 − Each team member then revisits his/her task list and assumptions, and
makes changes if necessary. The task estimates may also require adjustments based
on the discussion.
The team members then once again combine the changes in the task estimate to
arrive at the total project estimate.
Step 5.9 − The moderator collects the changed estimates from all the members again
and plots them on the Round 3 line.
Again, in this round, the range will be narrower compared to the earlier one.
Step 5.10 − Steps 5.7, 5.8, 5.9 are repeated till one of the following criteria is met −

 Results are converged to an acceptably narrow range.


 All team members are unwilling to change their latest estimates.
 The allotted Estimation meeting time is over.

Step 6 − The Project Manager then assembles the results from the Estimation
meeting.
Step 6.1 − He compiles the individual task lists and the corresponding estimates into a
single master task list.
Step 6.2 − He also combines the individual lists of assumptions.
Step 6.3 − He then reviews the final task list with the Estimation team.
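A tiny Python sketch of the round-by-round bookkeeping the moderator does: collect each member's total estimate, check the spread, and stop when one of the criteria above is met. The 10% convergence threshold and the sample numbers are assumptions made only for illustration:

```python
# Wideband Delphi bookkeeping sketch: the code equivalent of the
# Round 1 / Round 2 / Round 3 lines plotted on the whiteboard.

def spread(estimates):
    return max(estimates) - min(estimates)

def run_rounds(rounds_of_estimates, max_relative_spread=0.10):
    for round_no, estimates in enumerate(rounds_of_estimates, start=1):
        rel = spread(estimates) / max(estimates)
        print(f"Round {round_no}: {estimates}  (relative spread {rel:.0%})")
        if rel <= max_relative_spread:   # "results converged to an acceptably narrow range"
            return sum(estimates) / len(estimates)
    return None                          # meeting time over without convergence

# Hypothetical person-hour totals from a four-member estimation team.
rounds = [
    [120, 200, 160, 90],    # Round 1: wide range
    [140, 170, 160, 130],   # Round 2: narrower after discussion
    [150, 160, 155, 150],   # Round 3: close to consensus
]
print("Project estimate:", run_rounds(rounds))
```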

Advantages and Disadvantages of Wideband Delphi Technique
Advantages

 Wideband Delphi Technique is a consensus-based estimation technique for estimating effort.

 Useful when estimating the time needed to do a task.

 Participation of experienced people, each estimating individually, leads to reliable results.

 The people who will actually do the work make the estimates, which makes the estimates more valid.

 Anonymity maintained throughout makes it possible for everyone to express their estimates confidently.

 A very simple technique.

 Assumptions are documented, discussed and agreed.

Disadvantages

 Management support is required.


 The estimation results may not be what the management wants to hear.
Bug Tracking
A bug tracking system or defect tracking system is a software application that keeps track of
reported software bugs in software development projects. It may be regarded as a type of issue
tracking system.
Many bug tracking systems, such as those used by most open-source software projects, allow end-
users to enter bug reports directly.[1] Other systems are used only internally in a company or
organization doing software development. Typically bug tracking systems are integrated with
other project management software.
Bug tracking is the process of logging and monitoring bugs or errors during software
development. It’s also referred to as defect tracking or issue tracking.
Bug tracking is important since large systems may have hundreds or thousands of
defects. Each needs to be evaluated, monitored and prioritized for debugging. In many
cases, issues also need to be tracked over a long period of time.
In his IBM blog, Chip Davis notes: “Defects are not static but change over time. In
addition, multiple defects are often related to one another. Effective defect tracking is
crucial to both testing and development teams.”

What is a bug tracking system?


Bug tracking essentials
According to Tutorials Point: “A software bug arises when an expected result doesn't
match with actual results. It can also be an error, flaw, failure or fault in a computer
program.”
Development teams use bug tracking to record and monitor errors that occur when an
application is being tested. In some cases, customers may provide feedback to help
identify defects after product release.
Each software bug has a lifecycle. During its lifetime, a defect may go through several
stages or states. They include:[i]
 Active - Investigation is underway
 Test - Fixed and ready for testing
 Verified - Retested and verified by quality assurance (QA)
 Closed - Can be closed after QA retesting or if it’s not considered to be a defect
 Reopened - Not fixed and reactivated
Bugs are managed based on priority and severity. Severity levels are used to identify
the relative impact of a problem on a product release. Classifications may vary in
number, with some organizations using up to nine levels. In general, severity levels
include:
 Catastrophic: Defect causes total failure of the software or unrecoverable data loss.
There’s no workaround, and the product can’t be released.
 Impaired functionality: A workaround may exist, but it’s unsatisfactory. The software
can’t be released.
 Failure of non-critical systems: A reasonably satisfactory workaround exists. The product
may be released, if the bug is documented.
 Very minor: There’s a workaround or the issue can be ignored. Doesn’t impact the
product release.
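The lifecycle states and severity levels described above map naturally onto a small data model. The following Python sketch is illustrative only: the state names follow the list above, while the allowed transitions and field names are assumptions made for the example, not the schema of any particular tracking tool.

```python
from dataclasses import dataclass
from enum import Enum

class State(Enum):
    ACTIVE = "Active"        # investigation underway
    TEST = "Test"            # fixed and ready for testing
    VERIFIED = "Verified"    # retested and verified by QA
    CLOSED = "Closed"        # closed after QA retest, or not a defect
    REOPENED = "Reopened"    # not actually fixed, reactivated

class Severity(Enum):
    CATASTROPHIC = 1
    IMPAIRED_FUNCTIONALITY = 2
    NON_CRITICAL_FAILURE = 3
    VERY_MINOR = 4

# Assumed transition rules; real trackers configure these per project.
ALLOWED = {
    State.ACTIVE:   {State.TEST, State.CLOSED},
    State.TEST:     {State.VERIFIED, State.REOPENED},
    State.VERIFIED: {State.CLOSED},
    State.CLOSED:   {State.REOPENED},
    State.REOPENED: {State.TEST},
}

@dataclass
class Bug:
    title: str
    severity: Severity
    state: State = State.ACTIVE

    def move_to(self, new_state: State) -> None:
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"Cannot go from {self.state.value} to {new_state.value}")
        self.state = new_state

bug = Bug("Login button unresponsive", Severity.IMPAIRED_FUNCTIONALITY)
bug.move_to(State.TEST)
bug.move_to(State.VERIFIED)
bug.move_to(State.CLOSED)
print(bug)
```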
Defect states and severity levels are reported and monitored in the bug tracking
database. A good tracking application also ties into larger software development and
management systems, the better to assess defect status and potential impact on
overall production and timelines.

Why bug tracking is important


Software errors can damage a brand’s reputation – leading to frustrated and lost
customers. In some cases, a bug may degrade interconnected systems or cause
serious malfunctions.
Software testing and bug tracking help identify errors before a product goes to market.
The sooner development teams receive feedback, the sooner they can address issues
such as incorrect functionality, design flaws and security issues.
By testing, tracking and debugging, the overall design improves, and reliable, high-
quality applications are delivered with fewer errors. In this case, a system that meets or
even exceeds customer expectations leads to potentially more sales and greater market
share.
Causal Analysis

The purpose of causal analysis is to find the root cause of a problem rather than just its symptoms. This technique helps to uncover the facts that lead to a certain situation. Causal analysis can be conducted in any of the following ways:

 Reviews/ Testing

 Audit of the quality of system/projects

 Management review meetings

 Maintaining a quality system and noting deviations, if any

 Customer complaints

 Problems/issues encountered at the project or organizational level

Objectives of causal analysis :

 Identify critical problems : When a problem is identified, the team comes together and organises a brainstorming session in order to find the root cause of the problem. Participants include the people or team who came across the problem, experts in the specific domain, and members from the quality analysis and defect prevention teams of the project.

 Identify causes and root causes : After the brainstorming session, team members are in a position to determine the root cause of the problem. The root causes of the problem are documented. The defect prevention team analyses the problem with the help of any of the following techniques :

 Pareto analysis : This technique is used when we have quantitative data. It is used to prioritize and deal with the various causes that result in a problem, so that an effective measure can be taken against a given problem (a short Pareto sketch follows this list).

 Cause and effect diagram : A fishbone diagram is used to visualise, clarify, link, identify and classify the various causes of a problem. Using this type of diagram, a root cause analysis is done, which helps in taking corrective actions so that the same problem does not occur again. It is a qualitative technique and requires brainstorming by the team in addition to the tool.

 Identify corrective actions : Finally, a solution is proposed and discussions take place to devise a strategy to rectify the problem.
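As a concrete illustration of the Pareto analysis mentioned above, the following Python sketch ranks defect causes by frequency and reports which causes account for roughly 80% of the defects. The cause names and counts are invented for the example:

```python
# Pareto analysis sketch: rank causes by defect count and find the "vital few"
# causes that account for about 80% of all defects.  Data is illustrative.

defect_counts = {
    "Requirements misunderstood": 42,
    "Coding errors": 31,
    "Missing unit tests": 14,
    "Environment/config issues": 8,
    "Documentation gaps": 5,
}

total = sum(defect_counts.values())
cumulative = 0
vital_few = []
for cause, count in sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    vital_few.append(cause)
    print(f"{cause:32s} {count:3d}  cumulative {cumulative / total:5.1%}")
    if cumulative / total >= 0.80:   # classic 80/20 cut-off
        break

print("Focus corrective actions on:", vital_few)
```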

Causal Analysis performed at various levels of the organisation :

 Project level - At the project level, causal analysis deals with finding issues and defects in a project. There is a periodic review by the team members to track the performance of the system with the help of inputs given by the team members. By consensus, the team decides on a time of the month or week when the causal analysis process will take place. The team must also review the progress of the correction activities and record what is being performed. The project managers, along with the project team members, are also expected to conduct meetings periodically. The topics of discussion in a meeting include the following points :

 Progress of the project so far

 Project activities yet to come

 Distribution of responsibilities for the upcoming tasks

 Workgroup level - Workgroup heads organise departmental meetings to share information about departmental activities, share important information about ongoing activities within the organisation, identify and discuss the root cause of problems, and follow up with everyone.

 Organisational level - The heads of the various groups meet frequently with the head of the organisation to discuss departmental issues and activities. The meetings involve discussion of common issues and problems, sharing of information regarding the respective workgroups, and efforts to solve any inter-departmental issues.

Guidelines/checklist for causal analysis :

At least one of the following purposes must be fulfilled :

 It must prove a point.

 It may try to contradict a widely accepted belief.

 It may study a theory.

 Mention whether the analysis focuses on cause, effect, or both.

 Develop description, narration, examples, classification or comparison.

 Design a logical pattern for representing the facts:

 Single cause - multiple effects

 Multiple causes - single effect

 Causal chain

 Establish an association between an idea and the issues of cause/effect, which reflects an effective transition.

 Avoid logical fallacies such as oversimplification of a situation, insufficient evidence or specific details, omission of important connections, etc.

 An effective causal analysis demands the writer's deep understanding of the immediate causes or effects.

 The causal analysis documentation answers a great many questions. It helps to develop a better understanding of a complex series of events in a simplified manner.
