SE 1st - 2nd Chapter Material
Budget
Usability
Efficiency
Correctness
Functionality
Dependability
Security
Safety
Transitional
This aspect is important when the software is moved from one platform to another:
Portability
Interoperability
Reusability
Adaptability
Maintenance
This aspect describes how well the software can be maintained in an ever-changing environment:
Modularity
Based on this analysis, they pick the best solution and determine whether the solution is feasible financially and technically.
Design: - The goal of the design phase is to transform the requirements specified
in the SRS document into a structure that is suitable for implementation in some
programming language. In technical terms, during the design phase the software
architecture is derived from the SRS document. Two distinctly different
approaches are available: the traditional design approach and the object-oriented
design approach.
Here, feedback paths are provided for correcting errors as and when they are detected in a later phase. Although errors are inevitable, it is desirable to detect them in the same phase in which they occur; doing so reduces the effort needed to correct the bug.
The advantage of this model is that a working model of the system is available at a very early stage of development, which makes it easier to find functional or design flaws. Finding issues at an early stage of development makes it possible to take corrective measures within a limited budget.
3. PROTOTYPING MODEL
PROTOTYPE
A prototype is a toy implementation of the system. A prototype usually
exhibits limited functional capabilities, low reliability, and inefficient
performance compared to the actual software. A prototype may produce the
desired results by using a table look-up instead of performing the actual
computations.
4. EVOLUTIONARY MODEL
It is also called the successive versions model or the incremental model. At first, a simple working model is built. Subsequently, it undergoes functional improvements, and new functions are added until the desired system is built.
Applications:
Large projects where modules for incremental implementation can be identified easily. Often used when the customer wants to start using the core features rather than waiting for the full software.
Also used in object-oriented software development, because the system can be easily partitioned into units in terms of objects.
Advantages:
The user gets a chance to experiment with a partially developed system.
Errors are reduced because the core modules get tested thoroughly.
Disadvantages:
It is difficult to divide the problem into several versions that would be acceptable to the customer and that can be implemented and delivered incrementally.
5. SPIRAL MODEL
The Spiral model of software development is shown in fig. The diagrammatic
representation of this model appears like a spiral with many loops. The exact
number of loops in the spiral is not fixed. Each loop of the spiral represents a
phase of the software process. For example, the innermost loop might be
concerned with feasibility study, the next loop with requirements specification,
the next one with design, and so on. Each phase in this model is split into four
sectors (or quadrants) as shown in fig. The following activities are carried out
during each phase of a spiral model.
The spiral model is called a meta model since it encompasses all other life cycle
models. Risk handling is inherently built into this model. The spiral model is
suitable for development of technically challenging software products that are
prone to several kinds of risks. However, this model is much more complex than
the other models – this is probably a factor deterring its use in ordinary projects.
TYPES OF DFD
Data Flow Diagrams are either Logical or Physical.
Logical DFD - This type of DFD concentrates on the system process and
flow of data in the system. For example in a Banking software system, how
data is moved between different entities.
Physical DFD - This type of DFD shows how the data flow is actually
implemented in the system. It is more specific and close to the
implementation.
DFD COMPONENTS
A DFD can represent the source, destination, storage, and flow of data using the following set of components –
Now that you have some background knowledge on data flow diagrams and how
they are categorized, you’re ready to build your own DFD. The process can be
broken down into 5 steps:
Level 1 DFDs are still a general overview, but they go into more detail than a context diagram. In a level 1 data flow diagram, the single process node from the context diagram is broken down into subprocesses.
Level 2+ DFDs simply break processes down into more detailed subprocesses. In theory,
DFDs could go beyond level 3, but they rarely do. Level 3 data flow diagrams are detailed
enough that it doesn’t usually make sense to break them down further.
ISO – 9001
ISO 9001 is defined as the international standard that specifies requirements for
a quality management system (QMS). Organizations use the standard to
demonstrate the ability to consistently provide products and services that meet
customer and regulatory requirements. It is the most popular standard in the ISO
9000 series and the only standard in the series to which organizations can certify.
ISO 9001 was first published in 1987 by the International Organization for
Standardization (ISO), an international agency composed of the national
standards bodies of more than 160 countries. The current version of ISO
9001 was released in September 2015.
Organizations of all types and sizes find that using the ISO 9001 standard helps
them:
Organize processes
Improve the efficiency of processes
Continually improve.
CMMI is a model that contains the best practices and provides organizations with those critical elements that make their business processes effective.
The Capability Maturity Model (CMM) is a methodology used to develop and
refine an organization's software development process.
Currently, two metrics are widely used to estimate size: lines of code (LOC) and function points (FP).
The following functionalities are counted while computing the function points of the system.
Data Functionality
Internal Logical Files (ILF)
External Interface Files (EIF)
Transaction Functionality
External Inputs (EI)
External Outputs (EO)
External Queries (EQ)
The size of a product in function points (FP) can be expressed as the weighted sum of
these five problem characteristics. The weights associated with the five
characteristics were proposed empirically and validated by the observations over
many projects.
Number of inputs: Each data item input by the user is counted. Data inputs should be distinguished from user inquiries. Inquiries are user commands such as print-account-balance.
Inquiries are counted separately. It must be noted that individual data items input by the user are not considered in the calculation of the number of inputs; rather, a group of related inputs is considered as a single input. For example, while entering the data concerning an employee into an employee payroll software, the data items name, age, sex, address, phone number, etc. are together considered as a single input. All these data items can be considered related, since they pertain to a single employee.
Number of interfaces: Here the interfaces considered are the interfaces used to
exchange information with other systems. Examples of such interfaces are data
files on tapes, disks, communication links with other systems etc.
Once the unadjusted function point (UFP) is computed, the technical complexity
factor (TCF) is computed next. TCF refines the UFP measure by considering
fourteen other factors such as
1) Data Communication
2) Distributed Data Processing
3) Performance
4) Heavily Used Configuration
5) Transaction Rate
6) Online Data Entry
7) End-User Efficiency
8) Online Update
9) Complex Processing
10) Reusability
11) Installation Ease
12) Operational Ease
13) Multiple Sites
14) Facilitate Change
FP = UFP * TCF, where TCF = 0.65 + 0.01 * DI (DI being the total degree of influence of the fourteen factors).
Question: Consider a project with the following functional units:
Number of user inputs = 50
Number of user outputs = 40
Number of user enquiries = 35
Number of user files = 06
Number of external interfaces = 04
Assuming all complexity adjustment factors and weighing factors as average, find the function
points for the project.
Solution:
Taking the standard average weights (external input 4, external output 5, external inquiry 4, internal file 10, external interface 7):
UFP = 50 x 4 + 40 x 5 + 35 x 4 + 6 x 10 + 4 x 7 = 628
TCF = 0.65 + 0.01 x (14 x 3) = 1.07
FP = 628 x 1.07 = 671.96
Assuming all complexity adjustment factors and weighing factors as low, find the function
points for the project.
Solution:
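The two exercises above can be checked with a short sketch. The weight table and the exact meaning of a "low" adjustment rating are not reproduced in this text, so the standard Albrecht weights and an assumed degree-of-influence rating are used here:

```python
# Function point computation for the two exercises above.
# Assumptions (not given in the text): the standard Albrecht weight table,
# and TCF = 0.65 + 0.01 * (sum of the 14 degree-of-influence ratings).
WEIGHTS = {                 # (low, average, high) weights per functional unit
    "EI":  (3, 4, 6),       # external inputs
    "EO":  (4, 5, 7),       # external outputs
    "EQ":  (3, 4, 6),       # external inquiries
    "ILF": (7, 10, 15),     # internal logical files
    "EIF": (5, 7, 10),      # external interface files
}
COUNTS = {"EI": 50, "EO": 40, "EQ": 35, "ILF": 6, "EIF": 4}

def function_points(counts, level, di_rating):
    """level: 0 = low, 1 = average, 2 = high for all weighing factors.
    di_rating: a single rating assumed for all 14 adjustment factors."""
    ufp = sum(counts[k] * WEIGHTS[k][level] for k in counts)
    tcf = 0.65 + 0.01 * (14 * di_rating)
    return ufp, ufp * tcf

# Average case: every adjustment factor rated 3 ("average"), so TCF = 1.07.
ufp, fp = function_points(COUNTS, level=1, di_rating=3)
print(ufp, round(fp, 2))        # 628 671.96

# Low case: low weights; "low" is taken here as a rating of 1 (an assumption).
ufp_low, fp_low = function_points(COUNTS, level=0, di_rating=1)
print(ufp_low, round(fp_low, 2))
```

If a different interpretation of a "low" adjustment rating is used, only the TCF (and hence the final FP value) changes; the unadjusted count of 477 for the low-weight case stays the same.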
COCOMO
COCOMO (Constructive Cost Estimation Model) was proposed by Boehm.
According to Boehm, software cost estimation should be done through three
stages: Basic COCOMO, Intermediate COCOMO, and Complete COCOMO.
EFFORT = a1 x (KLOC)^a2 PM
TDEV = b1 x (EFFORT)^b2 months
• a1, a2, b1, b2 are constants for each category of software products.
Person-month curve
According to Boehm, every line of source text should be calculated as one LOC irrespective of the actual number of instructions on that line. Thus, if a single instruction spans several lines (say n lines), it is considered to be n LOC. The values of a1, a2, b1, b2 for the different categories of products (i.e. organic, semidetached, and embedded) as given by Boehm [1981] are:
Organic: EFFORT = 2.4 (KLOC)^1.05 PM, TDEV = 2.5 (EFFORT)^0.38 months
Semidetached: EFFORT = 3.0 (KLOC)^1.12 PM, TDEV = 2.5 (EFFORT)^0.35 months
Embedded: EFFORT = 3.6 (KLOC)^1.28 PM, TDEV = 2.5 (EFFORT)^0.32 months
The development time versus the product size in KLOC is plotted in fig. below.
From fig., it can be observed that the development time is a sublinear function of the size of the product, i.e. when the size of the product increases by two times, the time to develop the product does not double but rises moderately.
This can be explained by the fact that for larger products, a larger number of
activities which can be carried out concurrently can be identified. The parallel
activities can be carried out simultaneously by the engineers. This reduces the time
to complete the project. Further, from fig., it can be observed that the development
time is roughly the same for all the three categories of products. For example, a
60 KLOC program can be developed in approximately 18 months, regardless of
whether it is of organic, semidetached, or embedded type.
From the effort estimation, the project cost can be obtained by multiplying the required effort by the manpower cost per month. But implicit in this project cost computation is the assumption that the entire project cost is incurred on account of the manpower cost alone. In addition to manpower cost, a project would incur costs due to hardware and software required for the project and the company overheads for administration, office space, etc.
It is important to note that the effort and duration estimates obtained using the COCOMO model are called the nominal effort estimate and the nominal duration estimate. The term nominal implies that if anyone tries to complete the project in a time shorter than the estimated duration, then the cost will increase drastically. But if anyone completes the project over a longer period of time than estimated, then there is almost no decrease in the estimated cost value.
EXAMPLE:
Assume that the size of an organic type software product has been estimated to be 32,000 lines of source code. Assume that the average salary of software engineers is Rs. 15,000/- per month. Determine the effort required to develop the software product and the nominal development time.
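As a sketch of the solution, using Boehm's basic COCOMO constants for organic products (a1 = 2.4, a2 = 1.05, b1 = 2.5, b2 = 0.38):

```python
# Basic COCOMO estimate for the 32 KLOC organic product in the example,
# using the organic-mode constants a1 = 2.4, a2 = 1.05, b1 = 2.5, b2 = 0.38.
a1, a2, b1, b2 = 2.4, 1.05, 2.5, 0.38

kloc = 32                      # 32,000 lines of source code
effort = a1 * kloc ** a2       # nominal effort, in person-months
tdev = b1 * effort ** b2       # nominal development time, in months

print(f"Effort = {effort:.1f} PM")      # about 91.3 PM
print(f"Tdev   = {tdev:.1f} months")    # about 13.9 months
```

The salary figure is then only needed if the project cost is asked for, by multiplying the effort by the manpower cost per month.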
The Lawrence Putnam model describes the time and effort required to finish a software project of a specified size. Putnam makes use of the so-called Norden/Rayleigh curve to estimate project effort, schedule, and defect rate, as shown in fig:
The Putnam model is an empirical software effort estimation model. Empirical models work by
collecting software project data (for example, effort and size) and fitting a curve to the data. Future effort
estimates are made by providing size and calculating the associated effort using the equation that fits the original data (usually with some error).
Created by Lawrence Putnam, Sr. the Putnam model describes the time and effort required to finish a
software project of specified size. SLIM (Software LIfecycle Management) is the name given by Putnam
to the proprietary suite of tools his company QSM, Inc. has developed based on his model. It is one of the
earliest of these types of models developed, and is among the most widely used. Closely related software
parametric models are the Constructive Cost Model (COCOMO), the Parametric Review of Information for Costing and Evaluation – Software (PRICE-S), and SEER-SEM.
Putnam noticed that software staffing profiles followed the well known Rayleigh distribution. Putnam
used his observation about productivity levels to derive the software equation:
L = Ck (K)^(1/3) (td)^(4/3)
where K is the total effort expended (in PM) in product development, and L is the product size estimate in KLOC.
td corresponds to the time of system and integration testing; therefore, td can be approximately considered as the time required for developing the product.
Ck is the state of technology constant and reflects constraints that impede the development of the program.
The exact value of Ck for a specific task can be computed from the historical data of the organization
developing it.
Putnam proposed that the optimal staffing of a project should follow the Rayleigh curve. Only a small number of engineers are required at the beginning of a project to carry out planning and specification tasks. As the project progresses and more detailed work becomes necessary, the number of engineers reaches a peak. After implementation and unit testing, the number of project staff falls.
Rearranging the software equation gives the total development effort as:
K = L^3 / (Ck^3 td^4)
where K is the total effort expended (in PM) in the product development, and Ck is the state of technology constant, which reflects constraints that impede the progress of the program.
From the above expression, it can be easily observed that when the schedule of a project is compressed,
the required development effort as well as project development cost increases in proportion to the fourth
power of the degree of compression. It means that a relatively small compression in delivery schedule can
result in a substantial penalty of human effort as well as development cost.
For example, if the estimated development time is 1 year, then to develop the product in 6 months, the
total effort required to develop the product (and hence the project cost) increases 16 times.
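The fourth-power penalty can be checked numerically. This sketch assumes the rearranged software equation K = L^3 / (Ck^3 td^4); the size and technology constant below are illustrative values only:

```python
# Effort-schedule tradeoff from the rearranged software equation
# K = L**3 / (Ck**3 * td**4); L and Ck below are illustrative values only.
def putnam_effort(L, Ck, td):
    """Total effort K for product size L, technology constant Ck, schedule td."""
    return L ** 3 / (Ck ** 3 * td ** 4)

L, Ck = 100, 8            # hypothetical size (KLOC) and technology constant
k_nominal    = putnam_effort(L, Ck, td=1.0)   # estimated 1-year schedule
k_compressed = putnam_effort(L, Ck, td=0.5)   # compressed to 6 months

print(k_compressed / k_nominal)   # 16.0: halving the schedule -> 16x the effort
```

Note that the ratio depends only on the degree of compression, not on the particular values chosen for L and Ck.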
It is the process of implementing the presumed software requirements with an intention to learn more about the actual requirements or an alternative design that satisfies the actual set of requirements.
1. Identify initial requirements: In this step, the software publisher decides what the software will
be able to do. The publisher considers who the user will likely be and what the user will want
from the product, then the publisher sends the project and specifications to a software designer or
developer.
2. Develop initial prototype: In step two, the developer will consider the requirements as proposed
by the publisher and begin to put together a model of what the finished product might look like.
An initial prototype may be as simple as a drawing on a whiteboard, or it may consist of sticky
notes on a wall, or it may be a more elaborate working model.
3. Review: Once the prototype is developed, the publisher has a chance to see what the product
might look like; how the developer has envisioned the publisher's specifications. In more
advanced prototypes, the end consumer may have an opportunity to try out the product and offer
suggestions for improvement. This is what we know of as beta testing.
4. Revise: The final step in the process is to make revisions to the prototype based on the feedback
of the publisher and/or beta testers.
There are three main categories of risks which can affect a software project:
1. PROJECT RISKS
Project risks concern various forms of budgetary, schedule, personnel, resource, and customer-related problems. An important project risk is schedule slippage. Since software is intangible, it is very difficult to monitor and control a software project. It is very difficult to control something which cannot be seen.
For any manufacturing project, such as manufacturing of cars, the
project manager can see the product taking shape. He can for
instance, see that the engine is fitted, after that the doors are fitted,
the car is getting painted, etc. Thus he can easily assess the progress
of the work and control it. The invisibility of the product being
developed is an important reason why many software projects suffer
from the risk of schedule slippage.
2. TECHNICAL RISKS
Technical risks concern potential design, implementation,
interfacing, testing, and maintenance problems. Technical risks
also include ambiguous specification, incomplete specification,
changing specification, technical uncertainty, and technical
obsolescence. Most technical risks occur due to the
development team’s insufficient knowledge about the
project.
3. BUSINESS RISKS
This type of risk includes the risk of building an excellent product that no one wants, losing budgetary or personnel commitments, etc.
RISK ASSESSMENT
The objective of risk assessment is to rank the risks in terms of their
damage causing potential. For risk assessment, first each risk should be
rated in two ways:
• The likelihood of a risk coming true (denoted as r).
• The consequence of the problems associated with that risk (denoted as s).
Based on these two factors, the priority of each risk can be computed:
p=r*s
Where, p is the priority with which the risk must be handled, r is the probability of the risk becoming true, and s is the severity of damage caused
due to the risk becoming true. If all identified risks are prioritized, then the
most likely and damaging risks can be handled first and more
comprehensive risk abatement procedures can be designed for these risks.
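The prioritization step can be sketched as follows; the risk names and ratings are hypothetical examples, not taken from any real project:

```python
# Ranking identified risks by priority p = r * s, as described above.
# The risk names and ratings are hypothetical examples.
risks = [
    # (risk, r = likelihood of occurring, s = severity of damage on 1-10)
    ("Schedule slippage",       0.8, 9),
    ("Key developer attrition", 0.3, 7),
    ("Ambiguous specification", 0.6, 5),
]

prioritized = sorted(risks, key=lambda risk: risk[1] * risk[2], reverse=True)
for name, r, s in prioritized:
    print(f"{name}: p = {r * s:.1f}")
# "Schedule slippage" (p = 7.2) ranks first, so it is handled first.
```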
RISK CONTAINMENT
After all the identified risks of a project are assessed, plans must be made to
contain the most damaging and the most likely risks. Different risks require
different containment procedures. In fact, most risks require ingenuity on
the part of the project manager in tackling the risk.
RISK LEVERAGE
To choose between the different strategies of handling a risk, the project
manager must consider the cost of handling the risk and the corresponding
reduction of risk. For this the risk leverage of the different risks can be
computed.
Risk leverage is the difference in risk exposure divided by the cost of
reducing the risk. More formally,
RISK LEVERAGE = (RISK EXPOSURE BEFORE REDUCTION – RISK
EXPOSURE AFTER REDUCTION) / (COST OF REDUCTION)
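A small sketch of how risk leverage could be used to choose between two containment strategies; all the figures here are made up for illustration:

```python
# Risk leverage, per the formula above, used to compare two hypothetical
# containment strategies for the same risk (all figures are made up).
def risk_leverage(exposure_before, exposure_after, cost_of_reduction):
    return (exposure_before - exposure_after) / cost_of_reduction

leverage_a = risk_leverage(100_000, 20_000, 10_000)   # strategy A: 8.0
leverage_b = risk_leverage(100_000, 50_000, 2_000)    # strategy B: 25.0

# Strategy B buys more risk reduction per unit cost, so it is preferred
# even though strategy A removes more exposure in absolute terms.
print(leverage_a, leverage_b)
```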
Even though there are three broad ways to handle any risk, risk handling still requires a lot of ingenuity on the part of a project manager. As an example, consider the options available to contain an important type of risk that occurs in many software projects: that of schedule slippage. Schedule slippage can be dealt with by increasing the visibility of
the software product. Visibility of a software product can be increased by
producing relevant documents during the development process wherever
meaningful and getting these documents reviewed by an appropriate team.
CONFIGURATION IDENTIFICATION
The project manager normally classifies the objects associated with a
software development effort into three main categories: controlled, pre
controlled, and uncontrolled. Controlled objects are those which are already
put under configuration control. One must follow some formal procedures
to change them. Pre controlled objects are not yet under configuration
control, but will eventually be under configuration control. Uncontrolled
objects are not and will not be subjected to configuration control.
Controllable objects include both controlled and pre controlled objects.
Typical controllable objects include:
• Requirements specification document
• Design documents
• Tools used to build the system, such as compilers, linkers, lexical analyzers, parsers, etc.
• Source code for each module
• Test cases
• Problem reports
CONFIGURATION CONTROL
Configuration control is the process of managing changes to controlled
objects. Configuration control is the part of a configuration management
system that most directly affects the day-to-day operations of developers. The configuration control system prevents unauthorized changes to any controlled objects. In
order to change a controlled object such as a module, a developer can get a private copy
of the module by a reserve operation as shown in fig. 38.1. Configuration management
tools allow only one person to reserve a module at a time. Once an object is reserved, no one else can reserve that module until the reserved module is restored, as shown in fig. 38.1. Thus, by preventing more than one engineer from simultaneously reserving a module, the problems associated with concurrent access are solved.
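The reserve/restore discipline described above can be sketched as a minimal model; the class, module, and developer names are hypothetical, not an actual configuration management tool:

```python
# Minimal sketch of the reserve/restore discipline described above: only one
# developer may hold a module at a time, and a restore needs CCB approval.
# Class, module, and developer names are hypothetical.
class ConfigurationControl:
    def __init__(self):
        self.reserved_by = {}      # module name -> developer who reserved it

    def reserve(self, module, developer):
        """Hand out a private copy; refuse if the module is already reserved."""
        if module in self.reserved_by:
            raise RuntimeError(f"{module} is reserved by {self.reserved_by[module]}")
        self.reserved_by[module] = developer

    def restore(self, module, developer, ccb_approved):
        """Put the changed module back; the CCB must have certified the change."""
        if self.reserved_by.get(module) != developer:
            raise RuntimeError(f"{module} is not reserved by {developer}")
        if not ccb_approved:
            raise RuntimeError("change has not been approved by the CCB")
        del self.reserved_by[module]

cc = ConfigurationControl()
cc.reserve("payroll_module", "engineer_a")
# cc.reserve("payroll_module", "engineer_b")  # would raise: already reserved
cc.restore("payroll_module", "engineer_a", ccb_approved=True)
```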
Let us now see how changes to any object that is under configuration control are achieved. The engineer needing to change a module first
obtains a private copy of the module through a reserve operation. Then, he
carries out all necessary changes on this private copy. However, restoring
the changed module to the system configuration requires the permission of
a change control board (CCB). The CCB is usually constituted from among
the development team members. For every change that needs to be carried
out, the CCB reviews the changes made to the controlled object and
certifies several things about the change:
1. Change is well-motivated.
2. Developer has considered and documented the effects of the change.
3. Changes interact well with the changes made by other developers.
4. Appropriate people (CCB) have validated the change, e.g. someone
has tested the changed code, and has verified that the change is
consistent with the requirement.