Software Measurement: Function Point Analysis (Albrecht)


Software measurement
________________________________________________________________________
There are two significant approaches to measurement that project managers need to be
familiar with. These are Function Point Analysis (Albrecht, 1979) and COCOMO
(Boehm, 1981).

1. Function Point Analysis (Albrecht)

Function Point Analysis (Albrecht, 1979) provides a decomposition approach to
predicting the size of a system. It relies on a set of five counts of function point items
which Albrecht considered to be the principal measurable items in a system. These items
are weighted to reflect the type of project involved (simple, average or complex), as
illustrated in Figure 1.
Function Point items                        Weight
                                            Simple    Average    Complex
Number of external inputs                   x 3       x 4        x 6
Number of external outputs                  x 4       x 5        x 7
Number of external inquiries                x 3       x 4        x 6
Number of internal logical files            x 7       x 10       x 15
Number of external interface files          x 5       x 7        x 10

Figure 1 Function Point items and project category weightings


Using these weightings, a count-total is calculated. A simple example of a calculation for
an Average project is shown in Figure 2. Here the function point item counts are
assumed to be 25, 54, 18, 6 and 3, as shown in the figure. Multiplying each count by its
weighting factor, a count-total for the project is calculated at 523.

Function Point items                        Item count   Calculation for an Average project
Number of external inputs                   25           25 x 4 = 100
Number of external outputs                  54           54 x 5 = 270
Number of external inquiries                18           18 x 4 = 72
Number of internal logical files            6            6 x 10 = 60
Number of external interface files          3            3 x 7 = 21
Total unadjusted function points (UFPs)                  523

Figure 2 Calculation of a typical count-total for an Average project.
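As an illustration only (this is not part of Albrecht's method description), the arithmetic
in Figure 2 can be written as a few lines of Python, multiplying each item count by its
Average-project weighting from Figure 1 and summing the results:

# Unadjusted function point (UFP) count for the Figure 2 example.
# Weights are the Average column of Figure 1; counts are the assumed values.
average_weights = {
    "external inputs": 4,
    "external outputs": 5,
    "external inquiries": 4,
    "internal logical files": 10,
    "external interface files": 7,
}
item_counts = {
    "external inputs": 25,
    "external outputs": 54,
    "external inquiries": 18,
    "internal logical files": 6,
    "external interface files": 3,
}
ufp = sum(item_counts[item] * average_weights[item] for item in item_counts)
print(ufp)  # 523, the count-total shown in Figure 2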


This calculated UFPs value is now used in the function point formula:

FP = UFPs x (0.65 + 0.01 x (DI1 to DI14))

where (DI1 to DI14) is the total degree of influence, a complexity adjustment value
determined by scoring the 14 item set of characteristics which Albrecht devised (Figure 3).
These characteristics establish subjective values which reflect organisational competence
and expertise.
Characteristic                    DI     Characteristic                    DI
Data communications                      On-line update
Distributed data processing              Complex processing
Performance                              Reusability
Heavily used configuration               Installation ease
Transaction rate                         Operation ease
On-line data entry                       Multiple sites
End user efficiency                      Facilitate change

Total degree of influence = (DI1 to DI14)

Figure 3 - 14 item set of characteristics for project complexity adjustment.


DI values
Not present or no influence    = 0
Insignificant influence        = 1
Moderate influence             = 2
Average influence              = 3
Significant influence          = 4
Strong influence throughout    = 5

If we assume for this example that all 14 characteristics are scored at 3, then:

(DI1 to DI14) = 14 x 3 = 42

FP = 523 x [0.65 + 0.01 x 42]
   = 523 x [0.65 + 0.42]
   = 523 x 1.07
   = 560 (rounded from 559.6)
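Continuing the illustration (again, just a sketch of the arithmetic rather than an official
IFPUG implementation), the adjustment step can be expressed as:

# Adjusted function points for the worked example: each of the 14
# characteristics is scored 0-5 and the total feeds the adjustment formula.
ufp = 523                      # unadjusted function points from Figure 2
di_scores = [3] * 14           # all 14 characteristics scored "average influence"
total_degree_of_influence = sum(di_scores)                     # 42
adjustment_factor = 0.65 + 0.01 * total_degree_of_influence    # 1.07
fp = ufp * adjustment_factor                                   # 559.61
print(round(fp))  # 560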

At this point, the FP value is multiplied by the number of lines of code that it takes to
develop a function point, giving the size of the project in lines of code. The number of
lines of code required to develop a function point varies with the programming language
being used (Assembly: 1 FP requires 300 lines; Pascal: 1 FP requires 90 lines), and
estimators use published values for different languages. So, for useful predictions to be
arrived at, it is necessary to calibrate function points to comply with an organisation's
development environment.
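A sketch of this conversion follows; the two lines-of-code-per-function-point figures are
the ones quoted above (Assembly and Pascal), and in practice they would be replaced by
values calibrated to the organisation's own environment:

# Convert the adjusted FP count into an estimated project size in lines of
# code, using the gearing factors quoted in the text.
loc_per_fp = {
    "Assembly": 300,   # 1 FP requires roughly 300 lines of Assembly
    "Pascal": 90,      # 1 FP requires roughly 90 lines of Pascal
}
fp = 560
for language, factor in loc_per_fp.items():
    print(f"{language}: about {fp * factor:,} lines of code")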
Function Point Analysis is suitable for predicting system size where a requirements
specification is available. In such circumstances it is suitable for projects where historical
data is not available. The difficulty with it is the subjective nature of the complexity
adjustment value and the subjective nature of the simple, average, complex classification of
projects. There are also difficulties relating to correct counting. In 1988 Charles Symons
proposed Mark II Function Points, a new measure of information processing size.
Despite early difficulties, Function Point Analysis is supported through the International
Function Point Users Group, which regularly publishes rules and guidelines and holds
practitioner examinations.
Albrecht's Function Point Analysis also continues to evolve and, even though it is an
acknowledged international standard (ISO 20968, 2002), there is still a disclaimer on the
Netherlands Software Metrics Users Association (NESMA) website which reads:
The method has been tried in practice. However, NESMA does not claim that
the method in its current form has been validated scientifically. Additional
research and practical use is necessary to demonstrate the validity of the
method.
Function Point Analysis continues to evolve through the work of the International
Function Point Users Group (IFPUG) and of Symons, who developed Mark II Function
Points.

2. COCOMO 81 and COCOMO II (Boehm)

Originally named COCOMO, the COnstructive COst MOdel was devised by Barry
Boehm in 1981 as a method for estimating project cost, effort, and schedule. It has since
been re-designated COCOMO 81. The metrics of COCOMO 81 are styled Person-Months
(PM), Time to Develop (TDEV) and Thousands of Delivered Source Instructions (KDSI).
COCOMO 81 has three modes which classify different types of system projects as
follows:
Organic: batch programs; scientific models; and business models. Created by small
teams working in a familiar environment, where they have domain expertise.
Semidetached: most transaction processing systems; new operating systems; database
management systems; and ambitious inventory production control. Created by teams of
mixed personnel with limited or no experience of the system they are developing.
Embedded: large complex transaction processing systems; ambitious very large
operating systems; and avionics. That is, complex, high value, real time systems.
So, different formulae are needed for calculating COCOMO 81 values. There are three
complexity models of COCOMO 81: Basic, Intermediate and Detailed. The general
COCOMO 81 formulae for all modes are:
PM = x(KDSI)^x1, where x and x1 are constants.
TDEV = y(PM)^y1, where y and y1 are constants.

Basic COCOMO 81, the lowest level of COCOMO 81, uses a single-valued model to
compute PM as a function of program size expressed in estimated delivered source
instructions. It also calculates TDEV expressed in terms of PM.
The constants for the general formulae for Basic COCOMO 81 that have been established
by Boehm are:

Mode            PM formula                TDEV formula
Organic         PM = 2.4(KDSI)^1.05       TDEV = 2.5(PM)^0.38
Semidetached    PM = 3.0(KDSI)^1.12       TDEV = 2.5(PM)^0.35
Embedded        PM = 3.6(KDSI)^1.20       TDEV = 2.5(PM)^0.32
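By way of illustration only (these are simply the formulae above wrapped in a small
function, not Boehm's own tooling), Basic COCOMO 81 estimates can be computed as
follows; the 32 KDSI figure is a hypothetical size estimate:

# Basic COCOMO 81: effort (PM) and development time (TDEV) from size in KDSI.
basic_cocomo_81 = {
    # mode:        (a,   b,    c,   d)  where PM = a*KDSI**b, TDEV = c*PM**d
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}
def estimate(kdsi, mode):
    a, b, c, d = basic_cocomo_81[mode]
    pm = a * kdsi ** b     # effort in person-months
    tdev = c * pm ** d     # schedule in months
    return pm, tdev
pm, tdev = estimate(32, "organic")   # hypothetical 32 KDSI organic project
print(f"effort = {pm:.1f} PM, schedule = {tdev:.1f} months")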

Boehm says that


"Basic COCOMO is good for rough order of magnitude estimates of
software costs, but its accuracy is necessarily limited because of its lack of
factors to account for differences in hardware constraints, personnel quality
and experience, use of modern tools and techniques, and other project
attributes known to have a significant influence on costs."
(NASA, 2006)
The Intermediate COCOMO 81 model builds on the formulae of the Basic model.
First, the x-constant in the PM formula is adjusted and then an Effort Multiplier (EM)
is used to take account of factors that influence estimation. These factors are product
attributes, computer attributes, personnel attributes and project attributes.
The constants for the general formulae for Intermediate COCOMO 81 are:

Mode            PM formula                     TDEV formula
Organic         PM = 2.4(KDSI)^1.05 x EM       TDEV = 2.5(PM)^0.38
Semidetached    PM = 3.0(KDSI)^1.12 x EM       TDEV = 2.5(PM)^0.35
Embedded        PM = 3.6(KDSI)^1.20 x EM       TDEV = 2.5(PM)^0.32

The Effort Multipliers (also called cost drivers) are used to adjust the PM figure. There
are 15 sets of multipliers, each containing 4, 5 or 6 values. For example, the multipliers
for product complexity (a product attribute) and main storage constraints (a computer
attribute) are:
Cost driver                      V. Low   Low    Normal   High   V. High   E. High
3  Product complexity            0.70     0.85   1.00     1.15   1.30      1.65
5  Main storage constraints      -        -      1.00     1.06   1.21      1.56

As each multiplier is identified for each of the 15 sets, they are multiplied together to
obtain one value to be used in the PM formula. So, a project that has very low
product complexity (0.70) and very high main storage constraint (1.21) has an Effort
Multiplier of 0.70 x 1.21 = 0.847.
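The sketch below illustrates this adjustment, using the two multiplier values from the
example above; the remaining 13 cost drivers are assumed to be rated Normal (1.00),
and the 32 KDSI organic-mode figures are hypothetical:

import math
# Intermediate COCOMO 81: the Effort Multiplier (EM) is the product of the
# selected value from each of the 15 cost-driver tables.
selected_multipliers = {
    "product complexity (very low)": 0.70,
    "main storage constraint (very high)": 1.21,
    # the other 13 cost drivers are taken as Normal = 1.00 in this example
}
em = math.prod(selected_multipliers.values())   # 0.70 x 1.21 = 0.847
kdsi = 32                                       # hypothetical size estimate
pm_nominal = 2.4 * kdsi ** 1.05                 # organic-mode nominal effort
pm_adjusted = pm_nominal * em                   # effort adjusted by the EM
print(f"EM = {em:.3f}, adjusted effort = {pm_adjusted:.1f} person-months")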
The Detailed COCOMO 81 model incorporates all characteristics of the Intermediate
version with an assessment of the cost drivers' impact on each step (e.g., analysis, design)
of the software engineering process.
COCOMO is based on studies at the Californian automotive and IT company TRW,
which involved programs of 2,000 to 100,000 lines of code. Extensive independent
reports of the use of COCOMO are found in the technical literature. Through the
research work of the Centre for Software Engineering (founded by Dr Boehm in 1993)
at the University of Southern California (USC), COCOMO has continued to evolve. It
has been renamed COCOMO II and consists of three submodels called the Applications
Composition, Early Design, and Post-architecture models (Boehm et al., 1995; Clark et
al., 1998; CSE, 2002).
These changes were necessary because systems were moving from mainframe overnight
batch processing to desktop-based real-time systems. New development methods placed
a greater emphasis on software reuse and on building new systems from off-the-shelf
software components. Developers were also spending significantly more effort in
designing and managing the software development process (Boehm et al., 1995).
The authors claim that the baseline COCOMO II family of software cost estimation
models presents a tailorable cost estimation capability well matched to the major current
and likely future software process development trends.
