
Metrics and Models in Software Testing

How do we measure the progress of testing?


How much time and resources are required for software testing?
What is the reliability of software at the time of release?
How many faults do we expect during testing?
Answering these questions requires a significant amount of effort.
SOFTWARE METRICS

“What cannot be measured cannot be controlled”

Measure, Measurement and Metrics

Measurement can be viewed as a machine: attributes of real-world entities go in as input, and numbers or symbols come out as output.

“It is the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules.”

How can software be measured?


Measuring items

● Performance
● Productivity
● Estimated cost
● Planning work items
Characteristics of metrics

● Simple and computable
● Consistent and unambiguous
● Independent of programming language
● Easy and cost-effective to obtain
Importance of metrics

● Efficiency
● Effectiveness
● Act as indicators
● Informed decisions and intelligent choices


Categories of Metrics

➔ Product Metrics
◆ Size (MBs, GBs)
◆ Complexity
◆ Design features
◆ Performance
◆ Efficiency, Reliability & Portability
➔ Process Metrics
◆ Effort required in the process
◆ Time needed to produce the product
◆ Effectiveness of defect removal
◆ Number of defects found during testing
◆ Maturity of the process
Product Metrics for Testing

1. Number of failures experienced in a time interval


2. Time interval between failures
3. Cumulative failures experienced up to a specified time
4. Time of failure
5. Estimated time for testing
6. Actual testing time
Additional metrics such as:

1. % of time spent = (Actual time spent / Estimated testing time) * 100


2. Average time interval between failures
3. Maximum and minimum failures experienced in any time interval
4. Average number of failures experienced in time intervals
5. Time remaining to complete the testing
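A minimal Python sketch of how a few of these product metrics could be derived from recorded failure times; the failure timestamps and time budgets below are hypothetical:

# Hypothetical failure times (hours of testing) and time budget.
failure_times = [2.0, 5.5, 9.0, 15.5, 26.0]
estimated_testing_time = 40.0
actual_time_spent = 30.0

# Time intervals between successive failures.
intervals = [t2 - t1 for t1, t2 in zip(failure_times, failure_times[1:])]

cumulative_failures = len(failure_times)            # failures experienced so far
avg_interval = sum(intervals) / len(intervals)      # average time between failures
pct_time_spent = actual_time_spent / estimated_testing_time * 100
time_remaining = estimated_testing_time - actual_time_spent

print(cumulative_failures, avg_interval, round(pct_time_spent, 1), time_remaining)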
Process Metrics for Testing

1. Number of test cases designed


2. Number of test cases executed
3. Number of test cases passed
4. Number of test cases failed
5. Test case execution time
6. Total execution time
7. Time spent for the development of a test case
8. Total time spent for the development of all test cases
Additional metrics such as:

1. % of test cases executed


2. % of test cases passed
3. % of test cases failed
4. Total actual execution time / total estimated execution time
5. Average execution time of a test case
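As an example, given hypothetical test case counts and execution times, the percentage metrics above can be computed directly:

# Hypothetical counts and times.
designed, executed, passed, failed = 120, 100, 88, 12
total_execution_time = 250.0        # minutes, actual
estimated_execution_time = 300.0    # minutes, estimated

pct_executed = executed / designed * 100
pct_passed = passed / executed * 100
pct_failed = failed / executed * 100
time_ratio = total_execution_time / estimated_execution_time
avg_time_per_case = total_execution_time / executed

print(pct_executed, pct_passed, pct_failed, time_ratio, avg_time_per_case)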
OBJECT ORIENTED METRICS USED IN TESTING

Object oriented metrics capture many attributes of a software product, and some of them are relevant to testing.

Measuring structural design attributes of a software system, such as

1. Coupling
2. Cohesion
3. Complexity
Coupling Metrics

Coupling affects:
● complexity
● potential for reuse
● maintainability

Coupling metrics use information about attribute usage and method invocations across classes.
Some coupling metrics are
● Coupling between objects
● Data Abstraction Coupling
● Message Passing Coupling
● Response for a Class
● Information flow-based coupling
● Information flow-based inheritance coupling
● Information flow-based non-inheritance coupling
● Fan-in
● Fan-out
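A simple sketch (class names and dependency map below are hypothetical) of how fan-in and fan-out could be counted from a table of which classes each class references:

# Hypothetical "uses" relationships: class -> classes it references.
uses = {
    "Order":    {"Customer", "Invoice"},
    "Invoice":  {"Customer"},
    "Customer": set(),
}

fan_out = {cls: len(deps) for cls, deps in uses.items()}
fan_in = {cls: sum(cls in deps for deps in uses.values()) for cls in uses}

print(fan_out)   # Order references 2 classes (fan-out 2)
print(fan_in)    # Customer is referenced by 2 classes (fan-in 2)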
Cohesion Metrics

➔ Cohesion is a measure of the degree to which the elements of a module are functionally related.
➔ A cohesion measure requires information about attribute usage and method invocations within a class.
➔ In most situations, highly cohesive classes are easy to test.
Cohesion metrics are

● Lack of Cohesion of Methods


● Tight Class Cohesion
● Loose Class Cohesion
● Information based Cohesion
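To illustrate the idea, a sketch of one common formulation of Lack of Cohesion of Methods (a Chidamber-Kemerer style count of method pairs sharing no attributes minus pairs sharing at least one); the class data is hypothetical:

from itertools import combinations

# Hypothetical mapping: method -> attributes of the class it uses.
attrs_used = {
    "deposit":    {"balance"},
    "withdraw":   {"balance"},
    "owner_name": {"owner"},
}

P = Q = 0
for m1, m2 in combinations(attrs_used, 2):
    if attrs_used[m1] & attrs_used[m2]:
        Q += 1      # pair shares at least one attribute
    else:
        P += 1      # pair shares no attributes
lcom = max(P - Q, 0)
print(lcom)         # higher LCOM suggests lower cohesion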
Inheritance Metrics

➔ Inheritance metrics require information about the ancestors and descendants of a class.
➔ They also use information about methods overridden, inherited and added (i.e. neither inherited nor overridden).
➔ If a class has a higher number of children (or sub-classes), more testing may be required for the methods of that class.
Inheritance metrics are
● Number of Children
● Depth of Inheritance Tree
● Number of Parents
● Number of Descendants
● Number of Ancestors
● Number of Methods Overridden
● Number of Methods Inherited
● Number of Methods Added
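For instance, Depth of Inheritance Tree and Number of Children can be read off a class hierarchy; the hierarchy below is a hypothetical sketch using Python introspection:

# Hypothetical class hierarchy.
class A: pass
class B(A): pass
class C(B): pass
class D(A): pass

def depth_of_inheritance_tree(cls):
    # Longest path from cls up to (but not counting) object.
    parents = [b for b in cls.__bases__ if b is not object]
    return 0 if not parents else 1 + max(depth_of_inheritance_tree(p) for p in parents)

def number_of_children(cls):
    return len(cls.__subclasses__())   # direct sub-classes only

print(depth_of_inheritance_tree(C))    # 2
print(number_of_children(A))           # 2 (B and D)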
Size Metrics
➔ Size metrics indicate the length of a class in terms of lines of source code and methods used in the class.
➔ If a class has a larger number of methods with greater complexity,
then more test cases will be required to test that class.
➔ A class with a larger number of public methods will require
thorough testing of public methods as they may be used by other
classes.
Size metrics are

● Number of Attributes per Class


● Number of Methods per Class
● Weighted Methods per Class
● Number of public methods
● Number of non-public methods
● Lines Of Code
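A sketch (hypothetical class) of counting methods per class and splitting them into public and non-public, which feeds several of the size metrics above:

import inspect

# Hypothetical class used only to illustrate the counts.
class Account:
    def deposit(self, amount): ...
    def withdraw(self, amount): ...
    def _audit(self): ...

methods = [name for name, _ in inspect.getmembers(Account, inspect.isfunction)]
public = [name for name in methods if not name.startswith("_")]

print(len(methods))                 # Number of Methods per Class
print(len(public))                  # Number of public methods
print(len(methods) - len(public))   # Number of non-public methods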
What should we measure
during testing?
Measurements During Testing (AbdurRehman, L1F18BSSE0035)

● Time
● Quality of Source Code
● Source Code Coverage
● Test Case Defect Density
● Review Efficiency
Time
We may measure many things during testing with respect to time and some of
them are given as:

● Time required to run a test case


● Total time required to run a test suite
● Time available for testing
● Time interval between failures
● Cumulative failures experienced up to a given time
● Time of failure
● Failures experienced in a time interval
Quality of Source Code:

High quality code means that your source code must perform well with regard to the following:

● Optimization
● Readability
● Maintainability
● Compatibility
● Security
● Understandability

Formula:
QSC = (WDB + WDA) / S

QSC = Quality of source code
WDB = Number of weighted defects found before release
WDA = Number of weighted defects found after release
S = Size of the source code in terms of KLOC
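A small sketch applying the formula with hypothetical counts (defects weighted by severity before summing):

# Hypothetical weighted defect counts and code size.
wdb = 18.0        # weighted defects found before release
wda = 4.0         # weighted defects found after release
size_kloc = 11.0  # size in KLOC

qsc = (wdb + wda) / size_kloc
print(qsc)        # weighted defects per KLOC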
Source Code Coverage:

The higher the value of this metric, the higher the confidence about the effectiveness of a test suite.

Formula:
SCC = (NSC / TNS) * 100

SCC = % of source code coverage
NSC = Number of statements of the source code covered by the test suite
TNS = Total number of statements of the source code
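With hypothetical statement counts:

covered_statements = 850    # NSC
total_statements = 1000     # TNS

scc = covered_statements / total_statements * 100
print(scc)                  # 85.0 (% of source code covered)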
Test Case Defect Density:

The higher the value of this metric, the higher the confidence about the effectiveness of a test suite.

Formula:
TDD = (NFT / NET) * 100

TDD = Test case defect density
NFT = Number of failed test cases
NET = Number of executed test cases
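With hypothetical counts:

failed_cases = 7        # NFT
executed_cases = 140    # NET

tdd = failed_cases / executed_cases * 100
print(tdd)              # 5.0 (% of executed test cases that failed)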
Review Efficiency:

Review efficiency is a metric that gives insight into the quality of the review process carried out during verification. The higher the value of this metric, the better the review efficiency.

Formula:
RE = (NDF / TND) * 100

RE = Review efficiency
NDF = Total number of defects found during review
TND = Total number of project defects
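With hypothetical defect counts:

defects_found_in_review = 45    # NDF
total_project_defects = 60      # TND

review_efficiency = defects_found_in_review / total_project_defects * 100
print(review_efficiency)        # 75.0 (% of defects caught in review)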
Software Quality Attributes Prediction Models

Software quality depends on many attributes such as:


● Reliability
● Maintainability
● Testability
● Complexity etc.
A number of models are available for the prediction of one or more such
attributes of quality.
Reliability Models:

Many models:
● Focus on failures rather than faults.
● Time of failure and time between failures may help us to find the reliability of software.

Most of the models are based on:
● Execution time

Models based on execution time include:
● The basic execution time model
● The logarithmic Poisson execution time model
● The Jelinski-Moranda model
Basic execution time model

This model was established by J.D. Musa in 1979, and it is based on execution time. The basic
execution model is the most popular and generally used reliability growth model, mainly
because:

● It is practical, simple, and easy to understand.


● Its parameters clearly relate to the physical world.
● It can be used for accurate reliability prediction.

The basic execution model determines failure behavior initially using execution time. Execution time may later be converted into calendar time.
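The slide gives no equations; as a sketch, the commonly cited form of the basic model expresses failure intensity and expected failures as functions of execution time, with two parameters (initial failure intensity lam0 and total expected failures nu0), both hypothetical here:

import math

def basic_intensity(tau, lam0, nu0):
    # Failure intensity: lambda(tau) = lam0 * exp(-(lam0 / nu0) * tau)
    return lam0 * math.exp(-(lam0 / nu0) * tau)

def basic_mean_failures(tau, lam0, nu0):
    # Expected cumulative failures: mu(tau) = nu0 * (1 - exp(-(lam0 / nu0) * tau))
    return nu0 * (1 - math.exp(-(lam0 / nu0) * tau))

# Hypothetical parameters: 10 failures/CPU-hour initially, 100 total expected failures.
print(basic_intensity(5.0, 10.0, 100.0))
print(basic_mean_failures(5.0, 10.0, 100.0))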
Logarithmic Poisson execution time model

● The model incorporates both execution time and calendar time components,
each of which is derived.
● The model is evaluated using actual data and compared with other models.
● At larger execution times, the logarithmic Poisson model may give larger values of failure intensity than the basic model.
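Again as a hedged sketch (parameters hypothetical), the usual form of this model lets failure intensity decay exponentially with the expected number of failures, governed by a decay parameter theta:

import math

def log_poisson_mean_failures(tau, lam0, theta):
    # mu(tau) = (1 / theta) * ln(lam0 * theta * tau + 1)
    return (1.0 / theta) * math.log(lam0 * theta * tau + 1.0)

def log_poisson_intensity(tau, lam0, theta):
    # lambda(tau) = lam0 / (lam0 * theta * tau + 1)
    return lam0 / (lam0 * theta * tau + 1.0)

# Hypothetical parameters.
print(log_poisson_mean_failures(5.0, 10.0, 0.02))
print(log_poisson_intensity(5.0, 10.0, 0.02))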
The Jelinski-Moranda (J-M) model

● The Jelinski-Moranda (J-M) model is one of the earliest software reliability models.
● Many existing software reliability models are variants or extensions of this basic
model.
Assumptions
● The number of initial errors is unknown but fixed and constant.
● Each error in the software is independent and equally likely to cause a
failure during a test.
● The software failure rate remains constant over the intervals between fault occurrences.
● The failure rate is proportional to the number of faults that remain in the software.
● A detected error is removed immediately, and no new errors are introduced during the removal of the detected defect.
● Whenever a failure occurs, the corresponding fault is removed with certainty.
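Under these assumptions, the usual J-M formulation gives a hazard rate that drops by a constant amount each time a fault is removed. A sketch with hypothetical parameters N (initial number of faults) and phi (contribution of each remaining fault to the failure rate):

def jm_hazard_rate(i, N, phi):
    # Hazard rate during the i-th failure interval, after i-1 faults were removed:
    # lambda_i = phi * (N - (i - 1))
    return phi * (N - (i - 1))

def jm_expected_interval(i, N, phi):
    # Times between failures are exponentially distributed, so the expected
    # length of the i-th interval is 1 / lambda_i.
    return 1.0 / jm_hazard_rate(i, N, phi)

# Hypothetical: 50 initial faults, phi = 0.01 per remaining fault.
print(jm_hazard_rate(1, 50, 0.01))        # 0.5
print(jm_expected_interval(10, 50, 0.01)) # ~2.44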
Characteristics

● It is a binomial type model.


● It is one of the earliest and most well-known black-box models.
● It tends to yield an over-optimistic reliability prediction.
● It follows a perfect debugging assumption, i.e. the detected fault is removed with certainty; it is a simple model.
Variation In JM Model

● JM model was the first prominent software reliability model.


● Several researchers showed interest and modified this model, using different parameters such as failure rate, perfect debugging, imperfect debugging, number of failures, etc.
An Example Of Fault Prediction Model

As we know, software metrics can be used to:

● Capture the quality of object oriented design and source code.
● Provide a way to evaluate the quality of software.
● Their use in earlier phases of software development can help organizations assess a large software system quickly, at a low cost.
Fault Prediction Model

● Used to identify classes that are prone to have severe faults.


● A model with high sensitivity to faults can be used to focus testing on the parts of the system that are likely to cause failures.
● To help plan and execute testing by focusing resources on the fault-prone parts of the design and source code, models that predict faulty classes should be used.
● We describe models used to find the relationship between object oriented metrics and fault proneness, and how such models can be of great help in planning and executing testing activities.
Way Of Analysis
● We analyze empirically the relationship between object oriented metrics and fault proneness at the class level. Fault proneness is defined as the probability of fault detection in a class.
● We first associated defects with each class according to their severities.
● The value of severity quantifies the impact of the defect on the overall environment, with 1 being most severe and 5 being least severe.
● The next step of our analysis found the combined effect of object oriented metrics on fault proneness of a class at various severity levels. We obtained four multivariate fault prediction models using LR (logistic regression) methods:
➢ First one is for high severity faults.
➢ Second one is for medium severity faults.
➢ Third one is for low severity faults.
➢ Fourth one is for ungraded severity faults.
Validate the findings of our analysis

In order to validate the findings of our analysis, we use commonly used evaluation measures:
● Sensitivity
● Specificity
● Completeness
● Precision
● ROC analysis
Sensitivity

The ratio of number of classes correctly predicted as fault prone to the total number of
classes that are actually fault prone.

Sensitivity (Recall) = TFP / (TFP + FNFP), where TFP is the number of classes correctly predicted as fault prone and FNFP is the number of fault-prone classes wrongly predicted as not fault prone.


Completeness

The number of faults in classes classified fault-prone divided by total number of faults in the
system.
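A small sketch of computing sensitivity and completeness from hypothetical per-class prediction results, following the definitions above:

# Hypothetical rows: (predicted fault prone?, actually fault prone?, faults in class)
classes = [
    (True,  True,  3),
    (True,  False, 0),
    (False, True,  2),
    (True,  True,  1),
    (False, False, 0),
]

tfp  = sum(1 for pred, actual, _ in classes if pred and actual)       # correctly predicted fault prone
fnfp = sum(1 for pred, actual, _ in classes if not pred and actual)   # missed fault-prone classes
sensitivity = tfp / (tfp + fnfp) * 100

faults_in_predicted = sum(f for pred, _, f in classes if pred)
total_faults = sum(f for _, _, f in classes)
completeness = faults_in_predicted / total_faults * 100

print(round(sensitivity, 1))    # 66.7
print(round(completeness, 1))   # 66.7 (4 of 6 faults are in classes classified fault-prone)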
