IT2032 – SOFTWARE TESTING – UNIT I
BRANCH : CSE - VII SEM
CHAPTER 1
INTRODUCTION TO SOFTWARE TESTING
Verification
Verification is usually associated with activities such as inspections and reviews of software deliverables.
Errors
An error is a mistake, misconception, or misunderstanding on the part of a software developer.
Failures
A failure is the inability of a software system or component to perform its required functions within
specified performance requirements.
Test Cases
The usual approach to detecting defects in a piece of software is for the tester to select a set of input
data and then execute the software with the input data under a particular set of conditions. A test case
in a practical sense is a test-related item which contains the following information:
1. A set of test inputs. These are data items received from an external source by the code
under test. The external source can be hardware, software, or human.
2. Execution conditions. These are conditions required for running the test, for example, a
certain state of a database, or a configuration of a hardware device.
3. Expected outputs. These are the specified results to be produced by the code under test.
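Example:
The three parts of a test case can be written down explicitly in code. The following is a minimal sketch in Python; the function under test, discount_price, is a hypothetical example introduced only for illustration.
```python
# A minimal sketch of a test case with its three parts made explicit.
# The code under test (discount_price) is a hypothetical example.

def discount_price(price, is_member):
    """Code under test: members pay 90% of the listed price."""
    return price * 90 // 100 if is_member else price

def test_member_discount():
    # 1. Test inputs: data items received by the code under test.
    price, is_member = 100, True
    # 2. Execution conditions: nothing is required here beyond a
    #    Python interpreter; a real test might require a certain
    #    database state or hardware configuration.
    # 3. Expected output: the specified result for these inputs.
    expected = 90
    assert discount_price(price, is_member) == expected

if __name__ == "__main__":
    test_member_discount()
    print("test passed")
```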
Test
A test is a group of related test cases, or a group of related test cases and test procedures.
Test Oracle
A test oracle is a document or a piece of software that allows testers to determine whether a test has been passed or failed.
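Example:
A test oracle need not be a document; it can itself be a piece of software. A common pattern is to use a trusted reference implementation as the oracle. This minimal sketch assumes a hypothetical fast_sort function under test and uses Python's built-in sorted as the oracle.
```python
import random

# A minimal sketch of a software test oracle: Python's built-in
# sorted() serves as the trusted oracle for a hypothetical
# fast_sort() under test.

def fast_sort(items):
    """Code under test (stand-in implementation for this sketch)."""
    return sorted(items)

def test_against_oracle():
    for _ in range(100):
        data = [random.randint(-50, 50) for _ in range(20)]
        # The oracle determines whether the test passed or failed.
        assert fast_sort(data) == sorted(data)

if __name__ == "__main__":
    test_against_oracle()
    print("all oracle checks passed")
```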
Test Bed
A test bed is an environment that contains all the hardware and software needed to test a software
component or a software system.
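Example:
A minimal, software-only test bed can be set up and torn down in code. The sketch below (all names hypothetical) creates a temporary working directory and configuration file before a test and removes them afterwards; real test beds may also include databases, network services, or hardware simulators.
```python
import os
import tempfile

# A minimal sketch of a software-only test bed: a temporary working
# directory and a configuration file are created before the test and
# removed afterwards.

def set_up_test_bed():
    bed = tempfile.mkdtemp(prefix="testbed_")
    with open(os.path.join(bed, "app.cfg"), "w") as f:
        f.write("mode=test\n")
    return bed

def tear_down_test_bed(bed):
    for name in os.listdir(bed):
        os.remove(os.path.join(bed, name))
    os.rmdir(bed)

if __name__ == "__main__":
    bed = set_up_test_bed()
    try:
        assert os.path.exists(os.path.join(bed, "app.cfg"))
        print("test bed ready at", bed)
    finally:
        tear_down_test_bed(bed)
```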
Software Quality
Two concise definitions for quality are found in the IEEE Standard Glossary of Software
Engineering Terminology:
1. Quality relates to the degree to which a system, system component, or process meets
specified requirements.
2. Quality relates to the degree to which a system, system component, or process meets customer or user needs or expectations.
In order to determine whether a system, system component, or process is of high quality, we use what are called quality attributes. These are characteristics that reflect quality. For software artifacts, we can measure the degree to which they possess a given quality attribute with quality metrics.
Quality metrics
A metric is a quantitative measure of the degree to which a system, system component, or process
possesses a given attribute.
There are product and process metrics. A very commonly used example of a software product metric
is software size, usually measured in lines of code (LOC). Two examples of commonly used process
metrics are costs and time required for a given task. Quality metrics are a special kind of metric.
A quality metric is a quantitative measurement of the degree to which an item possesses a given
quality attribute.
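Example:
The product metric mentioned above, lines of code, can be computed mechanically. The sketch below counts non-blank, non-comment lines of a Python source file; what counts as a "line of code" varies between organizations, so this is only one possible convention.
```python
# A minimal sketch of computing a simple product metric: lines of
# code (LOC). Here LOC counts non-blank lines that are not pure
# comment lines; other organizations use other conventions.

def count_loc(path):
    loc = 0
    with open(path) as f:
        for line in f:
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                loc += 1
    return loc

if __name__ == "__main__":
    import sys
    print(count_loc(sys.argv[1]), "LOC")
```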
Many different quality attributes have been described for software. Some examples of quality
attributes with brief explanations are the following:
correctness—the degree to which the system performs its intended function
reliability—the degree to which the software is expected to perform its required functions
under stated conditions for a stated period of time
usability—relates to the degree of effort needed to learn, operate, prepare input, and interpret
output of the software
integrity—relates to the system’s ability to withstand both intentional and accidental attacks
portability—relates to the ability of the software to be transferred from one environment to
another
maintainability—the effort needed to make changes in the software
interoperability—the effort needed to link or couple one system to another.
Another quality attribute that should be mentioned here is testability. This attribute is of more interest to developers/testers than to clients. It can be expressed in the following two ways:
1. the amount of effort needed to test the software to ensure it performs according to specified requirements (relates to the number of test cases needed),
2. the ability of the software to reveal defects under testing conditions (some software is designed in such a way that defects are well hidden during ordinary testing conditions).
Testers must work with analysts, designers, and developers throughout the software life cycle to ensure that testability issues are addressed.
Reviews
In contrast to dynamic execution-based testing techniques that can be used to detect defects and evaluate software quality, reviews are a type of static testing technique that can be used to evaluate the quality of a software artifact such as a requirements document, a test plan, a design document, or a code component. Reviews are also a tool for revealing defects in these types of documents.
Definition: A review is a group meeting whose purpose is to evaluate a software artifact or a set of
software artifacts.
CHAPTER 2
TESTING FUNDAMENTALS
Principle-1: Testing is the process of exercising a software component using a selected set of test cases, with the intent of
revealing defects, and
evaluating quality.
Software engineers have made great progress in developing methods to prevent and eliminate defects. However, defects do occur, and they have a negative impact on software quality.
This principle supports testing as an execution-based activity to detect defects.
The term defect, as used in this and subsequent principles, represents any deviation in the software that has a negative impact on its functionality, performance, reliability, security, or other specified quality attributes.
Principle-2: When the test objective is to detect defects, then a good test case is one that has a high probability of revealing a yet-undetected defect.
The goal for the test is to prove or disprove a hypothesis, that is, to determine whether the specific defect is present or absent.
Careful test design in support of this principle lets a tester justify the expenditure of testing resources.
The test case is of no value unless there is an explicit statement of the expected outputs or
results.
Example:
A specific variable value must be observed, or a certain panel button must light up.
Principle-5: Test cases should be developed for both valid and invalid input conditions.
The tester must not assume that the software under test will always be provided with valid
inputs.
Inputs may be incorrect for several reasons.
Example:
Software users may have misunderstandings or lack information about the nature of the inputs. They often make typographical errors even when complete and correct information is available. Devices may also provide invalid inputs due to erroneous conditions and malfunctions.
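Example:
This principle can be made concrete in code. The sketch below exercises a hypothetical parse_age function with both valid and invalid inputs; the invalid cases check that the software rejects bad data deliberately rather than failing unpredictably.
```python
# A minimal sketch of Principle-5: test cases for both valid and
# invalid input conditions. parse_age is a hypothetical function,
# not from the text.

def parse_age(text):
    """Code under test: convert user input to an age in years."""
    age = int(text)          # raises ValueError for non-numeric text
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

def test_valid_inputs():
    assert parse_age("0") == 0
    assert parse_age("42") == 42

def test_invalid_inputs():
    # Typographical errors and out-of-range values: the software
    # must reject these, not crash or return nonsense.
    for bad in ["forty-two", "", "-1", "999"]:
        try:
            parse_age(bad)
        except ValueError:
            continue
        raise AssertionError(f"invalid input accepted: {bad!r}")

if __name__ == "__main__":
    test_valid_inputs()
    test_invalid_inputs()
    print("all tests passed")
```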
Principle-7: Testing should be carried out by a group that is independent of the development
group.
Testers must realize that:
1. developers have a great deal of pride in their work, and
2. on a practical level, it may be difficult for them to conceptualize where defects could be found.
Principle-10: Testing activities should be integrated into the software life cycle.
It is no longer feasible to postpone testing activities until after the code has been written.
Test planning activities should be integrated into the software life cycle, starting as early as the requirements analysis phase, and should continue throughout the life cycle in parallel with development activities.
CHAPTER 3
DEFECT CLASSES AND DEFECT REPOSITORY
3.1 DEFECTS
Origins of Defects
Defects have detrimental effects on software users, and software engineers work very hard to produce high-quality software with a low number of defects.
But even under the best of development circumstances errors are made, resulting in defects being injected into the software during the phases of the software life cycle.
[Figure: Origins of defects]
Defect sources: lack of education, poor communication, oversight, transcription, and an immature process.
These sources lead to errors; errors produce faults (defects) in the software artifacts; faults, when executed, cause failures.
Impact from the user's view: poor-quality software and user dissatisfaction.
Testers, like doctors, need to have knowledge about possible defects in order to develop defect hypotheses. They use these hypotheses to:
Design test cases
Design test procedures
Assemble test sets
Select the testing levels appropriate for the tests
Evaluate the results of the tests.
A successful testing experiment will prove that the hypothesis is true, that is, that the hypothesized defect was present. The software can then be repaired.
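Example:
A tester might hypothesize an off-by-one defect at a boundary and design a test to prove or disprove that hypothesis. In this minimal sketch, in_working_hours is a hypothetical function with the defect seeded in; running the test raises an AssertionError, confirming that the hypothesized defect is present.
```python
# A minimal sketch of testing a defect hypothesis. Hypothesis: the
# developer wrote "<" instead of "<=" at the upper boundary, so the
# last valid hour is wrongly rejected.

def in_working_hours(hour):
    """Code under test: working hours are 9..17 inclusive."""
    return 9 <= hour < 17   # seeded defect: should be "<= 17"

def test_upper_boundary():
    # Probing the boundary directly gives this test a high
    # probability of revealing the hypothesized defect.
    assert in_working_hours(17), "hour 17 should be accepted"

if __name__ == "__main__":
    try:
        test_upper_boundary()
        print("hypothesis disproved: no defect found")
    except AssertionError as e:
        print("hypothesis confirmed, defect present:", e)
```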
[Figure: The defect repository]
Requirement/specification defect classes (functional description, features, feature interaction, interface description) feed into a defect repository, which records each defect class together with its severity and number of occurrences, and which supports defect reports and analysis.
Defect classes
Defect classes are classified into four types, namely:
1. requirement/specification defect class
2. design defect class
3. coding defect class
4. testing defect class
1. Requirement/Specification Defects
Some requirement/specification defects are:
1. Functional description defects - the overall description of what the product does is incorrect, ambiguous, and/or incomplete.
2. Feature defects - these occur in the descriptions of the product's individual features.
3. Feature interaction defects - these are due to an incorrect description of how the features should interact.
4. Interface description defects - these occur in the description of how the target software is to interface with external software, hardware, and users.
2. Design Defects
Some design defects are:
1. Algorithm and processing defects - these occur when the processing steps in the algorithm, as described by the pseudo code, are incorrect.
2. Control, logic, and sequence defects - control defects occur when the logic flow in the pseudo code is not correct.
3. Data defects - these are associated with incorrect design of data structures.
4. Module interface description defects - these include incorrect, missing, and/or inconsistent descriptions of parameter types.
5. Functional description defects - these include incorrect, missing, and/or unclear descriptions of design elements. These defects are best detected during a design review.
6. External interface description defects - these are derived from incorrect design descriptions for interfaces with COTS components, external software systems, databases, and hardware devices.
3. Coding Defects
1. Algorithmic and processing defects - adding levels of programming detail to the design, code-related algorithmic and processing defects now include unchecked overflow and underflow conditions, comparing inappropriate data types, converting one data type to another, incorrect ordering of arithmetic operators, misuse or omission of parentheses, precision loss, and incorrect use of signs (a precision-loss sketch appears after this list).
2. Control, logic, and sequence defects - on the coding level these would include incorrect expression of case statements and incorrect iteration of loops.
3. Typographical defects - these are syntax errors.
4. Initialization defects - these occur when initialization statements are omitted or are incorrect. This may happen because of misunderstandings or lack of communication between programmers and/or between programmers and designers, or carelessness in using the programming environment.
5. Data flow defects - there are certain reasonable operational sequences that data should flow through (for example, a variable should be defined before it is used); data flow defects occur when these sequences are violated (a sketch appears after this list).
6. Data defects - these are indicated by incorrect implementation of data structures.
7. Module interface defects - as in the case of module design elements, interface defects in the code may be due to using incorrect or inconsistent parameter types or an incorrect number of parameters.
8. Code documentation defects - when the documentation does not reflect what the program actually does, or is incomplete or ambiguous, this is called a code documentation defect.
9. External hardware and software interface defects - these defects arise from problems related to system calls, links to databases, input/output sequences, memory usage, interrupts and exception handling, data exchanges with hardware, protocols, formats, interfaces with build files, and timing sequences.
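Example:
Two of the coding defect classes above are easy to demonstrate in code. Both sketches below use hypothetical functions with seeded defects. The first shows a precision-loss defect: binary floating point used for money.
```python
# A precision-loss defect: binary floating point used for currency.
# 0.1 and 0.2 have no exact binary representation, so sums drift.

def add_money_defective(a, b):
    return a + b              # defect: float arithmetic for money

def add_money_fixed(a_cents, b_cents):
    return a_cents + b_cents  # fix: integer cents, exact arithmetic

if __name__ == "__main__":
    print(add_money_defective(0.1, 0.2))         # 0.30000000000000004
    assert add_money_defective(0.1, 0.2) != 0.3  # defect exposed
    assert add_money_fixed(10, 20) == 30         # exact with cents
```
The second shows a data flow defect: a variable is used on a path where it was never assigned.
```python
# A data flow defect: on one path, "rate" is used before it has
# been assigned a value. find_rate is a hypothetical function.

def find_rate(kind):
    if kind == "standard":
        rate = 1.0
    elif kind == "premium":
        rate = 1.5
    # defect: no assignment of "rate" on any other path
    return rate               # UnboundLocalError for other kinds

if __name__ == "__main__":
    assert find_rate("standard") == 1.0
    try:
        find_rate("unknown")  # drives the defective define-use path
    except UnboundLocalError:
        print("data flow defect revealed: rate used before assignment")
```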
4. Testing Defects
Defects are not confined to code and its related artifacts. Test plans, test cases, test harnesses, and test procedures can also contain defects. Defects in test plans are best detected using review techniques.
1. Test harness defects - the test harness is auxiliary code developed to support testing; like any other software, it can itself contain defects (a sketch of a defective test appears below).
2. Test case design and test procedure defects - these encompass incorrect, incomplete, missing, or inappropriate test cases and test procedures.
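Example:
A common defect in a test itself is one that cannot fail. The sketch below (all names hypothetical) shows a test that passes vacuously because its loop body never executes, alongside a corrected version.
```python
# A minimal sketch of a defect in a test: the input set is empty,
# so the assertion never runs and the test "passes" vacuously.

def double(x):
    return x + x

def test_double_defective():
    for value in []:                 # defect: empty input set
        assert double(value) == 2 * value

def test_double_fixed():
    values = [0, 1, -3, 7]
    assert values, "test data must not be empty"
    for value in values:
        assert double(value) == 2 * value

if __name__ == "__main__":
    test_double_defective()          # passes, but tested nothing
    test_double_fixed()
    print("fixed test actually exercised the code")
```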