Lec 10
Testing is a process of executing a program with the intent of finding an error.
The software testing process has two distinct objectives:
1. To demonstrate to the developer and the customer that the software meets its requirements (validation testing)
2. To discover faults or defects in the software, where the behaviour of the software is incorrect, undesirable, or does not conform to its specification (defect testing)
Basic Principles of Software Testing
• All tests should be traceable to customer requirements.
• Tests should be planned long before testing begins
• The Pareto principle (80:20 rule) applies to software
testing.
• Testing should begin “in the small” and progress toward
testing “in the large.”
• Exhaustive testing is not possible.
• To be most effective, testing should be conducted by an
independent third party.
The software testing process
Black box testing
• Also known as behavioral testing
• Focuses more on the functional requirements of
the software
• To determine if input is properly accepted and
output is correctly produced
• It has little regard for the internal logical structure
of the software and is applied at the later stages
of software testing
Black box testing
• Black-box testing attempts to find errors in the
following categories:
– incorrect or missing functions,
– interface errors,
– errors in data structures or external database access,
– behaviour or performance errors,
– system initialization and termination errors.
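As a sketch, a black-box test in Python exercises only the specified input-output behaviour; the `leap_year()` function here is a hypothetical component under test, and its internals are deliberately irrelevant to the tester:

```python
def leap_year(year: int) -> bool:
    """Implementation under test; its internal logic is not examined."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def test_leap_year():
    # Each case pairs an input with the output the specification requires.
    cases = [(2000, True), (1900, False), (2024, True), (2023, False)]
    for year, expected in cases:
        assert leap_year(year) == expected, f"failed for {year}"

test_leap_year()
```

The test would stay valid even if the implementation were completely rewritten, which is the defining property of behavioural testing.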
Black-box testing
[Figure: black-box testing model. Input test data (Ie) is fed to the system, and the outputs are examined to identify the inputs that cause anomalous behaviour.]
Testing Strategy
Testing strategy
• A strategy for software testing integrates the design of software
test cases into a well-planned series of steps that result in
successful development of the software
General Characteristics of Strategic Testing
• Testing begins at the component level and works outward
toward the integration of the entire computer-based system
[Figure: the testing strategy spiral. Development proceeds inward from system engineering through requirements analysis and design to code (abstract to concrete), while testing proceeds outward from unit testing through integration testing and validation testing to system testing (narrow to broad in scope).]
Levels of Testing for Conventional
Software
1. Unit testing
Concentrates on each component/function of the software as
implemented in the source code
2. Integration testing
Focuses on the design and construction of the software
architecture
3. Validation testing
Requirements are validated against the constructed software
4. System testing
The software and other system elements are tested as a whole
1. Unit Testing
• Focuses testing on the function or software module
• Concentrates on the internal processing logic and data
structures
• Is simplified when a module is designed with high
cohesion
– Reduces the number of test cases
– Allows errors to be more easily predicted and uncovered
Unit Test Environment
[Figure: unit-test environment. A driver feeds test cases to the module under test; the module's interface, local data structures, boundary conditions, independent paths, and error-handling paths are exercised; stubs replace subordinate modules; results are collected and evaluated.]
Drivers and Stubs for
Unit Testing
• Driver
– A simple main program that accepts test case data, passes such
data to the component being tested, and prints the returned results
• Stubs
– Serve to replace modules that are subordinate to (called by) the
component to be tested
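A minimal Python sketch of both roles, with hypothetical names: the stub returns a canned answer in place of the real subordinate module, and the driver feeds test-case data to the component and prints the returned results:

```python
def fetch_total_stub(account_id):
    # Stub: stands in for the real subordinate module, which may not
    # even exist yet, by returning a canned answer.
    return 150.0

def format_balance(account_id, fetch_total=fetch_total_stub):
    # Component under test; the subordinate call is injected so the
    # stub can replace it during unit testing.
    total = fetch_total(account_id)
    return f"Account {account_id}: {total:.2f}"

def driver():
    # Driver: a simple main program that passes test-case data to the
    # component and prints the returned results.
    for account_id in ("A1", "B2"):
        print(format_balance(account_id))  # prints "Account A1: 150.00" etc.

driver()
```

In a real project the driver is usually a test framework (e.g. unittest or pytest) rather than a hand-written main program, but the division of labour is the same.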
Targets for Unit Test Cases
• Module interface
– Ensure that information flows properly into and out of the module
• Local data structures
– Ensure that data stored temporarily maintains its integrity during all
steps in an algorithm execution
• Boundary conditions
– Ensure that the module operates properly at boundary values
established to limit or restrict processing
• Independent paths
– Paths are exercised to ensure that all statements in a module have been
executed at least once
• Error handling paths
– Ensure that the algorithms respond correctly to specific error conditions
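Several of these targets can be covered with a handful of boundary-value cases. A minimal sketch, assuming a hypothetical grade() function specified for scores 0 to 100 with a pass mark of 50:

```python
def grade(score: int) -> str:
    # Function under test: valid scores are 0..100, pass mark is 50.
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Exercise values at, and immediately either side of, each boundary.
assert grade(0) == "fail" and grade(49) == "fail"
assert grade(50) == "pass" and grade(100) == "pass"

# Error-handling path: out-of-range inputs must be rejected, not processed.
for bad in (-1, 101):
    try:
        grade(bad)
        raise AssertionError("out-of-range score was accepted")
    except ValueError:
        pass
```

Errors cluster at such limits (off-by-one comparisons, wrong inclusive/exclusive bounds), which is why boundary values are tested explicitly rather than sampled at random.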
Common Computational Errors
in Execution Paths
• Misunderstood or incorrect arithmetic precedence
• Mixed mode operations (e.g., int, float, char)
• Incorrect initialization of values
• Precision inaccuracy and round-off errors
• Incorrect symbolic representation of an expression
(int vs. float)
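Two of these errors reproduced in Python (illustrative snippets, not taken from any particular program):

```python
# Misunderstood arithmetic precedence: the intent was (a + b) / 2.
a, b = 4, 6
wrong_avg = a + b / 2      # evaluates as a + (b / 2), giving 7.0
right_avg = (a + b) / 2    # the intended mean, 5.0

# Mixed-mode / representation error: integer division truncates,
# while true division keeps the fractional part.
ratio_int = 7 // 2         # 3
ratio_float = 7 / 2        # 3.5
```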
Other Errors to Uncover
• Comparison of different data types
• Incorrect logical operators or precedence
• Expectation of equality when precision error makes
equality unlikely (using == with float types)
• Incorrect comparison of variables
• Improper or nonexistent loop termination
• Failure to exit when divergent iteration is encountered
• Boundary value violations
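The floating-point equality pitfall is easy to demonstrate; in Python, math.isclose is the standard-library way to compare within a tolerance instead of using ==:

```python
import math

# Round-off error defeats the expectation of exact equality.
x = 0.1 + 0.2                 # actually 0.30000000000000004 in binary
naive_equal = (x == 0.3)      # False: precision error makes == unreliable
robust_equal = math.isclose(x, 0.3)  # True: comparison within a tolerance
```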
2. Integration Testing
• Defined as a systematic technique for constructing the software
architecture
– At the same time integration is occurring, conduct tests to uncover
errors associated with interfacing
Objective:
To take unit-tested modules/components and build a program
structure based on the prescribed design
Two Approaches
– Non-incremental Integration Testing
– Incremental Integration Testing
Non-incremental Integration Testing
• Commonly called the “Big Bang” approach
• All components are combined in advance
• The entire program is tested as a whole
• Chaos results
• Many seemingly-unrelated errors are encountered
• Correction is difficult because isolation of causes is
complicated
• Once a set of errors is corrected, more errors occur, and testing
appears to enter an endless loop
Incremental Integration Testing
• Three kinds
– Top-down integration
• Develop the skeleton of the system and populate it with components.
– Bottom-up integration
• Integrate infrastructure components first, then add functional components.
– Sandwich integration
• Combine top-down integration of the upper levels with bottom-up integration of the lower levels.
• The program is constructed and tested in small increments
• Errors are easier to isolate and correct
• Interfaces are more likely to be tested completely
• A systematic test approach is applied to simplify error localisation.
Incremental integration testing
[Figure: incremental integration. Test sequence 1 integrates components A and B and runs tests T1–T3; sequence 2 adds component C and re-runs T1–T3 plus a new test T4; sequence 3 adds component D and re-runs T1–T4 plus a new test T5.]
Regression Testing
• Each new addition or change to baselined software may cause
problems with functions that previously worked flawlessly
• Regression testing is the re-execution of a subset of tests that have
already been conducted, to ensure that changes have not propagated
unintended side effects
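A minimal regression-suite sketch (the discount() function and its cases are hypothetical): the whole suite is re-run after every change, so a modification in one place cannot silently break behaviour that previously worked:

```python
def discount(price, customer_type):
    # Recently changed function: suppose the "gold" rate was just
    # raised to 20%; all older cases must still pass.
    rate = {"gold": 0.20, "regular": 0.05}.get(customer_type, 0.0)
    return round(price * (1 - rate), 2)

REGRESSION_SUITE = [
    ((100.0, "gold"), 80.0),      # case covering the new behaviour
    ((100.0, "regular"), 95.0),   # previously passing cases, kept forever
    ((100.0, "unknown"), 100.0),
]

def run_regression():
    # Return every (args, expected) pair that no longer holds.
    return [(args, exp) for args, exp in REGRESSION_SUITE
            if discount(*args) != exp]

assert run_regression() == []     # empty list: the change broke nothing
```

The key discipline is that old cases are never deleted when behaviour is extended; they are the record of what "previously worked flawlessly".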
Validation through Acceptance
testing
It is virtually impossible for software developers to
foresee how the customer will really use the system
Acceptance testing
• When custom software is built for one customer,
a series of acceptance tests is conducted to
enable the customer to validate all requirements.
Alpha Testing
– Conducted at the developer’s site by a customer
Beta Testing
– Conducted at one or more customers’ site
4. System testing
• It is black-box testing to validate the overall accuracy and
completeness of the system in performing its functions as designed or
specified.
• Must ensure that all unit and integration test results are
reviewed and that all problems are resolved.
Types of system testing
i. Recovery testing
ii. Security testing
iii. Stress testing
iv. Performance testing
i. Recovery testing
Forces the software to fail in a variety of ways and verifies that
recovery is properly performed
ii. Security testing
Verifies that the protection mechanisms built into a system will
protect it from improper access by hackers and fraudsters
iii. Stress testing
• Executes a system in a manner that demands resources in abnormal
quantity, frequency, or volume
• Stressing the system often causes defects to come to light.
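A sketch of the idea, using a hypothetical BoundedBuffer driven at ten times its rated capacity so the rarely exercised overflow path is forced to execute:

```python
class BoundedBuffer:
    """Toy component with a capacity limit, used as the stress target."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def push(self, item):
        if len(self.items) >= self.capacity:
            raise OverflowError("buffer full")
        self.items.append(item)

def stress(buffer, load):
    # Drive the buffer far beyond normal load and count rejections.
    rejected = 0
    for i in range(load):
        try:
            buffer.push(i)
        except OverflowError:
            rejected += 1
    return rejected

# 10x the rated capacity: the overflow path must fire, and the buffer
# must still hold exactly its capacity with no corruption.
buf = BoundedBuffer(capacity=100)
assert stress(buf, 1000) == 900
assert len(buf.items) == 100
```

Real stress tests apply the same principle to whole systems: abnormal volume, frequency, or rate of input, with checks that the system degrades gracefully rather than corrupting state.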
Debugging
• Occurs as a consequence of successful testing.
i.e. when a test case uncovers an error,
debugging is the process that results in the
removal of the error.
Debugging Process
• Debugging occurs as a consequence of successful testing
• It is still very much an art rather than a science
Why is Debugging so Difficult?
• The symptom and the cause may be geographically remote.
i.e., the symptom may appear in one part of a program, while the cause
may actually be located at a site that is far removed. Highly coupled
program structures exacerbate this situation.
• It may be difficult to accurately reproduce input
conditions, such as asynchronous real-time information
• The symptom may be irregular such as in embedded
systems involving both hardware and software
• The symptom may be due to causes that are distributed
across a number of tasks running on different processors
Debugging Approaches
The objective of debugging is to find and correct the cause of a
software error
#2: Backtracking
• Can be used successfully in small programs
• The method starts at the location where a symptom has
been uncovered
• The source code is then traced backward (manually) until
the location of the cause is found
#3: Cause Elimination
• Involves the use of induction or deduction
• Data related to the error occurrence are organized to isolate
potential causes
Three Questions to ask Before
Correcting the Error
• Is the cause of the bug reproduced in another part of the
program?
– Similar errors may be occurring in other parts of the program
• What next bug might be introduced by the fix that I’m about to
make?
– The source code (and even the design) should be studied to assess the coupling
of logic and data structures related to the fix
• What could we have done to prevent this bug in the first place?
– This is the first step toward software quality assurance
– By correcting the process as well as the product, the bug will be removed from
the current program and may be eliminated from all future programs
Ensuring a Successful Software Test Strategy
• State testing objectives explicitly in measurable terms
• Understand the user of the software (through use cases) and
develop a profile for each user category
• Develop a testing plan that emphasizes “rapid cycle testing” (to
get quick feedback to control quality levels and adjust the test
strategy)
• Build “robust” software that is designed to test itself and can
diagnose certain kinds of errors
• Use effective formal technical reviews as a filter prior to testing
to reduce the amount of testing required
• Conduct formal technical reviews to assess the test strategy and
test cases themselves
• Develop a continuous improvement approach for the testing
process through the gathering of metrics
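The "software that tests itself" point can be sketched with built-in self-checks; the sort_scores() function below is hypothetical, and the technique is simply asserting postconditions so that certain errors are diagnosed at the point of failure rather than surfacing later:

```python
from collections import Counter

def sort_scores(scores):
    result = sorted(scores)
    # Built-in self-checks (postconditions): a violation produces an
    # immediate, specific diagnosis instead of a silently wrong answer.
    assert all(result[i] <= result[i + 1] for i in range(len(result) - 1)), \
        "self-test failed: output not ordered"
    assert Counter(result) == Counter(scores), \
        "self-test failed: elements lost or duplicated"
    return result
```

In languages with an assertion facility, such checks can be left enabled in test builds and disabled in production, so self-diagnosis costs nothing once the software ships.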
Key points
• The objective of software testing is to uncover errors. To
fulfil this objective, a series of test steps (unit, integration,
validation, and system tests) is planned and executed.
• Testing can show the presence of faults in a system; it
cannot prove there are no remaining faults.
• Testing is a continuous process; test as early as possible.
• Test activities must be carefully planned, controlled, and
documented.
• Unlike testing (a systematic, planned activity), debugging
must be viewed as an art. Beginning with a symptomatic
indication of a problem, the debugging activity must track
down the cause of the error.