Summary of Testing Phases
This lesson sums up all the testing methods discussed in the previous
chapters and consolidates the testing phases and their procedures. By the
end of this lesson, the reader should be able to understand the testing
phases, the multiphase testing model, and working across multiple releases
of a software product.
The IEEE standards are the most widely accepted in the software testing
industry. However, it is not mandatory that every software testing process
follow them. Software testing has several distinct phases, such as the test
planning, test specification, and test reporting phases.
Test planning is the most important phase in the software testing process. It
sets the process rolling and describes the scope of the testing assignment, the
approach and methodology, the resource requirements for testing, and the
project plan or time schedule. The test plan outlines the test items, the system
features to be tested (checking out the functionality of the system), the testing
tasks, the responsibility matrix, and the risks associated with the process. The
testing task is accomplished by testing with different types of test data. The
steps followed in system testing are program testing, string testing, system
testing, system documentation, and user acceptance testing.
The test specification document helps refine the test approach that has been
planned for executing the test plan. It identifies the test cases, procedures, and
the pass/fail criteria for the assignment. The test case specification document
outlines the actual values required as input parameters in the testing process
and the expected outputs of the testing. It also identifies the various
constraints related to the test case. It is important to note that test cases are
reusable components, and one test case can be used in various test designs.
The test procedure outlines all the processes required to test the system and
implement the test cases.
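A test case specification of this kind (input values, expected output, constraints, pass/fail criterion) can be sketched as a small reusable data structure. The following is a minimal illustration in Python; the TestCase class, the run_case helper, and the add function under test are hypothetical names chosen for this example, not part of any standard.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One reusable test case: inputs, expected output, constraints."""
    case_id: str
    inputs: dict            # actual values required as input parameters
    expected: object        # expected output of the test
    constraints: list = field(default_factory=list)

def run_case(case: TestCase, func) -> str:
    """Execute the case against a function under test; pass/fail criterion
    here is simple equality with the expected output."""
    actual = func(**case.inputs)
    return "PASS" if actual == case.expected else "FAIL"

# Hypothetical function under test; the same test case could be reused
# in other test designs against other implementations.
def add(a, b):
    return a + b

case = TestCase("TC-001", inputs={"a": 2, "b": 3}, expected=5)
print(case.case_id, run_case(case, add))  # TC-001 PASS
```

Because the case carries its own inputs and expected output, it can be executed unchanged in different test designs, which is what makes test cases reusable components.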
During the testing phase, all the activities that occur are documented. There
are several reasons why clear documentation is required during testing. It
helps the development team understand the bugs and fix them quickly. In case
there is a change in the testing team, it helps the new team members quickly
understand the process and supports a smooth transition. The overall
summary report of the testing process helps the entire project team
understand the initial flaws in design and development and ensures that the
same errors are not repeated. There are four types of testing documents: the
transmittal report, which specifies the testing events being transmitted from
the development team to the testing team; the test log, a very important
document used to record the events that happened during execution; the test
incident report, which lists the testing events that require further
investigation; and the test summary report, which summarizes the overall
testing activities.
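The relationship between the test log, the test incident report, and the test summary report can be sketched in a few lines. The Python below is a minimal illustration; the function names and record fields are assumptions made for this example, not a prescribed document format.

```python
from datetime import datetime

test_log = []  # the test log records events as they happen during execution

def log_event(case_id, outcome, note=""):
    """Append one execution event to the test log."""
    test_log.append({"time": datetime.now().isoformat(),
                     "case": case_id, "outcome": outcome, "note": note})

def incident_report():
    """Events that require further investigation (here: failures)."""
    return [e for e in test_log if e["outcome"] == "FAIL"]

def summary_report():
    """Overall summary of the testing activities."""
    total = len(test_log)
    passed = sum(1 for e in test_log if e["outcome"] == "PASS")
    return {"total": total, "passed": passed, "failed": total - passed}

log_event("TC-001", "PASS")
log_event("TC-002", "FAIL", "timeout on login screen")
print(summary_report())  # {'total': 2, 'passed': 1, 'failed': 1}
```

The summary report is derived entirely from the log, which is why keeping the log complete during execution matters: the other documents can always be regenerated from it.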
Many software testing companies follow the IEEE standard of software testing
when executing their testing projects. Software application development
companies may have their own testing templates, which they use for their
testing requirements. Outsourcing the testing requirements to a third-party
vendor helps improve the quality of the software to a great extent. An
unbiased view also helps uncover the many loopholes that exist in the
software system.
Too mild entry criteria allow a test phase to start before the earlier phase has
met its quality requirements, forcing repeated bug fixes. This results in
releasing a poor-quality product and a lack of ownership of issues. It also
causes test cases to be repeated at various phases of testing when the quality
requirements of a phase are not met. Having too strict entry criteria solves
this problem, but the resulting lack of parallelism delays the release of the
product. These two extreme situations are depicted in figure 15.1.
The right approach is to let product quality decide when to start a phase: the
entry criteria should both enforce the quality requirements of a particular
phase and exploit the earliest opportunity for starting that phase. The team
performing the earlier phase has the ownership to meet the entry criteria of
the following phase.
Some sample entry and exit criteria are given in tables 15.1 through 15.3.
Note that there are no separate entry and exit criteria for unit testing, as it
starts as soon as the code is ready to compile, and the entry criteria for
component testing can serve as the exit criteria for unit testing. However,
unit test regression continues until the product is released. The criteria given
below let product quality decide when to start and complete the test phases,
and they create many avenues for parallelism among test phases. The
following figure 15.1 illustrates the three possible entry criteria for the
testing phases.
[Figure 15.1 shows three timeline charts of the testing phases (unit testing,
component testing, integration testing, system testing, and acceptance
testing) scheduled under too strict, too mild, and optimized entry criteria.]
Figure 15.1 Possible entry criteria for testing phases
The three time charts above show when each testing phase should be started
and when it should be ended. The entry criteria are classified as optimized,
too mild, and too strict. Depending on the requirements and time schedule,
the software test engineer can follow one of the above.
15.2 Entry/Exit Criteria for Testing Models
Table 15.1 Sample entry and exit criteria for component testing

Component Testing
Entry Criteria:
- Periodic unit test progress report showing 70% completion rate
- Stable build (installable) with basic features working
Exit Criteria:
- No extreme and critical outstanding defects in features
- All 100% component test cases executed with at least 98% pass ratio
Table 15.2 Sample entry and exit criteria for integration testing

Integration Testing
Entry Criteria:
- Periodic component test progress report (with at least 50% completion
  ratio) with at least 70% pass rate
Exit Criteria:
- No extreme and critical outstanding defects in features
Table 15.3 Sample entry and exit criteria for acceptance testing

Acceptance Testing
Entry Criteria:
- Periodic integration test progress report with at least 50% pass rate for
  starting system testing, 90% pass rate for starting acceptance testing
- Stable build (production format) with all features integrated
- No extreme and critical defects outstanding
Exit Criteria:
- All 100% system test cases executed with at least 98% pass ratio
- All 100% acceptance test cases executed with 100% pass rate
- Test summary reports of all phases consolidated (periodic) and analyzed,
  with the defect trend showing a downward trend for the last four weeks
- Performance and load test reports for all critical features and the system
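Exit criteria of this kind can be checked mechanically at the end of each reporting period. The sketch below is a minimal Python illustration of the component testing exit check (all test cases executed, at least 98% pass ratio, no extreme or critical outstanding defects); the function name and inputs are hypothetical, chosen for this example.

```python
def component_exit_met(executed, total, passed, critical_defects):
    """Check the sample component testing exit criteria:
    all test cases executed, pass ratio >= 98%, and no extreme or
    critical outstanding defects."""
    if executed < total:          # not all 100% of test cases executed
        return False
    if critical_defects > 0:      # outstanding extreme/critical defects
        return False
    return passed / executed >= 0.98

# 197/200 = 98.5% pass ratio, no critical defects: criteria met
print(component_exit_met(executed=200, total=200, passed=197,
                         critical_defects=0))  # True
# 195/200 = 97.5% pass ratio: criteria not met
print(component_exit_met(executed=200, total=200, passed=195,
                         critical_defects=0))  # False
```

Encoding the criteria as an executable check makes the phase transition a decision driven by measured product quality rather than by the calendar, which is the intent of the tables above.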
Figure 15.2 Testing with multiple releases
- Compatibility (forward/backward)
- Localization testing
- Interoperability
- API/interface testing
- Performance testing
- Load testing
- Reliability
2. Specified components. A software component must have a specification
in order to be tested. Given any initial state of the component, in a
defined environment, for any fully-defined sequence of inputs and any
observed outcome, it shall be possible to establish whether or not the
component conforms to the specification.
Dynamic execution. The focus must be on dynamic execution and on the
analysis of the results of execution.
Techniques and measures. Define test case design techniques and test
measurement techniques. The techniques are defined to help users of this
Standard design test cases and quantify the testing performed. The definition
of test case design techniques and measures provides a common
understanding in both the specification and the comparison of software
testing.
Test process attributes. Describe attributes of the test process that indicate
the quality of the testing performed. These attributes are selected to provide
a means of assessing, comparing, and improving test quality.
Generic test process. Define a generic test process. A generic process is
chosen to ensure that this Standard is applicable to the diverse requirements
of the software industry.
4. Performance Testing covers a broad range of engineering or functional
evaluations where a material, product, system, or person is not specified
by detailed material or component specifications; rather, the emphasis is
on the final measurable performance characteristics.
Performance testing can refer to the assessment of the performance of a
human examinee. For example, a behind-the-wheel driving test is a
performance test of whether a person is able to perform the functions of
a competent driver of an automobile.
In the computer industry, software performance testing is used to
determine the speed or effectiveness of a computer, network, software
program or device. This process can involve quantitative tests done in a
lab, such as measuring the response time or the number of MIPS
(millions of instructions per second) at which a system functions.
Qualitative attributes such as reliability, scalability and interoperability
may also be evaluated. Performance testing is often done in conjunction
with stress testing.
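A quantitative performance test such as measuring response time can be sketched in a few lines. The Python below is a minimal illustration; the measure_response_time helper and the operation being timed are hypothetical, and a real performance test would also control the environment, warm up caches, and run far more samples.

```python
import statistics
import time

def measure_response_time(func, runs=5):
    """Time repeated calls to the operation under test and report
    the average and worst-case response time in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        func()
        samples.append((time.perf_counter() - start) * 1000.0)
    return {"avg_ms": statistics.mean(samples), "max_ms": max(samples)}

# Hypothetical operation under test: a small in-memory computation
# standing in for a network call or database query.
result = measure_response_time(lambda: sum(range(100_000)))
print(result)
```

Reporting both the average and the worst case matters because a system can have an acceptable mean response time while still violating its limits on individual requests, which is exactly what stress testing then probes further.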